"Solving math word problems requires deductive reasoning over the quantities in the text.",
"Various recent research efforts mostly relied on sequence-to-sequence or sequence-to-tree models to generate mathematical expressions without explicitly performing relational reasoning between quantities in the given context.",
"While empirically effective, such approaches typically do not provide explanations for the generated expressions.",
"In this work, we view the task as a complex relation extraction problem, proposing a novel approach that presents explainable deductive reasoning steps to iteratively construct target expressions, where each step involves a primitive operation over two quantities defining their relation.",
"Through extensive experiments on four benchmark datasets, we show that the proposed model significantly outperforms existing strong baselines.",
"We further demonstrate that the deductive procedure not only presents more explainable steps but also enables us to make more accurate predictions on questions that require more complex reasoning.",
"1 1 Introduction Math word problem (MWP) solving (Bobrow, 1964) is a task of answering a mathematical question that is described in natural language.",
"Solving MWP requires logical reasoning over the quantities presented in the context (Mukherjee and Garain, 2008) to compute the numerical answer.",
"Various recent research efforts regarded the problem as a generation problem typically, such models focus on generating the complete target mathematical expression, often represented in the form of a linear sequence or a tree structure (Xie and Sun, 2019).",
"Figure 1 (top) depicts a typical approach that attempts to generate the target expression in the 1 Our code and data are released at https://github.",
"Question : In a division sum , the remainder is 8 and the divisor is 6 times the quotient and is obt--ained by adding 3 to the thrice of the remainder.",
"What is the divident?",
"Answer : 129 .",
"5 Expr : (( 8 3 + 3 ) ( 8 3 + 3 ) 6 ) + 8 Tree generation: 7 ops + 8 + 3 + 6 8 3 3 8 3 Our deductive procedure: 5 ops 8 3 = 24 1 24 + 3 = 27 2 27 6 = 4 .",
"form of a tree structure, which is adopted in recent research efforts (Xie and Sun, 2019; Zhang et al., 2020; Patel et al., 2021; Wu et al., 2021).",
"Specifically, the output is an expression that can be obtained from such a generated structure.",
"We note that, however, there are several limitations with such a structure generation approach.",
"First, such a process typically involves a particular order when generating the structure.",
"In the example, given the complexity of the problem, the decision of generating the addition ( + ) operation as the very first step could be counter-intuitive and does not provide adequate explanations that show the reasoning process when being presented to a human learner.",
"Furthermore, the resulting tree contains identical sub-trees ( 8 3 + 3 ) as highlighted in blue dashed boxes.",
"Unless a certain specifically designed mechanism is introduced for reusing the already generated intermediate expression, the approach would need to repeat the same effort in its process for generating the same sub-expression.",
"Solving math problems generally requires deductive reasoning, which is also regarded as one of the important abilities in children's cognitive development (Piaget, 1952).",
"In this work, we propose a novel approach that explicitly presents deductive reasoning steps.",
"We make a key observation that MWP solving fundamentally can be viewed as a complex relation extraction problem the task of identifying the complex relations among the quantities that appear in the given problem text.",
"Each primitive arithmetic operation (such as addition , subtraction ) essentially defines a different type of relation.",
"Drawing on the success of some recent models for relation extraction in the literature (Zhong and Chen, 2021), our proposed approach involves a process that repeatedly performs relation extraction between two chosen quantities (includ-ing newly generated quantities).",
"As shown in Figure 1, our approach directly extracts the relation ( multiplication , or ) between 8 and 3 , which come from the contexts remainder is 8 and thrice of the remainder .",
"In addition, it allows us to reuse the results from the intermediate expression in the fourth step.",
"This process naturally yields a deductive reasoning procedure that iteratively derives new knowledge from existing ones.",
"Designing such a complex relation extraction system presents several practical challenges.",
"For example, some quantities may be irrelevant to the question while some others may need to be used multiple times.",
"The model also needs to learn how to properly handle the new quantities that emerge from the intermediate expressions.",
"Learning how to effectively search for the optimal sequence of operations (relations) and when to stop the deductive process is also important.",
"In this work, we tackle the above challenges and make the following major contributions: We formulate MWP solving as a complex relation extraction task, where we aim to repeatedly identify the basic relations between different quantities.",
"To the best of our knowledge, this is the first effort that successfully tackles MWP solving from such a new perspective.",
"Our model is able to automatically produce explainable steps that lead to the final answer, presenting a deductive reasoning process.",
"Our experimental results on four standard datasets across two languages show that our model significantly outperforms existing strong baselines.",
"We further show that the model performs better on problems with more complex equations than previous approaches.",
"Early efforts focused on solving MWP using probabilistic models with handcrafted features (Liguda and Pfeiffer, 2012).",
"Kushman et al. (2014) and Roy and Roth (2018) designed templates to find the alignments between the declarative language and equations.",
"Most recent works solve the problem by using sequence or tree generation models.",
"Wang et al. (2017) proposed the Math23k dataset and presented a sequence-to-sequence (seq2seq) approach to generate the mathematical expression (Chiang and Chen, 2019).",
"Other approaches improve the seq2seq model with reinforcement learning (Huang et al., 2018), template-based methods (Wang et al., 2019), and group attention mechanism (Li et al., 2019).",
"Xie and Sun (2019) proposed a goal-driven tree-structured (GTS) model to generate the expression tree.",
"This sequence-to-tree approach significantly improved the performance over the traditional seq2seq approaches.",
"Some follow-up works incorporated external knowledge such as syntactic dependency (Shen and Jin, 2020; Lin et al., 2021) or commonsense knowledge (Wu et al., 2020).",
"Cao et al. (2021) modeled the equations as a directed acyclic graph to obtain the expression.",
"Zhang et al. (2020) and Li et al. (2020) adopted a graph-to-tree approach to model the quantity relations using the graph convolutional networks (GCN) (Kipf and Welling, 2017).",
"Applying pre-trained language models such as BERT (Devlin et al., 2019) was shown to significantly benefit the tree expression generation (Lan et al., 2021; Tan et al., 2021; Liang et al., 2021; Li et al., 2021; Shen et al., 2021).",
"Different from the tree-based generation models, our work is related to deductive systems (Shieber et al., 1995; Nederhof, 2003) where we aim to obtain step-by-step expressions.",
"Recent efforts have also been working towards this direction.",
"Ling et al. (2017) constructed a dataset to provide explanations for expressions at each step.",
"Amini et al. (2019) created the MathQA dataset annotated with step-by-step operations.",
"The annotations present the expression at each intermediate step during problem-solving.",
"Our deductive process (Figure 1) attempts to automatically obtain the expression in an incremental, step-by-step manner.",
"the field of information extraction that is focused on identifying the relationships between a pair of entities.",
"Recently, Zhong and Chen (2021) designed a simple and effective approach to directly model the relations on the span pair representations.",
"In this work, we treat the operation between a pair of quantities as the relation at each step in our deductive reasoning process.",
"Traditional methods (Liang et al., 2018) applied rule-based approaches to extract the mathematical relations.",
"MWP solving is typically regarded as one of the system 2 tasks (Kahneman, 2011; Bengio et al., 2021), and our current approach to this problem is related to neural symbolic reasoning (Besold et al., 2017).",
"We design differentiable modules (Andreas et al., 2016; Gupta et al., 2020) in our model (3.2) to perform reasoning among the quantities.",
"The math word problem solving task can be defined as follows.",
"Given a problem description S = { w 1 , w 2 , , w n } that consists of a list of n words and QS = { q 1 , q 2 , , q m } , a list of m quantities that appear in S , our task is to solve the problem and return the numerical answer.",
"Ideally, the answer shall be computed through a mathematical reasoning process over a series of primitive mathematical operations (Amini et al., 2019) as shown in Figure",
"1. Such operations may include + ( addition ), ( subtraction ), ( multiplication ), ( division ), and ( exponentiation ).",
"2 In our view, each of the primitive mathematical operations above can essentially be used for describing a specific relation between quantities.",
"Fundamentally, solving a math word problem is a problem of complex relation extraction , which requires us to repeatedly identify the relations between quantities (including those appearing in the text and those intermediate ones created by rela-tions).",
"The overall solving procedure requires in-2 While we consider binary operators, extending our approach to support unary or ternary operators is possible (4.3).",
"In practice, some questions cannot be answered without relying on certain predefined constants (such as and 1 ) that may not have appeared in the given problem description.",
"We therefore also consider a set of constants C = { c 1 , c 2 , , c C } .",
"Such constants are also regarded as quantities (i.e., they would be regarded as { q m + 1 , q m + 2 , . . . , q m + C } ) which may play useful roles when forming the final answer expression.",
"As shown in Figure 1, applying the mathematical relation (e.g., + ) between two quantities yields an intermediate expression e .",
"In general, at step t , the resulting expression e ( t ) (after evaluation) becomes a newly created quantity that is added to the list of candidate quantities and is ready for participating in the remaining deductive reasoning process from step t + 1 onward.",
"This process can be mathematically denoted as follows: Initialization: Q ( 0 ) = QS C At step t : e ( t ) i,j,op = q i op q j q i , q j Q ( t 1 ) Q ( t ) = Q ( t 1 ) { e ( t ) i,j,op } q Q ( t ) = e ( t ) i,j,op where e ( t ) i,j,op represents the expression after applying the relation op to the ordered pair ( q i , q j ) .",
"Following the standard deduction systems (Shieber et al., 1995; Nederhof, 2003), the reasoning process can be formulated in Figure",
"2. We start with an axiom with the list of quantities in Q ( 0 ) .",
"The inference rule is q i op q j as described above to obtain the expression as a new quantity at step t .",
"Reasoner Figure 3 shows the deductive reasoning procedure in our model for an example that involves 3 quantities.",
"We first convert the quantities (e.g., 2 , 088 ) into a general quantity token <quant> .",
"We next adopt a pre-trained language model such as BERT (Devlin et al., 2019) or Roberta (Cui et al., 2019; Liu et al., 2019) to obtain the quantity representation q for each quantity q .",
"Given the quantity representations, we consider all the possible quantity pairs, ( q i , q j ) .",
"Similar to Lee et al. (2017), we can obtain the representation of each pair by concatenating the two quantity representations and the element-wise product between them.",
"As shown in Figure 3, we apply a non-linear feed-forward network (FFN) on top of the pair representation to get the representation of the newly created expression.",
"The above procedure can be mathematically written as: e i,j,op = FFN op ([ q i , q j , q i q j ]) , i j (1) where e i,j,op is the representation of the intermediate expression e and op is the operation (e.g., + , ) applied to the ordered pair ( q i , q j ).",
"FFN op is an operation-specific network that gives the expression representation under the particular operation op .",
"Note that we have the constraint i j .",
"As a result we also consider the reverse operation for division and subtraction (Roy and Roth, 2015).",
"As shown in Figure 3, the expression e 1 , 2 , will be regarded as a new quantity with representation q 4 at t = 1 .",
"In general, we can assign a score to a single reasoning step that yields the expression e ( t ) i,j,op from q i and q j with operation op .",
"Such a score can be calculated by summing over the scores defined over the representations of the two quantities and the score defined over the expression: s ( e ( t ) i,j,op ) = s q ( q i ) + s q ( q j ) + s e ( e i,j,op ) (2) where we have: s q ( q i ) = w q FFN ( q i ) s e ( e i,j,op ) = w e e i,j,op (3) Rationalizer Mechanism Multi-head Self-Attention Attention ( Q = [ q i , e ] ,K = [ q i , e ] ,V = [ q i , e ]) GRU cell GRU_Cell ( input = q i , previous hidden = e ) Table 1: The mechanism in different rationalizers.",
"where s q ( ) and s e ( ) are the scores assigned to the quantity and the expression, respectively, and w q and w e are the corresponding learnable parameters.",
"Our goal is to find the optimal expression sequence [ e ( 1 ) , e ( 2 ) , , e ( T ) ] that enables us to compute the final numerical answer, where T is the total number of steps required for this deductive process.",
"Terminator Our model also has a mechanism that decides whether the deductive procedure is ready to terminate at any given time.",
"We introduce a binary label , where 1 means the procedure stops here, and 0 otherwise.",
"The final score of the expression e at time step t can be calculated as: S ( e ( t ) i,j,op , ) = s ( e ( t ) i,j,op ) + w FFN ( e i,j,op ) (4) where w is the parameter vector for scoring the .",
"Rationalizer Once we obtain a new intermediate expression at step t , it is crucial to update the representations for the existing quantities.",
"We call this step rationalization because it could potentially give us the rationale that explains an outcome (Lei et al., 2016).",
"As shown in Figure 4, the intermediate expression e serves as the rationale that explains how the quantity changes from q to q .",
"Without this step, there is a potential shortcoming for the model.",
"That is, because if the quantity representations do not get updated as we continue the deductive reasoning process, those expressions that were initially highly ranked (say, at the first step) would always be preferred over those lowly ranked ones throughout the process.",
"3 We rationalize the quantity representation using the current intermediate expression e ( t ) , so that the quantity is aware of the generated expressions when its representation gets updated.",
"This procedure can be mathematically formulated as follows: q i = Rationalizer ( q i , e ( t ) ) 1 i Q (5) 3 See the supplementary material for more details on this.",
"Two well-known techniques we can adopt as rationalizers are multi-head self-attention (Vaswani et al., 2017) and a gated recurrent unit (GRU) (Cho et al., 2014) cell, which allow us to update the quantity representation, given the intermediate expression representation.",
"Table 1 shows the mechanism in two different rationalizers.",
"For the first approach, we essentially construct a sentence with two token representations quantity q i and the previous expression e to perform self-attention.",
"In the second approach, we use q i as the input state and e as the previous hidden state in a GRU cell.",
"Similar to training sequence-to-sequence models (Luong et al., 2015), we adopt the teacher-forcing strategy (Williams and Zipser, 1989) to guide the model with gold expressions during training.",
"The loss 4 can be written as: L ( ) = T t = 1 ( max ( i,j,op ) H ( t ) , [ S ( e ( t ) i,j,op , )] S ( e ( t ) i ,j ,op , )) + 2 (6) where includes all parameters in the deductive reasoner and H ( t ) contains all the possible choices of quantity pairs and relations available at time step t .",
"is the hyperparameter for the L 2 regularization term.",
"The set H ( t ) grows as new expressions are constructed and become new quantities during the deductive reasoning process.",
"The overall loss is computed by summing over the loss at each time step (assuming totally T steps).",
"During inference, we set a maximum time step T max and find the best expression e that has the highest score at each time step.",
"Once we see = 1 is chosen, we stop constructing new expressions 4 Actually, one might have noticed that this loss comes with a trivial solution at = 0 .",
"In practice, however, our model and training process would prevent us from reaching such a degenerate solution with proper initialization (Goodfellow et al., 2016).",
"This is similar to the training of a structured perceptron (Collins, 2002), where a similar situation is also involved.",
"and terminate the process.",
"The overall expression (formed by the resulting expression sequence) will be used for computing the final numerical answer.",
"Declarative Constraints Our model repeatedly relies on existing quantities to construct new quantities, which results in a structure showing the deductive reasoning process.",
"One advantage of such an approach is that it allows certain declarative knowledge to be conveniently incorporated.",
"For example, as we can see in Equation 6, the default approach considers all the possible combinations among the quantities during the maximization step.",
"We can easily impose constraints to avoid considering certain combinations.",
"In practice, we found in certain datasets such as SVAMP, there does not exist any expression that involve operations applied to the same quantity (such as 9 + 9 or 9 9 , where 9 is from the same quantity in the text).",
"Besides, we also observe that the intermediate results would not be negative.",
"We can simply exclude such cases in the maximization process, effectively reducing the search space during both training and inference.",
"We show that adding such declarative constraints can help improve the performance.",
"Datasets We conduct experiments on four datasets across two different languages: MAWPS (Koncel-Kedziorski et al., 2016), Math23k (Wang et al., 2017), MathQA (Amini et al., 2019), and SVAMP (Patel et al., 2021).",
"The dataset statistics can be found in Table",
"2. For MathQA 5 , we follow Tan et al. (2021) 6 to 5 The original MathQA (Amini et al., 2019) dataset contains a certain number of instances that have annotated equations which cannot lead to the correct numerical answer.",
"6 Our dataset size is not exactly the same as Tan et al. (2021) as they included some instances that are wrongly annotated.",
"We only kept the part that has correct annotations.",
"We con-5948 Model Val Acc.",
"adapt the dataset to filter out some questions that are unsolvable.",
"We consider the operations addition , subtraction , multiplication , and division for MAWPS and SVAMP, and an extra exponentiation for MathQA and Math23k.",
"The number of operations involved in each question can be one of the indicators to help us gauge the difficulty of a dataset.",
"Figure 5 shows the percentage distribution of the number of operations involved in each question.",
"The MathQA dataset generally contains larger portions of questions that involve more operations, while 97% of the questions in MAWPS can be answered with only one or two operations.",
"More than 60% of the instances in MathQA have three or more operations, which likely makes their problems harder to solve.",
"Furthermore, MathQA (Amini et al., 2019) contains GRE questions in many domains including physics, geometry, probability, etc., while Math23k questions are from primary school.",
"Different from other datasets, SVAMP (Patel et al., 2021) 7 is a challenging set that is manually created to evaluate a model's robustness.",
"They applied variations over the instances sampled from MAWPS.",
"Such variations could be: adding extra quantities, swapping the positions between noun phrases, etc.",
"Baselines The baseline approaches can be broadly categorized into sequence-to-sequence (S2S), sequence-to-tree (S2T) and graph-to-tree (G2T) models.",
"GroupAttn (Li et al., 2019) designed several types of attention mechanisms such as question or quantity related attentions in the seq2seq model.",
"Tan et al. (2021) uses multilingual firmed such information with the authors of Tan et al. (2021), and make our version of this dataset publicly available.",
"7 There is no test split for this dataset.",
"BERT with an LSTM decoder ( mBERT-LSTM ).",
"Lan et al. (2021) presented two seq2seq models that use BERT/Roberta as both encoder and decoder, namely, BERT-BERT and Roberta-Roberta .",
"Sequence-to-tree models mainly use a tree-based decoder with GRU ( GTS ) (Xie and Sun, 2019) or BERT as the encoder ( BERT-Tree ) (Liang et al., 2021; Li et al., 2021).",
"NUMS2T (Wu et al., 2020) and NeuralSymbolic (Qin et al., 2021) solver incorporate external knowledge in the S2T architectures.",
"Graph2Tree (Zhang et al., 2020) models the quantity relations using GCN.",
"Training Details We adopt BERT (Devlin et al., 2019) and Roberta (Liu et al., 2019) for the English datasets.",
"Chinese BERT and Chinese Roberta (Cui et al., 2019) are used for Math23k.",
"We use the GRU cell as the rationalizer.",
"We also conduct experiments with multilingual BERT and XLM-Roberta (Conneau et al., 2020).",
"The pre-trained models are initialized from HuggingFace's Transformers (Wolf et al., 2020).",
"We optimize the loss with the Adam optimizer (Kingma and Ba, 2014; Loshchilov and Hutter, 2019).",
"We use a learning rate of 2 e 5 and a batch size of 30 .",
"The regularization coefficient is set to 0 .",
"01 .",
"We run our models with 5 random seeds and report the average results (with standard deviation).",
"Following most previous works, we mainly report the value accuracy (percentage) in our experiments.",
"In other words, a prediction is considered correct if the predicted expression leads to the same value as the gold expression.",
"Following previous practice (Zhang et al., 2020; Tan et al., 2021; Patel et al., 2021), we report 5949 Model Val Acc.",
"5 -fold cross-validation results on both MAWPS 8 and Math23k, and also report the test set performance for Math23k, MathQA and SVAMP.",
"MAWPS and Math23k We first discuss the results on MAWPS and Math23k, two datasets that are commonly used in previous research.",
"Table 3 and 4 show the main results of the proposed models with different pre-trained language models.",
"We compare with previous works that have reported results on these datasets.",
"Among all the encoders for our model DEDUCTREASONER , the Roberta encoder achieves the best performance.",
"In addition, DEDUCTREASONER significantly outperforms all the baselines regardless of the choice of encoder.",
"The performance on the best S2S model (Roberta-Roberta) is on par with the best S2T model (Roberta-Graph2Tree) on MAWPS.",
"Overall, the accuracy of Roberta-based DEDUCTREASONER is more than 3 points higher than Roberta-Graph2Tree ( p < 0 . 001 ) 9 on MAWPS, and more than 2 points higher than BERT-Tree ( p < 0 . 005 ) on Math23k.",
"The comparisons show that our deductive reasoner is robust across different languages and datasets of different sizes.",
"MathQA and SVAMP As mentioned before, MathQA and SVAMP are more challenging the former consists of more complex questions and the latter consists of specifically designed challenging questions.",
"Table 5 and 6 show the performance comparisons.",
"We are able to outperform the best baseline mBERT-LSTM 10 by 1.5 points in accuracy on MathQA.",
"Different from other three datasets, the performance between different language models shows larger gaps on SVAMP.",
"As we can see 8 All previous efforts combine training/dev/test sets and perform 5 -fold cross validation, which we follow.",
"from baselines and our models, the choice of encoder appear to be important for solving questions in SVAMP the results on using Roberta as the encoder are particularly striking.",
"Our best variant ROBERTA-DEDUCTREASONER achieves an accuracy score of 47.3 and is able to outperfrom the best baseline (Roberta-Graph2Tree) by 3.5 points ( p < 0 . 01 ).",
"By incorporating the constraints from our prior knowledge (as discussed in 3.3), we observe significant improvements for all variants up to 7 .",
"0 points for our BERT-DEDUCTREASONER .",
"Overall, these results show that our model is more robust as compared to previous approaches on such challenging datasets.",
"Fine-grained Analysis We further perform fine-grained performance analysis based on questions with different numbers of operations.",
"Table 7 shows the accuracy scores for questions that involve different numbers of operations.",
"It also shows the equation accuracy on all datasets 11 .",
"We compared our ROBERTADEDUCTREASONER with the best performing baselines in Table 3 (Roberta-Graph2Tree), 4 (BERT-Tree), 5 (mBERT+LSTM) and 6 (Roberta-Graph2Tree).",
"On MAWPS and Math23k, our ROBERTA-DEDUCTREASONER model consistently yields higher results than baselines.",
"On MathQA, our model also performs better on questions that involve 2, 3, and 4 operations.",
"For the other more challenging dataset SVAMP, our model 11 Equ Acc: we regard an equation as correct if and only if it matches with the reference equation (up to reordering of sub-expressions due to commutative operations, namley + and ).",
"unused quantities.",
"The second row shows the percentage of instances that have unused quantities.",
": may not be representative as there are only 3 instances.",
"has comparable performance with the baseline on 1 -step questions, but achieves significantly better results (+ 14 . 3 points) on questions that involve 2 steps.",
"Such comparisons on MathQA and SVAMP show that our model has a robust reasoning capability on more complex questions.",
"We observe that all models (including ours and existing models) are achieving much lower accuracy scores on SVAMP, as compared to other datasets.",
"We further investigate the reason for this.",
"Patel et al. (2021) added irrelevant information such as extra quantities in the question to confuse the models.",
"We quantify the effect by counting the percentage of instances which have quantities unused in the equations.",
"As we can see in Table 8, SVAMP has the largest proportion (i.e., 44 . 5 %) of instances whose gold equations do not fully utilize all the quantities in the problem text.",
"The performance also significantly drops on those questions with more than one unused quantity on all datasets.",
"The analysis suggests that our model still suffer from extra irrelevant information in the question and the performance is severely affected when such irrelevant information appears more frequently.",
"Effect of Rationalizer Table 9 shows the performance comparison with different rationalizers.",
"As described in 3.2, the rationalizer is used to update the quantity representations at each step, so as to better prepare them for the subsequent reasoning process given the new context.",
"We believe this step is crucial for achieving good performance, especially for complex MWP solving.",
"As shown in Table 9, the performance drops by 7 .",
"3 points in value Rationalizer MAWPS Math23k Equ Acc.",
"accuracy for Math23k without rationalization, con-firming the importance of rationalization in solving more complex problems that involve more steps.",
"As most of the questions in MAWPS involve only 1 -step questions, the significance of using rationalizer is not fully revealed on this dataset.",
"It can be seen that using self-attention achieves worse performance than the GRU unit.",
"We believe the lower performance by using multi-head attention as rationalizer may be attributed to two reasons.",
"First, GRU comes with sophisticated internal gating mechanisms, which may allow richer representations for the quantities.",
"Second, attention, often interpreted as a mechanism for measuring similarities (Katharopoulos et al., 2020), may be inherently biased when being used for updating quantity representations.",
"This is because when measuring the similarity between quantities and a specific expression (Figure 4), those quantities that have just participated in the construction of the expression may receive a higher degree of similarity.",
"Explainability of Output Figure 6 presents an example prediction from Math23k.",
"In this question, the gold deductive process first obtains the speed difference by 5 ( 5 + 3 ) 3 ( 5 + 3 ) and the final answer is 1400 divided by this difference.",
"On the other hand, the predicted deductive process offers a slightly different understanding in speed difference.",
"Assuming speed can be measured by some abstract units, the predicted deductive 5951 Question : There are 255 apple trees in the orchard.",
"Planting another 35 pear trees makes the number exactly the same as the apple trees.",
"If every 20 pear trees are planted in a row, how many rows can be planted in total?",
"Gold Expr : ( 255 35 ) 20 Answer : 11 Predicted Expr : ( 255 + 35 ) 20 Predicted : 14 .",
"5 Deductive Scores: 255 + 35 = 260",
"Prob.: 0 .",
"068 > 255 35 = 220",
"Prob.: 0 .",
"062 Perturbed Question : There are 255 apple trees in the orchard.",
"The number of pear trees are 35 fewer than the apple trees.",
"If every 20 pear trees are planted in a row, how many rows can be planted in total?",
"255 + 35 = 260",
"Prob.: 0 .",
"061 < 255 35 = 220",
"Prob.: 0 .",
"067 Figure 7: Question perturbation in deductive reasoning.",
"process first performs subtraction between 5 and 3 , which gives us 2 units of speed difference.",
"Next, we can obtain the number of words associated with each speed unit ( 1400 2 ).",
"Finally, we can arrive at the total number of words by multiplying the number of words per unit ( 700 ) and the total number of units ( 8 ).",
"12 Through such an example we can see that our deductive reasoner is able to produce explainable steps to understand the answers.",
"Question Perturbation The model predictions also give us guidance to understand the errors.",
"Figure 7 shows how we can perturb a question given the error prediction (taken from Math23k).",
"As we can see, the first step is incorrectly predicted with the + relation between 255 and 35 .",
"Because the first step involves the two quantities in the first two sentences, where we can locate the possible cause for the error.",
"The gold step has a probability of 0 .",
"062 which is somewhat lower than the incorrect prediction.",
"We believe that the second sentence (marked in red) may convey semantics that can be challenging for the model to digest, resulting in the incorrect prediction.",
"Thus, we perturb the second sentence to make it semantically more straightforward (marked below in blue).",
"The probability for the sub-expression 225 35 becomes higher after the purtubation, leading to a correct prediction (the relation).",
"Such an analysis demonstrates the strong interpretability of our deductive reasoner, and highlights the important connection between math word problem solving and reading comprehension, a topic that has been studied in educational psychology (Vilenius-Tuohimaa et al., 2008).",
"We discuss some practical issues with the current model in this section.",
"Similar to most previous re-12 Interestingly, when we presented this question to 3 human solvers, 2 of them used the first approach and 1 of them arrived at the second approach.",
"search efforts (Li et al., 2019; Xie and Sun, 2019), our work needs to maintain a list of constants (e.g., 1 and ) as additional candidate quantities.",
"However, a large number of quantities could lead to a large search space of expressions (i.e., H ).",
"In practice, we could select some top-scoring quantities and build expressions on top of them (Lee et al., 2018).",
"Another assumption of our model, as shown in Figure 3, is that only binary operators are considered.",
"Actually, extending it to support unary or ternary operators can be straightforward.",
"Handling unary operators would require the introduction of some unary rules, and a ternary operator can be defined as a composition of two binary operators.",
"Our current model performs the greedy search in the training and inference process, which could be improved with a beam search process.",
"One challenge with designing the beam search algorithm is that the search space H ( t ) is expanding at each step t (Equation 6).",
"We empirically found the model tends to favor outputs that involve fewer reasoning steps.",
"In fact, better understanding the behavior and effect of beam search in seq2seq models remains an active research topic (Cohen and Beck, 2019; Koehn and Knowles, 2017; Hokamp and Liu, 2017), and we believe how to perform effective beam search in our setup could be an interesting research question that is worth exploring further.",
"We provide a new perspective to the task of MWP solving and argue that it can be fundamentally regarded as a complex relation extraction problem.",
"Based on this observation, and motivated by the deductive reasoning process, we propose an end-to-end deductive reasoner to obtain the answer expression in a step-by-step manner.",
"At each step, our model performs iterative mathematical relation extraction between quantities.",
"Thorough experiments on four standard datasets demonstrate that our deductive reasoner is robust and able to yield new state-of-the-art performance.",
"The model achieves particularly better performance for complex questions that involve a larger number of operations.",
"It offers us the flexibility in interpreting the results, thanks to the deductive nature of our model.",
"Future directions that we would like to explore include how to effectively incorporate commonsense knowledge into the deductive reasoning process, and how to facilitate counterfactual reasoning (Richards and Sanderson, 1999).",
"We would like to thank the anonymous reviewers and our ARR action editor for their constructive comments, and Hang Li for helpful discussions and comments on this work.",
"This work was done when Jierui Li was working as a research assistant at SUTD, and when Wei Lu was serving as a consultant at ByteDance AI Lab."
"For task-oriented dialog systems, training a Reinforcement Learning (RL) based Dialog Management module suffers from low sample efficiency and slow convergence speed due to the sparse rewards in RL.",
"To solve this problem, many strategies have been proposed to give proper rewards when training RL, but their rewards lack interpretability and cannot accurately estimate the distribution of state-action pairs in real dialogs.",
"In this paper, we propose a multi-level reward modeling approach that factorizes a reward into a three-level hierarchy: domain, act, and slot.",
"Based on inverse adversarial reinforcement learning, our designed reward model can provide more accurate and explainable reward signals for state-action pairs.",
"Extensive evaluations show that our approach can be applied to a wide range of reinforcement learning-based dialog systems and significantly improves both the performance and the speed of convergence.",
"I would like to leave on Wednesday from Stevenage.",
"I show a train leaving Stevenage new street at 17:40 and arriving at 20:23 on Wednesday.",
"Will this work?",
"Train Inform Leave 17:40 Train Inform Day Weds That will.",
"Please make a booking for 5 people please.",
"Okay, your reference number is A9NH Cool, I am also looking for restaurant with 4 stars and has free wifi.",
"Train Inform Arrive 20:23 Train Inform Day Weds Train Inform People 5 Train Book RefA9NH Hotel InformInternetyes Hotel InformStars4 I would like to leave on Wednesday from Stevenage.",
"Task-oriented dialog systems have become a focal point in both academic and industrial research and have been playing a key role in conversational assistants such as Amazon Alexa and Ap-ple's Siri.",
"(Budzianowski et al., 2018; Wei et al., 2018; Chen et al., 2019b)",
"Existing research on task-oriented dialog systems mainly includes pipeline and end-to-end methods (Zhang et al., 2020).",
"For pipeline-based systems, usually could be divided into four components: Natural Language Understanding (NLU), Dialog State Tracking (DST), Dialog Management (DM), and Natural Language Generation (NLG).",
"The modular structure makes the systems more interpretable and stable than end-to-end systems, which directly take natural language context as input and output a response.",
"rent belief state (structured information about the current situation) and deciding the next action.",
"Due to its non-differentiable nature, many researchers resort to Reinforcement Learning (RL) to learn a DM (Peng et al., 2017; Casanueva et al., 2018; Chen et al., 2019a; Vu, 2019).",
"However, RL suffers from the problems of low data efficiency and slow convergence speed in dialog management due to reward sparsity and huge action space (Takanobu et al., 2019; Liu and Lane, 2018; Fang et al., 2019).",
"To solve these problems, existing research designs reward models to estimate how similar a state-action pair is to an expert trajectory.",
"Liu and Lane (2018) and Takanobu et al. (2019) combines a Generative Adversarial Network (GAN) with RL to acquire a reward model which could give rewards in turn/dialog level.",
"However, introducing GAN will bring other problems like model instability.",
"To solve this, Li et al. (2020) proposed to train a discriminator and directly use it as a fixed reward estimator during RL training.",
"However, there is no thorough evaluation to analyze the performance of their reward estimator.",
"Besides, researchers ignore the huge action space of RL that has a huge impact on RL's converging speed.",
"In this paper, we propose to interpret a state-action pair from a multi-level and sequential perspective, instead of simply classifying them as right or wrong.",
"Fig. 1 shows an example of multi-domain (Train and Hotel) task-oriented dialog to illustrate our idea.",
"For each utterance, we infer the domain, act, and slot of it to form a three-level hierarchy, leading to more accurate and interpretable modeling of the dialog agent's actions.",
"For example, for the utterance Okay, your reference number is A9NH, the RL agent books a train ticket of A9NH.",
"We infer the domain of the action book is train, and the slot is Ref (reference number) with slot value A9NH.",
"An RL agent will get rewards from the action only if it belongs to an appropriate domain, and will get slot rewards only if it takes suitable action.",
"For example, if the RL agent in Fig. 1 chooses the wrong action Train-Book-Ref: A9NH at the first turn (i.e., direct booking without confirmation from the user), it will only receive rewards for domain, since it should take the action inform instead of book.",
"To construct a multi-level reward model, we propose the following designs.",
"First, we utilize a disentangled autoencoder to factorize and encode a dialog state into three independent latent sub-states, characterizing domain, act, and slot, respectively.",
"Correspondingly, we also design an action decomposer to decompose an action into three sub-actions, taking the first user action in Figure 1 as an example, the decomposer will decompose action \"Train-Inform-Day:Weds\" into \"Train\", \"In-form\" and \"Day\".",
"Second, we learn a multi-level generator-discriminator framework.",
"The generator generates sub-states and sub-actions from noises, and the discriminator learns to classify whether a sub state-action pair is real or generated.",
"In this way, the learned discriminator can give rewards to a state-action pair in terms of domain, act, and slot.",
"Lastly, we impose Markov property to our multi-level rewards by only rewarding an act/slot if the prior domain/act is appropriate.",
"Such design also alleviates the problem of huge action decision space in RL, as the domain-act-slot hierarchy restricts the choice of act/slot when the domain/act has been decided.",
"We run extensive evaluations to test our multilevel sequential reward model by incorporating it into a variety of RL-based agents.",
"The experimental results demonstrate that our reward model can significantly improve the performance of RL agents and accelerate the speed of convergence.",
"Dialog reward modeling aims to give a proper reward to the action made by an RL agent.",
"Traditional hand-crafted rule-based reward modeling requires expert knowledge and cannot handle unseen actions or situations.",
"Su et al. (2015) proposes a reward shaping method to speed up online policy learning, which models the sum of all rewards in turn level.",
"After that, most researchers tend to exploit GAN by considering an RL agent as a generator and a reward function as a discriminator.",
"Liu and Lane (2018) first introduces the adversarial method for reward computation.",
"It learns a discriminator which can give the probability of authenticity in the dialog level.",
"Takanobu et al. (2019) further expands the adversarial method by inferring a user's goal and giving a proper reward in turn level.",
"However, adding adversarial training to RL will bring potential drawbacks, as training RL is different from normal GAN training whose dataset is fixed, which needs training with the environment and simulated user which is changing all the time.",
"Thus RL and discriminator are training with a moving target rather than a fixed object.",
"It is hard to supervise the adversarial training of generator and discriminator due to no solid feedback.",
"Besides, as claimed in (Li et al., 2020), such adversarial training is only suitable for policy gradient-based methods like Proximal Policy Optimization (PPO), but not working for value-based RL algorithm like Deep Q-Network (DQN).",
"Recently, Li et al. (2020) utilizes a generator to approximate the distribution of expert state-action pairs and trains a discriminator to distinguish them from expert state-action pairs.",
"By introducing a pretraining method, this approach can be extended to both on-policy and off-policy RL methods.",
"However, it is still confused that whether this reward model could give correct rewards.",
"Different from the aforementioned methods, in this paper, our model generates rewards in a more accurate sequential and multi-level manner.",
"We propose a novel multi-level sequential reward estimator and learn it under an Inverse Reinforcement Learning (IRL) framework.",
"IRL aims to learn a reward estimator based on expert data, which are state-action pairs ( S e , A e ) from expert policy.",
"IRL could be formally defined as: E ( A e |S ) R ( A e , S ) E ( A|S ) R ( A , S ) (1) where the goal is to find an optimal reward function R , such that based on the same states S , expert dialog policy will obtain equal or higher rewards than the agent's policy .",
"We denote expert action and agent action as A e and A , respectively.",
"Our objective is approximating R by capturing the distribution of expert dialog f e and estimating how likely a state-action pair is from f e as the reward.",
"To accurately model the expert distribution f e , we disentangle f e into three levels: domain distribution f ed , action distribution f ea , and slot distribution f es .",
"Fig. 2 shows the framework of our multi-level reward estimator.",
"Given a state-action pair from the input and output of a DM module in a pipeline-based system, we combine three components to estimate its quality.",
"First, we acquire sub-states and sub-actions by utilizing a Disentangled Auto-Encoder (DAE) to encode states and a rule-based decomposer to decompose actions.",
"Second, we learn different sub-generators to generate sub-states and sub-actions from noises.",
"Third, we train different sub-discriminators to classify whether a state-action pair is from expert data or agent policy.",
"Besides, we sequentially connect the three discriminators, imposing Markov property to the multilevel rewards, as well as alleviating the problem of huge action space in RL.",
"Finally, the discriminators can serve as reward estimators for domian, action, and slot during inference.",
"Algorithm 1 summarizes the training process of our model components.",
"We introduce more details in the following.",
"We first decompose an action into sub-actions and learn to decompose and encode a state into sub-states from domain, act, and slot level.",
"For action decomposition, we decompose an action A by rules based on how the action vector is defined.",
"Such a rule-based decomposer can be easily implemented by first defining an assignment matrix M , then multiply M with A and select three sub-spans of A to form three sub-actions a d , a a and a s , which are all one-hot vectors.",
"For state decomposition and representation, we decompose a discrete state S into sub-states of domain, act, and slot, and learn a continuous representation of them by DAE.",
"As shown in Fig. 2, the DAE contains three encoders E d , E a and E s to extract and encode the sub-states from S : [ h d ; h a ; h s ] = Encoder ( S ) .",
"To enforce each encoder learn the sub-state corresponding to domain, act and slot respectively, we adopt three auxilary classifiers ( C d , C a and C s ) which classify each sub-state representation ( s d , s a",
"and s s ) with the corresponding sub-action ( a d , a a and a s ) as label.",
"To enhance model generalization, we inject data-dependent noises into latent variables h d , h a and h s .",
"In particular, a noise variance 2 is obtained via the Multilayer Perceptron (MLP) transformation from state S : log 2 = MLP ( S ) .",
"Then we sample noise z from a Gaussian distribution N ( h, 2 I ) .",
"The reparameterization trick (Kingma and Welling, 2013) is further exploited to achieve end-to-end training: [ s d ; s a ; s s ] = h + (cid:15), (cid:15) N (0 , I ) .",
"In this way, the sub-state representations are different for every training time of input state S , and thus the model is provided with additional flexibil-ity to explore various trade-offs between noise and environment.",
"Next, we reconstruct the state via a decoder: S = Decoder ([ s d ; s a ; s s ]) .",
"After that, since the state S is a discrete vector, we can learn DAE by minimizing a binary cross entropy loss:",
"The loss for the auxilary classifiers are:",
"where W i { W d , W a , W s } is the learnable parameters of the classifiers and A i is the action space of the corresponding action level.",
"Therefore, the overall loss for training DAE is given by: LDAE = L dec + (cid:88) i [ d,a,s ] L ienc .",
"Different from previous adversarial training methods in which generator (policy) and discriminator (reward estimator) are trained alternatively when interacting with a simulated user, our GAN network (Goodfellow et al., 2014) is trained offline without the need of simulated users.",
"As shown in Fig. 2, our discriminator D is composed of Algorithm 1 Reward Estimator Training Require: Expert dialog [ S e : A e ] repeat Training DAE by Eq.",
"three sub-discriminators { D d , D a , D s } , our generator G consists a set of parallel sub-generators { G d , G a , G s , G act } with the same Gaussian noise Z as input to generate sub-states { s zd , s za , s zs } and actioin A .",
"Then A is decomposed into { a zd , a za , a zs } by the same rule-based decomposer we described in Sec. 3.1.",
"As a true action A is discrete, we use Straight-Through Gumbel Softmax (Jang et al., 2016) to approximate the sampling process.",
"The generators aim to approximate the distribution of expert dialog ( S e , A e ) by learning distributions { f ed , f ea , f es } of sub state-actions with { G d , G a , G s } and G act .",
"We train the generators by the following loss: LG ( ) = E z p ( z ) (log(1 D ( G ( z )))) , (8) where represents the parameters of generator G .",
"For discriminator, it consists of three paralleled and independent MLP networks with a sigmoid output layer.",
"The discriminator outputs three scores { y d , y a , y s } that respectively denote the probability a sub state-action pair is from a true expert distribution.",
"The traininig loss could be written as: LD i = [ E ( s i ,a i ) f ei log D i ( s i , a i ) + E z p ( z ) (1 log D i ( s zi , a zi ))] , i [ d, a, s ] .",
"Reward shaping provides an RL agent with extra rewards in addition to the original sparse rewards r ori in a task to alleviate the problem of reward sparsity.",
"We follow the same assumption of (Liu and Lane, 2018; Paek and Pieraccini, 2008), in which state-action pairs similar to expert data will receive higher rewards.",
"The rewards from our discriminators are calculated as: R d = y d , R a = y a Sigmoid ( ( R d + b )) , R s = y s Sigmoid ( ( R a + b )) , (10) where { y d , y a , y s } are the outputs of discriminators { D d , D a , D s } in Fig. 2. Note that we impose Markov property into multi-level reward calculation by taking the reward of domain/act level into account when calculating the reward of act/slot level.",
"An agent will receive a low reward when it chooses a wrong domain even if y a or y s is high.",
"We accomplish this by the sigmoid functions in Eq.",
"10.",
"and b are two hyper-parameters controlling the shape of the sigmoid function.",
"A smaller will introduce a softer penalty given by prior-level reward.",
"After getting the three-level rewards, we propose two reward integration strategies.",
"The first strategy we denote as R SeqPrd is simply using RS from Eq.",
"10 as the combined reward.",
"This strategy will bring reward to nearly 1 or 0.",
"The second strategy we denote as R SeqAvg is computing the mean of the three rewards { R d , R a , R s } as the final reward.",
"Finally, we augment the original reward r ori by adding R SeqPrd or R SeqAvg for reward shaping.",
"For the Disentangled Auto-Encoder, the input of its encoder is binary states S .",
"We use three paralleled MLP layers with same hidden size 64 as the sub-encoders to get hidden states { h d , h a , h s } , which are the same with the architecture of the MLP network for generating noise variance 2 .",
"We train the encoder, decoder, and classifier network simultaneously.",
"For the generator part, we utilize four independent and parallel MLP layers.",
"All layers share the same gaussian noise as input.",
"The first three aim to capture the distribution in the field of d , a , s with output size = 64.",
"The output size of G 4 is 300 with an output layer of ST-Gumbel Softmax.",
"To make the output of generators be similar to the encoding representation of DAE and bring noise to the discriminator as well, we further add two MLP networks separately after generation layer to simulate the sampling process of mean and variance.",
"We add weight regularization in a form of l 2 norm to avoid overfitting.",
"In our experiments, the generator is weaker compared to the discriminator, therefore we set the training frequency ratio of generator and discriminator to be 5:1.",
"For the discriminator part, we utilize three parallel MLP layers followed by a sigmoid function as the output layer.",
"Training a multi adversarial network is not easy.",
"Three discriminators will be insensitive to their own field if training all G and D jointly.",
"Thus, we train G and D in the following way.",
"D takes all outputs from G as input, but only chosen sub-generator and sub-discriminator pairs have gradient backpropagation, and others are frozen.",
"During the experiment, we found start training from one pair to two pairs than to all pairs brings good results.",
"Dataset We run evaluations based on the Mul-tiWOZ dataset (Budzianowski et al., 2018) 1 .",
"It is a multi-domain dialog dataset that constructed from human dialog records, mainly ranging from restaurant booking to hotel recommending scenarios.",
"There are 3,406 single-domain dialogs and 7,032 multi-domain dialogs in total.",
"The average number of turns is 8 .",
"93 and 15 .",
"39 for single and multi-domain dialogs, respectively.",
"Platform We implement our methods and baselines based on the Convlab platform (Lee et al., 2019) 2 .",
"It is a multi-domain dialog system platform supporting end-to-end system evaluation, which integrates several RL algorithms.",
"Implementation Details For fair comparisons, we follow the same experiment settings in (Li et al., 2020).",
"Specifically, an agenda-based user simulator (Schatzmann et al., 2007) is embedded and exploited to interact with dialog agent.",
"We set the training environment to a dialog-act to dialog-act (DA-to-DA) level, where the agent interacts with a simulated user in a dialog act way rather than an utterance way.",
"We use a rule-based dialog state 1 https://github.com/budzianowski/multiwoz 2 https://github.com/sherlock1987/SeqReward tracker (DST) to track 100% of the user goals.",
"We train on millions of frames (user-system turn pairs) with 392 -dimensional state vectors as inputs and 300 -dimensional action vectors as outputs.",
"For all the RL networks, we use a hidden layer of 100 dimensions and ReLU activation function.",
"Evaluation metrics During the evaluation, the simulated user will generate a random goal first for each conversation and then complete the session successfully if the dialog agent has accomplished all user requirements.",
"We exploit average turn , success rate and reward score to evaluate the efficiency of proposed reward model.",
"In particular, the reward score metric is defined as reward score = (cid:26) T + 80 , if success T 40 , if fail (11) where T denotes the number of system-user turns in each conversation session.",
"The performances averaged over 10 times with different random seeds are reported as the final results.",
"Besides, we evaluate our RL models in every 1 , 000 frame (system-user turn) by using 1 , 000 dialogs interacting with a simulated user.",
"We evaluate the proposed reward estimator via two classical RL algorithms:",
"i) Deep Q-Network (DQN) (Mnih et al., 2015), which is a value-based RL algorithm;",
"ii) Proximal Policy Optimization (PPO) (Mnih et al., 2015), which is a policy-based RL algorithm.",
"In terms of the DQN-based methods, we compare our method DQN SeqAvg and DQN SeqPrd (corresponding to R SeqAvg and R SeqPrd , respectively) with DQN vanilla , whose reward function is defined in Eq.",
"11, and DQN offgan (Li et al., 2020), which also pretrains an reward function to achieve performance gains.",
"Similarly, we also evaluate on Warm-up DQN (WDQN) with different reward function, named WDQN vanilla , WDQN offgan , WDQN SeqAvg and WDQN SeqPrd , respectively.",
"For the implementation details of DQN-based agents, we use (cid:15) greedy action exploration and set a linear decay from 0 .",
"1 in the beginning to 0.01 after 500 k frames.",
"We train DQN on 500 batches of size 16 every 200 frames.",
"Besides, we use a relay buffer of size 50 , 000 to stabilize training.",
"In terms of the PPO-based methods, we pick up two adversarial methods:",
"Learning (AIRL) (Takanobu et al., 2019); and",
"ii) Generative Adversarial Imitation Learning (GAIL) (Ho and Ermon, 2016).",
"AIRL works on turn level and gives reward scores based on state-action-state triple ( s t , a t , s t +1 ) .",
"For GAIL, it works on dialog level and gives rewards after dialog ends.",
"Similar to DQNs, we also compare our methods with PPO vanilla and PPO offgan (Li et al., 2020).",
"There is one extra hyperparameter named training epoch for GAIL and AIRL, which represents the training ratio of discriminator and PPO models.",
"Here we set it to 4 .",
"Apart from these, all the other hyperparameters stay the same.",
"Different from the settings for DQN, the (cid:15) greedy stays 0 .",
"001 without decay.",
"Besides, we set val-loss-coef to be 1 , meaning no discount for value loss.",
"We also set the training frequency to be 500 frames.",
"From Fig. 3",
"(a), DQN SeqPrd achieves the best performance with a success rate of 0.990 and converges after 130K, which speeds up the training process by almost 300 % compared to DQN vanilla .",
"Compared with DQN vanilla , the methods using pre-trained reward functions R offgan , R SeqArg , R SeqPrd are better than vanilla in terms of both convergence speed and success rate.",
"This phenomenon suggests that these three reward estimators could speed up dialog policy training.",
"Different from DQN offgan , whose reward function is also learned by adversarial training, we further apply disentangled learning and multi-view discriminator to obtain fine-grained rewards.",
"The performance of DQN SeqPrd and WDQN SeqPrd gains received in convergence speed and final performance of our methods confirm the superiority of the hierarchical reward.",
"For WDQN agent, since first warmed up with human dialogs, the WDQN-based methods share a similar success rate (around 6%) before training and consistently converge faster than DQN-based models.",
"However, the usage of warm-up operation will mislead the model to local optimum and deteriorate the final success rate.",
"This phenomenon can be found in the last 100 frames, the performances of WDQN vanilla and WDQN offgan drop significantly.",
"Another attractive property of our method, compared with WDQN vanilla and WDQN offgan , is the variance of success rate is obviously small, which strongly supports the remarkable benefit of exploiting disentangled representation to learn prof",
"For the policy gradient-based agents, we compare our models with two other strong baselines, i.e., GAIL and AIRL, whose reward functions are updated during RL training.",
"Similar to DQN-based methods, we employ PPO algorithms to train dialog agents with different reward functions.",
"Before training a PPO agent, we perform imitation learning with human dialogs to warm-up PPO agents, achieving around 33% success rate.",
"For fair comparisons, we also pretrain the discriminator in GAIL and reward model in AIRL by feeding positive samples and negative samples from pretrain process of dialog agents.",
"As demonstrated in Fig. 3",
"(c), although AIRL rises faster than others during the first 50 frames, it converges to a worse result, compared with PPO SeqAvg .",
"An interesting observation is that PPO vanilla even performs better than AIRL.",
"This Model Acc Prec Rec F1 JSR offgan 0.79 0.84 0.76 0.80 1.39 R d 0.86 0.97 0.80 0.88 0.69 R a 0.71 0.91 0.65 0.76 0.14 R s 0.77 0.95 0.69 0.80 0.33 R SeqAvg 0.87 0.91 0.85 0.88 1.00 R SeqPrd 0.87 0.87 0.87 0.87 3.73 Table 2: The accuracy, precision, recall, F1 and JS divergence scores on test dataset with equal number of positive and negative samples.",
"may be due to the fact that adversarial learning is extremely unstable in RL.",
"Therefore, we aim to learn an off-line reward function to guide the evolution of agents, as we motivate in the introduction.",
"In the comparison between PPO offgan and PPO SeqAvg , the performance gains obtained by our model verifies the advantage of exploiting multi-level reward signals.",
"Moreover, it can be seen that, in the PPO-based RL algorithm, the performance of the agent with the reward function R SeqPrd is worse than that of R SeqAvg , but the opposite is true in the DQN and WDQN-based methods.",
"This may be caused by that the multiplicative reward ( i.e., R SeqPrd ) may cause the gradient to be very steep, which makes the training of the policy gradient-based model unstable.",
"However, in the value-based RL method, an average reward ( i.e., R SeqAvg ) might degenerate the performance, as a hierarchical reward is more general and intuitive, which has access to precise intermediate reward signals.",
"The performances of the last frame in terms of success rate , reward score and average turn are shown in Table 1, in which we could claim again that our method PPO SeqAvg outperforms all baseline models by a substantial margin.",
"To visualize the model performance and what benefits a sequential reward will bring, we view the evaluation as a binary classification and distribution distance problem.",
"we use accuracy, precision, recall, and F1 to find out how good this binary model is, and use JS divergence to evaluate the ability of the reward model to divide positive and negative distributions, the larger the better.",
"We construct a test dataset with equal numbers ( 7 , 372 ) of positive and negative samples from the test dataset.",
"All positive samples are original state-action pairs.",
"For negative samples, we fix states and randomly pick actions from those with different domains.",
"We evaluate three reward models separately in Table 2. R d is the best one among the three with the highest five scores.",
"This is pretty straightforward since domain is the first identity to divide action space into groups.",
"And for R a and R s , the JS divergence is lower, this is because some actions could have different domains with the same action-slot.",
"For example, action Train-Inform-Arrive and Hotel-Inform-Arrive have the same action-slot with different domains.",
"Thus, R a and R s will only give an ambiguous decision boundary.",
"But from a sequential view, we make a new combination of R SeqAvg and R SeqPrd , which gives good results.",
"R offgan could give the right rewards to some extent, but from Fig. 4",
"(c), there is a large intersection between fake and real distributions among three, which means it wrongly classifies fake action as right.",
"And this is the reason why its F1 score is lower.",
"Besides, this reward model is a biased model, its ratio of true negative and true positive samples is 0.89 thus it tends to give fake results.",
"For both of our model, there is little bias, R SeqPrd is 0.99 and R SeqAvg is 0.98, which benefits from sequential combination.",
"For R SeqPrd , R SeqAvg and R offgan , R SeqPrd perform the best, no matter from the view of binary classification or JS divergence.",
"And the distribution is much sharper than R SeqAvg with prediction score centering at 0 or 1. For R SeqAvg , the distribution is softer than R SeqPrd as shown in Fig. 4.",
"Although there is no exact evaluation to say how bad one action is, from the good results of PPO SeqAvg , nearly the same binary classification score with R SeqPrd as well as lower JS divergence, we could get the conclusion that it is the most accurate rewards among the three.",
"We propose a multi-level and sequential reward modeling mechanism that models expert state-action pairs in terms of domain, act, and slot.",
"Our approach combines a disentangled auto-encoder and a generator-discriminator framework to model the distribution of expert state-action pairs.",
"The learned discriminators can thereby serve as a multilevel reward estimator.",
"Experimental results show that our three-level modeling mechanism gives more accurate and explainable reward estimations and significantly boosts the performance of a variety of RL-based dialog agents, as well as accelerating the convergence speed of training.",
"This work was supported by the Tencent Jarvis Lab.",
"We thank Ziming Li, Zhanpeng Huang for the helpful discussions and insightful comments.",
"Thank Kallirroi for leading me into the field of dialogue systems."
]
| [
"abstain",
"abstain",
"objective",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"other",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"method",
"abstain",
"result",
"other",
"other",
"other"
]
|
[
"Missing sentence generation (or sentence infilling) fosters a wide range of applications in natural language generation, such as document auto-completion and meeting note expansion.",
"This task asks the model to generate intermediate missing sentences that can syntactically and semantically bridge the surrounding context.",
"Solving the sentence infilling task requires techniques in natural language processing ranging from understanding to discourse-level planning to generation.",
"In this paper, we propose a framework to decouple the challenge and address these three aspects respectively, leveraging the power of existing large-scale pre-trained models such as BERT and GPT-2.",
"We empirically demonstrate the effectiveness of our model in learning a sentence representation for generation and further generating a missing sentence that fits the context.",
"Generating a span of missing tokens in a text chunk, known as text infilling, has attracted many attentions recently (Zhu et al., 2019; Song et al., 2019; Liu et al., 2019; Ippolito et al., 2019; Joshi et al., 2020).",
"Here we study the related but somewhat different task of sentence infilling.",
"Specifically, as illustrated in Figure 1, intermediate sentences (or chunks of text) are removed from long-form text (e.g., paragraphs, documents), and the task is to generate the missing pieces that can smoothly blend into and fit the context both syntactically and semantically .",
"The generation can be either based only on context, or based on both context and side information such as constraint keywords.",
"Compared with text infilling, sentence infilling requires the model to handle inter-sentential correlation and to reason about missing semantic information.",
"Developing models for sentence infilling can potentially These authors contributed equally to this work.",
"facilitate many text generation applications.",
"Possible scenarios include, but are not limited to: document auto-completion by detecting and suggesting missing bridging sentences in the surrounding context; collaborative document writing by modifying and unifying different writing styles from multiple authors; meeting note expansion by extending a set of keywords (lexical constraints) to a full sentence, leveraging the surrounding context.",
"There are many challenges associated with this long-form sentence infilling task, which is typically a one-to-many problem in that the possible outputs can be diverse.",
"As the generated sentence should connect separate text pieces in a syntactically and semantically smooth and coherent manner, the task requires a wide range of understanding , planning , and generation techniques.",
"Large-scale pre-trained language models such as BERT (Devlin et al., 2019) and GPT-2 (Radford et al., 2019) have dramatically enhanced the understanding and generation modules.",
"However, how to integrate them in a holistic manner, and to analyze and establish the long-range dependence structure by high-level semantic planning is still challenging and yet to explore, as semantic appropriateness is usually subtler than syntactic appropriateness, which can be well characterized by autoregressive language models.",
"Several works have been done in this direction.",
"MASS (Song et al., 2019) obtains sentence representations by predicting a span of missing tokens.",
"It can be used to generate missing text, but the missing span length needs to be pre-specified.",
"Other related works (Liu et al., 2019; Joshi et al., 2020) also require knowledge of the span length as an input to their models, and thus are different from our work.",
"Text infilling (Zhu et al., 2019) sequentially generates tokens for the missing part of a sentence until an end-of-blank token is generated.",
"Its generation can be of arbitrary length.",
"By design, all these previous approaches operate at the token level, and thus arguably focus more on lexical appropriateness than the global semantics.",
"In this paper, we propose INter-SEntential Transformer (INSET), a novel approach to sentence infilling.",
"Our model first produces sentence-level semantic features that capsulate the missing high-level information.",
"Then, grounded on the predicted semantic features, the model generates the syntactic and lexical features to embody the predicted sentence.",
"Specifically, understanding , planning , and generation are handled by three modules in a synergistic manner: a BERT-based encoder to map each sentence to the latent semantic space.",
"a sentence-level semantic planner to infer the missing information that can bridge the semantics of preceding and following context.",
"a GPT-based generator (decoder) to map semantic features back to the text domain.",
"We study the task of sentence infilling, which requires the model to handle inter-sentential correlation and to predict missing semantic information.",
"This goes beyond text infilling (Zhu et al., 2019), which asks the model to fill in the missing part of a single sentence.",
"Our approach decouples understanding, planning, generation, and leverages existing large-scale pre-trained understanding and generation models (BERT, GPT-2).",
"The components of our model can be separately examined and improved with additional data.",
"Our model predicts a feature vector in the latent semantic space for the missing sentence and maps the vector to text.",
"Thus, it takes care of semantic smoothness and appropriateness.",
"Our model allows the generation to be of arbitrary length, as in (Zhu et al., 2019).",
"Compared with directly processing text, our approach significantly reduces computation time and memory usage during training, as (after pre-computing sentence features) the sequence length is the number of sentences rather than that of tokens.",
"Pre-Trained Language Model.",
"Language models pre-trained on a large corpus improve natural language understanding and generation through transferable contextualized word representations (Devlin et al., 2019; Lample et al., 2019) and models (Howard and Ruder, 2018).",
"Large transformer models (Vaswani et al., 2017) like GPT-2 (Rad-ford et al., 2019), Megatron ( https://github. com/NVIDIA/Megatron-LM ), and T5 (Raffel et al., 2019) can achieve state-of-the-art results without training on any particular language modeling benchmark.",
"(Keskar et al., 2019) proposes a conditional generation model, trained to condition on control codes that govern style, content, and other task-specific properties.",
"Different from them, our model builds sentence representations via autoencoding with a pair of BERT and GPT-2.",
"Context-Aware Text Generation.",
"There are some related works on context-aware text generation (Mikolov and Zweig, 2012; Tang et al., 2016; Mangrulkar et al., 2018).",
"Most previous works on language modeling with contextual information (Wang and Cho, 2016; Wang et al., 2018; Sor-doni et al., 2015b; Wen et al., 2015; Vinyals and Le, 2015) treat the preceding sentences as context.",
"Compared with these sequential generation tasks, our task is constrained by bidirectional context, and is more challenging.",
"Text infilling (Zhu et al., 2019) aims at filling in the missing part, given the rest of a sentence.",
"(Liu et al., 2019) proposes an iterative inference algorithm based on gradient search for text infilling.",
"For story infilling, (Ippolito et al., 2019) first predicts rare words in the missing span, and then generates text conditioned on these words.",
"SpanBERT (Joshi et al., 2020) masks random contiguous spans and (pre-)trains a language model to predict tokens in the span.",
"XL-Editor (Shih et al., 2019) adapts XL-Net (Yang et al., 2019) to text infilling and other editing tasks.",
"(Kang and Hovy, 2019) models logic connections between sentences and generates intermediate sentences grounded on inter-sentential flow. (Bhagavatula et al., 2020) formulates abductive commonsense reasoning as a natural language inference task to decide the appropriate reason that could explain the observation in one sentence given the background described by another sentence.",
"(Cheng et al., 2020) proposes a text style transfer task to translate a sentence in the context of a paragraph into the desired style.",
"These three works study generation tasks that address inter-sentential relationship, and thus may be conceptually related to our motivation.",
"Compared with (Zhu et al., 2019; Liu et al., 2019; Ippolito et al., 2019; Joshi et al., 2020; Shih et al., 2019; Kang and Hovy, 2019; Bhagavatula et al., 2020; Cheng et al., 2020), our approach is clearly different.",
"We fully exploit existing large-scale pre-trained models BERT and GPT-2 to learn smooth sentence embeddings in the latent semantic space, and then process sentence-level information in this space.",
"Hierarchical Text Generation.",
"Hierarchical text generation with high-level semantic planning has been studied in many previous works.",
"(Sor-doni et al., 2015a) presents a hierarchical recurrent encoder-decoder architecture for context-aware query suggestion.",
"(Zhang et al., 2019) proposes a framework to infer semantic features for response generation using self-supervised learning.",
"Previous works have used multi-level LSTM encoders (Yang et al., 2016; Hu et al., 2020) and hierarchical autoencoders (Li et al., 2015) to learn hierarchical representations for long text.",
"(Shen et al., 2019) uses a variational autoencoder to encode an entire paragraph into a single latent variable, from which the paragraph can be generated hierarchically.",
"In comparison, our task is to generate intermediate sentences in the surrounding context.",
"paragraphs { p ( k ) } Nk =1 .",
"Each paragraph p ( k ) = ( s ( k ) 1 , s ( k ) 2 , . . . , s ( k ) M k ) consists of M k consecutive sentences.",
"For each k , we are given a positive integer m k M k and the context ( s ( k ) 1 , s ( k ) 2 , . . . , s ( k ) m k 1 , s ( k ) m k +1 , . . . , s ( k ) M k ) , but the m k 'th sentence s ( k ) m k is missing.",
"The task is to generate a sentence s ( k ) m k in the missing position such that it fits the context.",
"For simplicity and without any confusion, we drop the index k from now on (note that M and m may depend on k ).",
"The criteria for successful generation are: The sentence s m is fluent and meaningful.",
"Inserting the generated sentence into the context, we obtain a semantically coherent paragraph ( s 1 , s 2 , . . . , s m 1 , s m , s m +1 , . . . , s M ) .",
"s m is written in the same style as contextual sentences { s j } j (cid:54) = m .",
"Since there could be multiple semantically different sentences that fit the same context well, it is not necessary for s m to be close to the ground truth s m .",
"Rather, s m is considered successful as long as it satisfies the criteria above.",
"Model Overview.",
"At a high level, our model consists of two components: a (denoising) autoencoder and a sentence-level transformer .",
"The former maps each sentence to a fixed-length feature vector in the latent semantic space, and reconstructs the sentence from the representation.",
"The latter predicts the semantic features of the missing sentence from those of contextual sentences.",
"We call our model INter-SEntential Transformer (INSET).",
"Formally, let ( E , D ) be an autoencoder, where E ( D ) is the encoder (decoder) such that E D and D E are supposed to be identity maps.",
"Let T be a sentence-level transformer with positional encoding P .",
"The transformer T takes the contextual information as input and outputs a hypothetical representation of the missing sentence.",
"Specifically, s m = D (cid:0) T ( f 1 + P (1) , f 2 + P (2) , . . . , f m 1 + P ( m 1) ,(cid:126) 0 + P ( m ) , f m +1 + P ( m + 1) , . . . , f M + P ( M ))[ m ] (cid:1) , (1) where f j = E s j is the encoding of the sentence s j , (cid:126) 0 is the zero vector representing the missing sentence, and T ( )[ m ] is output of the transformer T in the missing position m .",
"former on individual sentences.",
"Then, we precompute and save the feature vectors of all sentences.",
"While training the latter, it is not necessary to load the former.",
"This makes training more efficient.",
"Sentence Representation Learning via Denoising Autoencoding.",
"Large-scale pre-training approaches (e.g., BERT) lead to superior performance in many language understanding tasks related to sentence representation learning (Reimers and Gurevych, 2019).",
"However, the features learned by BERT (or fine-tuned on downstream tasks) cannot be directly used for generation tasks, as the masked language model objective of BERT does not enforce the reconstruction of the original sentence from the extracted features.",
"Instead of directly using BERT features, we learn sentence representations via autoencoding.",
"This naturally integrates BERT and GPT-2, and combines sentence representation learning and generation.",
"As shown in the left panel of Figure 2, we pad the [CLS] token at the beginning of each sentence s j .",
"We initialize the encoder E with BERT, and extract the output f j corresponding to the [CLS] token as the embedding of s j .",
"We initialize the decoder D with GPT-2, and feed f j as the embedding of the zeroth token.",
"Then, we have D generate a sequence of tokens in the hope that the sequence matches s j (padded with special tokens [SOS] at the beginning and [EOS] at the end).",
"To train the autoencoder, we use teacher forcing and minimize the negative log-likelihood loss by (fine-)tuning the parameters of E and D jointly.",
"An autoencoder embeds sentences into vectors in the latent space.",
"We hope that the embedding is smooth in the sense that semantically similar sentences are mapped to vectors that are close to each other.",
"In particular, interpolation between two points in the latent space should correspond to a smooth semantic transition in the text domain.",
"To this end, we use the following two tricks.",
"First, we employ a denoising autoencoder, which is known to yield a smoother embedding (Vincent et al., 2008).",
"To add noise, we randomly mask each token in s j with probability 15% by replacing the masked tokens with a special token [MASK].",
"During training, we use the noisy s j with masks as input to the encoder, and use the clean s j without masks to compute the loss function.",
"Of course, one could try more sophisticated noise-adding strategies (Lewis et al., 2019).",
"Second, we use early stopping.",
"In our experiments, we observe that as training proceeds, the validation loss of the autoencoder keeps decreasing.",
"In the absence of masks, presumably it would eventually decay to zero so that the autoencoder perfectly reconstructs every sentence.",
"However, this does not necessarily imply that the embedding is smooth.",
"On the contrary, an overtrained autoencoder often tries to remember every individual token and thus fails to achieve smoothness in the latent semantic space.",
"Moreover, it can catastrophically forget some of the information in the initial pre-trained model (GPT-2) and partially lose the power of generating fluent sentences.",
"In practice, we select a checkpoint by monitoring its validation performance on sentence interpolation.",
"Some examples of sentence interpolation are shown in Table 1.",
"sentence-level transformer T to predict the feature vector of the missing sentence from those of contextual sentences.",
"This is analogous to the task of predicting masked tokens for (pre-)training BERT (Devlin et al., 2019), but now it is at the sentence level.",
"Indeed, sentence feature vectors in T correspond to token embeddings in BERT, and sentence position ID in T corresponds to position ID in BERT.",
"We train the transformer T with the objective L SentTrans = 1 cos( f m , T ( )[ m ]) , (2) where cos( ) is the cosine similarity between the ground truth sentence feature vector f m and the prediction T ( )[ m ] in Eq.",
"(1).",
"Note that cos( ) is a good similarity measure only when its arguments are unit vectors.",
"This is guaranteed by the technical trick of fixing the parameters of the last LayerNorm of the transformers E and T , i.e., do not compute the gradients of these parameters in backpropagation.",
"Generating Sentences from Features.",
"At test time, we use the decoder D to generate the missing sentence by mapping the predicted feature vector to the text domain.",
"Note that standard generation schemes such as topk sampling, beam search, and nucleus sampling (Holtzman et al., 2020) can be used without additional modeling effort.",
"Computational Efficiency.",
"Compared with vanilla GPT-2, our model can process and analyze a document containing many sentences at the discourse level with dramatically lower time and space complexity.",
"To estimate quantitatively, suppose that a document contains N s sentences, each of which has N t tokens.",
"Then, the time complexity is reduced from O ( N 2 s N 2 t ) to O ( N 2 s + N s N 2 t ) .",
"Moreover, sentence features can be precomputed once and then reused for every epoch or even in other tasks on the same dataset.",
"If sentence features have been precomputed and are already directly available, the time complexity is further reduced to O ( N 2 s ) .",
"We further introduce a related task called sentence infilling with lexical constraints , which is the same as sentence infilling except that now we are given some keywords of the missing sentence as an additional input to hint the generation.",
"The keywords are treated as soft constraints (aka priming): The generated sentence is not directly enforced to contain the exact keywords.",
"It may contain a synonym or share some semantics with the keywords.",
"We expect that the presence of keyword constraints makes the task more difficult rather than easier, although incorporating keywords can significantly improve the BLEU score of the generation with respect to the ground truth.",
"Intuitively, keywords force the model to speculate the semantics of the ground truth sentence, and significantly reduce the number of possible solutions.",
"In the absence of keywords, the model has the freedom of completing the task according to its own way of thinking.",
"To handle keyword constraints, we introduce a new component called the constraint feature encoder to our architecture.",
"It is a transformer encoder K that maps a set S of keywords to a feature vector that lives in the same latent space of sentence embeddings.",
"We train K with knowledge distillation (Kim and Rush, 2016).",
"The teacher model is the sentence encoder E , which maps a sentence containing the keywords in S to a feature vector.",
"We use the cosine similarity loss between these two feature vectors to teach the student model K .",
"For implementation details, suppose we have two keywords w 1 and w 2 .",
"Then, the input to K is three tokens ( [CLS] , w 1 , w 2 ) .",
"We replace the zero vector in Eq.",
"(1), which represents the missing sentence, with the output of K above the [CLS] token.",
"We do not use positional encoding in K because keywords do not have order.",
"We evaluate our model on two datasets (TripAdvi-sor and Recipe).",
"We have released the source code to facilitate future research ( https://github. com/dreasysnail/INSET ).",
"Dataset and Preprocessing.",
"We conduct experiments on the TripAdvisor and Recipe datasets.",
"For the TripAdvisor dataset of hotel reviews (Wang et al., 2010), we partially follow the preprocessing in (Cho et al., 2019).",
"Our preprocessing includes, but is not limited to:",
"(i) discarding reviews containing non-English tokens;",
"(ii) removing duplicate reviews so that only one copy is retained.",
"We set the maximum number of tokens in a sentence to be 32 and the minimum number of sentences in a review to be 7 (so that the context is not too short).",
"Any review with longer sentences or having a smaller number of sentences is discarded.",
"We use the following strategy to mask sentences.",
"For a paragraph consisting of M 7 consecutive sentences, we split it into M 6 data points, each of which has exactly 7 sentences.",
"The j 'th data point spans from the j 'th to the ( j + 6) 'th sentence (in-clusive) of the paragraph, for j = 1 , 2 , . . . , M 6 .",
"We mask the middle (i.e., 4 th) sentence for each data point so that the masking rate is 1 / 7 14 .",
"3% , which is close to that ( 15% ) of BERT.",
"After preprocessing, the size of the dataset (training, validation, test) is (1108134, 62543, 533) data points.",
"Our strategy of always masking the middle sentence out of 7 sentences is not only the simplest but also without loss of generality.",
"Our model is directly applicable to the situation where we randomly mask, e.g., 3 out of 20 sentences.",
"However, the quality of human evaluation may be affected because the patience and attention of human evaluators may decrease as the context length increases.",
"For the effectiveness of human evaluation, we use the simplest strategy to mask sentences.",
"The Recipe dataset is obtained from ( https: //commoncrawl.org ), where the metadata is formatted according to Schema.org ( https:// schema.org/Recipe ).",
"We use the same preprocessing as that of the TripAdvisor dataset except that instructions with less than 5 sentences are discarded.",
"After preprocessing, the size of the dataset (training, validation, test) is (1073886, 56055, 500) data points.",
"Recipe instructions usually describe a time-ordered procedure, and thus are ideal for testing the reasoning capability of the model.",
"Evaluation Metrics.",
"Following (Galley et al., 2019; Zhang et al., 2020), we perform automatic evaluation using standard machine translation metrics, including BLEU (Papineni et al., 2002), NIST (Doddington, 2002), and METEOR (Lavie and Agarwal, 2007).",
"As a variant of BLEU, NIST weights n -gram matches by their information gain, and thus penalizes uninformative n -grams.",
"We also use Entropy (Zhang et al., 2018) and Distn (Li et al., 2016) to evaluate lexical diversity.",
"See (Galley et al., 2019) for more details.",
"BLEU, NIST, and METEOR measure the similarity between the generated sentence and the ground truth.",
"They are not ideal scores for our task because a sentence that is semantically very different from the ground truth could possibly fit the context perfectly.",
"However, it may still be helpful to compute these commonly used scores.",
"It is an important and challenging open problem to design an automatic score that faithfully measures the overall quality of the generation in our task.",
"Baseline.",
"Our baseline is the self-attention model for text infilling (Zhu et al., 2019).",
"It is a transformer language model with novel positional encoding.",
"The traditional approach of encoding the absolute position of each token is not directly applicable to our task because we do not know in advance the absolute positions of contextual tokens after the missing sentence.",
"To resolve this issue, (Zhu et al., 2019) divides the text into segments.",
"In the case of only one masked sentence, the first (third) segment consists of contextual tokens before (after) the mask, and the second corresponds to the mask.",
"Then, each token is indexed by its segment ID and its position ID within the segment.",
"The missing tokens are sequentially generated from these IDs and the current surrounding context.",
"Training the baseline model on our dataset, we use the same set of hyperparameters as in the original reference except that the batch size is set to 250 (it is 400 in (Zhu et al., 2019)).",
"This avoids out-of-memory errors.",
"Note that we are handling much longer sequences (usually > 100 tokens) than (Zhu et al., 2019), in which the maximum number of tokens in a sequence is only 16 .",
"The baseline model is trained for a sufficient number ( 30 ) of epochs until the validation (negative log-likelihood) loss and perplexity clearly saturate.",
"We report the results of the checkpoint with the smallest validation loss and perplexity.",
"Note that we observe that other checkpoints in the saturation regime behave very similarly on the test set.",
"Keyword Extraction.",
"In the task of sentence infilling with lexical constraints, we need to extract keywords from the masked sentence.",
"Keyword extraction is a classical problem in information retrieval.",
"Standard methods include, but are not limited to, tf-idf (term frequencyinverse document frequency) (Ramos, 2003).",
"We have tried tf-idf, but it does not work well for the TripAdvisor dataset of hotel reviews.",
"One reason is that this dataset has quite a few typos, and unfortunately tf-idf favors them because typos occur less frequently than normal words.",
"This issue can be resolved by manually filtering out all typos.",
"After the fix, however, we observe that the quality of extracted keywords remains unsatisfactory.",
"We use the following strategy to extract keywords.",
"We first define a list of stop words.",
"To this end, we use the stop word list from NLTK (Bird et al., 2009) and manually add a number of words (e.g., hotel) that are not very informative for the particular dataset of hotel reviews.",
"For each sentence, we select non-stop words that appear most frequently in the entire dataset.",
"We usually select two keywords per sentence, but occasionally select one or even zero if few words remain after filtering out stop words and typos.",
"We observe that the keywords extracted with this strategy can pivot the gist of most sentences well.",
"Model Size and Hyperparameters.",
"Our architecture has several components.",
"The encoder E and the sentence-level transformer T have the same size as BERTBASE .",
"The decoder D has the same size as GPT-2 (117M).",
"In the presence of lexical constraints, the constraint feature encoder K has the same size as BERTBASE .",
"During decoding, we use beam search with beam size 5 .",
"Sentence Representation Learning.",
"We first qualitatively evaluate the smoothness of the latent-space sentence embeddings learned via denoising autoencoding.",
"Table 1 shows two examples of sentence interpolation on the TripAdvisor dataset.",
"In each example, the first and last sentences are inputs by hand, and the 3 intermediate ones are interpolations generated by our (denoising) autoencoder.",
"We observe that the interpolations not only combine words from input sentences, but are readable, meaningful, and show a smooth semantic transition from the first to the last sentence.",
"We speculate that the power of generating fluent and semantically coherent sentence interpolations is derived from BERT and GPT-2.",
"Inherited from these large-scale pre-trained models, the latent-space sentence embedding is reasonably smooth as our sentence interpolation results show.",
"Automatic Evaluation.",
"Table 2 shows the BLEU, NIST, METEOR, Entropy, Distn scores, and the average length (number of words) of the generated sentences.",
"For the TripAdvisor dataset, we also present results in the presence of keyword constrains.",
"Table 2 compares the baseline (Zhu et al., 2019), our results, and the ground truth.",
"In the absence of keyword constraints, INSET outperforms the baseline in terms of all scores on both datasets.",
"This indicates that our results are semantically closer example 1 A The pool area was nice and sunbathing was great.",
"In terms of the average generation length, our results are much closer to the ground truth than the baseline is.",
"Table 2 also presents two ablation studies.",
"The first shows the performance decrease with less context.",
"Recall that each data point in the TripAdvisor dataset has 6 contextual sentences (full context).",
"We train INSET on the same set of data points but truncate the context to 4 sentences (less context).",
"The second ablation study shows the effect of context in the presence of keywords.",
"We compare two models.",
"The first (INSET w/ context) is the model described in Subsection 3.3.",
"Its generation is based on both keywords and context.",
"The second model (INSET w/o context) is D K , which directly decodes the output of the constraint feature encoder K using the decoder D .",
"Its generation is only based on keywords but not context.",
"We observe that the scores of the first model are higher than those of the second.",
"Both ablation studies show that our model can make full use of context to improve the generation.",
"Human Evaluation.",
"We performed human evaluation of our method on the TripAdvisor dataset.",
"We used a crowd evaluation platform to compare two systems and assess their fluency, informativeness, and relevance to the surrounding context (co-herence) on 500 random samples from the test set.",
"Following recommended best practices, each sample was evaluated by five judges.",
"We performed simple spam detection by excluding judges that were too fast or performed too low on a gold set.",
"To avoid bias, we randomized the position of each system while asking judges to compare our systems (with and without keywords) with the ground truth Dataset NIST BLEU MET-Ent.",
"Table 3 shows the human evaluation results.",
"The judges strongly prefer our results (without keywords) to the baseline in all aspects: coherence, fluency, and informativeness.",
"They also strongly prefer the ground truth to our results.",
"Moreover, our results with keywords and context are compared with three other systems:",
"(i) the ground truth;",
"(ii) our results with keywords but not context;",
"(iii) our results with context but not keywords.",
"The second comparison shows that in the presence of keywords, our model can use context to improve all aspects of the generation.",
"The third comparison shows that the presence of keywords reduces the performance of our model, probably because keywords are constraints that the model must take care of.",
"Generated Examples.",
"To qualitatively demonstrate the effectiveness of our model, Table 4 shows some examples from the TripAdvisor and Recipe datasets.",
"We observe that the baseline (Zhu et al., 2019) tends to generate generic sentences, while our results (either with or without keywords) are more informative and can fit the surrounding context reasonably well.",
"Table 5 shows examples generated by our model in the same context but with different keywords.",
"Our model can extend keywords to a full sentence, adapting to the context.",
"More examples generated by our model on both datasets are given in Appendix A. 5 Conclusions and Outlook We study the task of sentence infilling, which is analogous to the masked language modeling task for (pre-)training BERT, but now it is at the sen-example from the TripAdvisor dataset example from the TripAdvisor dataset example from the Recipe dataset precedingcontext It was such a pleasure to see somthing new every night.",
"preceding context My room was a very good size.",
"Tiled floors and woodchip painted walls.",
"The tv did not work so what.",
"following context Great places to eat close by and very reasonable.",
"No air con -so summer could be sticky.",
"My concern is the left luggage room not supervised.",
"human oracle The location is terrific beside Sevilla metro stn so only 2 to get by metro all the way to airport.",
"+ (walk, shopping) Walking distance to shopping mall and Circular Quay.",
"+ (internet, $) Internet cost $20.00 per day.",
"tence level.",
"Sentence infilling requires the model to handle long-range inter-sentential correlation and to process high-level semantic information.",
"It is complementary to (token-level) masked language modeling, which focuses more on syntactic appropriateness and short-range correlation.",
"We propose a framework called INSET to decouple three aspects of the task (understanding, planning, and generation) and address them in a unified manner.",
"We demonstrate the effectiveness of our approach using automatic and human evaluation.",
"Our approach can be modified or extended in some ways.",
"(i) We use a denoising autoencoder to obtain sentence embeddings.",
"One can try to use a variational autoencoder (Kingma and Welling, 2014) instead.",
"A large-scale pre-trained variational autoencoder (Li et al., 2020) could possibly improve the smoothness of sentence embeddings.",
"(ii) Our model predicts a feature vector for the missing sentence.",
"This vector can be fed into and serve as a guide to token-level models including the baseline (Zhu et al., 2019).",
"Since sentence infilling is analogous to masked language modeling, we expect that it can also be used as a pre-training task.",
"For example, in machine translation of long texts, it is often the case that sentences are translated independently from each other.",
"This sometimes leads to incoherence or even inconsistency between the translated sentences.",
"A post-editor to fix the issue (Voita et al., 2019) should be able to understand inter-sentential relationship and to generate fluent sentences in the surrounding context, both of which can be learned from sentence infilling.",
"Note.",
"After this paper was posted on arXiv, some related works appeared.",
"(Shen et al., 2020) proposes Blank Language Model for text infilling and other tasks.",
"(Donahue et al., 2020) trains (fine-tunes) a language model (GPT-2) for text and sentence infilling.",
"(Li et al., 2020) pre-trains a large-scale variational autoencoder with a pair of BERT and GPT-2.",
"(Ippolito et al., 2020) uses a sentence-level language model, which operates on sentence embeddings obtained from BERT, to predict story endings.",
"We thank Bill Dolan, Chris Quirk, and Jingjing Liu for helpful discussions and suggestions."
]
| [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"method",
"objective",
"method",
"result",
"method",
"objective",
"abstain",
"method",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"other",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"result",
"result",
"result",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other"
]
|
[
"Speech Act Classification determining the communicative intent of an utterance has been investigated widely over the years as a standalone task.",
"This holds true for discussion in any fora including social media platform such as Twitter.",
"But the emotional state of the tweeter which has a considerable effect on the communication has not received the attention it deserves.",
"Closely related to emotion is sentiment, and understanding of one helps understand the other.",
"In this work, we firstly create a new multi-modal, emotion-TA ('TA' means tweet act, i.e., speech act in Twitter) dataset called EmoTA collected from open-source Twitter dataset.",
"We propose a Dyadic Attention Mechanism (DAM) based multi-modal, adversarial multi-tasking framework.",
"DAM incorporates intra-modal and inter-modal attention to fuse multiple modalities and learns generalized features across all the tasks.",
"Experimental results indicate that the proposed framework boosts the performance of the primary task, i.e., TA classification (TAC) by benefitting from the two secondary tasks, i.e., Sentiment and Emotion Analysis compared to its uni-modal and single task TAC (tweet act classification) variants.",
"Identification of speech acts is one of the preliminary means of determining the communicative intent or pragmatics of a speaker (for example, statement, request, question etc.).",
"This is true for dialogue system, speech transcription, social media such as Twitter, MySpace etc.",
"Twitter is one of the leading micro-blogging services.",
"By 2019, 330 million users were active monthly and 500 million tweets were sent per day 1 .",
"Identification of tweet acts (TAsspeech acts in Twitter) is highly beneficial for Twitter as well as tweeters.",
"For Twitter, it helps decipher a particular subject in terms 1 https://www.omnicoreagency.com/twitter-statistics/ of speech acts and discrepancy identification.",
"It also helps in social media monitoring by analysing topic alteration or spamming.",
"It assists the followers in monitoring and scanning the subject with the most advantageous speech acts based on their needs.",
"This helps reduce their search space and encourages them to obtain useful information from out of millions of tweets.",
"It gives the tweeter a greater sense of the content, mood and trend.",
"A person's emotional state and sentiment greatly impacts its intended content (Barrett et al., 1993).",
"Often sentiment and emotion are treated as two different problems (Do et al., 2019), (Soleymani et al., 2017), (Albanie et al., 2018), (Hossain and Muhammad, 2019), (Majumder et al., 2019).",
"However, sentiment and emotion are are closely related.",
"For example, emotions such as happy and joy are inherently related to a positive sentiment.",
"But emotion is much more nuanced and fine-grained compared to sentiment (Kumar et al., 2019).",
"Emotion along with sentiment provides better understanding of the state of mind of the tweeter.",
"For example, a question or statement is associated with anticipation .",
"An opinion is many times associated with anger or disgust .",
"The close association between emotion and sentiment motivates considering tweeter's sentiment along with emotion while deciphering the tweet acts.",
"For expressive TAs such as ex-pression\", request\", threat\" etc., the tweeter's sentiment and emotion can aid in classifying true communicative intent and vice-versa.",
"Additionally, multi-modal inputs, i.e., the combination of text and other nonverbal cues (emojis in tweets) (Felbo et al., 2017) help create reliable classification models aiding the identification of emotional state and sentiment of the tweeter which in turn help in determining correct TAs.",
"In this paper, we leverage the relationships as delineated above to predict TAs of tweets in a multimodal framework.",
"In this multi-task framework, TAC is treated as the primary task and Sentiment Analysis (SA) and Emotion Recognition (ER) as auxiliary (i.e., secondary) tasks.",
"Contributions of this paper are as follows : i.",
"We create a new dataset called EmoTA consisting of tweets with high-quality annotations of TAs, including emotionally aided and multi-modal cues; ii.",
"We establish the need for considering the sentiment and emotional state of the tweeter while identifying TAs.",
"iii.",
"We propose a Dyadic Attention Mechanism (DAM) based multi-task adversarial learning framework for multi-modal TAC, SA and ER.",
"In DAM, we incorporate intra-modal and inter-modal attention to integrate information across multiple modalities and learn generalized features across multiple tasks; iv.",
"We illustrate performance gains by jointly optimizing TAC, SA and ER.",
"Multi-modal and multi-task TAC performs significantly better than its uni-modal and single task TAC variants.",
"2 Related Works There exist plenty of works which address the task of TAC as a standalone problem.",
"In (Zhang et al., 2011), (Vosoughi and Roy, 2016), authors proposed Machine Learning based approaches for TAC namely Support Vector Machines (SVM), Logistic Regression etc.",
"In (Saha et al., 2020a), authors proposed a first ever public dataset for the identification of speech acts in Twitter followed by a capsule based network built on top of BERT for TAC.",
"In (Vosoughi, 2015), authors highlighted the importance of identification of tweet acts and established it to be one of the elementary steps for detection of rumours in Twitter.",
"In (Saha et al., 2020c), authors proposed an attention based model built on top of the Transformer for predicting TAs.",
"In (Saha et al., 2020a), authors proposed a capsule based network built on top of BERT for TAC.",
"All these works utilized only the textual modality to identify TAs without any sentiment or emotional correlation of the tweeter.",
"In (Cerisara et al., 2018), authors proposed a LSTM based study for jointly optimizing SA and TAC in a decentralized social media platform called Mastodon.",
"However, they modelled their task as a multi-party conversation pretty different in essence to that of Twitter analysis.",
"In (Jeong et al., 2009), authors presented a semi-supervised approach to identify speech acts in emails and different forums.",
"These works, however, use datasets that comprise of face-to-face or telephone data that can not directly aid in advancing work on endless data in electronic mode such as micro-blogging networks, instant-messaging, etc.",
"Apart from these, identification of speech acts has been studied extensively for dialogue conversations starting from early 2000's with (Stolcke et al., 2000) being one of the benchmark works where the authors presented varieties of approaches such as Hidden Markov Models, Neural Networks and Decision Trees to identify dialogue acts on a benchmark dialogue data known as the Switchboard (SWBD) (Godfrey et al., 1992) dataset.",
"In (Saha et al., 2021), authors studied the role of emotion in identifying dialogue acts for a dyadic conversation by considering thee textual and the audio modality of the utterances in the conversation.",
"In (Saha et al., 2020b), authors proposed studying the role of emotion in determining dialogue acts on a dyadic and multi-party conversational dataset in a multi-modal framework (incorporating text, audio and video).",
"However, tweets are unstructured and noisy communications with spelling mistakes, random coinages with limitations in expression because of character constraint per tweet.",
"This makes it very different from face-to-face or other conversations.",
"Here, we discuss the details of the newly created dataset, EmoTA .",
"To begin with, we scanned the literature for the latest SA and ER dataset for Twitter in order to gather potentially emotionally rich tweets to explore its impact on TAC.",
"Initially, we came across several SA and ER datasets for Twitter such as (Oleri and Karagoz, 2016), (Mohammad and Kiritchenko, 2018), SemEval-2018 (Mohammad et al., 2018), BTD (Wang et al., 2012), TEC (Mohammad, 2012), CBET (Shahraki and Zaiane, 2017), STS-Gold (Mo-hammad and Turney, 2013), STS (Go et al., 2009), SS-Twitter (Thelwall et al., 2012) etc.",
"However, we chose to use SemEval-2018 dataset for further investigation of our task at hand.",
"The reason behind this choice was that most of the ER datasets were annotated with only six Eckman's (Ekman, 1999) or eight Plutchik's (Plutchik, 1980) emotion categories.",
"Whereas SemEval-2018 dataset contains tweets annotated with multi-label 11 emotion categories which aids the diversity of the problem statement.",
"Intuitively, it was indeed possible to go the other way round and search for Twitter dataset annotated with TAs such as (Zhang et al., 2011), Tweet TA Emotion Sentiment And it pisses me off more they killed people who surrendered.",
"(Vosoughi and Roy, 2016), (Saha et al., 2020a) etc.",
"However, the tweets in these datasets were devoid of nonverbal cues such as emojis which are quite excessively used in Twitter.",
"To the best of our knowledge, we were unaware of any sizable and open sourced Twitter dataset annotated for its TA and emotion labels.",
"Hence, the SemEval-2018 dataset has been manually annotated for its TA categories.",
"Unlike dialogic conversations, there isn't a standard TA tag-set available for annotating tweets.",
"However, we made use of 7 TA categories of (Saha et al., 2020a) for annotating SemEval-2018 dataset as opposed to 5 and 6 TA categories of (Zhang et al., 2011) and (Vosoughi and Roy, 2016), respectively.",
"The 7 TA tags are State-ment (sta), Expression (exp), Question (que), Request (req), Suggestion (sug), Threat (tht) and Others (oth).",
"For the current work, we selected a subset of SemEval-2018 dataset amounting to 6810 tweets to create EmoTA dataset.",
"Three annotators who were graduate in English linguistics were accredited to annotate the tweets with the appropriate TA tags.",
"They were asked to annotate these tweets individually by only viewing the tweet available without the information of the pre-annotated emotion tags.",
"This was done so as to assure that the dataset does not get biased by specific TA-emotion pairs.",
"The conflicting annotations were resolved through discussions and mutual agreements.",
"The inter-annotator score over 80% was considered as reliable agreement.",
"It was determined based on the count that for a given tweet more than two annotators agreed on a particular tag.",
"For annotating the dataset with sentiment labels, we followed a semi-supervised approach instead of manual annotation which is cost intensive.",
"We used the IBM Watson Sentiment Classifier 2 , an open-sourced API readily available for obtaining silver standard sentiment label of the tweets categorized into 3 tags namely Positive, Negative and Neutral.",
"The EmoTA dataset 3 now comprises of 6810 tweets with the corresponding gold standard TA and multi-label emotion tags.",
"Each of the tweet contains its Tweet ID and two modalities: text and emoji.",
"Few sample tweets along with the corresponding TA, sentiment and emotion labels from the proposed dataset are shown in Table 1.",
"Distributions of TA, 2 https://cloud.ibm.com/apidocs/natural-language-understanding#sentiment 3 The dataset with its TA and emotion tags will be made publicly available to the research community.",
"Below, we analyze using some samples from the dataset that require sentiment-emotion aided and multi-modal reasoning.",
"Role of Sentiment and Emotion.",
"In Figure 3b, we demonstrate using two examples from the dataset to establish our hypothesis that sentiment and emotional states of the tweeter can aid the identification of TAs.",
"In the first instance, the tweeter questions about the impending doom supposedly because of a pessimistic expectation arising due to the negative sentiment.",
"Similarly, in the second instance, because of a joyous emotion emerging due to positive sentiment, the tweeter shares an optimistic suggestion with the readers.",
"The above examples highlight the need for incorporating these additional user behavior, i.e., sentiment and emotion while reasoning about TAs.",
"Thus, stressing the requirement of addressing such synergy amongst TAC, SA and ER.",
"Role of Multi-modality.",
"In Figure 3a, we present two examples from the dataset to highlight the importance of including other nonverbal features such as emoji present in the tweet along with the text for several tweet analysis tasks.",
"In the first example tweet, the text represents an overall negative sentiment with emotion such as anger and disgust.",
"However, the presence of an emoji face with tears of joy gives it an emotion of joy along with the other emotions.",
"Similarly, in the second example tweet, the text represents the emotional state of the tweeter as sad, whereas the ok, celebration and heart emojis depict the feeling of joy.",
"These instances show that the presence of complementary information in the form of emojis aids the process of any twitter analysis task including TAC.",
"The proposed multi-tasking, multi-modal approach and implementation details are outlined in this section.",
"Textual Features.",
"To extract textual features of a tweet U having n u number of words, the representation of each of the words, w 1 , ..., w u , where w i R d u and w i 's are obtained from BERT (De-vlin et al., 2019) which is a multi-layered attention aided bidirectional Transformer Encoder model based on the original Transformer model (Vaswani et al., 2017) where d u = 768 .",
"Emoji Features.",
"To extract emoji features from a tweet, we use emoji , a python based library for eliciting the pictorial image of an emoji (primarily that of a face, object or symbols).",
"A total of 1816 kind of emojis are available along with its different types.",
"We then use emoji2vec (Eisner et al., 2016), which provides d v = 300 dimensional vector representation for each of the emojis present in the tweet.",
"Let's say a tweet contains n v number of emoji.",
"Thus, we obtain the final emoji representation V for a tweet as V R n v d v .",
"The proposed network consists of four main components :",
"(i) Modality Encoders (ME) produces respective modality encodings by taking as input the uni-modal features extracted above,",
"(ii) Dyadic Attention Mechanism (DAM) that comprises dual attention mechanisms such as intra-modal and inter-modal attentions,",
"(iii) Adversarial Loss to make the feature spaces of task-specific and shared layers of each task mutually exclusive,",
"(iv) Classification Layer that contains output channels for the three tasks at hand (TAC, SA and ER) to learn generalized representations across all the tasks.",
"Text and Emoji Modalities.",
"The features U and V obtained from each of the modalities corresponding to a tweet (discussed above) are then passed through two discrete Bi-directional LSTMs (Bi-LSTMs) (Hochreiter and Schmidhuber, 1997) to sequentially encode these representations and learn complementary semantic dependency based features into hidden states from these modalities.",
"In case of textual modality (say), the final hidden state matrix of a tweet is obtained as H u R n u 2 d l .",
"d l represents the number of hidden units in each LSTM and n u is the sequence length.",
"In the similar way, a representation of corresponding emoji modality encoding as H v R n v 2 d l is obtained.",
"The number of representations from modality encoders vary depending on the variant of the multitask learning framework used (e.g., fully shared (FS) or shared-private model (SP)).",
"In a FS variant, two representations are obtained one for text and another for emoji cumulatively for optimizing all the three tasks.",
"However, for a SP model, six encoding representations are obtained.",
"Three for text and the remaining for emoji forming a pair of text-emoji representations for each of the three tasks.",
"We use a similar concept as in (Vaswani et al., 2017), where the authors proposed to compute",
"Thus, the IA scores for individual modalities are calculated as : IA j = softmax ( Q j KT j ) V j (1) 4 Subscript 1 , 2 and 3 represent TAC, ER and SA task, respectively.",
"attention as mapping a query and a set of key-value pairs to an output.",
"So, the representations obtained from the modality encoders above are passed through three fully-connected layers each termed as queries and keys of dimension d k = d f and values of dimension d v = d f .",
"For a FS model, we have two triplets of ( Q, K, V ) as : ( Q u , K u , V u ) and ( Q v , K v , V v ) .",
"Similarly for a SP model, we have six such triplets as : ( Q u 1 , K u 1 , V u 1 ) , ( Q v 1 , K v 1 , V v 1 ) , ( Q u 2 , K u 2 , V u 2 ) , ( Q v 2 , K v 2 , V v 2 ) , ( Q u 3 , K u 3 , V u 3 ) , ( Q v 3 , K v 3 , V v 3 ) where pair of two triplets are from the textual and emoji modality encoders for each of the tasks 4 .",
"These triplets are then used to compute attention values for different purposes in various combinations which include intra attention and inter-modal attention.",
"Intra-modal Attention.",
"We compute intra-modal attention (IA) for all these individual modalities in order to learn the interdependence between the current words and the preceding part of the tweet.",
"In a way, we aim to relate different positions of a single sequence to estimate a final representation of the same sequence for individual modalities (Vaswani et al., 2017).",
"where IA R n u d f for IA u , IA R n v d f for IA v for FS model and six such IA scores for SP model.",
"Inter-modal Attention.",
"The IA scores obtained above are then used to compute inter-modal attention (IRA).",
"We re-iterate the same process (ex-plained above) to now form triplets of ( Q, K, V ) for these IA scores and then compute IRA scores amongst triplets of all IA scores by computing the matrix multiplication of combination of queries and keys of different IA modality scores using Equation 1.",
"In this manner, we obtain one IRA score as IRA uv R n u d f for FS variant and three IRA scores for SP model as IRA uv 1 , IRA uv 2 and IRA uv 3 .",
"This is done to distinguish important contributions between various modalities to achieve optimal representation of a tweet.",
"Attention Fusion.",
"Next, we concatenate each of these computed IA and IRA vectors as : C = concat ( IRA uv , IA u , IA v ) , for FS (2) C 1 = concat ( IRA uv 1 , IA u 1 , IA v 1 ) , for SP (3) C 2 = concat ( IRA uv 2 , IA u 2 , IA v 2 ) , for SP (4) C 3 = concat ( IRA uv 3 , IA u 3 , IA v 3 ) , for SP (5) Next, we obtain mean of these three different concatenated attention vectors for the SP variant or directly use the obtained C attention vector for the FS variant to obtain the final representation of a tweet.",
"M = mean ( C 1 , C 2 , C 3 ) (6) Shared Layer.",
"Additionally, for the SP model, other than having task-specific layers, we allow a shared layer to learn task invariant features.",
"Here, the shared layer is in the form of a fully-connected layer of dimension d f .",
"The inputs to the shared layer are the hidden representations of three IRA vectors : IRA uv 1 , IRA uv 2 and IRA uv 3 .",
"Thus for a given tweet, the loss of the shared layer is minimized if the model correctly classifies the tasks of each of the tweets in the input.",
"This helps learn domain invariant feature space for different tasks.",
"Adversarial Loss.",
"The goal of this adversarial loss function is to tune the weights of the shared layer so that it learns a representation that misleads the task discriminator.",
"The adversarial loss l adv , aims to make the feature space of shared and task-specific layers to be mutually exclusive (Liu et al., 2017).",
"We follow the similar strategy as that of (Liu et al., 2017), where a task discriminator D (say) maps the shared feature to its original task.",
"Thus, on a correct prediction when the loss at the shared layer decreases, the adversarial loss increases and vice-versa.",
"Alternatively, the shared layer is tuned to work in an adversarial way, thereby prohibiting the discriminator to predict one of the three tasks.",
"The adversarial loss is computed as : l adv = min F (max D ( N (cid:88) n =1 K (cid:88) k =1 d nk log[ D ( F ( x nk ))])) (7) where d nk represents the true label amongst the type of the tasks, N , and x nk is the k th example for task n .",
"The min-max optimization problem is addressed by the gradient reversal layer (Ganin and Lempitsky, 2015).",
"The final representation of the tweet obtained from the DAM module is shared across three channels pertaining to the three tasks, i.e., TAC, SA and ER (for FS model) and three DAM representations for three individual tasks are subjected to individual output layer (for SP model).",
"The task-specific loss ( l t ), shared loss ( l s ) and adversarial loss ( l adv ) are used as l f = l t + l s + l adv , for SP model (8) l f = l s + l adv , for FS model (9) where and are hyper-parameters.",
"Hyper-parameters.",
"80% of the tweets of the EmoTA dataset were used for training and the remaining 20% were used for testing the models.",
"The same training and testing data were used for all the experiments in order to ensure fair comparison of models.",
"To encode different modalities, a Bi-LSTM layer with 100 memory cells was used.",
"Dense layers of dimensions 100 were used for d f .",
"The three channels contain 7, 3 and 11 output neurons, for TA, sentiment and emotion tags, respectively.",
"Categorical crossentropy loss is used for TA and sentiment channels and Binary crossentropy loss function is used for emotion channel.",
"A learning rate of 0.01 and Adam optimizer were used in the final experimental setting.",
"All these values of the parameters were selected after a careful sensitivity analysis.",
"Pre-processing.",
"We employ NLTK based Tweet-Tokenizer to tokenize tweets.",
"Urls were removed.",
"User mentions were replaced by < user > token.",
"Numbers occurring in the tweet were replaced by Model TAC+SA TAC+ER TAC+SA+ER Five-Class Seven-Class Five-Class Seven-Class Five-Class Seven-Class Text Text+Emoji Text Text+Emoji Text Text+Emoji Text Text+Emoji Text Text+Emoji Text Text+Emoji Acc.",
"F1 Acc.",
"F1 Acc.",
"F1 Acc.",
"F1 Acc.",
"F1 Acc.",
"F1 Acc.",
"F1 Acc.",
"F1 Acc.",
"F1 Acc.",
"F1 Acc.",
"F1 Acc.",
"F1 FS 72.06 69.87 74.73 72.02 62.25 59.66 66.85 64.35 73.72 71.05 76.60 74.32 63.58 61.00 68.73 66.20 78.01 75.85 78.16 76.01 71.29 68.85 75.62 73.20 FS+Adv 73.92 71.05 75.61 73.32 63.67 61.27 69.54 67.03 75.57 73.05 77.35 75.00 65.11 62.80 71.24 69.02 80.01 77.59 81.34 79.08 72.90 70.51 76.21 73.95 SP 73.41 70.91 76.81 74.52 62.71 60.25 67.62 65.28 75.05 72.85 77.12 74.93 64.63 62.35 69.30 67.02 78.41 76.00 80.68 78.28 72.02 69.90 76.50 74.33 SP+Adv (withoutDAM) 74.73 72.06 75.86 73.33 64.13 61.75 70.32 68.04 76.11 73.80 77.57 75.20 65.80 63.16 71.86 69.60 80.32 78.00 81.49 79.14 73.24 70.90 77.60 75.28 SP+Adv(Glove) 73.82 71.22 77.27 75.00 66.71 64.46 69.94 67.61 75.61 73.28 78.42 76.05 68.81 66.36 72.26 69.83 79.35 77.15 81.79 79.46 73.31 70.90 78.17 76.00 SP+Adv(onlyIA) 76.21 73.85 78.62 76.35 69.73 67.30 71.75 69.50 77.64 75.21 80.68 78.37 71.07 68.95 73.05 71.00 81.64 79.27 83.04 81.16 75.62 73.35 79.95 77.62 SP+Adv(onlyIRA) -78.75 76.30 -72.17 70.05 -80.82 78.55 -73.59 71.29 -83.49 81.15 -80.10 78.02 SP+Adv(withDAM) 76.21 73.85 79.37 77.01 69.73 67.30 72.90 70.63 77.64 75.21 80.97 78.70 71.07 68.95 74.08 72.00 81.64 79.27 84.08 81.85 75.62 73.35 80.32 78.16 Table 2: Results of all the baselines and the proposed multi-task models in terms of accuracy and weighted F1-score.",
"< number > token.",
"Ekphrasis (Baziotis et al., 2017) was used to extract hashtags by segmenting long string into its constituent words.",
"All the characters of the tweet were lower-cased.",
"Since the dataset is under-represented for most of the TA tags, we over-sample 80% of the tweets used for training as : the mediocrely represented tags (e.g., sug, que and oth) are over-sampled to be equally represented as the most represented tags (e.g., sta and exp).",
"Similarly, the highly under-represented classes (e.g., req and tht) are over-sampled to be equally represented as the mediocrely represented tags in the EmoTA dataset.",
"All the results reported below are on the 20% test data without any over-sampling.",
"A series of experiments were conducted for evaluating the proposed approach.",
"Experiments were conducted for single task and several combinations of multi-task framework with TAC being the pivotal task along with varying modalities.",
"A thorough ablation study is performed to analyze the importance of each of the attention mechanisms of the proposed architectural framework along with several variations of multi-task learning (e.g., FS, SP etc.).",
"Note that we aim to enhance the performance Figure 5: The visualization of the learned weights for a tweet from IA u layeru 1 : I lost a couple niggas I want revenge so put me in coach.\" for single task TAC (baseline), multi-task TAC+SA+ER (proposed) models Model SA & ER Five-Class Seven-Class Text Text+Emoji Text Text+Emoji Acc. F1 Acc. F1 Acc. F1 Acc. F1 Single Task SA 87.26 86.05 88.52 87.20 88.85 87.30 90.10 89.00 Single Task ER 81.57 60.77 84.63 64.32 80.07 73.51 81.58 76.63 SA + TAC 89.31 87.85 90.74 89.09 89.60 88.75 91.55 90.35 ER + TAC 83.52 65.86 86.09 67.02 81.37 75.21 84.21 78.30 SA + ER (for SA) 92.30 91.06 93.02 92.00 90.33 88.65 92.73 90.37 SA + ER (for ER) 84.61 68.77 87.37 70.19 82.72 70.00 85.30 72.04 SA + ER + TAC (for SA) 92.06 91.13 93.19 92.38 92.49 90.53 93.68 91.82 SA + ER + TAC (for ER) 85.39 70.04 88.31 72.77 83.26 79.66 86.01 81.05 Table 4: Results of the proposed model for the single and multi-task SA and ER of TAC with the help of other two auxiliary tasks. Following this, we report results and analysis with TAC strictly being the pivotal task in all the task combinations. Since, the dataset is unbalanced for all the task categories, we report results for different dimensions of TAC in the following set-up: Five-class Classification : This includes the top 5 highly occurring TA tags namely sta, exp, que, sug and oth. Seven-class Classification : This includes all the 7 categories of TAs used in the annotation process. Table 3 and 2 illustrate the results of single task TAC and varying combinations of multi-task proposed models for different set-up (as mentioned Tweet True TAC TAC+SA TAC+ER TAC+SA+ER @BreezyWeekes hey breezy, you wanna give me some of that coffee you posted on your snap?? please req exp req req req We're going to get City in the next round for a revenge tht sta exp tht tht @voguemagazine, did you not learn from @FreePeople 's viral insult to ballet? Stop trying to wrongfully stick models into pointe shoes sug que exp exp sug I wonder if Corey will vote for Nicole?? #snacole #bb18 #paulsgonnawin #finale #halfamill que que que que que Table 5: Sample tweets from the EmoTA dataset with its corresponding ground truth and predicted labels for different single and multi-task models Model Acc. F1 JointDAS (TAC + SA) (Cerisara et al., 2018) 59.05 57.60 CNN-SVM (TAC) (Saha et al., 2019) 61.32 59.75 Transformer (TAC) (Saha et al., 2020c) 65.46 63.65 Bert-Caps (TAC) (Saha et al., 2020a) 67.10 65.00 Proposed (TAC) 68.57 66.15 Table 6: Comparative Analysis with the state of the art models above). As evident, the addition of non-verbal cues in the form of emojis improves the uni-modal textual baseline consistently. This improvement implies that the proposed architecture utilizes the interaction among the input modalities very effectively. This highlights the importance of incorporating multi-modal features for different Twitter analysis tasks. We also report result for utilizing emoji as textual feature instead of treating it as a different modality in the single task TAC framework. Also, the five-class set-up gave better results than the seven-class set up. This is pretty obvious, as with 5-class set-up, the model needs to distinguish and identify lesser fine-grained features compared to the 7-class set-up. Additionally, the underrepresentation of two tags in the EmoTA dataset for the 7-class set-up also effects its performance. 
As seen in Table 2, the multi-task framework with all three tasks (i.e., TAC + SA + ER) consistently gave better results compared to single task TAC. Among the bi-task variants, TAC+SA shows a smaller improvement over single task TAC than TAC+ER does. This is rather intuitive, as sentiment alone is sometimes unable to convey complete information about the tweeter's state of mind. E.g., a negative sentiment can occur because of various emotions such as disgust, fear, sadness, etc.; similarly, a positive sentiment can arise from emotions such as happiness, surprise, etc. Thus, with sentiment alone, these fine differences in the state of mind sometimes cannot be completely determined and conveyed. To illustrate this, in Figure 5 we provide a visualization of the learned weights of a tweet from the IA_u layer (as this layer contains word-wise attention scores). For this particular tweet, the true TA label is tht. With the multi-task framework, the importance of warning-bearing words such as \"lost\" and \"revenge\" is learnt well, compared to the single task TAC, where attention is laid on expression-bearing words such as \"put me\". Additionally, in Table 3 we also report results for cases where sentiment and emotion were directly used as features in the single task TAC models, instead of deploying a multi-task based approach. As stated above, we treat SA and ER as auxiliary tasks aiding the primary task, i.e., TAC. Nevertheless, for further investigation, we report the performance of the SA and ER tasks on the proposed model for the single as well as multi-task frameworks in Table 4; we do not, however, make any explicit effort to enhance their performance. Comparison amongst Different Multi-task Architectures. In terms of the different ways of multi-tasking, such as FS and SP along with the adversarial loss (Adv), it was observed that the SP model gave better results than the FS model, and incorporating the adversarial loss further boosted the performance of the different multi-task models. Intuitively, as TAC shares less correlation with SA and ER than SA and ER share with each other, the FS model was not sufficient to learn diverse features across the different tasks. This observation is in conformity with the existing literature. We also demonstrate the importance of the different attention mechanisms used for the best performing multi-task model, i.e., SP+Adv. Furthermore, we report results obtained by replacing the BERT model for extracting textual representations with GloVe embeddings (Pennington et al., 2014). The results indicate that each of these aspects contributed significantly to the performance of the proposed multi-tasking framework. All the results reported here are statistically significant (Welch, 1947). Comparison with the State-of-the-Art Models. We also compare our proposed approach with recent state-of-the-art models for single task TAC, as we are unaware of any other work which jointly optimized tweet acts, emotion and sentiment in Twitter. In Table 6, we report the results obtained by re-implementing these models on the EmoTA dataset. As evident, the proposed model outperformed these SOTA approaches. Error Analysis. An in-depth analysis revealed several scenarios in which the proposed model faltered: i. Imbalanced dataset: as visible in Figure 1a, except for the sta and exp tags, all the classes are under-represented in the EmoTA dataset.
Even though we apply over-sampling to partially counter this issue, tags such as req and tht still contain very few tweets for the model to learn the fine differences amongst the categories. In accordance with this, we observe that the five-class set-up performs considerably better than the seven-class classification set-up; ii. Fine-grained tags: it was also observed that many mis-classified tweets belonged to tags that are subsets of each other. For instance, the tweet \"don't get discouraged! it's early on; it can get overwhelming. keep reading; use cue cards it'll get better!!\" is wrongly predicted as exp rather than sug, where the latter is, superficially, a subset of the former tag; iii. Miscellaneous: tweets belonging to the oth tag were also frequently mis-classified, as there is no fixed pattern to the tweets in this category. To counter this, even more fine-grained categories of TAs need to be identified and modelled. Sample utterances for the error analysis are shown in Table 5.

6 Conclusion and Future Work

In this paper, we studied the role of sentiment and emotion in speech act classification in Twitter. We curated a novel dataset, EmoTA, that contains tweets with pre-annotated emotions collected from an open-source dataset and annotated with TAs and sentiment categories. We propose a Dyadic Attention Mechanism based multi-modal (emoji and text), adversarial multi-task framework for the joint optimization of TAs, sentiment and emotions. The DAM module employs intra-modal and inter-modal attention to fuse multiple modalities and learn generalized features across all the tasks. Results show that multi-modality and multi-tasking boosted the performance of TA identification compared to its uni-modal and single task TAC variants. In future, attempts will be made to predict TAs with more precision by incorporating fine-grained modality encodings, and to identify which other NLP tasks (e.g., named entity recognition) might assist TAC.

Acknowledgments

Dr. Sriparna Saha gratefully acknowledges the Young Faculty Research Fellowship (YFRF) Award, supported by the Visvesvaraya Ph.D. Scheme for Electronics and IT, Ministry of Electronics and Information Technology (MeitY), Government of India, being implemented by Digital India Corporation (formerly Media Lab Asia), for carrying out this research.

References

Samuel Albanie, Arsha Nagrani, Andrea Vedaldi, and Andrew Zisserman. 2018. Emotion recognition in speech using cross-modal transfer in the wild. In Proceedings of the 26th ACM International Conference on Multimedia, pages 292–301.
Lisa Feldman Barrett, Michael Lewis, and Jeannette M. Haviland-Jones. 1993. Handbook of Emotions. The Guilford Press.
Christos Baziotis, Nikos Pelekis, and Christos Doulkeridis. 2017. DataStories at SemEval-2017 Task 4: Deep LSTM with attention for message-level and topic-based sentiment analysis. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 747–754, Vancouver, Canada. Association for Computational Linguistics.
Christophe Cerisara, Somayeh Jafaritazehjani, Adedayo Oluokun, and Hoa Le. 2018. Multi-task dialog act and sentiment recognition on Mastodon. arXiv preprint arXiv:1807.05013.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding.
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2019), Minneapolis, MN, USA, Volume 1 (Long and Short Papers), pages 4171–4186.
Hai Ha Do, PWC Prasad, Angelika Maag, and Abeer Alsadoon. 2019. Deep learning for aspect-based sentiment analysis: A comparative review. Expert Systems with Applications, 118:272–299.
Ben Eisner, Tim Rocktäschel, Isabelle Augenstein, Matko Bosnjak, and Sebastian Riedel. 2016. emoji2vec: Learning emoji representations from their description. In Proceedings of the Fourth International Workshop on Natural Language Processing for Social Media (SocialNLP@EMNLP 2016), Austin, TX, USA, pages 48–54. Association for Computational Linguistics.
Paul Ekman. 1999. Basic emotions. Handbook of Cognition and Emotion, 98(45-60):16.
Bjarke Felbo, Alan Mislove, Anders Søgaard, Iyad Rahwan, and Sune Lehmann. 2017. Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017), Copenhagen, Denmark, pages 1615–1625. Association for Computational Linguistics.
Yaroslav Ganin and Victor Lempitsky. 2015. Unsupervised domain adaptation by backpropagation. In International Conference on Machine Learning, pages 1180–1189. PMLR.
Alec Go, Richa Bhayani, and Lei Huang. 2009. Twitter sentiment classification using distant supervision. CS224N Project Report, Stanford, 1(12):2009.
John J Godfrey, Edward C Holliman, and Jane McDaniel. 1992. Switchboard: Telephone speech corpus for research and development. In Acoustics, Speech, and Signal Processing (ICASSP-92), volume 1, pages 517–520. IEEE.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780.
M Shamim Hossain and Ghulam Muhammad. 2019. Emotion recognition using deep learning approach from audio-visual emotional big data. Information Fusion, 49:69–78.
Minwoo Jeong, Chin-Yew Lin, and Gary Geunbae Lee. 2009. Semi-supervised speech act recognition in emails and forums. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, Volume 3, pages 1250–1259. Association for Computational Linguistics.
Abhishek Kumar, Asif Ekbal, Daisuke Kawahara, and Sadao Kurohashi. 2019. Emotion helps sentiment: A multi-task model for sentiment and emotion analysis. In 2019 International Joint Conference on Neural Networks (IJCNN), pages 1–8. IEEE.
Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2017. Adversarial multi-task learning for text classification. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017), Vancouver, Canada, Volume 1: Long Papers, pages 1–10. Association for Computational Linguistics.
Navonil Majumder, Soujanya Poria, Devamanyu Hazarika, Rada Mihalcea, Alexander Gelbukh, and Erik Cambria. 2019. DialogueRNN: An attentive RNN for emotion detection in conversations. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6818–6825.
Saif Mohammad. 2012. #Emotional tweets.
In *SEM 2012: The First Joint Conference on Lexical and Computational Semantics, Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), pages 246–255.
Saif Mohammad, Felipe Bravo-Marquez, Mohammad Salameh, and Svetlana Kiritchenko. 2018. SemEval-2018 Task 1: Affect in tweets. In Proceedings of the 12th International Workshop on Semantic Evaluation, pages 1–17.
Saif Mohammad and Svetlana Kiritchenko. 2018. Understanding emotions: A dataset of tweets to study interactions between affect categories. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).
Saif M Mohammad and Peter D Turney. 2013. Crowdsourcing a word-emotion association lexicon. Computational Intelligence, 29(3):436–465.
Obrahim Oleri and Pinar Karagoz. 2016. Detecting user emotions in Twitter through collective classification. In KDIR, pages 205–212.
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543.
Robert Plutchik. 1980. A general psychoevolutionary theory of emotion. In Theories of Emotion, pages 3–33. Elsevier.
Tulika Saha, Dhawal Gupta, Sriparna Saha, and Pushpak Bhattacharyya. 2021. Emotion aided dialogue act classification for task-independent conversations in a multi-modal framework. Cognitive Computation, 13(2):277–289.
Tulika Saha, Srivatsa Ramesh Jayashree, Sriparna Saha, and Pushpak Bhattacharyya. 2020a. BERT-Caps: A transformer-based capsule network for tweet act classification. IEEE Transactions on Computational Social Systems, 7(5):1168–1179.
Tulika Saha, Aditya Patra, Sriparna Saha, and Pushpak Bhattacharyya. 2020b. Towards emotion-aided multi-modal dialogue act classification. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4361–4372.
Tulika Saha, Aditya Prakash Patra, Sriparna Saha, and Pushpak Bhattacharyya. 2020c. A transformer based approach for identification of tweet acts. In 2020 International Joint Conference on Neural Networks (IJCNN), pages 1–8. IEEE.
Tulika Saha, Sriparna Saha, and Pushpak Bhattacharyya. 2019. Tweet act classification: A deep learning based classifier for recognizing speech acts in Twitter. In 2019 International Joint Conference on Neural Networks (IJCNN), pages 1–8. IEEE.
Ameneh Gholipour Shahraki and Osmar R Zaiane. 2017. Lexical and learning-based emotion mining from text. In Proceedings of the International Conference on Computational Linguistics and Intelligent Text Processing, volume 9, pages 24–55.
Mohammad Soleymani, David Garcia, Brendan Jou, Björn Schuller, Shih-Fu Chang, and Maja Pantic. 2017. A survey of multimodal sentiment analysis. Image and Vision Computing, 65:3–14.
Andreas Stolcke, Klaus Ries, Noah Coccaro, Elizabeth Shriberg, Rebecca Bates, Daniel Jurafsky, Paul Taylor, Rachel Martin, Carol Van Ess-Dykema, and Marie Meteer. 2000. Dialogue act modeling for automatic tagging and recognition of conversational speech. Computational Linguistics, 26(3):339–373.
Mike Thelwall, Kevan Buckley, and Georgios Paltoglou. 2012. Sentiment strength detection for the social web. Journal of the American Society for Information Science and Technology, 63(1):163–173.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017.
Attention is all you need. In Advances in Neural Information Processing Systems 30 (NIPS 2017), Long Beach, CA, USA, pages 5998–6008.
Soroush Vosoughi. 2015. Automatic Detection and Verification of Rumors on Twitter. Ph.D. thesis, Massachusetts Institute of Technology.
Soroush Vosoughi and Deb Roy. 2016. Tweet acts: A speech act classifier for Twitter. In Tenth International AAAI Conference on Web and Social Media.
Wenbo Wang, Lu Chen, Krishnaprasad Thirunarayan, and Amit P Sheth. 2012. Harnessing Twitter \"big data\" for automatic emotion identification."
]
| [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"objective",
"method",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other"
]
|
[
"We present a novel iterative, edit-based approach to unsupervised sentence simplification.",
"Our model is guided by a scoring function involving fluency, simplicity, and meaning preservation.",
"Then, we iteratively perform word and phrase-level edits on the complex sentence.",
"Compared with previous approaches, our model does not require a parallel training set, but is more controllable and interpretable.",
"Experiments on Newsela and WikiLarge datasets show that our approach is nearly as effective as state-of-the-art supervised approaches.",
"1 1 Introduction Sentence simplification is the task of rewriting text to make it easier to read, while preserving its main meaning and important information.",
"Sentence simplification is relevant in various real-world and downstream applications.",
"For instance, it can bene-fit people with autism (Evans et al., 2014), dyslexia (Rello et al., 2013), and low-literacy skills (Watan-abe et al., 2009).",
"It can also serve as a preprocessing step to improve parsers (Chandrasekar et al., 1996) and summarization systems (Klebanov et al., 2004).",
"Recent efforts in sentence simplification have been influenced by the success of machine translation.",
"In fact, the simplification task is often treated as monolingual translation, where a complex sentence is translated to a simple one.",
"Such simplification systems are typically trained in a supervised way by either phrase-based machine translation (PBMT, Wubben et al., 2012; Narayan and Gardent, 2014; Xu et al., 2016) or neural machine translation (NMT, Zhang and Lapata, 2017; Guo et al., 2018; Kriz et al., 2019).",
"Recently, sequence-to-sequence 1 Code is released at https://github.com/ ddhruvkr/Edit-Unsup-TS (Seq2Seq)-based NMT systems are shown to be more successful and serve as the state of the art.",
"However, supervised Seq2Seq models have two shortcomings.",
"First, they give little insight into the simplification operations, and provide little control or adaptability to different aspects of simplification (e.g., lexical vs. syntactical simplification).",
"Second, they require a large number of complex-simple aligned sentence pairs, which in turn require considerable human effort to obtain.",
"In previous work, researchers have addressed some of the above issues.",
"For example, Alva-Manchego et al. (2017) and Dong et al. (2019) explicitly model simplification operators such as word insertion and deletion.",
"Although these approaches are more controllable and interpretable than standard Seq2Seq models, they still require large volumes of aligned data to learn these operations.",
"To deal with the second issue, Surya et al. (2019) recently proposed an unsupervised neural text simplification approach based on the paradigm of style transfer.",
"However, their model is hard to interpret and control, like other neural network-based models.",
"Narayan and Gardent (2016) attempted to address both issues using a pipeline of lexical substitution, sentence splitting, and word/phrase deletion.",
"However, these operations can only be executed in a fixed order.",
"In this paper, we propose an iterative, edit-based unsupervised sentence simplification approach, motivated by the shortcomings of existing work.",
"We first design a scoring function that measures the quality of a candidate sentence based on the key characteristics of the simplification task, namely, fluency, simplicity, and meaning preservation.",
"Then, we generate simplified candidate sentences by iteratively editing the given complex sentence using three simplification operations (lex-ical simplification, phrase extraction, deletion and reordering).",
"Our model seeks the best simplified Figure 1: An example of three edit operations on a given sentence.",
"candidate sentence according to the scoring function.",
"Compared with Narayan and Gardent (2016), the order of our simplification operations is not fixed and is decided by the model.",
"Figure 1 illustrates an example in which our model first chooses to delete a sentence fragment, followed by reordering the remaining fragments and replacing a word with a simpler synonym.",
"We evaluate our approach on the Newsela (Xu et al., 2015) and WikiLarge (Zhang and Lapata, 2017) corpora.",
"Experiments show that our approach outperforms previous unsupervised methods and even performs competitively with state-of-the-art supervised ones, in both automatic metrics and human evaluations.",
"We also demonstrate the interpretability and controllability of our approach, even without parallel training data.",
"Early work used handcrafted rules for text simplification, at both the syntactic level (Siddharthan, 2002) and the lexicon level (Carroll et al., 1999).",
"Later, researchers adopted machine learning methods for text simplification, modeling it as monolingual phrase-based machine translation (Wubben et al., 2012; Xu et al., 2016).",
"Further, syntactic information was also considered in the PBMT framework, for example, constituency trees (Zhu et al., 2010) and dependency trees (Bingel and Sgaard, 2016).",
"Narayan and Gardent (2014) performed probabilistic sentence splitting and deletion, followed by MT-based paraphrasing.",
"Nisioi et al. (2017) employed neural machine translation (NMT) for text simplification, using a sequence-to-sequence (Seq2Seq) model (Sutskever et al., 2014).",
"Zhang and Lapata (2017) used reinforcement learning to optimize a reward based on simplicity, fluency, and relevance.",
"Zhao et al. (2018a) integrated the transformer architecture and paraphrasing rules to guide simplification learning.",
"Kriz et al. (2019) produced diverse simplifications by generating and re-ranking candidates by fluency, adequacy, and simplicity.",
"Guo et al. (2018) showed that simplification benefits from multi-task learning with paraphrase and entailment generation.",
"Martin et al. (2019) enhanced the transformer architecture with conditioning parameters such as length, lexical and syntactic complexity.",
"Recently, edit-based techniques have been developed for text simplification.",
"Alva-Manchego et al. (2017) trained a model to predict three simplification operators (keep, replace, and delete) from aligned pairs.",
"Dong et al. (2019) employed a similar approach but in an end-to-end trainable manner with neural networks.",
"However, these approaches are supervised and require large volumes of parallel training data; also, their edits are only at the word level.",
"By contrast, our method works at both word and phrase levels in an unsupervised manner.",
"For unsupervised sentence simplification, Surya et al. (2019) adopted style-transfer techniques, using adversarial and denoising auxiliary losses for content reduction and lexical simplification.",
"However, their model is based on a Seq2Seq network, which is less interpretable and controllable.",
"They cannot perform syntactic simplification since syntax typically does not change in style-transfer tasks.",
"Narayan and Gardent (2016) built a pipeline-based unsupervised framework with lexical simplification, sentence splitting, and phrase deletion.",
"However, these operations are separate components in the pipeline, and can only be executed in a fixed order.",
"Unsupervised edit-based approaches have recently been explored for natural language generation tasks, such as style transfer, paraphrasing, and sentence error correction.",
"Li et al. (2018) proposed edit-based style transfer without parallel supervision.",
"They replaced style-specific phrases with those in the target style, which are retrieved from the training corpus.",
"Miao et al. (2019) used MetropolisHastings sampling for constrained sentence generation.",
"In this paper, we model text generation as a search algorithm, and design search objective and search actions specifically for text simplification.",
"Concurrent work further shows the success of search-based unsupervised text generation for paraphrasing (Liu et al., 2020) and summarization (Schumann et al., 2020).",
"In this section, we first provide an overview of our approach, followed by a detailed description of each component, namely, the scoring function, the edit operations, and the stopping criteria.",
"We first define a scoring function as our search objective.",
"It allows us to impose both hard and soft constraints, balancing the fluency, simplicity, and adequacy of candidate simplified sentences (Section 3.2).",
"Our approach iteratively generates multiple candidate sentences by performing a sequence of lexical and syntactic operations.",
"It starts from the input sentence; in each iteration, it performs phrase and word edits to generate simplified candidate sentences (Section 3.3).",
"Then, a candidate sentence is selected according to certain criteria.",
"This process is repeated until none of the candidates improve the score of the source sentence by a threshold value.",
"The last candidate is returned as the simplified sentence (Section 3.4).",
"Our scoring function is the product of several individual scores that evaluate various aspects of a candidate simplified sentence.",
"This is also known as the product-of-experts model (Hinton, 2002).",
"SLOR score from a syntax-aware language model ( f eslor ).",
"This measures the language fluency and structural simplicity of a candidate sentence.",
"A probabilistic language model (LM) is often used as an estimate of sentence fluency (Miao et al., 2019).",
"In our work, we make two important modi-fications to a plain LM.",
"First, we replace an LM's estimated sentence probability with the syntactic log-odds ratio (SLOR, Pauls and Klein, 2012), to better measure fluency and human acceptability.",
"According to Lau et al. (2017), SLOR shows the best correlation to human acceptability of a sentence, among many sentence probability-based scoring functions.",
"SLOR was also shown to be effective in unsupervised text compression (Kann et al., 2018).",
"where PLM is the sentence probability given by the language model, PU ( s ) = (cid:81) w s P ( w ) is the product of the unigram probability of a word w in the sentence, and | s | is the sentence length.",
"SLOR essentially penalizes a plain LM's probability by unigram likelihood and the length.",
"It ensures that the fluency score of a sentence is not penalized by the presence of rare words.",
"Consider two sentences, I went to England for vacation and I went to Senegal for vacation .",
"Even though both sentences are equally fluent, a standard LM will give a higher score to the former, since the word England is more likely to occur than Sene-gal.",
"In simplification, SLOR is preferred for preserving rare words such as named entities.",
"2 Second, we use a syntax-aware LM, i.e., in addition to words, we use part-of-speech (POS) and dependency tags as inputs to the LM (Zhao et al., 2018b).",
"For a word w i , the input to the syntax-aware LM is [ e ( w i ); p ( w i ); d ( w i )] , where e ( w i ) is the word embedding, p ( w i ) is the POS tag embedding, and d ( w i ) is the dependency tag embedding.",
"Note that our LM is trained on simple sentences.",
"Thus, the syntax-aware LM prefers a syntactically simple sentence.",
"It also helps to identify sentences that are structurally ungrammatical.",
"Cosine Similarity ( f cos ).",
"Cosine similarity is an important measure of meaning preservation.",
"We compute the cosine value between sentence embeddings of the original complex sentence ( c ) and the generated candidate sentence ( s ), where our sentence embeddings are calculated as the idf weighted average of individual word embeddings.",
"Our sentence similarity measure acts as a hard filter, i.e., f cos ( s ) = 1 if cos( c , s ) > , or f cos ( s ) = 0 otherwise, for some threshold .",
"Entity Score ( f entity ).",
"Entities help identify the key information of a sentence and therefore are also useful in measuring meaning preservation.",
"Thus, we count the number of entities in the sentence as part of the scoring function, where entities are detected by a third-party tagger.",
"Length ( f len ).",
"This score is proportional to the inverse of the sentence length.",
"It forces the model to generate shorter and simpler sentences.",
"However, we reject sentences shorter than a specified length ( 6 tokens) to prevent over-shortening.",
"2 Note that we do not use SLOR to evaluate lexicon simplicity, which will later be evaluated by the Flesch reading ease (FRE) score.",
"The SLOR score, in fact, preserves rare words, so that we can better design dictionary-based word substitution for lexical simplification (Section 3.3).",
"FRE ( f fre ).",
"The Flesch Reading Ease (FRE) score (Kincaid et al., 1975) measures the ease of readability in text.",
"It is based on text features such as the average sentence length and the average number of syllables per word.",
"A higher scores indicate that the text is simpler to read.",
"We compute the overall scoring function as the product of individual scores.",
"where the weights , , , and balance the relative importance of the different scores.",
"Recall that the cosine similarity measure does not require a weight since it is a hard indicator function.",
"In Section 4.5, we will experimentally show that the weights defined for different scores affect different characteristics of simplification and thus provide more adaptability and controllability.",
"We generate candidate sentences by editing words and phrases.",
"We use a third-party parser to obtain the constituency tree of a source sentence.",
"Each clauseand phrase-level constituent (e.g., S, VP, and NP) is considered as a phrase.",
"Since a constituent can occur at any depth in the parse tree, we can deal with both long and short phrases at different granularities.",
"In Figure 2, for example, both good (ADJP) and tasted good (VP) are constituents and thus considered as phrases, whereas tasted is considered as a single word.",
"For each phrase, we generate a candidate sentence using the edit operations explained below, with Figure 1 being a running example.",
"Removal.",
"For each phrase detected by the parser, this operation generates a new candidate sentence by removing that phrase from the source sentence.",
"In Figure 1, our algorithm can drop the phrase according to a Seattle based reporter , which is not the main clause of the sentence.",
"The removal operation allows us to remove peripheral information in a sentence for content reduction.",
"Extraction.",
"This operation simply extracts a selected phrase (including a clause) as the candidate sentence.",
"This allows us to select the main clause in a sentence and remove remaining peripheral information.",
"Reordering.",
"For each phrase in a sentence, we generate candidate sentences by moving the phrase before or after another phrase (identified by clause-and phrase-level constituent tags).",
"In the running example, the phrase In 2016 alone is moved between the phrases 12 billion dollars and on constructing theme parks .",
"As seen, the reordering operation is able to perform syntactic simplification.",
"Substitution.",
"In each phrase, we identify the most complex word as the rarest one according to the idf score.",
"For the selected complex word, we generate possible substitutes using a two-step strategy.",
"First, we obtain candidate synonyms by taking the union of the WordNet synonym set (Miller, 1995) and the closest words from GloVe (Penning-ton et al., 2014) and Word2Vec (Mikolov et al., 2013) embeddings (where embedding closeness is measured by Euclidean distance).",
"Second, a candidate synonym is determined to be an appropriate simple substitute if it satisfies the following conditions:",
"a) it has a lower idf score than the complex word, where the scores are computed from the target simple sentences,",
"b) it is not a morphological inflection of the complex word,",
"c) its word embedding exceeds a cosine similarity threshold to the complex word, and,",
"d) it is has the same part-of-speech and dependency tags in the sentence as the complex word.",
"We then generate candidate sentences by replacing the complex word with all qualified lexical substitutes.",
"Notably, we do not replace entity words identified by entity taggers.",
"In our example sentence, consider the phrase constructing theme parks .",
"The word construct-ing is chosen as the word to be simplified, and is replaced with building.",
"As seen, this operation performs lexical simplification.",
"In each iteration, we consider all the operations (i.e., removal, extraction, reordering, and substitu-tion).",
"Each operation may generate multiple candidates (e.g., multiple words for substitution); we filter out a candidate sentence if the improvement does not pass an operation-specific threshold.",
"We choose the highest-scoring sentence from those that are not filtered out.",
"Our algorithm terminates if no edit passes the threshold, and the final candidate is our generated simplified sentence.",
"Our algorithm includes a filtering step for each operation.",
"We only keep a candidate sentence if it is better than the previous one by a multiplicative factor, i.e., f ( c ) /f ( s ) > r op (3) where s is the sentence given by the previous iteration, and c is a candidate generated by operator op from s .",
"Notably, we allow different thresholds for each operation.",
"This provides control over different aspects of simplification, namely, lexicon simplification, syntactic simplification, and content reduction.",
"A lower threshold for substitution, for example, encourages the model to perform more lexical simplification.",
"We use the Newsela (Xu et al., 2015) and the WikiLarge datasets (Zhang and Lapata, 2017) for evaluating our model.",
"Newsela is a collection of 1,840 news articles written by professional editors at 5 reading levels for children.",
"We use the standard split and exclude simple-complex sentence pairs that are one reading level apart, following Zhang and Lapata (2017).",
"This gives 95,208 training, 1,129 validation, and 1,077 test sentences.",
"The WikiLarge dataset is currently the largest text simplification corpus.",
"It contains 296,402, 2,000, and 359 complex-simple sentence pairs for training, validation, and testing, respectively.",
"The training set of WikiLarge consists of automatically aligned sentence pairs from the normal and simple Wikipedia versions.",
"The validation and test sets contain multiple human-written references, against which we evaluate our algorithm.",
"For each corpus, we only use its training set to learn a language model of simplified sentences.",
"For the WikiLarge dataset, we also train a Word2Vec embedding model from scratch on its source and target training sentences.",
"These embeddings are used to obtain candidate synonyms in the substitution operation.",
"For the LM, we use a two-layer, 256-dimensional recurrent neural network (RNN) with the gated recurrent unit (GRU, Chung et al., 2014).",
"We initialize word embeddings using 300-dimensional GloVe (Pennington et al., 2014); out-of-vocabulary words are treated as UNK , initialized uniformly in the range of 0 .",
"05 .",
"Embeddings for POS tags and dependency tags are 150-dimensional, also initialized randomly.",
"We fine-tune all embeddings during training.",
"We use the Averaged Stochastic Gradient Descent (ASGD) algorithm (Polyak and Juditsky, 1992) to train the LM, with 0 .",
"4 as the dropout and 32 as the batch size.",
"For the Newsela dataset, the thresholds r op in the scoring function are set to 1 .",
"25 for all the edit operations.",
"All the weights in our scoring function ( , , , ) are set to 1 .",
"For the WikiLarge dataset, the thresholds are set as 1 .",
"25 for the removal and reordering operations, 0 .",
"8 for substitution, and 5 .",
"0 for extraction.",
"The weights in the scoring function ( , , , ) are set to 0 .",
"5 , 1 .",
"0 , 0 .",
"25 and 1 .",
"0 , respectively.",
"We use CoreNLP (Manning et al., 2014) to construct the constituency tree and Spacy 3 to generate part-of-speech and dependency tags.",
"We first consider the reference to obtain an upper-bound for a given evaluation metric.",
"We also consider the complex sentence itself as a trivial baseline, denoted by Complex .",
"Next, we develop a simple heuristic that removes rare words occurring 250 times in the simple sentences of the training corpus, denoted by Reduce-250 .",
"As discussed in Section 4.4, this simple heuristic demonstrates the importance of balancing different automatic evaluation metrics.",
"For unsupervised competing methods, we compare with Surya et al. (2019), which is inspired by unsupervised neural machine translation.",
"They proposed two variants, UNMT and UNTS , but their results are only available for WikiLarge.",
"We also compare our model with supervised methods.",
"First, we consider non-neural phrase-based machine translation (PBMT) methods: PBMT-R (Wubben et al., 2012), which re-ranks sentences generated by PBMT for diverse simplifications; SBMT-SARI (Xu et al., 2016), which uses an external paraphrasing database; and Hybrid (Narayan and Gardent, 2014), which uses a combination of PBMT and discourse representation structures.",
"Next, we compare our method with neural machine translation (NMT) systems: EncDecA , which is a vanilla Seq2Seq model with attention (Nisioi et al., 2017); Dress and Dress-Ls , which are based on deep reinforcement learning (Zhang and Lapata, 2017); DMass (Zhao et al., 2018a), which is a transformer-based model with external simplification rules; EncDecP , which is an encoder-decoder model with a pointer-mechanism; EntPar , which is based on multi-task learning (Guo et al., 2018); S2S-All-FA , which a reranking based model focussing on lexical simplification (Kriz et al., 2019); and Access , which is based on the transformer architecture (Martin et al., 2019).",
"Finally, we compare with a supervised edit-based neural model, Edit-NTS (Dong et al., 2019).",
"We evaluate our model with a different subset of operations, i.e., removal ( RM ), extraction ( EX ), reordering ( RO ), and lexical substitution ( LS ).",
"In our experiments, we test the following variants: RM+EX , RM+EX+LS , RM+EX+RO , and RM+EX+LS+RO .",
"Tables 1 and 2 present the results of the automatic evaluation on the Newsela and WikiLarge datasets,",
"respectively.",
"We use the SARI metric (Xu et al., 2016) to measure the simplicity of the generated sentences.",
"SARI computes the arithmetic mean of the n -gram F1 scores of three rewrite operations: adding, deleting, and keeping.",
"The individual F1-scores of these operations are reported in the columns Add, Delete, and Keep.",
"We also compute the BLEU score (Papineni et al., 2002) to measure the closeness between a candidate and a reference.",
"Xu et al. (2016) and Sulem et al. (2018) show that BLEU correlates with human judgement on fluency and meaning preservation for text simplification.",
"4 4 This does not hold when sentence splitting is involved.",
"In our datasets, however, sentence splitting is rare, for example, 0.18% in the Newsela validation set).",
"In addition, we include a few intrinsic measures (without reference) to evaluate the quality of a candidate sentence: the FleschKincaid grade level (FKGL) evaluating the ease of reading, as well as the average length of the sentence.",
"A few recent text simplification studies (Dong et al., 2019; Kriz et al., 2019) did not use BLEU for evaluation, noticing that the complex sentence itself achieves a high BLEU score (albeit a low SARI score), since the complex sentence is indeed fluent and preserves meaning.",
"This is also shown by our Complex baseline.",
"For the Newsela dataset, however, we notice that the major contribution to the SARI score is from the deletion operation.",
"By analyzing previous work such as EntPar , we find that it reduces the sentence length to a large extent, and achieves high SARI due to the extremely high F1 score of Delete.",
"However, its BLEU score is low, showing the lack of fluency and meaning.",
"This is also seen from the high SARI of ( Reduce-250 ) in Table 1.",
"Ideally, we want both high SARI and high BLEU, and thus, we calculate the geometric mean (GM) of them as the main evaluation metric for the Newsela dataset.",
"On the other hand, this is not the case for WikiLarge, since none of the models can achieve high SARI by using only one operation among Add, Delete, and Keep.",
"Moreover, the complex sentence itself yields an almost perfect BLEU score (partially due to the multi-reference nature of Wik-iLarge).",
"Thus, we do not use GM, and for this dataset, SARI is our main evaluation metric.",
"Overall results on Newsela.",
"Table 1 shows the results on Newsela.",
"By default (without ), validation is performed using the GM score.",
"Still, our unsupervised text simplification achieves a SARI score around 2627, outperforming quite a few supervised methods.",
"Further, we experiment with SARI-based validation (denoted by ), following the setting of most previous work (Dong et al., 2019; Guo et al., 2018).",
"We achieve 30.44 SARI, which is competitive with state-of-the-art supervised methods.",
"Our model also achieves high BLEU scores.",
"As seen, all our variants, if validated by GM (with-out ), outperform competing methods in BLEU.",
"One of the reasons is that our model performs text simplification by making edits on the original sentence instead of rewriting it from scratch.",
"In terms of the geometric mean (GM), our unsupervised approach outperforms all previous work, showing a good balance between simplicity and content preservation.",
"The readability of our generated sentences is further confirmed by the intrinsic FKGL score.",
"Overall results on WikiLarge.",
"For the Wikilarge experiments in Table 2, we perform validation on SARI, which is the main metric in this experiment.",
"Our model outperforms existing unsupervised methods, and is also competitive with state-of-the-art supervised methods.",
"We observe that lexical simplification ( LS ) is important in this dataset, as its improvement is large compared with the Newsela experiment in Table 1.",
"Additionally, reordering ( RO ) does not improve performance, as it is known that WikiLarge does not focus on syntactic simplification (Xu et al., 2016).",
"The best performance for this experiment is obtained by the RM+EX+LS model.",
"We now perform a detailed analysis of the scoring function described in Section 3.2 to understand the effect on different aspects of simplification.",
"We use the RM+EX+LS+RO variant and the Newsela corpus as the testbed.",
"analyze our syntax-aware SLOR score in the search objective.",
"First, we remove the SLOR score and use the standard sentence probability.",
"We observe that SLOR helps preserve rare words, which may be entities.",
"As a result, the readability score (FKGL) becomes better (i.e., lower), but the BLEU score decreases.",
"We then evaluate the importance of using a structural LM instead of a standard LM.",
"We see a decrease in both SARI and BLEU scores.",
"In both cases, the GM score decreases.",
"Threshold values and relative weights.",
"Table 4 analyzes the effect of the hyperparameters of our model, namely, the threshold in the stopping criteria and the relative weights in the scoring function.",
"As discussed in Section 3.4, we use a threshold as the stopping criteria for our iterative search algorithm.",
"For each operation, we require that a new candidate should be better than the previous iteration by a multiplicative threshold r op in Equation (3).",
"In this analysis, we set the same threshold for all operations for simplicity.",
"As seen in Table 4, increasing the threshold leads to better meaning preservation since the model is more conservative (making fewer edits).",
"This is shown by the higher BLEU and lower SARI scores.",
"Regarding the weights for each individual scoring function, we find that increasing the weight for the FRE readability score makes sentences shorter, more readable, and thus simpler.",
"This is also indicated by higher SARI values.",
"When sentences are rewarded for being short (with large ), SARI increases but BLEU decreases, showing less meaning preservation.",
"The readability scores initially increase with the reduction in length, but then decrease.",
"Finally, if we increase the weight for the entity score, the sentences become longer and more complex since the model is penalized more for deleting entities.",
"In summary, the above analysis shows the controllability of our approach in terms of different simplification aspects, such as simplicity, meaning preservation, and readability.",
"We conducted a human evaluation on the Newsela dataset since automated metrics may be insufficient for evaluating text generation.",
"We chose 30 sentences from the test set for annotation and considered a subset of baselines.",
"For our model variants, we chose RM+EX+LS+RO , considering both validation settings (GM and SARI).",
"We followed the evaluation setup in Dong et al. (2019), and measure the adequacy ( How much meaning from the original sentence is preserved? ), simplicity ( Is the output simper than the original sentence? ), and fluency ( Is the output grammatical? ) on a five-point Likert scale.",
"We recruited three volunteers, one native English speaker and two non-native fluent English speakers.",
"Each of the volunteer was given 30 sentences from different models (and references) in a randomized order.",
"Additionally, we asked the volunteers to measure the number of instances where models produce incorrect details or generate text that is not implied by the original sentence.",
"We did this because neural models are known to hallucinate information (Rohrbach et al., 2018).",
"We report the average count of false information per sentence, denoted as FI.",
"We observe that our model RM+EX+LS+RO (when validated by GM) performs better than Hybrid , a combination of PBMT and discourse representation structures, in all aspects.",
"It also performs competitively with remaining supervised NMT models.",
"For adequacy and fluency, Dress-Ls performs the best since it produces relatively longer sentences.",
"For simplicity, S2S-All-FA performs the best since it produces shorter sentences.",
"Thus, a balance is needed between these three measures.",
"As seen, RM+EX+LS+RO ranks second in terms of the average score in the list (reference excluded).",
"The human evaluation confirms the effectiveness of our unsupervised text simplification, even when compared with supervised methods.",
"We also compare our model variants RM+EX+LS+RO (validated by GM) and RM+EX+LS+RO (validated by SARI).",
"As expected, the latter generates shorter sentences, performing better in simplicity but worse in adequacy and fluency.",
"Regarding false information (FI), we observe that previous neural models tend to generate more false information, possibly due to the vagueness in Method A S F Avg FI Hybrid 2.63 2.74 2.39 2.59 0.03 Dress-Ls 3.29 3.05 4.11 3.48 0.2 EntPar 1.92 2.97 3.16 2.68 0.47 S2S-All-FA 2.25 3.24 3.90 3.13 0.3 Edit-NTS 2.37 3.17 3.73 3.09 0.23 RM+EX+LS+RO 2.97 3.09 3.78 3.28 0.03 RM+EX+LS+RO 2.58 3.21 3.33 3.04 0.07 Reference 2.91 3.49 4.46 3.62 0.77 Table 5: Human evaluation on Newsela, where we measure adequacy (A), simplicity (S), fluency (F), and their average score (Avg), based on 15 Likert scale.",
"the continuous space.",
"By contrast, our approach only uses neural networks in the scoring function, but performs discrete edits of words and phrases.",
"Thus, we achieve high fidelity (low FI) similar to the non-neural Hybrid model, which also performs editing on discourse parsing structures with PBMT.",
"In summary, our model takes advantage of both neural networks (achieving high adequacy, simplicity, and fluency) and traditional phrase-based approaches (achieving high fidelity).",
"Interestingly, the reference of Newsela has a poor (high) FI score, because the editors wrote simplifications at the document level, rather than the sentence level.",
"We proposed an iterative, edit-based approach to text simplification.",
"Our approach works in an unsupervised manner that does not require a parallel corpus for training.",
"In future work, we plan to add paraphrase generation to generate diverse simple sentences.",
"We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), under grant Nos.",
"RGPIN-2019-04897, RGPIN-2020-04465, and the Canada Research Chair Program.",
"Lili Mou is also supported by Al-taML, the Amii Fellow Program, and the Canadian CIFAR AI Chair Program.",
"This research was supported in part by Compute Canada ( www. computecanada.ca )."
]
| [
"objective",
"abstain",
"method",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"abstain",
"abstain",
"method",
"objective",
"method",
"result",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"objective",
"abstain",
"method",
"other",
"other",
"other",
"other"
]
|
[
"We propose a novel conditioned text generation model.",
"It draws inspiration from traditional template-based text generation techniques, where the source provides the content (i.e., what to say ), and the template influ-ences how to say it .",
"Building on the successful encoder-decoder paradigm, it first encodes the content representation from the given input text; to produce the output, it retrieves exemplar text from the training data as soft templates, which are then used to construct an exemplar-specific decoder.",
"We evaluate the proposed model on abstractive text summarization and data-to-text generation.",
"Empirical results show that this model achieves strong performance and outperforms comparable baselines.",
"Conditioned text generation is the essence of many natural language processing (NLP) tasks, e.g., text summarization (Mani, 1999), machine translation (Koehn, 2009), and data-to-text generation (Kukich, 1983; McKeown, 1992; Reiter and Dale, 1997).",
"In its common neural sequence-to-sequence formulation (Sutskever et al., 2014; Cho et al., 2014), an encoder-decoder architecture is used.",
"The decoder generates the text autoregres-sively, token-by-token, conditioning on the feature representations encoded from the source, typically with attention (Bahdanau et al., 2015) and copy mechanisms (Gu et al., 2016; See et al., 2017).",
"This paradigm is capable of generating fluent abstractive text, but in an uncontrolled and sometimes unreliable way, often producing degenerate outputs and favoring generic utterances (Vinyals and Le, 2015; Li et al., 2016).",
"ods (Becker, 2002; Foster and White, 2004; Reiter et al., 2005; Gatt and Reiter, 2009, inter alia ), where the source content is filled into the slots of a handcrafted template.",
"These solutions offer higher generation precision compared to neural approaches (Wiseman et al., 2017), but tend to lack the naturalness of neural systems, and are less scalable to open domain settings, where the number of required templates can be prohibitively large.",
"To sidestep the scalability problems with handcrafted templates, it has been proposed to use similar training samples as exemplars , to guide the decoding process (Gu et al., 2018; Guu et al., 2018; Weston et al., 2018; Pandey et al., 2018; Cao et al., 2018a, inter alia ).",
"1 In general, existing methods accomplish this by",
"(a) using traditional information retrieval (IR) techniques for exemplar extraction (e.g., TF-IDF), and then",
"(b) concatenating the exemplar to the source as additional inputs, allowing the decoder to attend over and copy from both.",
"We propose a different strategy for using exemplars.",
"For motivation, Figure 1 shows a source-target pair together with its exemplar from the Gigaword dataset (Graff et al., 2003).",
"The target is a summary of the source sentence, and the exemplar is retrieved from the training set ( 3.2).",
"2 There is word overlap between the exemplar and the desired output, which would be easily captured by an attention/copy mechanism (e.g. Norway and aid ).",
"Despite this, ideally, the model should also exploit the structural and stylistic aspects to produce an output with a similar sentence structure, even if the words are different.",
"The term exemplar indicates a training instance used to help generation.",
"We aim to distinguish from templates, since here no explicit slot-filling procedure is involved.",
"2 We use the training target as the exemplar, whose source is most similar to the current input.",
"3.2 describes the details.",
"Source : Norway said Friday it would give Zimbabwe 40 million kroner (7.02 million dollars, 4.86 million euros) in aid to help the country deal with a lack of food and clean drinking water and a cholera outbreak.",
"Exemplar : Norway boosts earthquake aid to Pakistan.",
"Target : Norway grants aid of 4.86 million euros to Zimbabwe.",
"Figure 1: A source-target pair from Gigaword training set, along with its exemplar.",
"is supposed to determine what to say, while the templates aim to address how to say it, reminiscent of the classical content selection and surface realization pipeline (Reiter and Dale, 1997).",
"For instance, an ideal template for this example might look as follows: grants aid of to In the neural formulation, the how to say it aspect is primarily controlled by the decoder.",
"Inspired by the above intuition, we propose exemplar-based adaptive decoding , where a customized decoder is constructed for each exemplar.",
"This is achieved by letting the exemplars to directly influence decoder parameters through a reparameterization step ( 3.1).",
"The adaptive decoder can be used as a drop-in replacement in the encoder-decoder architecture.",
"It offers the potential to better incorporate the exemplars' structural and stylistic aspects into decoding, without excessive increase in the amount of parameters or computational overhead.",
"We empirically evaluate our approach on abstractive text summarization and data-to-text generation ( 4), on which most of the recent efforts on exemplar-guided text generation have been studied.",
"On three benchmark datasets, our approach outperforms comparable baselines, and achieves performance competitive with the state of the art.",
"The proposed method can be applicable in many other conditioned text generation tasks.",
"Our implementation is available at https://homes.",
"cs.washington.edu/hapeng .",
"This section lays out the necessary background and notations for further technical discussion.",
"We begin with conditioned text generation and the encoder-decoder framework (Sutskever et al., 2014; Cho et al., 2014).",
"In the interest of the notation clarity, 3 will use an Elman network (El-man, 1990) as a running example for the decoder, which is briefly reviewed in",
"3. The proposed technique generalizes to other neural network architectures ( 3.3).",
"Conditioned text generation and the encoder-decoder architecture.",
"Our discussion centers around conditioned text generation, i.e., the model aims to output the target y = y 1 y 2 . . . y T given the source input x = x 1 x 2 . . . x S , both of which are sequences of tokens.",
"Each token x i , y i takes one value from a vocabulary V .",
"x and y could vary depending on the tasks, e.g., they will respectively be articles and summaries for text summarization; and for data-to-text generation, x would be structured data, which can sometimes be linearized (Lebret et al., 2016; Wiseman et al., 2018, inter alia ), and y is the output text.",
"We aim to learn a (parameterized) conditional distribution of the target text y given the source x , p \u0000 y | x \u0000 = TY t =1 p \u0000 y t | y <t , x \u0000 , (1) where y <t = y 1 . . . y t \u0000 1 is the prefix of y up to the ( t \u0000 1) th token (inclusive).",
"The probability of each target token is usually estimated with a softmax function: p ( y t | y <t , x ) = exp h > t \u0000 1 w y t P y exp h > t \u0000 1 w y .",
"(2) w y denotes a learned vector for token y 2 V .",
"h t \u0000 1 depends on y <t and x , and is computed by a function which we will describe soon.",
"A typical implementation choice for computing h t is the encoder-decoder architecture (Sutskever et al., 2014).",
"More specifically, an encoder g first gathers the feature representations from the source x ; then a decoder f \u0000 is used to compute the h t feature vectors: h t = f \u0000 y t , h t \u0000 1 , g ( x ) .",
"(3) and \u0000 are, respectively, the collections of parameters for the encoder and the decoder, both of which can be implemented as recurrent neural networks (RNNs) such as LSTMs (Hochre-iter and Schmidhuber, 1997) or GRUs (Cho et al., 2014), or the transformer (Vaswani et al., 2017).",
"In Sutskever et al. (2014), the dependence of f \u0000 on g is made by using the last hidden state of the encoder as the initial state of the decoder.",
"Such dependence can be further supplemented with attention (Bahdanau et al., 2015) and copy mechanisms (Gu et al., 2016; See et al., 2017), as we will do in this work.",
"3 introduces how we use exemplars to inform decoding, by dynamically constructing the decoder's parameters \u0000 .",
"For the notation clarity, we will use the Elman network as a running example, reviewed below.",
"Elman networks.",
"Given input sequence x , an Elman network (Elman, 1990) computes the hidden state at time step t from the previous one and the current input token by h t = tanh \u0000 Ph t \u0000 1 + Qv t \u0000 , (4) where P and Q are learned d d parameter matrices (with d being the hidden dimension), and v t is the embedding vector for token x t .",
"We omit the bias term for clarity.",
"This section introduces the proposed method in detail.",
"Our aim is to use exemplars to inform the decoding procedure (i.e., how to say it ).",
"To accomplish this, we reparameterize the decoder's parameters with weighted linear sums, where the coefficients are determined by an exemplar.",
"The decoder is adaptive , in the sense that its parameters vary according to the exemplars.",
"The adaptive decoder can be used as a drop-in replacement in the encoder-decoder architecture.",
"Before going into details, let us first overview the high-level generation procedure of our model.",
"Given source text x , the model generates an output as follows:",
"1. Run a standard encoder to gather the content representations g ( x ) from the source.",
"2. Retrieve its exemplar z x , and compute exemplar-specific coefficients ( 3.2).",
"3. Construct the adaptive decoder parameters \u0000 ( 3.1), using the coefficients computed at step",
"2. Then the output is generated by applying the adaptive decoder followed by a softmax , just as in any other encoder-decoder architecture.",
"Aiming for a smoother transition, we will first describe step 3 in 3.1, and then go back to discuss step 2 in 3.2.",
"For clarity, we shall assume that the decoder is implemented as an Elman network (El-man, 1990; Equation 4).",
"The proposed technique generalizes to other neural network architectures, as we will discuss later in 3.3.",
"At its core, the exemplar-specific adaptive decoder involves a reparameterization step, which we now describe.",
"We focus on the parameters of the Elman network decoder, i.e., P and Q in Equation 4.",
"We aim to reparameterize the pair of matrices ( P , Q ) , in a way that they are influenced by the exemplars.",
"Let us first consider an extreme case, where one assigns a different pair of parameter matrices to each exemplar, without any sharing.",
"This leads to an unreasonably large amount of parameters, which are difficult to estimate reliably.",
"3 We instead construct P and Q from a set of pre-defined parameters matrices.",
"Take P for example, it is computed as the weighted sum of P i matrices: P = r X i =1 \u0000 i P i , (5) where P i 2 R d d , with d being the size of the hidden states.",
"r is a hyperparameter, determining the number of P i matrices to use.",
"4 The summation is weighted by the coefficients \u0000 i , which are computed from the exemplar z x .",
"For clarity, the dependence of both P and \u0000 i on z x is suppressed when the context is clear.",
"Equation 5 constructs the decoder's parameter matrix P using a linear combination of { P i } ri =1 .",
"The exemplar informs this procedure through the coefficients \u0000 i 's, the detailed computation of which is deferred to 3.2.",
"The other matrix Q can be similarly constructed by Q = P i \u0000 i Q i .",
"Rank-1 constraints.",
"In the above formulation, the number of parameters is still r times more than a standard Elman network, which can lead to over-fitting with a limited amount of training data.",
"Besides, it would be more interesting to compare the adaptive decoder to a standard RNN under a comparable parameter budget.",
"Therefore we want to further limit the amount of parameters.",
"This can be achieved by forcing the ranks of P i and Q i to be 1, since it then takes 2 d parameters to form each of them, instead of d 2 .",
"More formally, we upper-3 The amount of parameters grows linearly with the number of possible exemplars, which, as we will soon discuss in 3.2, can be as large as the training set.",
"4 Instead of choosing r empirically, we set it equal to d in the experiments.",
"Please see the end of 3.1 for a related discussion.",
"P i = u ( p ) i v ( p ) i .",
"(6) a b = ab > denotes the outer product of two vectors; u ( p ) i and v ( p ) i are learned d -dimensional vectors.",
"Each Q i can be similarly constructed by a separate set of vectors Q i = u ( q ) i v ( q ) i .",
"Let U p , V p 2 R d r denote the stack of u ( p ) i , v ( p ) i vectors, i.e., U p = h u ( p ) 1 , . . . , u ( p ) r i , (7a) V p = h v ( p ) 1 , . . . , v ( p ) r i .",
"(7b)",
"Equations 5 and 6 can be compactly written as P = U p V > p .",
"(8) where is the diagonal matrix built from the r dimensional coefficient vector \u0000 = [ \u0000 1 , . . . , \u0000 r ] > : = diag( \u0000 ) = 2 64 \u0000 1 ... \u0000 r 3 75 .",
"(9) The construction of Q is similar, but with a different set of parameters matrices U q and V q : 5 Q = U q V > q .",
"(10)",
"Note that, despite their similarities to SVD at a first glance, Equations 8 and 10 are not performing matrix factorization.",
"Rather, we are learning { U p , V p , U q , V q } directly; P , Q , { P i } , and { Q i } are never explicitly instantiated (Peng et al., 2017, 2018c).",
"To summarize, we reparameterize P and Q as interpolations of rank-1 matrices.",
"By the fact that rank( A + B ) rank( A ) + rank( B ) , the ranks of P and Q are upper-bounded by r .",
"As pointed out by Krueger and Memisevic (2017), the parameter matrices of a trained RNN tend to have full rank.",
"Therefore, in the experiments, we set r equal to the hidden size d , aiming to allow the adaptive decoder to use full-rank matrices in the recurrent computation.",
"Yet, if one holds a priori beliefs that the matrices should have lower ranks, using r < d could be desirable.",
"When r = d , an adaptive RNN constructed by the above approach has 4 d 2 parameters, which is comparable to the 2 d 2 parameters in a standard Elman network.",
"6 5 The bias term in the Elman network b can be constructed as b = B \u0000 , with B being a learned d r matrix.",
"We now discuss the computation of coefficients \u0000 , through which the exemplars inform the decoder construction (Equations 8 and 10).",
"Before detailing the neural network architecture, we begin by describing the exemplar retrieval procedure.",
"Retrieving exemplars z x .",
"Intuitively, similar source texts should hold similar targets.",
"Therefore, given source input x , we use the training target as its exemplar z x , whose source is most similar to x .",
"7 To compute the similarities between source texts, we use bag-of-words (BOW) features and cosine similarity.",
"We extract the top-1 exemplar for each instance.",
"This step is part of the preprocessing, and we do not change the exemplars as the training proceeds.",
"There are, of course, many other strategies to get the exemplars, e.g., using handcrafted or heuristically created hard templates (Reiter et al., 2005; Becker, 2002; Foster and White, 2004, inter alia ), randomly sampling multiple training instances (Guu et al., 2018), or learning a neural reranker (Cao et al., 2018a).",
"Using more sophistically extracted exemplars is definitelly interesting to explore, which we defer to future work.",
"Computing coefficients.",
"Next we describe the computation of \u0000 , the r -dimensional coefficient vector, which is used to construct the adaptive decoder (Equations 8 and 10).",
"Intuitively, the rank-1 matrices ( P i 's and Q i 's in Equation 6 and thereafter) can be seen as capturing different aspects of the generated text.",
"And \u0000 determines how much each of them contributes to the adaptive decoder construction.",
"A natural choice to calculate \u0000 is to use the similarities between the exemplar and each of the aspects.",
"To accomplish this, we run a RNN encoder over z x , and use the last hidden state as its vector representation a .",
"8 We further associate each ( P i , Q i ) pair with a learned vector c i ; and then \u0000 i is computed as the similarity between a and c i , using an inner product \u0000 i = a > c i .",
"More compactly, \u0000 = Ca , (11) with C = [ c 1 , . . . , c r ] > .",
"7 The source of an exemplar is only used in the retrieval and never fed into the encoder-decoder model.",
"For a training instance, we additionally disallow using its own target as the exemplar.",
"8 For clarity, the dependence of a on the exemplar z x is suppressed, just as \u0000 .",
"Closing this section, Algorithm 1 summarizes the procedure to construct an adaptive decoder.",
"Although we've based our discussion on Elman networks so far, it is straightforward to apply this method to its gated variants (Hochreiter and Schmidhuber, 1997; Cho et al., 2014, inter alia ), and other quasi-/non-recurrent neural architectures (Bradbury et al., 2017; Vaswani et al., 2017; Peng et al., 2018a, inter alia ).",
"Throughout the experiments, we will be using an adaptive LSTM decoder ( 4).",
"As a drop-in replacement in the encoder-decoder architecture, it introduces a reasonable amount of additional parameters and computational overhead, especially when one uses a small encoder for the exemplar (i.e., the sizes of the c i vectors in Equation 11 are small).",
"It can benefit from the highly-optimized GPU implementations, e.g., CuDNN, since it uses the same recurrent computation as a standard nonadaptive RNN.",
"In addition to the neural networks, the adaptive decoder requires access to the full training set due to the retrieval step.",
"In this sense it is semi-parametric .",
"9 The idea to dynamically construct the parameters is inspired by Hypernet-works (Ha et al., 2017) and earlier works therein.",
"It proves successful in tasks such as classifica-tion (Jia et al., 2016; Liu et al., 2017) and machine translation (Platanios et al., 2018).",
"Many recent template-based generation models include the exemplars as content in addition to the source, and allow the decoder to attend over and copy from both (Gu et al., 2018; Guu et al., 2018; Weston et al., 2018; Pandey et al., 2018; Cao et al., 2018a, inter alia ).",
"We compare to this approach in the experiments, and show that our model offers fa-9 Nothing prohibits adaptively constructing other components of the model, e.g., the encoder g .",
"Yet, our motivation is to use exemplars to inform how to say it , which is primarily determined by the decoder (in contrast, the encoder relates more to selecting the content).",
"vorable performance, and that they can potentially be combined to achieve further improvements.",
"This section empirically evaluates the proposed model on two sets of text generation tasks: abstractive summarization ( 4.2) and data-to-text generation ( 4.3).",
"Before heading into the experimental details, we first describe the architectures of the compared models in 4.1.",
"In addition to previous works, we compare to the following baselines, aiming to control for confounding factors due to detailed implementation choices.",
"SEQ 2 SEQ .",
"The encoder-decoder architecture enhanced with attention and copy mechanisms.",
"The encoder is implemented with a bi-directional LSTM (BiLSTM; Hochreiter and Schmidhuber, 1997; Schuster and Pali-wal, 1997; Graves, 2012), and the decoder a uni-directional one.",
"We tie the input embeddings of both the encoder and the decoder, as well as the softmax weights (Press and Wolf, 2017).",
"We use beam search during evaluation, with length penalty (Wu et al., 2016).",
"ATTEXP .",
"It is based on SEQ 2 SEQ .",
"It encodes, attends over, and copies from the exemplars, in addition to the source inputs.",
"Our model using the adaptive decoder (ADADEC ) closely builds upon SEQ 2 SEQ .",
"It uses a dynamically constructed LSTM decoder, and does not use attention or copy mechanisms over the encoded exemplars.",
"The extracted exemplars are the same as those used by ATTEXP .",
"To ensure fair comparisons, we use comparable training procedures and regularization techniques for the above models.",
"The readers are referred to the appendix for further details such as hyperparameters.",
"Datasets.",
"We empirically evaluate our model on two benchmark text summarization datasets: Annotated Gigaword corpus (Gigaword; Graff et al., 2003; Napoles et al., 2012).",
"Gigaword contains news articles sourced from various news services over the last two decades.",
"To produce the dataset, we follow the split and preprocessing by Rush et al. (2015), and pair the first sentences and the headlines in the news articles.",
"It results in a 3.8M/190K/1,951",
"train/dev./test split.",
"The average lengths of the source and target texts are 31.4 and 8.2, respectively.",
"New York Times Annotated Corpus (NYT; Sandaus, 2008).",
"It contains news articles published between 1996 and 2007 by New York Times.",
"We use the split and preprocessing by Durrett et al. (2016).",
"10 Following their effort, we evaluate on a smaller portion of the test set, where the gold summaries are longer than 50 tokens.",
"We further randomly sample 9,000 instances from the training data for validation, resulting in a 91,834/9,000/3,452",
"train/dev./test split.",
"Compared to Gigaword, the inputs and targets in NYT are much longer (averaging 939.0 and 48.6, respectively).",
"Table 1 summarizes some statistics of the datasets.",
"We note that some recent works use a different split of the NYT corpus (Paulus et al., 2018; Gehrmann et al., 2018), and thus are not comparable to the models in Table",
"3. We decide to use the one by Durrett et al. (2016) because their preprocessing script is publicly available.",
"For both datasets, we apply byte-paired encoding (BPE; Sennrich et al., 2016), which proves to improve the generation of proper nouns (Fan et al., 2018).",
"on Gigaword test set in ROUGEF 1 (Lin, 2004).",
"11 By using adaptive decoders, our model (ADADEC ) improves over SEQ 2 SEQ by more than 1.1 ROUGE scores.",
"Cao et al. (2018b) and the FULL model by Cao et al. (2018a) hold the best published results.",
"The former uses extensive handcrafted features and relies on external information extraction and syntactic parsing systems; 10 https://github.com/gregdurrett/ berkeley-doc-summarizer .",
"while the latter uses additional encoding, attention and copy mechanisms over the exemplars extracted using a novel neural reranker.",
"ADADEC achieves better or comparable performance to the state-of-the-art models, without using any handcrafted features or reranking techniques.",
"The BASIC model by Cao et al. (2018a) ablates the reranking component from their FULL model, and uses the top exemplar retrieved by the IR system.",
"Therefore it is a more comparable baseline to ours.",
"ADADEC outperforms it by more than 1.3 ROUGE scores.",
"Surprisingly, we do not observe interesting improvements by ATTEXP over the sequence-to-sequence baseline.",
"We believe that our model can benefit from better extracted exemplars by, e.g., applying a reranking system.",
"Such exploration is deferred to future work.",
"The NYT experimental results are summarized in Table",
"3. We follow previous works and report limited-length ROUGE recall values.",
"12 Durrett et al. (2016) is an extractive model, and Paulus et al. (2018) an abstractive approach based on reinforcement learning.",
"Our ADADEC model outperforms both.",
"We observe similar trends when comparing ADADEC to the SEQ 2 SEQ and ATTEXP baselines, with the exception that ATTEXP does improve over SEQ 2 SEQ .",
"Data-to-text generation aims to generate textual descriptions of structured data, which can be",
"12 Following Durrett et al. (2016) and Paulus et al. (2018), we truncate the predictions to the lengths of the gold summaries, and evaluate ROUGE recall, instead of F 1 on full-length predictions.",
"seen as a table consisting of a collection of records (Liang et al., 2009).",
"For a given entity, each record is an (attribute, value) tuple.",
"Figure 2 shows an example for entity Jacques-Louis David .",
"The table specifies the entity's properties with tuples (born, 30 August 1748) , (nationality, French) , and so forth.",
"The table is paired with a description, which the model is supposed to generate using the table as input.",
"We refer the readers to Lebret et al. (2016) for more details about the task.",
"Dataset and implementation details.",
"We use the Wikibio dataset (Lebret et al., 2016).",
"It is automatically constructed by pairing the tables and the opening sentences of biography articles from English Wikipedia.",
"We follow the split and preprocessing provided along with the dataset, with around 583K/73K/73K",
"train/dev./test instances.",
"Following Lebret et al. (2016), we linearize the tables, such that we can conveniently train the sequence-to-sequence style models described in 4.1.",
"Table 1 summarizes some statistics of the dataset.",
"In contrast to the text summarization experiment ( 4.2), we do not apply BPE here.",
"Further, the word embeddings are initialized with GloVe (Pennington et al., 2014; fixed during training), and not tied with the softmax weights.",
"In addition to the models introduced in 4.1, we additionally compare to ADADEC +A TTEXP , aiming to study whether the adaptive decoder can further benefit from attention and copy mechanisms over the exemplars.",
"the Neoclassical style.",
"et al., 2002).",
"13 Table 4 summarizes the data-to-text generation results on the Wikibio test set.",
"Overall, we observe similar trends to those in the summarization experiment ( 4.2): by attending over and copying from the exemplars, ATTEXP improves upon the SEQ 2 SEQ baseline by around 0.6 absolute scores.",
"Also utilizing exemplar information, our ADADEC model outperforms SEQ 2 SEQ by a larger margin: 1.3 for ROUGE -4 and 1.1 for BLEU .",
"We further study whether we can get further improvements by combining both.",
"ADADEC +A TTEXP achieves around 0.5 absolute improvements over ADADEC , less than those by ATTEXP over SEQ 2 SEQ .",
"This provides evidence that, to some extend, the ways ATTEXP and ADADEC incorporate exemplar information might be complementary.",
"Wiseman et al. (2018) is a template-motivated model based on a semi-Markov model.",
"Liu et al. (2018) hold the current state-of-the-art results.",
"They encode the table structures by using",
"(a) position and filed embeddings, and",
"(b) structure-aware attention and gating techniques.",
"These techniques are beyond the scope of this work, which focuses mainly on the decoding end.",
"13 We use the script by Lin (2004) to calculate the ROUGE score, and the mteval script for BLEU : https:// github.com/moses-smt/mosesdecoder/blob/master/scripts/generic/mteval-v13a.pl .",
"We now qualitatively evaluate our model, by studying how its outputs are affected by using different exemplars.",
"Figure 3 shows two randomly sampled Gigaword development instances.",
"It compares the outputs by ADADEC (i.e., without attention/copy over exemplars; 4.1) when receiving different exemplars, controlling for the same source inputs.",
"In each example, Exemplar 1 is retrieved by the system (i.e., a training target; 3.2); while the remaining ones are produced by the authors, by modifying the first one in styles and sometimes introducing distractions in the content.",
"In the top example, the model includes people into the subject ( Three vs. Three people ) under the influence by Exemplar 2 ; Exemplar 3 changes the tense and adds some distraction by changing the place from Britain to Canada .",
"The model follows the tense switch, but gets confused by the distraction, and decides to let a train in southern Europe collide into North America, which it should not.",
"Looking at the bottom example, the model in general follows the exemplar in using noun adjuncts or prepositional phrases (e.g., new home sales vs. sales of new homes ), except the first one.",
"Perhaps confused by the distraction in Exemplar 3 , the model makes a judgment on the specific amount of growth, but gets it wrong.",
"Exemplar-based generation.",
"Partly inspired by traditional template-based generation (Kukich, 1983; Reiter and Dale, 1997, inter alia ), many recent efforts have been devoted to augmenting text generation models with retrieved exemplars (Ho-dosh et al., 2013; Mason and Charniak, 2014; Song et al., 2016; Lin et al., 2017, inter alia ).",
"Without committing to an explicit slot-filling process, a typical method is to include exemplars as additional inputs to the sequence-to-sequence models (Gu et al., 2018; Pandey et al., 2018; Guu et al., 2018, inter alia ).",
"Wiseman et al. (2018) took a different approach and used a semi-Markov model to learn templates.",
"Dynamic parameter construction.",
"The idea of using a smaller network to generate weights for a larger one dues back to Stanley et al. (2009) and Koutnik et al. (2010), mainly under the evolution computing context.",
"It is later revisited with representation learning (Moczulski et al., 2015; Fernando et al., 2016; Al-Shedivat et al., 2017, inter alia ), and successfully applied to classifica-tion (Jia et al., 2016; Liu et al., 2017) and machine translation (Platanios et al., 2018).",
"We presented a text generation model using exemplar-informed adaptive decoding.",
"It repa-rameterizes the decoder using the information gathered from retrieved exemplars.",
"We experimented with text summarization and data-to-text generation, and showed that the proposed model achieves strong performance and outperforms comparable baselines on both.",
"The proposed model can be applicable in other conditioned text generation tasks.",
"We release our implementation at https://homes.cs.washington.",
"edu/hapeng .",
"We thank Antonios Anastasopoulos, Ming-Wei Chang, Michael Collins, Jacob Devlin, Yichen Gong, Luheng He, Kenton Lee, Dianqi Li, Zhouhan Lin, Slav Petrov, Oscar Tackstrom, Kristina Toutanova, and other members of the Google AI language team for the helpful discussion, and the anonymous reviewers for their valuable feedback."
]
| [
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"other",
"abstain",
"other",
"abstain",
"other",
"other",
"other",
"objective",
"other",
"other",
"objective",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"abstain",
"method",
"method",
"other",
"other",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"objective",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"objective",
"abstain",
"other",
"abstain",
"other"
]
|
[
"Event detection (ED) is a critical subtask of event extraction that seeks to identify event triggers of certain types in texts.",
"Despite significant advances in ED, existing methods typically follow a one model fits all types approach, which sees no differences between event types and often results in a quite skewed performance.",
"Finding the causes of skewed performance is crucial for the robustness of an ED model, but to date there has been lit-tle exploration of this problem.",
"This research examines the issue in depth and presents a new concept termed trigger salience attribution, which can explicitly quantify the underlying patterns of events.",
"On this foundation, we develop a new training mechanism for ED, which can distinguish between trigger-dependent and context-dependent types and achieve promising performance on two benchmarks.",
"Finally, by highlighting many distinct characteristics of trigger-dependent and context-dependent types, our work may promote more research into this problem.",
"Event detection (ED) is the first and a crucial step of event extraction, which aims to identify events of certain types in plain texts (Ahn, 2006; Nguyen and Grishman, 2015; Mitamura et al., 2017).",
"Previous methods to ED typically adopt a one model fits all types approach, seeing no difference between event types and using a single model to address them all (Ji and Grishman, 2008; Li et al., 2013; Chen et al., 2015; Lin et al., 2020).",
"However, such approaches produce quite skewed performance on different types.",
"Tasking the ACE benchmark as an example, we note the state-of-the-art ED model (Wadden et al., 2019) can strike 90% in F1 for the type DIVORCE , yet only 50% for the type STARTPOSITION , and it is more surprising that the training set of DIVORCE is eight times smaller than that of START-POSITION .",
"Finding the causes underlying S1: The couple divorced four years later.",
"the skewed performance is crucial to the robustness of an ED model; however, this problem is still understudied in current research.",
"In this study we take a fresh look at above problem and for the first time attribute the skewed performance to the contextual patterns of events .",
"Let consider the two typical instances of DIVORCE and START-POSITION shown in Figure 1.",
"Intuitively, they demonstrate distinct patterns: the DIVORCE event is more trigger-dependent , and the trigger word (i.e., divorced) is very indicative of the event's occurrence; by contrast, the STARTPOSITION event is more context-dependent the event semantic is primarily expressed by contexts rather than the trigger become, which is a merely light verb.",
"We hypothesize an ED model performs poorly on context-dependent types because capturing context semantics is challenging (Lu et al., 2019; Liu et al., 2020b).",
"With the above intuitions, two questions rise:",
"(i) Can we estimate an event's pattern quantitatively?",
"(ii)) How to robustify an ED model by characterizing such patterns?",
"To address the first question, we introduce a brandy new concept called trigger saliency attribution , which can explicitly quantify an event's contextual pattern.",
"Figure 2 illustrates the key idea: to determine how much an event is trigger-dependent or context-dependent, we measure the trigger's contribution to expressing overall the event semantic.",
"Specifically, we first assign each sentence a global event label that represents the overall event semantic.",
"Then, inspired by the feature attribution method 4573 (Simonyan et al., 2014; Sundararajan et al., 2017), we regard each word as a feature and compute its contribution (i.e., saliency value) for predicting the global event label.",
"Finally, by examining the ground-truth trigger's saliency value, we can tell how much an event depends on triggers or contexts: a higher value, for example, indicates that the trigger contributes more to the event, implying the event is more trigger-dependent.",
"To answer the second question, we develop a new training mechanism based on trigger saliency attribution, which uses saliency as evidence to enhance learning.",
"Our method is simple and straightforward instead of using a single model to detect all event types, we group event types with similar patterns together (assessed by trigger saliency attribution) and develop separate models for each group.",
"This strategy enables different models to capture distinct patterns for example, the model for context-dependent type can focus on mining contextual information for learning.",
"To further boost learning, we also propose two saliency-exploration strategy to augment the above framework, which can explicitly integrate saliency information into learning and produce improved performance particularly for context-dependent types ( 6.2).",
"To verify the effectiveness of our approach, we have conducted extensive experiments on two ED benchmarks (i.e., ACE 2005 (LDC, 2005) and MAVEN (Wang et al., 2020)).",
"According to the results:",
"(i) Our trigger saliency attribution method can capture the underlying pattern and well explain the skewed performance, obtaining Spearman's correlation coefficients of 0.72 and 0.61 with per-type F1 on ACE 2005 and MAVEN respectively;",
"(ii) Our new training regime based on saliency demonstrates improved results on the two benchmarks.",
"On ACE 2005, for example, it produces a 2% ab-solute gain in F1 over methods training different event types jointly.",
"Finally, in ablation studies, we compare and highlight many significant characteristics (e.g., linguistic and lexical patterns) of trigger-dependent and context-dependent event types; our work may inspire future research into their patterns.",
"To summarize, our contributions are three-fold: We analyze the origins of an ED model's skewed performance and propose a new notion termed trigger saliency attribution, which can assess the underlying pattern of events.",
"Our findings, as a seminal study, raises the possibility that the traditional one model fits S1: The couple divorced four years later. S2: He became the first minister to England. [Divorce] [Start-Pos] Step 1 High Contribution i / J w Step 2 Step 1 Step 2 Low Contribution i / J w Figure 2: Illustration of trigger saliency attribution, where the saliency value of a trigger can quantify its contribution to the overall event semantic. all types paradigm may need to be changed.",
"We present a new ED training mechanism based on trigger saliency attribution that achieves promising results on two benchmarks, especially when dealing with context-dependent event types.",
"We highlight several diverse patterns of trigger-dependent and context-dependent event types, and our findings may stimulate future research into their differences.",
"Event Detection.",
"ED is a critical subtask of event extraction that seeks to locate event instances in text, which has received a lot of attention from researchers.",
"Traditional methods for ED typically use fine-grained features (Ahn, 2006; Ji and Grish-man, 2008; Liao and Grishman, 2010; Hong et al., 2011; Li et al., 2013), whereas newer methods rely on neural networks (Chen et al., 2015; Nguyen and Grishman, 2015; Feng et al., 2016; Nguyen and Nguyen, 2019; Liu et al., 2018a, 2019a,b), which have investigated the use of syntactic information (Liu et al., 2018b; Lai et al., 2020), document-level cues (Wadden et al., 2019; Lin et al., 2020; Du and Cardie, 2020; Liu et al., 2020b; Lai et al., 2021; Pouran Ben Veyseh et al., 2021; Li et al., 2021; Chen et al., 2021; Liu et al., 2021), and external supervision signals (Tong et al., 2020; Liu et al., 2020a) to boost learning.",
"However, most methods recognize no distinction between event types and train a single model to identify all event types, resulting in rather skewed performance on different event types.",
"Two seminal works (Lu et al., 2019; 4574 Liu et al., 2020b) have observed the comparatively poor performance on context-dependent texts and offered a better context-exploration strategy to improve training.",
"Nonetheless, they are in a position to improve performance rather than investigate the root causes.",
"Our approach, on the other hand, takes a fresh look at the issue and aims to define the underlying patterns of events for learning.",
"Feature Attribution.",
"The goal of feature attribution (FA) is to assess how important an input feature for model prediction, which has sparked a lot of interest in interpreting model decisions (Si-monyan et al., 2014; Sundararajan et al., 2017).",
"Formally, suppose we have an input vector x = ( x 1 , x 2 , ..., x n ) R n and a function F : R n [0, 1] representing a model.",
"The attribution value of x , with respect to the output F ( x ), is defined as a vector AF ( x ) = ( a 1 , a 2 , ..., a n ) R n , where a i measures the contribution of x i to F ( x ).",
"The existing FA methods are classified as gradient-based methods, which consider the gradient of the output to the input as the attribution value (Simonyan et al., 2014; Springenberg et al., 2015), and reference-based methods, which consider the difference between the model's output and some reference\" output, in terms of the difference between the input and some reference\" input, as the attribution value (Ribeiro et al., 2016; Sundararajan et al., 2017).",
"FA have been used to interpret model predictions in applications including image classification (Si-monyan et al., 2014), machine translation (Ding et al., 2017), text classification (Chen et al., 2018), and others (Bastings and Filippova, 2020).",
"To the best of our knowledge, this is the first work introducing FA to ED for quantifying the underlying event patterns.",
"Integrated Gradient.",
"Integrated Gradient (Sun-dararajan et al., 2017) is a specific (reference-based) FA method that views the feature attribution value as the accumulated gradient along the line between the model's input x and a reference input x (cid:48) , which denotes the lack of a feature 1 .",
"Particularly, the attribution value of x i (i.e., the i th dimension of x ) with respect to an output F ( x ) is defined as: a i = ( x i x (cid:48) i ) (cid:90) 1 =0 F ( x (cid:48) + ( x x (cid:48) )) x i d (1) where F ( x ) x i indicates the gradient of F ( x ) to x i .",
"In our approach, we prefer Integrated Gradient to 1 In text related tasks, x (cid:48) is usually set as a sequence of embedding vectors with all zero values (Wallace et al., 2019).",
"other FA methods due to its computing efficiency and effectiveness in addressing a wide range of text based tasks (Sundararajan et al., 2017; Liu and Avci, 2019; Bastings and Filippova, 2020).",
"Algorithm 1 provides an overview of our trigger saliency attribution method, which consists of three major steps:",
"(i) sentence-level event classification,",
"(ii) word-level saliency estimation, and",
"(iii) type-level saliency estimation.",
"Let s = [ w 1 , w 2 , , w N ] be a sentence of N words, and the ED task corresponds to predicting an event label sequence Y s = [ y 1 , y 2 , , y N ], where y i T { O } indicates the event label of w i , T is a set containing all pre-defined event types, and O is a null type denoting no-trigger words.",
"Sentence-Level Event Classification.",
"We start by giving s a sentence-level event label G s , which represents the overall event semantic.",
"Let the label be G s = [ g 1 , g 2 , ..., g |T | ] R |T | , where g i { 0 , 1 } indicates whether a trigger of the i th event type is contained by s ( g i =1) or not ( g i =0).",
"Following that, we construct a sentence-level event classifier and aim to learn a mapping from s to G s .",
"Particularly, we devise a BERT based sentence classifier (Devlin et al., 2019) and adopt a multi-label binary cross-entropy loss for optimization: L ( G s ; X s ) = 1 |T | |T | (cid:88) i =1 g i log( o si )+(1 g i ) log(1 o si ) (2) where X s is the input embedding of s in BERT, o s R |T | indicates the logits vector computed by the classier, and o si denotes the i th element of o s .",
"Word-Level Saliency Estimation.",
"Based on the sentence-level classifier, we next use Integrated Gradient (Sundararajan et al., 2017) to calculate the contribution (i.e., saliency value) of each word 4575 [Divorce] [CLS] The couple divorced four years later 0 0 1 0 0 0 [Start-Pos] US minister Deep Transformer (BERT) O O O O O 0 O Input Saliency Embeddings Output 0 0 0 1 Deep Transformer (BERT) O O 0 O Input Saliency Embeddings Output [CLS] He become the USO Context Evidence Divorce, Hearing, Fine Start-Position, Pardon, Contextualized Trigger Saliency Attribution S2: He become the first minister to England.",
"We then normalize w i as a scalar value w i with a sentence-wise normalization: w i = e (cid:107) wi (cid:107) 2 / (cid:88) N n =1 e (cid:107) wn (cid:107) 2 (4) where (cid:107)(cid:107) denotes the L 2 norm.",
"In actuality, we may not be concerned with a word's saliency to the general event semantic G s , but rather with a specific event type T T .",
"To this end, we replace G s with the one-hot representation of T in Equation (3) for evaluation.",
"Finally, we represent the word-level saliency of w i with respect to the event type T by ( T ) w i , and we suppose ( T ) w i = 0 if the sentence does not describe any event of type T .",
"Event Type Division.",
"Based on type-level saliency estimation, we divide all event types into a trigger-dependent set T trigger = { T | SL( T ) } and a context-dependent set T context = { T | SL( T ) < } .",
"The threshold is empirically determined as the median of all per-type trigger saliency values, implying that the event types are evenly divided into two sets 2 .",
"Saliency-Enriched Event Detector.",
"Following that, we create separate ED models for T trigger and T context .",
"Each model is implemented using the BERT architecture (Devlin et al., 2019), and given a sentence s , it performs a word-by-word classification over BERT's output to generate a label sequence: Y s = ( y 1 , y 2 , , y N ), with y i being the predicted event label for w i .",
"Based on the different characteristics of trigger-dependent and context-dependent types, we devise different saliency-exploration methods to boost learning.",
"4 Saliency Enhanced ED Based on trigger saliency attribution, we devise a new training paradigm for ED, which can distinguish event types with similar patterns for learning and achieves promising results.",
"The overview is shown in Figure 3, and the technical details follow.",
"to the prediction.",
"We utilize the loss function as the desired model (Wallace et al., 2019), and calculate the saliency of w i , more accurately, its BERT representation x i X s , regarding the loss by: w i = ( x i x (cid:48) i ) (cid:90) 1 =0 L ( G s ; X (cid:48) + ( X s X (cid:48) )) x i d (3) where X (cid:48) is a sequence of all-zero vectors (serving as a reference input), and x (cid:48) i denotes the i th element in X (cid:48) .",
"Type-Level Saliency Estimation.",
"Based on the word-level saliency, we measure the type-level trigger saliency value (regarding an event type T ) as: SL( T ) = (cid:80) ( s,Y s ) (cid:80) w { w i | y i = T } ( T ) w #of training examples of type T (5) where ( s, Y s ) ranges over each training instance; { w i | y i = T } is a set containing all of the triggers of type T in s .",
"SL( T ) indicates how trigger-dependent or context-dependent an event type T is, and it has been shown to correlate strongly with the per-type model performance ( 6.1).",
"gers, we build a mechanism called word saliency embeddings (WSEs) in the model for T trigger to capture such regularities.",
"Specifically, we first quantify each word's saliency value 3 as 0 or 1 based on , i.e., the threshold we used previously for distinguishing event types, and then use a separate embedding vector to distinguish 0 and 1, similar to word embeddings.",
"Such embeddings are incorporated into the model 4 to capture a regularity that words with high saliency values are more likely to be triggers.",
"Note WSEs are also incorporated in the model for the T context , which on the other hand seeks to learn the opposite regularity that words with high saliency values may not be triggers.",
"(ii) Saliency as Context Evidence.",
"In the event detector for T context , we also devise a regime for interpreting salient information as context evidence for reasoning.",
"Consider the previous example S2.",
"Our method identifies the context words US minis-ter as the most salient words (with saliency values larger than ) expressing the overall event semantic.",
"Here we regard salient contexts as supplementary evidence and concatenate them with the sentence for learning, as shown in the bottom of Figure 3.",
"Compared with WSEs, this method can additional capture the lexical semantics of the salient words, which has been shown to considerably aid in the recognition of context-dependent event types ( 7).",
"Model Ensemble.",
"In the testing stage, we combine the results of two models to make a final prediction.",
"If ambiguous cases occur, i.e., the two ED models predict different event types for the same word, we use the type with a higher probability as the result.",
"We use cross-entropy loss for optimization.",
"For example, the model for T trigger is trained by minimizing the following loss: L = (cid:88) ( s,Y s ) (cid:88) ( w i ,y i ) ( s,Y s ) log P ( y i | w i ) (6) where ( s, Y s ) refers to each training instance; ( w i , y i 5 ) ranges over each pair of word and its ground-truth event label; P ( y i | w i ) denotes the conditional probability that the model predicts y i for w i .",
"We use Adam (Kingma and Ba, 2015) with default hyper-parameters for parameter update.",
"3 To prevent label leaking, at the testing stage we use predicted labels rather than ground-truth labels for attribution.",
"4 Because combining external embeddings with BERT remains difficult, we alter the segmentation embeddings in BERT to WSEs, motivated by (Wu et al., 2019).",
"5 Note in the event detector for T trigger , we should consider y i as O for y i T context .",
"Datasets.",
"We conduct experiments on ACE 2005 (LDC, 2005) and MAVEN (Wang et al., 2020).",
"ACE 2005 defines 33 event types and contains 599 documents.",
"We adopt a common split for evaluation following previous works (Li et al., 2013; Wadden et al., 2019).",
"MAVEN is a newly released corpus defining 168 more fine-grained event types (Wang et al., 2020).",
"Because the MAVEN test set is not publicly available and our study is concerned with per-type performance, we instead use the MAVEN development set for assessment and divide the original MAVEN training set as 9:1 for training and validating.",
"Table 1 displays the comprehensive data statistics for the two datasets.",
"Evaluation Metrics.",
"We adopt the following metrics to evaluate our model:",
"(i) Spearman's rank correlation coefficient, which can determine the statistical dependency between two ranked variable sequences.",
"The metric is defined as = 1 6 (cid:80) d 2 i n ( n 2 1) , where d i is the difference between the i th pair of ranked variables, and n is the sequence length.",
"We use it to measure how well our trigger saliency attribution results correlate with per-type model performance.",
"(ii) Precision (P), Recall (R) and (Mi-cro) F1, which are widely used to assess the overall performance of an ED model.",
"(iii) Macro F1, the arithmetic mean of class-wise F1-scores, which will be low for models that only perform well on common types but badly on rare types.",
"Implementations.",
"In our trigger saliency attribution method, the sentence-level classifier is built on the BERT-base .",
"The batch size is set to 20, and the learning rate is set to 1e-5.",
"After 5 epochs, it achieves 74.8% in F1 on the ACE 2005 development set, matching the state-of-the-art performance (Liu et al., 2019c).",
"As for the two ED models, we consider BERT-base architectures.",
"The batch size is set to 20, chosen from [1, 5, 10, 20, 30].",
"The 4577 Dataset Setting Method ACE 05 MAVEN Static # of Training Instances 0.06 0.09 Trigger Variance 0.26 0.25 Dynamic Trigger Attention 0.12 0.14 Trigger Saliency (Ours) 0.72 0.61 Table 2: The Spearman's correlation ( [-1, 1]) between per-type F1 and different criteria (high correlation is considered when > 0.6).",
"learning rate is set to 1e-5, chosen from a range from 1e-3 to 1e-6.",
"The dimension of word saliency embeddings is empirically set to 100.",
"To allow for further investigation, we have made our code publicly available at https://github.com/ jianliu-ml/SaliencyED .",
"Table 2 shows the Spearman's rank correlation between per-type F1 and four criteria: 1) the number of training instances (regarding an event type); 2) trigger variance, defined as the ratio of the number of unique event triggers to the total number of event triggers (regarding an event type); 3) trigger attention value, which corresponds to the ground-truth trigger's attention value in the BERT model; 4) trigger saliency attribution (our method).",
"We use a state-of-the-art ED model (Wadden et al., 2019) and perform a 5-run average on the development set to obtain the per-type F1 score.",
"According to the results, our trigger saliency attribution approach correlates the best with model performance, yielding a score as high as 0.72 and 0.61 in Spearman's correlation.",
"This suggests that our method can well explain the skewed performance.",
"Our other findings are interesting:",
"(i) Surprisingly, the number of training examples shows a negligible correlation ( = 0.06 and 0.09) with per-type F1.",
"This implies that simply collecting more training data may not be an effective way to improve an ED model.",
"(ii) The trigger variance metric demonstrates a moderate association ( = 0.25 and 0,26), indicating that the diversity of event triggers is a factor influencing model performance.",
"(iii) The trigger attention value also shows a poor association, which may be another proof that attention is not explainable (Jain and Wallace, 2019).",
"and our trigger saliency attribution method.",
"In addition to noting that our method adequately explains the per-type F1-score, we find that = 0.25 may be a good threshold for distinguishing between trigger-dependent and context-dependent event types.",
"To test the efficacy of our saliency enhanced ED model: 1) For ACE 2005, we compare our model with",
"(i) DYGIE++ (Wadden et al., 2019), which uses a graph view to learn context features;",
"(ii) TriggerQA (Du and Cardie, 2020), which uses a question answering formulation for the task;",
"(iii) OneIE (Lin et al., 2020), which adopts cross-sentence features for the task.",
"Because pre-processing has a significant impact on the results (Orr et al., 2018), to ensure a fair comparison, we only consider models using the same pre-processing steps as in (Wad-den et al., 2019).",
"2) For MAVEN, we use the BERT+CRF proposed in the original work (Wang et al., 2020) for comparison.",
"As a baseline, we also construct a model called BERTEns, which ensembles two BERT models similar to ours but does not differentiate event types.",
"We refer to our approach that merely separates event types for learning (with-out saliency-exploration strategies) as SaliencyED (SL), and our full approach as SaliencyED (Full).",
"Table 3 displays performances of different models.",
"The results have confirmed our approach's effectiveness.",
"Particularly:",
"(i) our full model achieves the best Micro F1 score (75.8% and 67.1%) on 4578 Method P (cid:78) R (cid:78) F1 (cid:78) F1 (cid:79) ACE DYGIE++ (2019) -73.6 65.7 TriggerQA (2020) 71.2 73.7 72.4 64.5 OneIE (2020) -75.2 66.6 BERTEns 71.5 73.1 72.3 65.4 SaliencyED (SL) 74.7 75.5 75.1 68.1 SaliencyED (Full) 75.4 76.2 75.8 68.8 MAV BERT+CRF (2020) 62.3 64.1 63.2 55.2 BERTEns 64.7 66.9 65.8 58.0 SaliencyED (SL) 64.9 68.2 66.5 59.2 SaliencyED (Full) 64.9 69.4 67.1 60.3 Table 3: Results on ACE 2005 and MAVEN (MVN).",
"ACE 2005 and MAVEN without the use of sophisticated architectures or external resources, as DY-GIE++ and OneIE do.",
"(ii) Impressively, with the identical architectures, our full model SaliencyED (Full) outperforms BERTEns by 2.8% and 1.7% in F1 on the two datasets, respectively; SaliencyED (SL), which only differentiates event types for training, outperforms BERTEns by 1.6% in F1.",
"This emphasizes the significance of identifying event patterns for ED.",
"(iii) Our method gives the best Macro F1 on two datasets, indicating that it performs well on both common and rare event types.",
"Table 4 shows the performance breakdown for trigger-dependent (TD) and context-dependent (CD) types.",
"According to the results, different models consistently produce good performance on TD types but low performance on CD types, implying that the patterns found by our trigger saliency attribution method are reasonable.",
"When comparing SaliencyED (SL) and SaliencyED (Full), we see that the saliency-exploring method is more effective on CD types (+2.3% in F1) than on TD types (+0.3% in F1).",
"This makes sense because detecting context-dependent events relies significantly on context reasoning, and our method can just use important contexts as evidence to improve learning.",
"Ablation Study.",
"We undertake an ablation study in Table 5 to investigate different model components, using the more challenging context-dependent (CD) types as an example.",
"In the variant models, +WSE and +Evidence denote supplementing SaliencyED (SL) with word saliency embeddings and context evidence, respectively.",
"+MaskAtt is an approach for calculating atten-TD Types CD Types Method F1 (cid:78) F1 (cid:79) F1 (cid:78) F1 (cid:79) ACE DYGIE++ (2019) 78.2 74.4 65.8 52.1 TriggerQA (2020) 80.1 76.3 65.2 53.2 OneIE (2020) 83.6 77.9 69.0 54.2 BERTEns 83.3 77.8 68.3 52.3 SaliencyED (SL) 86.2 82.0 70.0 56.9 SaliencyED (Full) 86.4 81.6 71.5 57.8 MAV BERT+CRF (2020) 67.5 67.1 49.2 38.1 BERTEns 70.3 70.0 51.5 38.1 SaliencyED (SL) 71.3 70.2 52.6 49.1 SaliencyED (Full) 71.6 70.8 53.5 50.4 Table 4: Results on trigger-dependent (TD) and context-dependent (CD) event types, where F1 (cid:78) and F1 (cid:79) indicate Micro and Macro F1 respectively.",
"tion that masks the word itself, which can drive the model to focus more on contexts for learning; +Gold Argument is an oracle method that uses gold event arguments as evidence for learning.",
"Based on the results, +Evidence outperforms +WSE and +MaskAtt, indicating its efficacy.",
"Interestingly, +MaskAtt also boosts performance, implying that the contexts of CD events do carry important information for asserting the event.",
"Finally, the superior performance of +Gold Arguments implies that find-ing indicative evidence (e.g., event arguments) is the key factor boosting learning on CD types.",
"Impact of Event Type Division.",
"We use our event type division method as a baseline and compare it to three other event type division strategies: 1) at random; 2) based on the amount of training instances; 3) based on development set performance.",
"According to the results, the first two strategies decrease performance by 1.27% and 1.41% in Micro F1 on ACE, and 1.53% and 1.40% on MAVEN, which suggests that an inappropriate separation of event types impairs learning.",
"The third strategy based on development performance improves learning (+0.8%/+1.1% on ACE/MAVEN), but it 4579 H@1 H@2 H@5 A cc u r a cy ( % ) 72 84 88 43 61 69 TD Type CD Type TD Types CD Types F 1 -S c o r e ( % ) 78 65 23 31 -- =55 =34 Original Adversarial Figure 5: Left: Top k accuracy (hit@k) when the most salient word appears to be an event trigger.",
"is still inferior to our approach.",
"An explanation is that the final model performance is the product of a combination of factors, and thus categorizing event types based on development set performance may not assure that event types with similar patterns are grouped together, resulting in inferior results.",
"Distinctions in TD/CD Types.",
"We use ACE 2005 as a case to highlight the distinct characteristics between TD and CD types.",
"Figure 5 (Left) depicts the top k accuracy (hit@k) in the case where the most salient word in a sentence appears to be an event trigger; Figure 5 (Right) depicts the performance drop in an adversarial attack in which the gold event triggers are masked for sentence-level event type classification.",
"The CD and TD types exhibit opposing behaviors: TD types display excellent H@k accuracy but a significant performance loss in adversarial attack, whereas CD types exhibit the opposite tendency.",
"This implies that the CD and TD types respectively rely on triggers and contexts.",
"Figure 6 shows a comparison of the number of event arguments for TD and CD types.",
"Clearly, CD types have a larger number of event arguments than TD types.",
"This is also another indication that CD types rely on contexts they require more arguments to convey an event.",
"Linguistic/Lexical Insights.",
"Table 6 give typical TD and CD types on ACE 2005 (Please refer to Appendixes for the full set).",
"Intuitively, the TD types appear to be finer-grained and concrete, 8 Most Trigger-Dependent (TD) Types: Divorce (0 . 434) , Hearing (0 . 355) , Fine (0 . 349) , Injure (0 . 308) , Be_Born (0 . 306) , Elect (0 . 305) , Sentence (0 . 304) , Die (0 . 304) 8 Most Context-Dependent (CD) Types: Start_Org (0 . 127) , Pardon (0 . 129) , Nominate (0 . 132) , Extradite (0 . 134) , Acquit (0 . 142) , Merge_Org (0 . 151) , Transfer_Money (0 . 155) , End_Org (0 . 156) Table 6: Typical TD and CD types on ACE 2005.",
"whereas the CD types appear to be coarser-grained and abstract.",
"For example, we may further subdivide a CD type TRANSFER _ MONEY into finer-grained ones like LOAN and PURCHASE .",
"We provide linguistic/lexical insights by comparing the hierarchy levels of TD/CD types on WordNet (Miller, 1992).",
"Accordingly, triggers of TD types are at the lower level of WordNet, with an average of 5.6 hypernyms; yet CD type triggers are at a higher level of WordNet, with 2.3 hypernyms.",
"This find-ing supports our intuition that TD types are more concrete whereas CD types are more abstract.",
"Case Visualization.",
"Figure 7 depicts the saliency map of several cases.",
"Accordingly, event triggers of TD types do usually have large saliency values.",
"For example, case 2) is the instance of DIVORCE with the lowest trigger saliency value, which is still as high as 0.34.",
"In contrast, event triggers of CD types typically have low saliency values.",
"For example, case 4) and 6) show random instances of TRANSFER-MONEY and TRANSPORT , where the trigger saliency values are only 0 .",
"01 .",
"In this study, we analyze the origins of an ED model's skewed performance and introduce a new notion called trigger saliency attribution to quantify the pattern of events.",
"We devise a new training paradigm for ED that can distinguish between trigger-dependent and context-dependent types for 4580 learning, yielding promising results on two benchmarks.",
"We also examine the differences between the two types extensively, and our work may promote future research on this problem.",
"In the future, we would apply our method to other tasks (e.g., relation extraction) where contextual patterns matter.",
"This work is supported by the National Natural Science Foundation of China (No.62106016).",
"This work is also supported by Fundamental Research Funds for the Central Universities (No. 2021RC234), the National Key R&D Program of China (2019YFB1405200), and the Open Projects Program of National Laboratory of Pattern Recognition."
]
| [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"objective",
"result",
"abstain",
"result",
"objective",
"abstain",
"result",
"objective",
"other",
"objective",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"abstain",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"method",
"other",
"other"
]
|
[
"User interest modeling is critical for personalized news recommendation.",
"Existing news recommendation methods usually learn a single user embedding for each user from their previous behaviors to represent their overall interest.",
"However, user interest is usually diverse and multi-grained, which is difficult to be accurately modeled by a single user embedding.",
"In this paper, we propose a news recommendation method with hierarchical user interest modeling, named HieRec .",
"Instead of a single user embedding, in our method each user is represented in a hierarchical interest tree to better capture their diverse and multi-grained interest in news.",
"We use a three-level hierarchy to represent",
"1) overall user interest;",
"2) user interest in coarse-grained topics like sports; and",
"3) user interest in fine-grained topics like football.",
"Moreover, we propose a hierarchical user interest matching framework to match candidate news with different levels of user interest for more accurate user interest targeting.",
"Extensive experiments on two real-world datasets validate our method can effectively improve the performance of user modeling for personalized news recommendation.",
"Recently, massive people are habituated to reading news articles on online news platforms, such as Google News and Microsoft News (Khattar et al., 2018; Das et al., 2007).",
"To help users efficiently obtain their interested news information, personalized news recommendation technique that aims to recommend news according to user interests, is widely used by these platforms (Wu et al., 2020a; Liu et al., 2010; Lin et al., 2014).",
"User interest modeling is a critical step for personalized news recommendation (Wu et al., 2021; Zheng et al., 2018; Wu et al., 2020c).",
"Existing methods usually learn a single representation vector Figure 1: Click and non-click logs of an example user.",
"to model overall user interests from users' clicked news (Okura et al., 2017; Wu et al., 2020b; An et al., 2019).",
"For example, Okura et al. (2017) used a GRU network to model user interests from clicked news.",
"They used the latest hidden state of GRU as the user interest representation.",
"Wu et al. (2019e) used multi-head self-attention network to capture user interests, and used an attentive pooling network to obtain a unified user representation.",
"However, user interest is usually diverse and multi-grained.",
"For example, as shown in Fig. 1, a user may have interest in movies, sports, finance and health at the same time.",
"In addition, for users who are interested in sports, some of them may have general interest in this area, while other users like the example user in Fig. 1 may only have interest in a specific sport like football.",
"However, it is difficult for these methods to accurately model the diverse and multi-grained user interest for news recommendation via a single user embedding.",
"In this paper, we propose a personalized news recommendation approach with hierarchical user interest modeling, named HieRec , which can effectively capture the diverse and multi-grained user interest.",
"Our approach contains three levels of user interest representations to model user interests in different aspects and granularities.",
"The first one is subtopic-level, which contains multiple interest representations to model fine-grained user interests in different news subtopics (e.g., interest in football and golf).",
"They are learned from embeddings of subtopics and the clicked news in the corresponding subtopics.",
"The second one is topic-level, which contains multiple interest representations to capture coarse-grained user interests in major news topics (e.g., interest in sports and finance).",
"They are learned from embeddings of news topics and their subordinate subtopic-level interest representations.",
"The third one is user-level, which contains an interest representation to model overall user interests.",
"It is learned from topic-level interest representations.",
"Besides, we propose a hierarchical user interest matching framework to match candidate news with different levels of interest representations to target user interests more accurately.",
"Extensive experiments on two real-world datasets show that HieRec can effectively improve the accuracy of user interest modeling and news recommendation.",
"Personalized news recommendation is an important intelligent application and is widely studied in recent years (Bansal et al., 2015; Wu et al., 2019c; Qi et al., 2020; Ge et al., 2020).",
"Existing methods usually model news from its content, model user interest from user's clicked news, and recommend candidate news based on their relevance with user interests (Okura et al., 2017).",
"For example, Okura et al. (2017) utilized an auto-encoder to learn news representations from news bodies.",
"They applied a GRU network to capture user interests from the sequence of users' historical clicks and used the last hidden state vector of GRU as user interest representation.",
"Besides, they proposed to model relevance between user interest and candidate news based on the dot product of their representations.",
"Wu et al. (2019a) learned news representations from news titles, bodies, categories, and subcategories based on an attentive multi-view learning framework.",
"They build user interest representation based on the attentive aggregation of clicked news representations.",
"An et al. (2019) used a CNN network to learn news representations from news titles and categories.",
"They applied a GRU network to user's clicked news to build a short-term user interest representation and applied user ID embedding to learn long-term user interest representation.",
"They further learned a unified user interest representation based on the aggregation of shortand long-term user interest representation.",
"Liu et al. (2020) proposed to learn news representations from news titles and entities via a knowledge graph attention network.",
"They also obtained user interest representation from representations of clicked news via an attention network.",
"Besides, all of these three methods adopted the inner product for matching candidate news.",
"Most existing methods learn a single user embedding to represent the overall user interests (Wang et al., 2018; Wu et al., 2019e,b).",
"However, user interests are usually very diverse and multi-grained, which are difficult to be accurately modeled by a single user embedding.",
"Different from these methods, we propose a hierarchical user interest modeling framework to model user interests in different aspects and granularities.",
"In addition, we propose a hierarchical user interest matching framework to understand user interest in candidate news from different interest granularities for more accurate user interest targeting.",
"In this section, we first give a problem formulation of personalized news recommendation.",
"Then we introduce our HieRec method in detail.",
"Given a candidate news n c and a target user u , the goal is calculating an interest score o to measure the interest of this user in the candidate news.",
"Each news n has a title, a topic t and a subtopic s .",
"The title is composed of a text sequence T = [ w 1 , w 2 , ..., w T ] and an entity sequence E = [ e 1 , e 2 , ..., e E ] , where w i and e i respectively denote the i -th word and entity in news title, T and E respectively denote the number of words and entities.",
"We assume the user has M clicked news.",
"In HieRec , we further divide these clicks based on their topics and subtopics for hierarchical user interest modeling.",
"More specifically, we build a clicked topic set { t i | i = 1 , ..., m } from topics of user's clicks, where t i is the i -th clicked topic and m is the number of clicked topics.",
"We can further obtain a clicked subtopic set { s ij | j = 1 , ..., d } subordinate to each clicked topic t i , where s ij is the j -th clicked subtopic subordinate to topic t i and d is the size of the set.",
"Finally, user's clicked news in topic t i and subtopic s ij are divided into the same click group N ij = { n i,jk | k = 1 , ..., l } , where n i,jk denotes the k -th clicked news in this group and l is the number of clicked news in the group.",
"In general, user interest is usually very diverse and multi-grained.",
"For example, according to Fig. 1, Figure 2: Framework of hierarchical user interest modeling in HieRec .",
"the example user has interests in many different aspects at the same time, such as sports, movies, and finance.",
"Besides, for users who are interested in sports, some of them may have general interests in this area and may read news on different kinds of sports, such as basketball, football, golf, and so on.",
"While other users (like the example user in Fig.",
"1) may only have interest in a specific sport like football.",
"Understanding user interest in different aspects and granularities has the potential to model user interests more accurately.",
"Thus, we propose a hierarchical user interest modeling framework, which learns a hierarchical interest tree to capture diverse and multi-grained user interest.",
"As shown in Fig. 2, HieRec represents user interests via a three-level hierarchy.",
"First, we learn multiple subtopic-level interest representations to model fine-grained user interests in different news subtopics (e.g. football and golf).",
"The subtopic-level interest representation for subtopic s ij is learned from N ij that is composed of user's clicked news in subtopic s ij .",
"Since clicked news may have different informativeness for modeling user interest, we adopt a subtopic-level attention network to select informative clicked news for modeling user interest in subtopic s ij : c ij = l (cid:88) k =1 k n i,jk , k = exp( s ( n i,jk )) (cid:80) lp =1 exp( s ( n i,jp )) , (1) where k denotes the attention weight of the k -th clicked news n i,jk in N ij , n i,jk is the representation of news n i,j k (Section. 3.4 introduces how to obtain it) and s ( ) denotes a dense network.",
"Besides, we also adopt a subtopic embedding layer to capture semantic information of different subtopics, from which we can obtain the embedding vector s ij of subtopic s ij .",
"Finally, we learn the subtopic-level user interest representation u si,j based on the combination of c ij and s ij , i.e., u si,j = c ij + s ij .",
"Similarly, we also learn subtopic-level interest representations for other subtopics clicked by the user.",
"Second, we learn multiple topic-level interest representations to model coarse-grained user interests in major news topics (e.g. sports and fi-nance).",
"The topic-level interest representation for a clicked topic t i is learned from subtopic-level interest representations { u si,j | j = 1 , ..., d } of subtopics { s ij | j = 1 , ..., d } subordinate to the topic t i .",
"More specifically, user interests in different subtopics may have different importance for modeling user interest in a specific topic.",
"Besides, the number of clicked news on a subtopic may also reflect its importance for modeling topic-level user interest.",
"Thus, we utilize a topic-level attention network to select important subtopic-level user interest representations to model user interest in topic t i : z i = d (cid:88) j =1 j u s i,j , j = exp( t ( v s i,j )) (cid:80) dk =1 exp( t ( v si,k )) , (2) where v si,j = [ u si,j ; r ij ] , r ij is the embedding vector for the number of clicked news on subtopic s ij , [ ; ] is the concatenation operation, j is the attention weight of u si,j , and t ( ) is a dense network.",
"Besides, we also use a topic embedding layer to model semantic information of different topics and drive the embedding vector t i for topic t i .",
"Finally, we aggregate z i and t i to learn the topic-level user interest representation u ti in topic t i : u ti = z i + t i .",
"Similarly, we also learn topic-level interest representations for other clicked topics.",
"Third, we learn a user-level interest representation u g to model overall user interests.",
"It is learned from topic-level interest representations.",
"Similarly, we adopt a user-level attention network to model relative importance of topic-level user interests to learn user-level interest representation: u g = m (cid:88) i =1 i u ti , i = exp( g ( v t i )) (cid:80) mj =1 exp( g ( v tj )) , (3) where v ti = [ u ti ; r i ] , r i is the embedding vector for the number of user's clicked news on topic t i , i denotes the attention weight of the i -th topic-level interest representation, and g ( ) denotes a dense network for calculating attention scores.",
"Matching between candidate news and user interests at different granularities can provide various clues for user interest targeting.",
"For example, according to Fig. 1, although all of the 3rd, 4th, and 5th news are about sports, the user only clicks the 3rd news probably because of her fine-grained interests in football rather than basketball and golf.",
"This implies that the matching between candidate news and fine-grained user interests is useful for personalized news recommendation.",
"Besides, not all candidate news can match with fine-grained user interests.",
"For instance, a news on subtopic baseball cannot match any fine-grained interests of the example user in Fig. 1.",
"Fortunately, the coarse-grained user interests (i.e., interest in sports) and overall user interests can match with this candidate news.",
"This implies that matching candidate news with coarse-grained user interests and overall user interests is also important.",
"Thus, we propose a hierarchical user interest matching framework, which models user interests in candidate news from different interest granularities.",
"As shown in Fig. 3, it takes candidate news (including its representation n c , topic t c and subtopic s c ) and hierarchical user interest representation as input.",
"First, we match candidate news with overall user interests and calculate a user-level interest score o g based on the relevance between n c and u g : o g = n c u g .",
"Second, topic-level interest representation u tt c models coarse-grained user interests in the topic t c of candidate news.",
"It can provide coarse-grained information to understand user interest in candidate news.",
"Thus, we match topic-level interest representation u tt c with candidate news n c as: o t = n c u tt c .",
"Besides, we can infer users may be more interested in topics that they have clicked more.",
"Thus, we weights o t based on the ratio w t c of topic t c in historical clicked news and obtained topic-level interest score o t : o t = o t w t c .",
"Besides, if the candidate news does not belong to any user's clicked topics, we set o t as zero directly.",
"Third, subtopic-level interest representation u ss c models fine-grained user interest in the subtopic s c of candidate news and can be used to capture fine-grained user interests in candidate news.",
"Thus, we match subtopic-level interest representation u ss c and candidate news n c as: o s = n c u ss c Similarly, we weights o s based on the ratio w s c of subtopic s c in user's clicked news and obtain the subtopic-level interest score: o s = o s w s c .",
"Finally, interest scores of three different levels are aggregated to an overall interest score o : o = s o s + t o t + (1 s t ) o g , (4) where t , s , R + are hyper-parameters for controlling the relative importance of interest scores of different levels.",
"Fig. 4, we first use a text encoder to model news texts.",
"It first applies a word embedding layer to enrich semantic information of the model.",
"Next, it adopts a text self-attention network (Vaswani et al., 2017) to learn word representations from contexts of news texts.",
"Then, it uses a text attention network to learn text representation n t by aggregating word representations.",
"Besides texts, knowledge graphs can also provide rich information for understanding news content via entities in news (Wang et al., 2018).",
"Thus, we apply an entity encoder to learn entity representation of news.",
"We first use an entity embedding layer to incorporate information from knowledge graphs into our model.",
"We further apply an entity self-attention network to capture relatedness among entities.",
"Next, we utilize an entity attention network to learn entity representation n e of news by aggregating entities.",
"Finally, we build representation n of news as: n = W t n t + W e n e , where W t and W e are parameters.",
"Following (Wu et al., 2019d), we utilize the NCE loss for model optimization.",
"Given a positive sample n + i (a clicked news) in the training dataset O , we randomly select K negative samples [ n 1 i , ..., n K i ] (non-clicked news) for it from the same news impression displayed to the user u .",
"The NCE loss L requires the positive sample should be assigned a higher interest score o + i than other negative samples [ o 1 i , ..., o Ki ] and is formulated as: L = |O| (cid:88) i =1 log exp( o + i ) exp( o + i ) + (cid:80) Kj =1 exp( o ji ) .",
"We conduct extensive experiments on two real-world datasets to evaluate the effectiveness of Hi-#",
"eRec .",
"The first one is the public MIND dataset (Wu et al., 2020d) 1 .",
"It is constructed by user behavior data collected from Microsoft News from October 12 to November 22, 2019 (six weeks), where user data in the first four weeks was used to construct users' reading history, user data in the penultimate week was used for model training and user data in the last week was used for evaluation.",
"Besides, MIND contains off-the-shelf topic and subtopic label for each news.",
"The second one (named Feeds ) is constructed by user behavior data sampled from a commercial news feeds app in Microsoft from January 23 to April 01, 2020 (13 weeks).",
"We randomly sample 100,000 and 10,000 impressions from the first ten weeks to construct training and validation set, and 100,000 impressions from the last three weeks to construct test data.",
"Since Feeds only contains topic label of news, we implement a simpli-fied version of HieRec with only userand topic-level interest representations on Feeds .",
"Besides, following Wu et al. (2020d), users in Feeds were anonymized via hash algorithms and de-linked from the production system to protect user privacy.",
"Detailed information is summarized in Table 1.",
"Next, we introduce experimental settings and hyper-parameters of HieRec .",
"We use the first 30 words and 5 entities of news titles and users' recent 50 clicked news in experiments.",
"We adopt pre-trained glove (Pennington et al., 2014) word embeddings and TransE entity embeddings (Bor-des et al., 2013) for initialization.",
"In HieRec , the word and entity self-attention network output 400-and 100-dimensional vectors, respectively.",
"Besides, the unified news representation is 400-dimensional.",
"Attention networks (i.e., s ( ) , t ( ) , and g ( ) ) are implemented by single-layer dense networks.",
"Besides, dimensions of topic and subtopic embeddings are 400, both of which are randomly initialized and fine-tuned.",
"The hyper-parameters for combining different interest scores, i.e. t and s , are set to 0.15 and 0.7 respectively.",
"Moreover, we utilize dropout technique (Srivastava et al., 2014) and Adam optimizer (Kingma and Ba, 2015) for training.",
"HieRec is trained for 5 epochs with 0.0001 1 We use the small version of MIND for quick experiments.",
"This dataset is at https://msnews.github.io/index.html MIND Feeds AUC MRR nDCG@5 nDCG@10 AUC MRR nDCG@5 nDCG@10 EBNR 61.62 0.15 28.07 0.18 30.55 0.22 37.07 0.21 63.48 0.32 28.01 0.18 32.05 0.23 37.64 0.22 DKN 63.99 0.23 28.95 0.08 31.73 0.14 38.38 0.17 62.94 0.22 28.05 0.26 32.15 0.34 37.68 0.36 DAN 64.68 0.13 29.78 0.13 32.63 0.21 39.27 0.15 62.67 0.49 27.75 0.34 31.74 0.44 37.42 0.43 NAML 64.30 0.30 29.81 0.17 32.64 0.24 39.11 0.20 64.48 0.24 28.99 0.13 33.37 0.16 38.90 0.18 NPA 64.28 0.53 29.64 0.33 32.28 0.37 38.93 0.39 64.02 0.63 28.71 0.39 33.01 0.50 38.55 0.47 LSTUR 65.68 0.35 30.44 0.39 33.49 0.45 39.95 0.39 65.01 0.13 29.28 0.06 33.74 0.09 39.16 0.11 NRMS 65.43 0.15 30.74 0.18 33.13 0.17 39.66 0.15 65.27 0.19 29.40 0.15 33.89 0.16 39.34 0.15 KRED 65.89 0.31 30.80 0.32 33.78 0.27 40.23 0.26 65.51 0.11 29.57 0.06 34.04 0.06 39.60 0.05 GNewsRec 65.91 0.21 30.50 0.21 33.56 0.21 40.13 0.18 65.23 0.16 29.36 0.11 33.87 0.13 39.44 0.12 FIM 64.65 0.14 29.70 0.17 32.51 0.25 39.30 0.16 65.41 0.23 29.57 0.18 34.08 0.25 39.56 0.23 HieRec 67.95 0.14 32.87 0.08 36.36 0.07 42.53 0.10 66.23 0.10 29.82 0.11 34.42 0.13 39.94 0.13 Table 2: Performance of different methods.",
"learning rate.",
"All hyper-parameters of HieRec and baseline methods are manually tuned on the validation set.",
"2 Following Wu et al. (2019e), we use four ranking metrics, i.e., AUC, MRR, nDCG@5, and nDCG@10, for performance evaluation.",
"We first introduce the baseline methods we compared in experiments: (1) EBNR (Okura et al., 2017): learning user representations from the sequence user's clicked news via a GRU network.",
"(2) DKN (Wang et al., 2018): using a candidate-aware attention network to learn user representations.",
"(3) DAN (Zhu et al., 2019): using an attentive LSTM network to learn user representations.",
"(4) NAML (Wu et al., 2019a): learning user representations by attentively aggregating user's clicked news.",
"(5) NPA (Wu et al., 2019b): learning news and user representations via personalized attention networks.",
"(6) LSTUR (An et al., 2019): modeling short-term user interests from user's clicked news via a GRU network and long-term user interests from user-news interactions via user ID embeddings.",
"(7) NRMS (Wu et al., 2019e): applying multi-head self-attention networks to learn news representations and user representations.",
"(8) KRED (Liu et al., 2020): proposing a knowledge graph attention network to learn news representations from texts and entities of news titles.",
"(9) GNewsRec (Hu et al., 2020): modeling short-term user interests from clicked news sequences via an attentive GRU network and long-term user interests from user-news click graph via a graph neural network.",
"(10)",
"FIM (Wang et al., 2020): modeling user interests in candidate news from semantic relevance of user's clicked news and candidate news 2 https://github.com/JulySinceAndrew/HieRec via a 3-D CNN network.",
"Each experiment is repeated 5 times.",
"The average results and standard deviations are listed in Table 2, from which we have several observations.",
"First, HieRec significantly outperforms other baseline methods which learn a single user embedding to model overall user interests, such as NRMS , NPA , and NAML .",
"This is because user interests are usually diverse and multi-grained.",
"However, it is difficult for a single representation vector to model user interests in different aspects and granularities, which may be suboptimal for personalized news recommendation.",
"Different from these methods, we propose a hierarchical user interest modeling framework, which can represent diverse and multi-grained user interests via a three-level hierarchy.",
"Besides, we also propose a hierarchical user interest matching framework to match user interest with candidate news from different granularities, which can better target user interests.",
"Second, HieRec can significantly outperform FIM , which directly model user interests in candidate news from the semantic relevance of candidate news and user's clicked news.",
"This may be because FIM did not consider user interests from different granularities for matching candidate news.",
"To fairly compare different methods with HieRec on the performance of interest modeling, we compare them based on the same news modeling method (the news modeling method introduced in Section 3.4).",
"Experimental results are summarized in Table 3 and we only show experimental results on MIND in the following sections.",
"Table 3 shows that HieRec significantly outperforms existing interest modeling methods.",
"This is because user in-AUC MRR nDCG@5 nDCG@10 NAML 65.81 0.27 30.89 0.21 34.16 0.30 40.55 0.24 DKN 66.03 0.27 31.17 0.25 34.47 0.33 40.85 0.29 EBNR 65.90 0.27 30.86 0.21 34.14 0.30 40.58 0.24 LSTUR 66.02 0.14 31.16 0.15 34.37 0.15 40.83 0.12 GNewsRec 66.16 0.14 31.19 0.05 34.40 0.09 40.82 0.10 NRMS 66.04 0.21 31.20 0.19 34.53 0.22 40.89 0.18 HieRec 67.95 0.14 32.87 0.08 36.36 0.07 42.53 0.10 Table 3: Effect of HieRec in user interest modeling.",
"terests are usually diverse and multi-grained.",
"It is difficult for existing methods with single user embedding to capture user interests in different aspects and granularities.",
"Different from these methods, HieRec learns a three-level hierarchy to represent diverse and multi-grained user interests.",
"We evaluate the effectiveness of user interest representations of different levels by removing the corresponding interest matching scores from Eq.",
"4.",
"Results are shown in Fig. 5 and we have several findings.",
"First, HieRec with userand topicor subtopic-level interest representation significantly outperforms HieRec with only user-level interest representation.",
"This is because matching candidate news with fine-grained user interests has the potential to improve the accuracy of news recommendation.",
"Topicand subtopic-level interest representation can model finer-grained user interests than the user-level interest representation.",
"Thus, they can provide additional information to match candidate news than user-level interest representation.",
"Second, HieRec with interest representations of three levels also outperforms HieRec with userand topicor subtopic-level interest representation.",
"This may be because matching candidate news with user interests of different granularities can help perform more accurate interest matching.",
"Since topicand subtopic-level interest representa-Figure 6: Recall rates of different methods.",
"tion capture user interests at different granularities, incorporating both of them can further improve the recommendation performance.",
"Next, we compare different user interest modeling methods on the news recall task.",
"3 Since methods that model user interests with candidate news information, e.g., DKN and GNewsRec , cannot be applied in the news recall task due to efficiency issues (Pal et al., 2020), we do not compare them in experiments.",
"We evaluate the accuracy and diversity of top K recalled candidate news.",
"Following existing works (Pal et al., 2020; Chen et al., 2018), the former is measured by recall rates, and the latter is measured by intra-list average distance (ILAD).",
"For HieRec , we employ subtopic-level interest representations to perform multi-channel news recall and equally integrate news recalled by different interest channels.",
"Experimental results are summarized in Fig. 6 and Fig. 7, which show that HieRec significantly outperforms other methods in terms of both recall rates and diversity.",
"This is because user interests are usually very diverse and multi-3 News recall task aims to recall a small number of candidate news from a large news pool according to user interests.",
"grained, which are difficult to be comprehensively modeled by a single representation vector.",
"Different from these methods, HieRec hierarchically represents user interests and can better model user interests in different aspects and granularities.",
"Besides, this also implies that compared to existing personalized methods, HieRec can help users explore more diverse information and alleviate filter bubble issues (Nguyen et al., 2014) to some extent.",
"As shown in Fig. 9, we analyze the influence of two important hyper-parameters of HieRec (i.e., t , s ) used for combining different levels of interest scores.",
"First, when t is fixed, performance of HieRec first gets better with the increase of s .",
"This is because s controls the importance of o s .",
"Bedsides, o s measures the relevance of candidate news and fine-grained user interests, which can provide accurate information to understand user interests in the candidate news.",
"When s is too small, HieRec cannot effectively exploit information in o s .",
"Second, large value of s also hurts the performance of HieRec .",
"This is because when s is too large, HieRec cannot effectively exploit user-and topic-level matching scores to recommend candidate news.",
"However, matching candidate news with both overall and coarse-grained user interests is important for personalized news recommendation.",
"Thus, a moderate s , i.e., 0.65 or 0.7, is suitable for HieRec .",
"Third, when s is fixed, the performance of HieRec also first gets better with the increase of t and gets worse when t is too large.",
"This is because HieRec cannot effectively utilize information of o t when t is too small.",
"Besides, HieRec cannot effectively utilize information of o g and o s when t is too large.",
"Thus, a moderate t , i.e., 0.12 or 0.15, is suitable for HieRec .",
"We conduct a case study to show the superior performance of HieRec .",
"We compare HieRec with GNewsRec since GNewsRec achieves best AUC score in Table 2 among baseline methods.",
"In Fig. 8, we show the top 5 news recommended by HieRec and GNewsRec in a randomly sampled impression.",
"Besides, we also show the historical clicks of the target user in this impression.",
"We can find that the top 5 news recommended by GNewsRec is dominated by news on politics, which cannot comprehensively cover different user interests.",
"This is because user interests are usually diverse and multi-grained.",
"However, it is difficult for GNewsRec , which learns a single representation to model overall user interests, to effectively capture user interests in different aspects and granularities.",
"Different from GNewsRec , the top 5 news recommended by HieRec are diverse and can cover topics that the user may be interested in.",
"Besides, the user clicked a news recommended by HieRec .",
"This is because HieRec learns a hierarchical user interest representation which can effectively model user interests in different aspects and granularities.",
"With the help of the hierarchical user interest representation, HieRec can match candidate news with user interests in different aspects and granularities.",
"In this paper, we propose a personalized news recommendation method named HieRec for hierarchical user interest modeling, which can effectively model diverse and multi-grained user interests.",
"HieRec learns a three-level hierarchy to represent user interest in different aspects and granularity.",
"First, we learn multiple subtopic-level interest representations to model fine-grained user interests in different news subtopics.",
"Second, we learn multiple topic-level interest representations to model coarse-grained user interests in several major news topics.",
"Third, we learn a user-level interest representation to model overall user interests.",
"Besides, we propose a hierarchical user interest matching framework to match candidate news with user interest from different granularity for more accurate user interest targeting.",
"Extensive experiments on two real-world datasets show the effectiveness of HieRec in user interest modeling.",
"This work was supported by the National Natural Science Foundation of China under Grant numbers U1936208, U1705261, U1936216, and U1836204.",
"We thank Tao Di and Wei He for their great comments and suggestions.",
"In this paper, we present HieRec to model diverse and multi-grained user interests.",
"HieRec can be applied to online news platforms for personalized news recommendation, which can help platforms improve the user experience and help users find news they are interested in.",
"Although HieRec can bring many benefits, it may also have several potential risks, which we will discuss in detail.",
"Accuracy: Although HieRec outperforms baseline methods in terms of recommendation accuracy (Table 2), it may still produce some inaccurate recommendations that users are not interested in.",
"Users usually just ignore them and will not click them to read.",
"However, if inaccurate results are frequent, the user experience may be harmed and users may use the online news service less in the future, or turn to other online news platforms.",
"Privacy: In HieRec, we rely on user behavior data centrally stored on the news platform for model training and online services.",
"User behavior data is usually privacy-sensitive, and its centralized storage may lead to privacy concerns and risks.",
"In the future, we will explore training and deploying HieRec in a more privacy-preserving way, based on effective privacy protection techniques such as Federated Learning (Qi et al., 2020).",
"Diversity: Filter bubbles and echo chambers are common problems for many recommender systems (Nguyen et al., 2014) and harm the user experience.",
"Improving recommendation diversity has the potential to alleviate the problem of filter bubbles and echo chambers.",
"Through the experiments in Fig. 7, we find that HieRec outperforms many news recommendation methods in terms of recommendation diversity.",
"Thus, compared with existing methods, HieRec has the potential to alleviate the filter bubble problem to some extent.",
"Besides, in order to further improve recommendation diversity, HieRec can be combined with existing diversification methods such as DPP (Chen et al., 2018).",
"Fake News and Clickbait: There may be some fake news and clickbait on some online platforms.",
"To mitigate the negative social impact and the harm to user experience caused by fake news and clickbait, online news platforms can use existing fake news detection and clickbait detection techniques such as (Kumar et al., 2018; Shu et al., 2019) to filter these kinds of news before applying HieRec for personalized recommendation.",
"Fairness: Like many other recommender systems, HieRec relies on user behavior data for model training and online service.",
"The bias in user behavior data may lead to some specific groups of users not being able to receive news information with sufficient accuracy and diversity, and the recommendation results may be more suitable for some majority populations.",
"Recently, some fairness-aware recommendation methods like FairRec (Wu et al., 2021) have been proposed to eliminate bias and unfairness in recommender systems.",
"We can combine HieRec with these methods to improve the fairness of the recommendation results and mitigate the harms for marginalized populations.",
"Misuse: The proposed HieRec method works in a data-driven way.",
"It trains the model from user logs and makes personalized recommendations to users based on their interests inferred from their clicked news.",
"However, in some extreme cases, the recommendation results may be maliciously manipulated to influence users.",
"To avoid the potential misuse, the usage of HieRec should comply with the regulations and laws, and intentional manipulation should be prohibited."
]
| [
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
]
|
[
"We introduce AVA, an automatic evaluation approach for Question Answering, which given a set of questions associated with Gold Standard answers (references), can estimate system Accuracy.",
"AVA uses Transformer-based language models to encode question, answer, and reference texts.",
"This allows for effectively assessing answer correctness using similarity between the reference and an automatic answer, biased towards the question semantics.",
"To design, train, and test AVA, we built multiple large training, development, and test sets on public and industrial benchmarks.",
"Our innovative solutions achieve up to 74.7% F1 score in predicting human judgment for single answers.",
"Additionally, AVA can be used to evaluate the overall system Accuracy with an error lower than 7% at 95% of confidence when measured on several QA systems.",
"Accuracy evaluation is essential both to guide system development as well as to estimate its quality, which is important for researchers, developers, and users.",
"This is often conducted using benchmark datasets containing a data sample, possibly representative of the target data distribution, provided with Gold Standard (GS) labels (typically produced with a human annotation process).",
"The evaluation is done by comparing the system output with the expected labels using some metrics.",
"This approach falls short when the system output spans a large, possibly infinite set of correct items.",
"For example, in retrieval-based Question Answering (QA) systems, a correct answer can be any string in the referent text database.",
"For example, for the question, When did Marlins start?",
", an answer could be: The Miami Marlins began play in the 1993 season as the Florida Marlins ; They started in 1993 ; They firstly played in 1993 ; In 1993 ; or any possible natural language text conveying the information that they started in 1993.",
"As annotating all possible system outputs is infeasible, the standard approach is to manually re-evaluate the new output of the system.",
"This dramatically limits the experimentation velocity while significantly increasing the development costs.",
"A viable solution for specific NLP tasks such as Machine Translation (MT), automatically estimates an evaluation score between the system and the reference answers, which correlates with human judgment, e.g., the BLEU score is one popular measure (Papineni et al., 2002).",
"Such methods cannot be applied to a standard QA setting, since QA systems, e.g., those developed for TREC-QA track (Voorhees and Tice, 1999), have the purpose to provide correct answers and are evaluated with Accuracy, i.e., the percentage of correct answers.",
"Segment overlapping metrics such as BLEU, METEOR, or ROUGE do not provide a binary outcome, i.e., correct or incorrect (as this is not the aim of MT evaluation).",
"Hypothetically speaking, we could apply a threshold to their score to obtain a binary outcome.",
"However, it would not be sufficient as the correctness of an answer loosely depends on the match between the reference and candidate answers.",
"Two answers can be correct or incorrect independently of their overlap with the reference.",
"For example, for the question, What percentage of water in the body?",
", associated with a reference, The percentage of water in the body is 60% , a correct answer is Most of the human body is water, with an average of roughly 60% .",
"In contrast, an incorrect answer, still very similar to the reference, could be: The percentage of water in the body is variable .",
"The MT metrics above would find the similarity of the reference with the incorrect answer higher than the one of the references with the correct answer.",
"Even a powerful model such as BERTScore (Zhang et al., 2020) would not provide a higher score to the correct answer since it is an unsupervised approach, not trained for this task.",
"It should also be noted that simply training models for matching the answer candidate with the reference will again not work.",
"The question semantics would radically influence the correctness of the answer.",
"That is, match(t, r | q_1) can be true while match(t, r | q_2) is false, where t and r are a pair of answer candidate and reference, and q_1 and q_2 are two different questions.",
"In this paper, we study the design of models for measuring the Accuracy of QA systems, i.e., percentage of correct answers over a test set (to our knowledge this is the first successful and thorough study).",
"In particular, we",
"(i) build several baselines based on pre-trained Transformer models (Devlin et al., 2019; Liu et al., 2019) to encode the triple, question q , candidate t , and reference r , in different ways; and",
"(ii) propose a new attention mechanism, peer attention, to model the interaction between t and r , given the semantic bias of q .",
"To develop and test our models, we created",
"(i) a dataset, Web-based Question Answering (WQA), for training and testing AVA, the point-wise estimation of QA system output, i.e., the evaluation of whether an answer is correct, given a GS answer; and",
"(ii) a System Dataset (SD) constituted by a set of outputs from several QA systems, for which AVA estimates their Accuracy.",
"The results show a high F1 for point-wise models, up to 74.7%.",
"AVA can almost always rank systems in terms of Accuracy as manual annotation does.",
"Finally, the Root Mean Square Error (RMSE) with respect to human evaluation depends on the datasets, ranging from 2% to 9.5%, with a Std. Dev. lower than 5%.",
"Automatic evaluation has been an interesting research area for decades (Papineni et al., 2002; Magnini et al., 2002).",
"There are two typical strategies to design an automatic evaluator: supervised and unsupervised.",
"In MT research, for example, BLEU (Papineni et al., 2002) has been a very popular unsupervised evaluation method for the task.",
"Other supervised methods have been recently proposed, most notably (Ma et al., 2019).",
"Neural-based automatic evaluators for dialog systems were studied in (Ghazarian et al., 2019; Lowe et al., 2017; Tao et al., 2017; Kannan and Vinyals, 2017).",
"Automatic evaluation has also been explored for open-domain QA systems (Leidner and Callison-Burch, 2003; Lin and Demner-Fushman, 2006; Shah and Pomerantz, 2010; Gunawardena et al., 2015).",
"However, little progress has been made in the past two decades towards obtaining a standard method.",
"Automating QA evaluation is still an open problem, and there is no recent work supporting it.",
"As mentioned in the introduction, unsupervised MT metrics, e.g., the BLEU score or BERTScore, are neither a solution nor a reasonable baseline for automatic QA evaluation.",
"They could be used as features for our models, but we designed several supervised approaches based on pre-trained Transformer models, which subsume these MT features.",
"A remotely related research effort for automating answer evaluation concerns student essays.",
"Short answer grading (SAG), or short answer scoring, involves the automatic grading of students' answers, typically written in free text, for a given prompt or question (Mohler et al., 2011).",
"This task has been studied in (Mitchell et al., 2002; Pulman and Sukkarieh, 2005) for educational applications.",
"Neural-based systems have also been recently proposed to improve the models (Riordan et al., 2017; Wang et al., 2019).",
"Despite the conceptual similarity, i.e., evaluating an answer, the problem setting for the task is fundamentally different.",
"Specifically, SAG is prompt-centric; thus, the learning objective is to accurately score unseen answer variants for a particular question by building models trained on previously known variants (Wang et al., 2019).",
"Besides, the answers, while written in free text, are not typically complete sentences.",
"Therefore, the SAG design aims to capture sufficient content covered in the reference responses for a question.",
"On the contrary, AVA is designed to operate in an open-domain QA setting, where both the question and answer are arbitrary input and complete sentences.",
"We consider retrieval-based QA systems, which are mainly constituted by",
"(i) a search engine, retrieving top-k documents related to the questions, and",
"(ii) an Answer Sentence Selection (AS2) model, which reranks passages/sentences extracted from the documents.",
"We can automatically evaluate the",
"(i) Accuracy of the QA system, which is the percentage of correct top sentences, and",
"(ii) complex measures, such as MAP and MRR, which quantify the quality of the rank produced by the AS2 model.",
"q : What is the population of California?",
"r : With slightly more than 39 million people (according to 2016 estimates), California is the nation's most populous state; its population is almost one and a half times that of second-place Texas (28 million).",
"s : 39 million",
"t : The resident population of California has been steadily increasing over the past few decades and has increased to 39.56 million people in 2018.",
"Let q be a question and T_q = {t_1, ..., t_n} a set of answer sentence candidates for q; we define R as a ranking function, which orders the candidates in T_q according to a score, p(q, t_i), indicating the probability that t_i is a correct answer for q.",
"Popular methods modeling R include Compare-Aggregate (Yoon et al., 2019), inter-weighted alignment networks (Shen et al., 2017), and Transformers (Garg et al., 2020).",
"The AVA performance can be measured in two ways:",
"(i) evaluation of the single answers provided by the target system (point-wise evaluation); and",
"(ii) the aggregated evaluation of a set of questions (system-wise evaluation).",
"We define the former as a function A(q, r, t_i) → {0, 1}, where r is a reference answer (from the GS) and the output is simply a correct/incorrect label.",
"Table 1 shows an example question associated with a reference, a system answer, and a short answer s (the latter can be very effective, but it adds an additional annotation cost, thus we limit its use to the baseline model).",
"A can be applied to compute the final Accuracy of a system using an aggregator function: we simply assume the point-wise AVA predictions as they were the GS.",
"For example, in the case of Accuracy, we simply average the AVA predictions, i.e., $\frac{1}{|Q|}\sum_{q \in Q} A(q, r, t[, s])$, where s is a short GS answer (e.g., as used in machine reading).",
"It is an optional input, which we only use for building a linear model baseline, described in Section 5.",
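A rough sketch of the aggregator just described follows; `ava_score` is a placeholder for the point-wise model (not a name from the paper), and the 0.5 decision threshold is an assumption.

```python
from typing import Callable, List, Tuple

def system_accuracy(examples: List[Tuple[str, str, str]],
                    ava_score: Callable[[str, str, str], float],
                    threshold: float = 0.5) -> float:
    """Average the thresholded point-wise AVA decisions over the
    question set, treating them as if they were GS labels."""
    decisions = [ava_score(q, r, t) > threshold for q, r, t in examples]
    return sum(decisions) / len(decisions)
```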
"To learn and test our models, we needed to build AVA datasets.",
"The interesting aspect is that we can automatically derive them from standard AS2 corpora if they contain questions with multiple correct answers.",
"For this purpose, we created our dataset WQA for AS2 and transformed it into AVA-WQA.",
"We describe our approach to transforming AS2 to AVA datasets in this section.",
"Finally, we build another benchmarking dataset for AVA constituted by a set of QA systems and their output on target test sets.",
"This is used to measure the end-to-end system performance (system-wise evaluation).",
"These datasets consist of a set of questions Q and, for each q ∈ Q, a set of candidates T_q = {t_1, ..., t_n}, comprising both correct answers C_q and incorrect answers C̄_q, with T_q = C_q ∪ C̄_q.",
"WQA: The Web-based Question Answering is a dataset built by Alexa AI as part of the effort to improve understanding and benchmarking in QA systems.",
"The creation process includes the following steps:",
"(i) given a set of questions we collected from the web, a search engine is used to retrieve up to 1,000 web pages from an index containing hundreds of millions of pages.",
"(ii) From the retrieved documents, all candidate sentences are extracted and ranked using AS2 models from (Garg et al., 2020).",
"Finally,",
"(iii) top candidates for each question are manually assessed as correct or incorrect by human judges.",
"This allowed us to obtain a richer variety of answers from multiple sources with a higher average number of answers, as shown in Table 2.",
"We use AS2 datasets as follows: firstly, we only keep questions with at least two correct answers, which is critical for building positive and negative examples.",
"Secondly, given a triple ⟨q, t_i, t_j⟩, where t_i and t_j are two candidates, we build AVA-Pos = ⟨q, (t_i, t_j)⟩ with (t_i, t_j) ∈ C_q × C_q and t_i ≠ t_j, and AVA-Neg = ⟨q, (t_i, t_j)⟩ with (t_i, t_j) ∈ C_q × C̄_q; we create AVA-WQA from WQA in this way.",
"The statistics are shown in Table 2.",
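The construction just described can be sketched as follows. The function below is illustrative (the names are ours, not the paper's) and assumes per-question lists of annotated correct and incorrect candidates.

```python
from itertools import product

def build_ava_examples(q: str, correct: list, incorrect: list):
    """Build AVA-Pos / AVA-Neg tuples (q, reference, candidate, label)
    from an AS2-annotated question with at least two correct answers."""
    if len(correct) < 2:
        return [], []
    pos = [(q, r, t, 1) for r, t in product(correct, correct) if r != t]
    neg = [(q, r, t, 0) for r, t in product(correct, incorrect)]
    return pos, neg
```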
"To measure AVA with respect to the overall system Accuracy, we need to have a sample of systems and their output on different test sets.",
"We created a dataset with candidate answers collected from eight systems answering a set of 1,340 questions.",
"The questions were again sampled from the Web.",
"We only considered information questions.",
"The systems differ from each other in multiple ways including:",
"(i) modeling : Compare-Aggregate (CNN-based) and different Transformers-based architectures with different hyper-parameter settings;",
"(ii) training : the systems are trained on different resources;",
"(iii) candidates : the pool of candidates is collected and filtered differently and in different numbers; and",
"(iv) retrieval : different search engines, diverse indexed data sources, and different retrieval settings.",
"This system variability provides high generality of our AVA results.",
"The central intuition for the design of an automatic QA evaluator is",
"(i) capturing the same information a standard QA system uses, while",
"(ii) exploiting the semantic similarity between t and r , biased by q .",
"We build three types of models:",
"(i) a linear classifier, which is more interpretable and can help the model design,",
"(ii) Transformer-based methods, based on powerful language models, and",
"(iii) our Peer Attention approach to better model the interaction among q , t , and r .",
"Given an input example (q, r, s, t), our classifier uses the following similarity features: x_1 = is-included(s, t), x_2 = sim-text(r, t), x_3 = sim-text(r, q), and x_4 = sim-text(q, t), where is-included applied to s and t is a binary feature testing whether t includes s, sim-text is a sort of Jaccard similarity defined as $\mathrm{sim\text{-}text}(s_i, s_j) = \frac{2\,|\mathrm{tok}(s_i) \cap \mathrm{tok}(s_j)|}{|\mathrm{tok}(s_i)| + |\mathrm{tok}(s_j)|}$, and tok(s) is a function that splits s into tokens.",
"Let x = f(q, r, s, t) = (x_1, x_2, x_3, x_4) be a similarity feature vector describing our evaluation tuple, and let l be a binary label indicating whether t answers q or not.",
"We train w on a dataset D = {(x_i, l_i)}, i = 1, ..., |D|, using SVM.",
"We compute the point-wise evaluation of t as the test $x_i \cdot w > \theta$, where $\theta$ is a threshold trading off Precision for Recall, as in standard classification approaches.",
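A compact sketch of this baseline follows. It is an approximation under stated assumptions: tok is taken to be whitespace tokenization over lowercased text (the paper does not specify the tokenizer), and is-included is implemented as a substring test.

```python
from sklearn.svm import SVC  # probability=True enables Platt scaling

def tok(text: str) -> set:
    # Assumption: simple whitespace tokenization stands in for the
    # unspecified tok(s) function.
    return set(text.lower().split())

def sim_text(a: str, b: str) -> float:
    """2|tok(a) ∩ tok(b)| / (|tok(a)| + |tok(b)|), as defined above."""
    ta, tb = tok(a), tok(b)
    return 2 * len(ta & tb) / (len(ta) + len(tb)) if (ta or tb) else 0.0

def features(q: str, r: str, s: str, t: str) -> list:
    return [float(s in t),   # x1: is-included(s, t), substring test
            sim_text(r, t),  # x2
            sim_text(r, q),  # x3
            sim_text(q, t)]  # x4

clf = SVC(kernel="linear", probability=True)
# clf.fit([features(q, r, s, t) for ...], labels)
# predict "correct" iff the calibrated score exceeds the threshold theta
```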
"Transformer-based architectures have delivered powerful language models, which can capture complex similarity patterns.",
"Thus, they are suitable methods to improve our basic approach described in the previous section.",
"Following the linear classifier modeling, we propose three different ways to exploit the relations among the members of the tuple ( q, r, s, t ) .",
"Let B be a pre-trained language model, e.g., the recently proposed BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), XLNet (Yang et al., 2019), or ALBERT (Lan et al., 2020).",
"We use B to compute the embedding representation of a tuple: $B(a, a') \rightarrow \mathbf{x} \in \mathbb{R}^d$, where (a, a') is a short text pair, x is the output representation of the pair, and d is the dimension of the output representation.",
"We use a standard feedforward network, i.e., $A(\mathbf{x}) = W^{\top}\mathbf{x} + b$, to implement the classification layer, deciding whether an answer is correct, where W and b are parameters we learn by fine-tuning the model on the AVA datasets.",
"We describe the following different designs for A .",
"We build a language model representation for pairs of members of the tuple x = (q, r, t) by simply inputting them to the Transformer models B in the standard sentence-pair fashion.",
"We consider four different configurations of A_0, one for each of the following pairs: (q, r), (q, t), (r, t), and one for the triplet (q, r, t), modeled as the concatenation of the previous three representations.",
"The representation for each pair is produced by a different and independent Transformer instance, i.e., B_p.",
"More formally, we have the following three models: A_0(B_p), p ∈ P_0, where P_0 = {(q, r), (q, t), (r, t)}.",
"Additionally, we design a model over (q, r, t) with A_0(⊕_{p ∈ P_0} B_p), where ⊕ means concatenation of the representations.",
"We do not use the short answer, s , as its contribution is minimal when using powerful Transformer-based models.",
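A hedged sketch of the concatenated-pairs design A_0(⊕ B_p) follows, using the HuggingFace Transformers and PyTorch APIs; this is our reconstruction of the described architecture, not the authors' released code, and the example texts are made up.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class A0Concat(nn.Module):
    """One RoBERTa instance per pair in P0 = {(q,r), (q,t), (r,t)};
    the three [CLS] vectors are concatenated and classified."""
    def __init__(self, name: str = "roberta-base"):
        super().__init__()
        self.encoders = nn.ModuleList(
            AutoModel.from_pretrained(name) for _ in range(3))
        hidden = self.encoders[0].config.hidden_size
        self.classifier = nn.Linear(3 * hidden, 2)

    def forward(self, pair_batches):
        # pair_batches: three tokenized batches, one per pair type
        cls_vecs = [enc(**batch).last_hidden_state[:, 0]
                    for enc, batch in zip(self.encoders, pair_batches)]
        return self.classifier(torch.cat(cls_vecs, dim=-1))

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
q, r = "who wrote Hamlet?", "Hamlet was written by Shakespeare."
t = "Shakespeare wrote Hamlet."
batches = [tokenizer(a, b, return_tensors="pt", truncation=True,
                     max_length=128) for a, b in [(q, r), (q, t), (r, t)]]
logits = A0Concat()(batches)  # shape: (1, 2)
```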
"The methods above are limited to pair representations.",
"We improve them by designing B models that can capture pattern dependencies across q , r and t .",
"To achieve this, we concatenate pairs of the three pieces of text.",
"We indicate this string concatenation with the ∘ operator.",
"Specifically, we consider P_1 = {(q, r ∘ t), (r, q ∘ t), (t, q ∘ r)} and propose the following A_1.",
"As before, we have the individual models A_1(B_p), p ∈ P_1, as well as the combined model A_1(⊕_{p ∈ P_1} B_p), where, again, each B_p is a different instance and they are fine-tuned together.",
"Our previous designs instantiate different B for each pair; thus, they learn the feature representations of the target pair and the relations between its members during the fine-tuning process.",
"This individual optimization limits the modeling of patterns across the representations of different pairs as there is no attention mechanism between the B instances: the combination of features only happens in the last classification layer.",
"We propose Peer Attention to improve feature connections between different B instances.",
"The idea, similar to the encoder-decoder setting in Transformer-based models (Vaswani et al., 2017), is to introduce an additional decoding step for each pair.",
"That is, we use another Transformer instance to decode the output from the previous instance.",
"Figure 1 depicts our proposed setting for learning the representations of two different pairs: a (e.g., (q, t)) and b (e.g., (q, r)).",
"The approaches from the previous section would learn two Transformer instances, B a and B b , with one pass.",
"Our Peer Attention, instead, operates in two steps, using four instances, B_{a0}, B_{a1}, B_{b0}, and B_{b1}, as follows: first, in the encoding step, we learn the representations B_{a0} and B_{b0} as before.",
"Second, in the decoding step, we take H^{[CLS]}_{a0} from B_{a0} and H^{[CLS]}_{b0} from B_{b0}, and concatenate them to a and b, respectively, providing the input to B_{a1} and B_{b1} for the second pass of fine-tuning.",
"Thus, the representation in one pair can attend over the representation in the other pair during the decoding stage.",
"This allows the feature representations from each instance B to be shared during training and prediction stages.",
"The final representation input to the classification layers is constituted by H^{[CLS]}_{a0}, H^{[CLS]}_{a1}, H^{[CLS]}_{b0}, and H^{[CLS]}_{b1}.",
"We study the performance of AVA in predicting:",
"(i) the correctness of the individual answers output by a system (point-wise estimation); and",
"(ii) the overall system performance derived on a test set.",
"We consider QA Accuracy and passage reranking measures in comparison to human labeling.",
"The first aspect evaluates the quality of our approaches, whereas the second provides evidence on the practical use of AVA to develop QA systems.",
"We train and test models using our new AVA-WQA dataset.",
"We also evaluate the point-wise performance on the WikiQA and TREC-QA datasets.",
"Table 3 summarizes the configurations we consider for training and testing.",
"As the linear classifier baseline, we used the SVM implementation of scikit-learn, setting the probability parameter to enable Platt-scaling calibration of the classifier score.",
"We developed our Transformer-based AVA on top of HuggingFace's Transformers library (Wolf et al., 2020), which also offers a native encoder-decoder setting through the encoder_hidden_states feature.",
"We use RoBERTa-Base as the initial pre-trained model for each B instance (Liu et al., 2019), with the default hyper-parameter settings of GLUE training:",
"(i) the AdamW variant (Loshchilov and Hutter, 2017) as optimizer,",
"(ii) a learning rate of 1e-6 for all fine-tuning exercises, and",
"(iii) a maximum sequence length of 128.",
"We set the number of training iterations to two.",
"We also use a development set to enable early stopping based on F1 measure after the first iteration.",
"We fix the same batch size setting in the experiments to avoid possible performance discrepancies caused by different batch sizes.",
"We study the performance of AVA in evaluating passage reranker systems, which differ not only in methods but also in domains and application settings.",
"We employ the following evaluation strategies to benchmark AVA.",
"System-wise evaluation We use AVA in a simple aggregator to estimate the overall system performance over a test set.",
"The metrics we consider in our estimation are: Precision-at-1 (P@1), Mean Average Precision (MAP), and Mean Reciprocal Rank (MRR), as TREC-QA and WikiQA contain answer ranks.",
"In contrast, we only use P@1 on the SD dataset, as it only includes the selected answers for each system.",
"To measure the quality of AVA with respect to the GS annotation, we use",
"(i) the Root Mean Square Error, $\mathrm{RMSE}(a, h) = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(a_i - h_i)^2}$, where a and h are the measures given by AVA and the human annotation, respectively; and",
"(ii) Kendall's Tau-b to measure the correlation between the system ranks produced by AVA and the GS one, i.e., $\tau = \frac{c - d}{c + d}$, where c and d are the numbers of concordant and discordant pairs between the two rankings.",
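Both statistics are easy to compute; a small sketch with NumPy and SciPy follows (SciPy's kendalltau defaults to the tau-b variant), using made-up numbers purely for illustration.

```python
import numpy as np
from scipy.stats import kendalltau

def rmse(ava: np.ndarray, human: np.ndarray) -> float:
    """RMSE(a, h) = sqrt(mean((a_i - h_i)^2)), as defined above."""
    return float(np.sqrt(np.mean((ava - human) ** 2)))

# Illustrative values only: per-system Accuracy under AVA and GS.
ava = np.array([0.72, 0.87, 0.89, 0.94, 0.74, 0.83])
gs  = np.array([0.72, 0.88, 0.89, 0.93, 0.74, 0.82])
tau, p_value = kendalltau(ava, gs)  # default variant is tau-b
print(rmse(ava, gs), tau, p_value)
```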
"We evaluate the performance of AVA in predicting if an answer t is correct for a question q , given a reference r .",
"Table 4 shows the result.",
"The first column reports the names of the systems described in Section 5.",
"The second column shows the F1 measured on AVA-WQA.",
"We note that: the SVM classifier performs much worse than any Transformer-based model (fed with a complete input); clearly, Transformer models can exploit powerful language models, suggesting that generalization is important.",
"A_0({(q, r)}), as expected, cannot predict whether an answer is correct (its F1 is lower than 7%), since it does not use the answer representation.",
"A_0({(q, t)}) is already a good model, as it is essentially as powerful as a QA system.",
"A_0({(r, t)}) is already a reasonable model, intuitively based on paraphrasing between r and t, but its F1 is 9% lower (62.47 vs. 68.07) than that of A_0(P_0), which uses all the information, indicating that the semantic bias of q is essential for learning the right similarity between r and t.",
"The results of the A_1 models using a single triplet of q, r and t (i.e., 70.14, 73.87, 72.36) indicate that text concatenation as input to Transformer models captures more information than concatenating the three separate embedding pairs, e.g., A_0({(r, t)}) only obtains 68.07.",
"Interestingly, the q text must be concatenated with t or r to generate more effective features (2 to 4 points more).",
"The triplet combination, e.g., A_1({(r, q ∘ t), (t, q ∘ r)}), provides an even more accurate model, while the redundant information from A_1(P_1) does not produce benefits.",
"Finally, the Peer Attention model applied to the best representations, e.g., A_1({(r, q ∘ t), (t, q ∘ r)}), boosts them even further, reaching 75%.",
"This is an important result, considering the annotator agreement on the references.",
"Table 5 (system-wise evaluation on TREC-QA and WikiQA using the AVA model A_2((r, q ∘ t), (t, q ∘ r))), reporting RMSE (± Std. Dev.) and Kendall τ (with p): TREC-QA-Dev: P@1 0.000±0.000, τ = 1.000 (p = 0.003); MAP 0.040±0.019, τ = 1.000 (p = 0.003); MRR 0.015±0.011, τ = 0.866 (p = 0.017). TREC-QA-Test: P@1 0.034±0.018, τ = 1.000 (p = 0.003); MAP 0.041±0.029, τ = 0.867 (p = 0.017); MRR 0.020±0.012, τ = 1.000 (p = 0.003). WikiQA-Dev: P@1 0.000±0.000, τ = 1.000 (p = 0.009); MAP 0.050±0.039, τ = 0.733 (p = 0.056); MRR 0.063±0.052, τ = 0.690 (p = 0.056). WikiQA-Test: P@1 0.079±0.030, τ = 0.889 (p = 0.017); MAP 0.081±0.040, τ = 0.733 (p = 0.056); MRR 0.095±0.035, τ = 0.867 (p = 0.017).",
"We evaluate the ability of AVA in predicting the Accuracy of QA systems as well as the performance of AS2 tasks.",
"We conduct two evaluation studies with two public datasets, TREC-QA and WikiQA, and our SD dataset.",
"Results on public datasets For TREC-QA and WikiQA, we evaluated a bag of different models on the development and test sets and compared the results to the performance measured by AVA using one of the best models according to the point-wise evaluation, i.e., A_2((r, q ∘ t), (t, q ∘ r)).",
"More specifically, we apply each model m to select the best answer t from the list of candidates for q in the dataset.",
"We first compute the performance of model m based on the provided annotations.",
"The metrics include Accuracy or Precision-at-1 (P@1), MAP, and MRR.",
"We then run AVA for ( q, t ) using the GS answers of q as references, r .",
"When multiple references are available, the final score of ( q, t ) is the average of AVA scores applied to different r .",
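A one-line sketch of the multi-reference averaging just described; `ava_score` stands in for the point-wise model and is a placeholder name.

```python
def ava_multi_ref(q: str, t: str, references: list, ava_score) -> float:
    """Average the point-wise AVA score of (q, t) over all GS references."""
    return sum(ava_score(q, r, t) for r in references) / len(references)
```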
"Before computing the Accuracy on the test set, we tune the AVA threshold to minimize the RMSE between the Accuracy (P@1) measured by AVA and by the GS on the dev. set of each dataset.",
"We use these thresholds to evaluate the results also on the test sets.",
"We considered six different systems built with one Compare-Aggregate (CNN) trained model and five other Transformers-based models.",
"Four of the latter are collected from public resources (Garg et al., 2020), available at github.com/alexa/wqa_tanda.",
"These models differ in the architectures, BERT vs RoBERTa vs TANDA, and their training data; thus, their output is rather different.",
"We removed questions that have no correct or no incorrect answers.",
"Table 5 reports the overall results averaged over the six models.",
"We note that",
"(i) if we set the threshold on the dev. set, the error on P@1 on the dev. set is 0, which should not surprise the reader, as we fit that set.",
"(ii) This is not the case for MAP, which is a much harder value to predict, as it requires estimating an entire ranking.",
"(iii) On the TREC-QA test set, AVA has an error ranging from 2 to 4.1 points on any measure.",
"(iv) On the WikiQA test set, the error is higher, reaching 9.5%, probably due to the fact that WikiQA data is rather different (more than TREC-QA data) from the data used for training AVA.",
"(v) the Std. Dev. is low, suggesting that AVA can be used to estimate system performance, with an error ranging from 4% to 16.5% at 95% confidence, depending on the measure and dataset.",
"Additionally, we compute the Kendall's Tau-b correlation between the ranking of the six systems sorted in order of performance (P@1) according to GS and AVA.",
"We observe a perfect correlation on TREC-QA and a high correlation on WikiQA.",
"This means that AVA can be used to determine if a model is better than another, which is desirable when developing and/or deploying new systems; the low p-values indicate reliable results.",
"Finally, Table 7 compares the performance evaluated with GS and AVA for all six models.",
"It is interesting to note the high variability of the performance of our tested QA systems, e.g., P@1 ranges from 59.6 to 96.2 (with several intermediate results) on TREC-QA.",
"Nevertheless, as shown in Table 5, the predictions of AVA are close to those from humans.",
"Results on SD We use the SD dataset in this evaluation to have a further system-wise evaluation.",
"This differs from the one before as the systems' configurations and the data reflect an industrial scenario.",
"The task is more challenging as the output is not just from one neural model, it comes from a combination of modules, ranging from query understanding, retrieval engine setting, indexed data, document and sentence filters, and finally, the adopted AS2 model.",
"Additionally, the question set is rather different from the one used for training.",
"Table 6 reports the Accuracy of eight QA systems (S1, ..., S8) on the dev. and test sets of SD, evaluated according to GS and AVA, along with the RMSE and Kendall statistics of the two evaluations.",
"Table 6 (systems' P@1 evaluated with AVA and the GS annotations of SD): Dev (20%), AVA: S1 0.215, S2 0.278, S3 0.22, S4 0.369, S5 0.285, S6 0.294, S7 0.283, S8 0.355, with RMSE 0.0198±0.012 and Kendall τ = 0.929 (p = 0.0004); Dev (20%), GS: S1 0.218, S2 0.282, S3 0.234, S4 0.379, S5 0.309, S6 0.315, S7 0.261, S8 0.319. Test (80%), AVA: S1 0.235, S2 0.289, S3 0.235, S4 0.355, S5 0.319, S6 0.321, S7 0.301, S8 0.357, with RMSE 0.0350±0.019 and Kendall τ = 0.643 (p = 0.031); Test (80%), GS: S1 0.235, S2 0.324, S3 0.26, S4 0.393, S5 0.356, S6 0.365, S7 0.249, S8 0.336.",
"Table 7 (details of the system-wise evaluation on TREC-QA and WikiQA using the AVA model A_2((r, q ∘ t), (t, q ∘ r)) and GS; values for models M1-M6, in order): TREC-Dev, Gold: P@1 0.717/0.870/0.891/0.935/0.739/0.826, MAP 0.691/0.858/0.913/0.912/0.769/0.796, MRR 0.819/0.923/0.937/0.967/0.835/0.890; TREC-Dev, AVA: P@1 0.717/0.870/0.891/0.935/0.739/0.826, MAP 0.688/0.831/0.864/0.857/0.717/0.772, MRR 0.809/0.920/0.940/0.967/0.803/0.876; TREC-Test, Gold: P@1 0.596/0.885/0.904/0.962/0.712/0.788, MAP 0.661/0.873/0.894/0.904/0.771/0.801, MRR 0.763/0.933/0.945/0.976/0.820/0.869; TREC-Test, AVA: P@1 0.635/0.904/0.962/0.981/0.712/0.827, MAP 0.639/0.845/0.896/0.886/0.680/0.789, MRR 0.764/0.936/0.981/0.990/0.793/0.880; WikiQA-Dev, Gold: P@1 0.545/0.727/0.455/0.545/0.636/0.727, MAP 0.636/0.744/0.656/0.621/0.755/0.781, MRR 0.720/0.831/0.695/0.703/0.803/0.864; WikiQA-Dev, AVA: P@1 0.545/0.727/0.455/0.545/0.636/0.727, MAP 0.523/0.751/0.643/0.617/0.713/0.774, MRR 0.568/0.841/0.682/0.698/0.788/0.841; WikiQA-Test, Gold: P@1 0.563/0.844/0.781/0.688/0.813/0.781, MAP 0.634/0.778/0.753/0.746/0.834/0.820, MRR 0.746/0.917/0.876/0.833/0.906/0.883; WikiQA-Test, AVA: P@1 0.625/0.781/0.719/0.656/0.719/0.656, MAP 0.660/0.750/0.687/0.683/0.705/0.704, MRR 0.732/0.820/0.783/0.741/0.791/0.762.",
"The RMSE is rather low, 3.5%, with a standard deviation of 1.9%, which indicates a maximum prediction error of less than 7% at 95% confidence.",
"The rank correlation is lower than what was observed on the academic benchmarks as the 8 evaluated systems have very close Accuracy.",
"In any case, AVA can still be effectively used to select the top 3-4 systems.",
"Table 8 reports some example questions from TREC-QA test set, the top candidate selected by the TANDA system (Garg et al., 2020), the classification score of the latter, and the AVA score, which will determine a correct answer when it is larger than 0.5.",
"For the first three questions, we note that, even though the score of the TANDA system is low, e.g., 0.0001, AVA can assign a rather high score, e.g., 0.596.",
"In the first question, this is possible since AVA can match the winner of the literature prize, Sully Prudhomme , as well as the year of the event with the answer candidate.",
"This match cannot happen with the question alone.",
"In the second question, Eileen Marie can be matched with the question, but there is basically no direct match between 'branch of the service' and 'to command a space shuttle mission as air force col.'",
"In contrast, the reference provides easy matches, such as 'air force colonel' and 'command a space mission'.",
"A similar rationale applies to the third question.",
"Conversely, a wrong answer could be classified as such by AVA, even if TANDA assigned it a very large score.",
"For example, 1988 can be a reasonable date in an answer to the fourth question.",
"This match prevents the selector from discarding the answer.",
"In contrast, the date above does not match 1986 in the reference, and the importance of this mismatch is amplified by the presence of 'when' in the question, which prompts AVA to pay attention to dates (in line with the peer-attention modeling).",
"AVA vs. Overfitted reranker We investigated the performance of AVA in an open-domain setting, where the candidate answers are all sentences contained in the retrieved web documents.",
"Given a question, we analyzed the top-1 candidates reranked by two models:",
"(i) a Transformer-based reranker fine-tuned on the same test questions (overfitting them); and",
"(ii) the general AVA model using the answer the reranker was trained on, as reference.",
"We used ASNQ (Garg et al., 2020) questions, which are typically associated with only one correct answer.",
"For each question, we retrieved the top 200 relevant documents (about 10,000 sentences) from a large index built with 100 million documents from Common Crawl (commoncrawl.org), and used them as input to our models.",
"We manually evaluated the top-1 answer candidate produced by the reranker and AVA for 100 randomly selected questions.",
"The results show that AVA is much more accurate than the overfitted reranker: 66% versus 25%.",
"Table 9 shows some questions q , with their references r , and the answers selected by the two models.",
"We note that the overfitted reranker selects answers that either",
"(i) highly overlap with the reference (first example), or",
"(ii) are typically wrong when such continuous word overlapping is missing (second and third examples).",
"We have presented AVA, the first automatic evaluation method for QA systems.",
"We created seven different datasets, classified into three different types, which we used to develop AVA.",
"We released those based on public data and plan to release the others.",
"Then, we proposed different Transformer-based models and a new peer attention approach to capture answer and reference similarity induced by the question semantics.",
"Our extensive experimentation has shown the AVA effectiveness for different types of evaluation: point-wise and system-wise over Accuracy, MAP and MRR.",
"The results suggest that AVA can estimate the measures above, with a max error of 7% at 95% of confidence.",
"AVA can also be applied to generate distant supervision data.",
"An example of this future application is given by (Krishnamurthy et al., 2021)."
]
| [
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain"
]
|
[
"Lexical simplification involves identifying complex words or phrases that need to be simplified, and recommending simpler meaning-preserving substitutes that can be more easily understood.",
"We propose a complex word identification (CWI) model that exploits both lexical and contextual features, and a simplification mechanism which relies on a word-embedding lexical substitution model to replace the detected complex words with simpler paraphrases.",
"We compare our CWI and lexical simplification models to several baselines, and evaluate the performance of our simplification system against human judgments.",
"The results show that our models are able to detect complex words with higher accuracy than other commonly used methods, and propose good simplification substitutes in context.",
"They also highlight the limited contribution of context features for CWI, which nonetheless improve simplification compared to context-unaware models.",
"Automated text simplification is the process of transforming a complex text into one with the same meaning that can be more easily read and understood by a broader audience (Saggion, 2017).",
"This process includes several subtasks such as complex word and sentence identification, lexical simplification, syntactic simplification, and sentence splitting.",
"In this paper, we focus on lexical simplification, the task of replacing difficult words in a text with words that are easier to understand.",
"Lexical simplification involves two main processes: identifying complex words within a text, and suggesting simpler paraphrases for these words that preserve their meaning in this context.",
"To identify complex words, we train a model on data manually annotated for complexity.",
"[Figure 1: An example sentence with complex words identified by our classifier, and their substitutes proposed by the embedding-based substitution model.]",
"Unlike previous work, our classifier takes into account both lexical and context features.",
"We extract candidate substitutes for the identified complex words from SimplePPDB (Pavlick and Callison-Burch, 2016), a database of 4.5 million English simplification rules linking English complex words to simpler paraphrases.",
"We select the substitutes that best fit each context using a word embedding-based lexical substitution model (Melamud et al., 2015).",
"An example sentence, along with the complex words identified by our model and the proposed replacements, is shown in Figure 1.",
"We show that our complex word identification classifier and substitution model improve over several baselines which exploit other types of information and do not account for context.",
"Our approach proposes highly accurate substitutes that are simpler than the target words and preserve the meaning of the corresponding sentences.",
"Prior approaches to text simplification have addressed the task as a monolingual translation problem (Zhu et al., 2010; Coster and Kauchak, 2011; Wubben et al., 2012).",
"The proposed models are trained on aligned sentences extracted from Wikipedia and Simple Wikipedia, a corpus that contains instances of the transformation operations needed for simplification, such as rewording, reordering, insertion and deletion.",
"Zhu et al. (2010) propose to use a tree-based translation model which covers splitting, dropping, reordering and substitution.",
"Coster and Kauchak (2011) employ a phrase-based Machine Translation system extended to support phrase deletion, and Wubben et al. (2012) augment a phrase-based system with a re-ranking heuristic.",
"Woodsend and Lapata (2011) view simplification as a monolingual text generation task.",
"They propose a model based on a quasi-synchronous grammar, a formalism able to capture structural mismatches and complex rewrite operations.",
"The grammar is also induced from a parallel Wikipedia corpus, and an integer linear programming model selects the most appropriate simplification from the space of possible rewrites generated by the grammar.",
"The hybrid model of Angrosh et al. (2014) combines a synchronous grammar extracted from the same parallel corpus with a set of hand-crafted syntactic simplification rules.",
"In recent work, Zhang and Lapata (2017) propose a reinforcement learning-based text simplification model which jointly models simplicity, grammaticality, and semantic fidelity to the input.",
"In contrast to these methods, Narayan and Gardent (2016)'s sentence simplification approach does not need a parallel corpus for training, but rather uses a deep semantic representation as input for simplification.",
"The above-mentioned systems support the full range of transformations involved in text simplification.",
"Other works address specific subtasks, such as syntactic or lexical simplification, which involve identifying grammatical or lexical complexities in a text and rewriting these using simpler words and structures.",
"Syntactic simplification might involve operations such as sentence splitting, and the rewriting of sentences involving passive voice and anaphora resolution (Chandrasekar and Srinivas, 1997; Klerke and Søgaard, 2013); for a detailed overview of syntactic simplification work, see Shardlow (2014).",
"Lexical simplification involves complex word identification, substitute generation, context-based substitute selection and simplicity ranking.",
"To identify the words to be simplified, Shardlow (2013a) proposes to use a Support Vector Machine (SVM) that exploits several lexical features, such as frequency, character and syllable length.",
"Our approach also uses a SVM classifier for identifying complex words, but complements this set of features with context-related features that have not been exploited in previous work.",
"In the lexical simplification subtask, existing methods differ in their decision to include a word sense disambiguation (WSD) step for substitute selection and in the ranking method used.",
"Ranking is often addressed in terms of word frequency in a large corpus since it has been shown that frequent words increase a text's readability (Devlin and Tait, 1998; Kauchak, 2013).",
"Models that include a semantic processing step for substitute selection aim to ensure that the selected substitutes express the correct meaning of words in specific contexts.",
"WSD is often carried out by selecting the correct synset (i.e. set of synonyms describing a sense) for a target word in WordNet (Miller, 1995) and retrieving the synonyms describing that sense.",
"Thomas and Anderson (2012) use WordNet's tree structure (hypernymy relations) to reduce the size of the vocabulary in a document.",
"Biran et al. (2011) perform disambiguation in an unsupervised manner.",
"They learn simplification rules from comparable corpora and apply them to new sentences using vector-based context similarity measures to select words that are the most likely candidates for substitution in a given context.",
"This process does not involve an explicit WSD step, and simplification is addressed as a context-aware lexical substitution task.",
"The SemEval 2012 English Lexical Simplification task (Specia et al., 2012) also addresses simplification as lexical substitution (McCarthy and Navigli, 2007), allowing systems to use external sense inventories or to directly perform in-context substitution.",
"In our work, we opt for an approach which addresses lexical substitution in a direct way and does not include an explicit disambiguation step.",
"Lexical substitution systems perform substitute ranking in context using vector-space models (Thater et al., 2011; Kremer et al., 2014; Melamud et al., 2015).",
"Recently, Apidianaki (2016) showed that a syntax-based substitution model can successfully filter the paraphrases available in the Paraphrase Database (PPDB) (Ganitkevitch et al., 2013) to select the ones that are adequate in specific contexts.",
"(Datasets for system training and evaluation have been made available in the SemEval 2016 Complex Word Identification task (Paetzold and Specia, 2016), but they present several issues that make system comparison problematic; we explain the drawbacks that led to their exclusion from this work in Section 5.)",
"In the same line, Cocos et al. (2017) used a word embedding-based substitution model (Melamud et al., 2015) for ranking PPDB paraphrases in context.",
"We extend this work and adapt the Melamud et al. (2015) model to the simplification setting by using candidate paraphrases extracted from the Simple PPDB resource (Pavlick and Callison-Burch, 2016), a subset of the PPDB that contains complex words and phrases, and their simpler counterparts that can be used for in-context simplification.",
"The first step for lexical simplification is to identify the complex words that should be simplified.",
"The bulk of prior work on text simplification has addressed the complex word identification problem by training machine learning algorithms on the parallel Wikipedia Simplification (PWKP) corpus (Zhu et al., 2010).",
"The PWKP corpus, however, has several shortcomings, as described in Xu et al. (2015).",
"Namely, it was determined that 50% of the parallel sentences in PWKP were either not aligned correctly, or the simple sentence was not in fact simpler than the complex sentence.",
"Xu et al. (2015) created a more reliably annotated dataset, which uses a corpus consisting of 1,130 articles, manually rewritten by experts at Newsela (a company that provides reading materials for students in elementary through high school) at four different reading levels.",
"Xu et al. (2015) also aligned sentences from these texts, extracting 141,582 complex/simple sentence pairs.",
"We use the Newsela corpus to create a gold-standard dataset of complex and simple words for training and testing our models.",
"We do this by hiring crowdsourced annotators through Amazon Mechanical Turk, and asking them to identify complex words in the context of given texts.",
"We randomly select 200 texts from the Newsela corpus, and take the first 200 tokens from each to be labeled by nine annotators.",
"We preprocess the texts using the Stanford CoreNLP suite (Manning et al., 2014) for tokenization, lemmatization, part-of-speech (POS) tagging, and named entity recognition.",
"The annotators are instructed to label at least 10 complex words they deem worth simplifying for young children, people with disabilities, and second language learners.",
"After filtering out stop words (articles, conjunctions, prepositions, pronouns) and named entities, we are left with 17,318 labeled tokens.",
"Tokens identified by at least three annotators are considered as complex, and tokens labeled by less than three or no annotators as simple.",
"This increases the likelihood of complex segments being actually complex; as we can see from Table 1, words identified by only one or two annotators tend to be somewhat noisy.",
"Following Shardlow (2013a), we use a Support Vector Machine classifier.",
"We also conduct experiments with a Random Forest Classifier.",
"Shardlow (2013a) identified several features that help to determine whether or not a word is complex, including word length, number of syllables, word frequency, number of unique WordNet synsets, and number of WordNet synonyms.",
"Shardlow used word frequencies extracted from SUBTLEX, a corpus of 51 million words extracted from English subtitles (available at https://www.ugent.be/pp/experimentele-psychologie/en/research/documents/subtlexus).",
"We instead use n-gram frequencies from the Google Web1T corpus (Brants and Franz, 2006) (henceforth Google n-gram).",
"Our motivation for using Google n -gram frequencies is based on the hypothesis that word frequency is a strong indicator of word difficulty.",
"More frequent words are more likely to be easy, and less frequent words are more likely to be unknown and therefore hard to understand.",
"The size of the Google n-gram corpus, consisting of a variety of texts across many genres and years, makes it a good candidate for computing more accurate word frequencies.",
"In addition to word frequencies and word specific features, we include several context-specific features: average length of words in the sentence, average number of syllables, average word frequency, average number of WordNet synsets, average number of WordNet synonyms, and sentence length.",
"The intuition for including context-specific features is that if a target word is surrounded by simple words, a reader is likely better able to understand the meaning of the target word, which would thus not need to be simplified.",
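A sketch of this feature extraction follows, assuming NLTK's WordNet interface and a unigram-frequency lookup `freq` (a stand-in for Google n-gram counts); the syllable count is a rough vowel-group approximation rather than the paper's exact method.

```python
import re
from nltk.corpus import wordnet as wn  # requires nltk.download("wordnet")

def n_syllables(word: str) -> int:
    # Rough approximation: count groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def word_features(word: str, freq: dict) -> list:
    """Lexical features: length, syllables, frequency, #synsets, #synonyms."""
    synsets = wn.synsets(word)
    synonyms = {lemma.name() for s in synsets for lemma in s.lemmas()}
    return [len(word), n_syllables(word), freq.get(word, 0),
            len(synsets), len(synonyms)]

def context_features(sentence_words: list, freq: dict) -> list:
    """Sentence-level averages of the lexical features, plus sentence length."""
    rows = [word_features(w, freq) for w in sentence_words]
    averages = [sum(col) / len(rows) for col in zip(*rows)]
    return averages + [len(sentence_words)]
```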
"For our model and baselines, we consider candidate substitutions from three datasets.",
"The first is WordNet (Miller, 1995), a lexical network encoding manually identified semantic relationships between words, such as synonymy, hypernymy and hyponymy.",
"This resource has been widely used in substitution tasks (McCarthy and Navigli, 2007).",
"We also use paraphrases extracted from the Paraphrase Database (PPDB) and the Simple Paraphrase Database (SimplePPDB).",
"PPDB is a collection of more than 100 million English paraphrase pairs (Ganitkevitch et al., 2013).",
"These pairs were extracted using a bilingual pivoting technique (Bannard and Callison-Burch, 2005), which assumes that two English phrases that translate to the same foreign phrase have the same meaning.",
"PPDB was updated by Pavlick et al. (2015) to assign labels stating the precise entailment relationship between paraphrase pairs (e.g. forward/backward entailment), and new confidence scores (PPDB 2.0 scores) reflecting the strength of paraphrase relations.",
"SimplePPDB is a subset of PPDB which contains 4.5 million simplification rules, linking a complex word or phrase with a simpler paraphrase with the same meaning.",
"Simplification rules come with both a PPDB 2.0 score and a simplification confidence score (Pavlick and Callison-Burch, 2016), which represent both the strength of the paraphrase relation and how well the replacement word simplifies the target word.",
"These rules were created by sampling 1,000 PPDB phrases, using crowdsourcing to find correct simplifica-tions for each phrase, and building a model to identify rules that simplify the input phrase.",
"To evaluate the performance of our lexical simplification model, we create a test set from the Newsela corpus.",
"We extract lexical simplification rules from these parallel sentences using two methods.",
"First, we find sentence pairs with only one lexical replacement and use these word pairs as simplification instances.",
"Next, we use a monolingual word alignment software (Sultan et al., 2014) to extract all non-identical aligned word pairs.",
"We only consider word pairs corresponding to different lemmas (i.e. words with different base forms).",
"From this process, we collect a test set of 14,436 word pairs.",
"To accurately replace words in texts with simpler paraphrases and ensure the generated sentences preserve the meaning of the original, we need to take into account the surrounding context.",
"To do this, we adapt the word embedding-based lexical substitution model of Melamud et al. (2015) to the simplification task.",
"Vector-space models have been shown to effectively filter PPDB paraphrases in context while preserving the meaning of the original sentences (Apidianaki, 2016; Cocos et al., 2017).",
"The Melamud et al. (2015) model (hereafter AddCos) quantifies the fit of substitute word s for target word t in context C by measuring the semantic similarity of the substitute to the target, and the similarity of the substitute to the context: AddCos ( s, t, C ) = cos( s,t )+ P w C cos( s,w ) | C | +1 (1) The vectors s and t are word embeddings of the substitute and target generated by the skip-gram with negative sampling model (Mikolov et al., 2013b,a).",
"The context C is the set of context embeddings generated by skip-gram for words appearing within a fixed-width window of the target t in a sentence.",
"We use a context window of 1; while this seems counter-intuitive, this is the best-performing window found by (Cocos et al., 2017), and we also confirm this result remains true in Section 5.2.",
"We use the AddCos implementation of Cocos et al. (2017) 5 , and 300-dimensional word and context embeddings trained over the 4 billion words in the AGiga corpus (Napoles 5 Available at https://github.com/acocos/ lexsub_addcos 210 et al., 2012) using the gensim word2vec package (Mikolov et al., 2013b,a; Rehurek and Sojka, 2010).",
"6 In our experiments, candidate substitutes for a target word are its paraphrases in the PPDB and SimplePPDB resources.",
"The model needs to select among these candidates the ones that best carry the meaning of target words in specific contexts.",
"We only consider content words (nouns, verbs, adjectives and adverbs) as simplification targets.",
"For a target word-substitute pair, we include in the model the following features which encode the strength of the semantic relationship between them: PPDB 1.0 and 2.0 scores, which represent the overall quality of paraphrases.",
"Distributional similarity scores calculated by Ganitkevitch et al. (2013) on the Google n grams and the AGiga corpus.",
"Independence probability, that is the probability that there is no semantic entailment relationship between the paraphrase pair, as calculated by Pavlick et al. (2015).",
"SimplePPDB score (Pavlick and Callison-Burch, 2016) when considering SimplePPDB paraphrases which reflects the confidence in the simplification rule.",
"Datasets for training and evaluating Complex Word Identification (CWI) systems were created and released in the SemEval 2016 competition (Paetzold and Specia, 2016) but we decided not to use them for several reasons.",
"Although this was a CWI task, surprisingly only 4.7% of the words in the test data were identified as complex, and all the other words were viewed as simple.",
"As a consequence, none of the systems that participated in the SemEval task managed to beat the accuracy of the All Simple baseline which labeled all words in the test set as simple (0.953).",
"As noted by Paetzold and Specia (2016), the inverse problem is present in the corpus developed by Shardlow (2013b), where the All Complex baseline 6 The word2vec training parameters we use are a context window of size 3, learning rate alpha from 0.025 to 0.0001, minimum word count 100, sampling parameter 1 e 4 , 10 negative samples per target word, and 5 training epochs.",
"achieved higher accuracy, recall and F-scores than all other tested systems, suggesting that marking all words in a sentence as complex is the most effective approach for CWI.",
"Another problem in the SemEval-2016 dataset is that although the number of complex words is much higher in the training data (32%), 18% of all words were annotated as complex by only one out of 20 annotators and considered as complex.",
"In addition to the highly different number of complex words in the training and test data, the two datasets are also imbalanced in terms of size, with only 2,237 training instances and 88,211 testing instances.",
"These factors make this dataset a dubious choice for system evaluation.",
"Comparison to the participating systems is also extremely difficult, since the best systems are ones that label most of the data as simple.",
"For these reasons, we decided to create and use our crowdsourced data for training and evaluation.",
"7 We compare the performance of an SVM classifier with only word features (SVM-word) to one that exploits both word and context features (SVM-context).",
"We use 5-fold cross validation on unique words from the training data collected through Mechanical Turk (see Section 4.1).",
"We also compare a Random Forest classifier with only word features (RF-word) to one with word and context features (RF-context).",
"We consider three baselines: labeling all words as complex (All-Complex).",
"thresholding for word length (Token Length), considering longer words as complex; the length threshold with the best performance was 7.",
"7 We have released the new datasets at https://rekriz11.github.io 211 Words with 1 paraphrase All words Model Coverage Top 1 Top 5 Oracle Top 1 Top 5 Oracle WordNet frequency 0.911 0.141 0.267 0.291 0.129 0.244 0.265 SimplePPDB Score 0.935 0.180 0.403 0.669 0.168 0.377 0.626 AddCos-PPDB 0.975 0.196 0.444 0.962 0.191 0.433 0.938 AddCos-SimplePPDB 0.819 0.353 0.601 0.643 0.289 0.492 0.527 Table 3: Performance of the lexical simplification models on the Newsela aligned test set.",
"thresholding for word frequency using Google n -gram counts ( n -gram Frequency), considering more frequent words as simple; the frequency threshold with the best performance was 19,950,000.",
"The results of this experiment are shown in Table",
"2. While the Token Length and n -gram Frequency baselines have higher recall, both of our models show substantial improvements in terms of precision and increases overall accuracy and F-score, with SVM outperforming Random Forest.",
"The context-based features seem to have an ambiguous impact, in that they do not improve the performance of the SVM classifier, but they do improve that of the Random Forest classifier.",
"While there are indeed some cases where a relatively simple word is more difficult to understand, due to the size of our corpus, these cases are not found that often in our dataset.",
"We evaluate the performance of the lexical substitution model using Simple PPDB paraphrases on a test set created from the Newsela corpus, described in Section 4.1.",
"Using the complex word and the corresponding sentence, we find the top suggestions made by our word-embedding based substitution model using SimplePPDB.",
"We compare to three baselines: WordNet Frequency: We extract all WordNet synonyms for a complex word, and collect the Google n -gram frequencies for each synonym.",
"We then rank the synonyms in decreasing order of frequency (i.e. the most frequent synonym will be ranked first, and the least frequent one will be ranked last.",
"SimplePPDB Score: We extract all SimplePPDB synonyms for a complex word.",
"We then rank the synonyms in decreasing order of their SimplePPDB score.",
"AddCos-PPDB: We extract all PPDB synonyms for a complex word and rank them using the AddCos model described above.",
"The performance of AddCos with SimplePPDB paraphrases (AddCos-SimplePPDB) in the lexical simplification task is compared to performance of the baselines in Table",
"3. For each model, we calculate Top 1 and Top 5 accuracy scores, which show how often the gold-standard simple word was proposed as the best fitting or among the 5 highest-ranked paraphrases.",
"In addition, we calculate the upper bound performance for each dataset (PPDB, SimplePPDB and WordNet), i.e. how often the gold-standard simple word was found as a paraphrase of the target word in the dataset.",
"This is useful in telling us how well we could potentially do, if we could perfectly rank the paraphrases.",
"When performing this experiment, we also evaluated the impact of the context window size on the quality of the proposed substitutions.",
"We varied the context window used by the AddCos-SimplePPDB model from 0 to 10.",
"The results of this comparison are found in Table 4.",
"As we can see, the largest effect, as expected, is when the model changes from using no context to choosing a window size of 1 word on either side of the word that is being replaced.",
"As the context window in-212 SynonymRank Substitution Simplification Both 1 0.396 0.280 0.227 2 0.311 0.214 0.153 3 0.278 0.184 0.127 4 0.228 0.142 0.093 5 0.193 0.123 0.075 All 0.622 0.553 0.435 Table 5: Performance of our overall lexical simplification system.",
"creases above 2, however, we see a significant decrease in Top 1 accuracy, and a slower decrease in Top 5 accuracy.",
"Thus, in our model, we chose to use a context window of 1.",
"We experimented with filtering the substitution candidates using SimplePPDB confidence scores, PPDB paraphrase quality scores, and AddCos context similarity scores, but these all resulted in a slight, non-significant increase in performance, and a significant decrease in coverage.",
"We will also explore other ways for promoting high-quality substitutions without hurting the overall coverage of the system in the future.",
"One thing to note is that just because a model does not find the gold-standard simple word, does not necessarily mean that it does not find any good substitutes in context.",
"Concrete examples of this are shown in Section 6.",
"We integrate the best complex word identification classifier (SVM-context) and the substitution model that provided the best ranking in context (AddCos-SimplePPDB), into a simplification pipeline.",
"The input text is a complex text that needs to be simplified and the output consists of simplification suggestions for experts to choose from in order to create simpler versions of texts.",
"The input text is pre-processed using the Stanford CoreNLP suite (Manning et al., 2014) which performs tokenization, sentence splitting, lemmati-zation, part-of-speech and named entity tagging.",
"The SVM-Context classifier is used to classify each content word that is not part of a named entity Baseline Simple Complex n -gramFrequency dug, sled, chart, lakes, push, tight, harm estimates, frequent, attributed, isolated, preferred, liability TokenLength nursing, unknown, squares, feeling, teaching, strength adorns, asylum, myriad, rigors, nutria, edible RF-Context malls, hungry, therefore, hears, heavily, rainy engaging, secular, gridlock, torrent, sanctions, lobbying SVM-Context peacefully, favorite, amazing, websites, harmful, somewhat swelled, entice, tether, chaotic, vessel, midst Table 6: Examples of words that were incorrectly classified by the two best performing baselines and the RF-Context model, but were correctly classified by the SVM-Context model.",
"as either simple or complex.",
"The lexical substitution model then gathers the SimplePPDB substitutes available for the complex target word and ranks them according to how well they fit the corresponding context.",
"We only keep the top five suggestions made by the model as final output.",
"To evaluate the performance of the overall simplification system, we used the 930 texts from the Newsela corpus that were not used in the training of the CWI classifier.",
"Our model identified over 170,000 complex words that also had paraphrases in SimplePPDB.",
"We again asked crowdsourced annotators to evaluate the suggestions made for a random sample of 2,500 complex words on Amazon Mechanical Turk, in order to determine the number of good substitutions in context, the number of suggested paraphrases that are simpler than the target words, and the suggestions that are both simpler paraphrases and good in-context substitutes.",
"Table 5 shows the quality of the paraphrases ranked by our system in positions from one to five.",
"We can see that the paraphrases our system selects as the best have a higher likelihood of being both good substitutes in context and simpler than the target word.",
"We also show the proportion of target words that had at least one good substitute in context, one simple substitute, and one good and simple substitute.",
"In this section, we give examples of words for which our models give the correct output and the",
"baselines fail to do so.",
"In addition, we give examples of words on which our models perform poorly.",
"First, we consider examples of words that were incorrectly classified by each of the four best performing CWI models: the RF-Context and SVM-Context models, and the n-gram Frequency and Token Length baselines.",
"(Table 6).",
"In the first three rows, we give words that were correctly identified by SVM-Context, but incorrectly categorized by the two baselines and RF-Context; in the last row, we give examples of words incorrectly classified by SVM-Context.",
"We observe that the n -gram Frequency model tends to incorrectly classify relatively short words that are rare in the Google n -gram corpus as complex.",
"On the other end, the Token Length model shows that using this feature alone leads to incorrectly identifying shorter words such as adorn and myriad as simple, when these words are relatively complex.",
"Table 7 presents examples of substitution where the baseline systems did not find the correct paraphrase, but AddCos-SimplePPDB did.",
"As we have mentioned, even when a model did not find the gold-standard paraphrase, they sometimes did find a different paraphrase that works well in the context.",
"In Example 7.2, the top paraphrase identified by both AddCos-PPDB and AddCos-Simple PPDB for the word monitor is track, which is a reasonable substitute.",
"On the other hand, in Example 7.3, AddCos-Simple PPDB model was able to identify a good simple substitute, when none of the other models were able to identify a suitable word with comparable complexity.",
"Finally, Table 8 shows examples of output of the overall simplification system.",
"Here, the blue word is a word that our CWI classifier identified as complex (for simplicity, we only look at one complex word per sentence).",
"From there, we consider the five top-ranked substitutes proposed by AddCos-Simple PPDB, and show which were identified by the majority of annotators as good substitutes for the target word, simpler than the target, good simpler substitutes, and bad substitutes.",
"In row 5 of Table 8, we can see that for the word adop-tion, all five words identified by our model are considered to be bad substitutes, since they are all synonyms describing a different sense of adoption.",
"Even though SimplePPDB is quite large, it does not cover all senses of the words represented.",
"Another issue is that SimplePPDB contains some noisy paraphrases, as is the case with all automatically collected synonym banks.",
"We see this with recognize being a synonym of recognition, even though we specified that recognition is a noun.",
"Our model does filter out the worst paraphrases (with PPDB2.0 score < 2), but there are still some words that are simply poor substitutes.",
"We reviewed the examples where our system failed to generate acceptable substitutions for the identified complex words.",
"Below we present the major categories of errors.",
"dle or High School is a description of the type of school.",
"Elementary School has an alternative name in some cases but High School should never become Tall School .",
"The complex word has no simpler synonym that would be a good substitute.",
"The diffi-culty of the word might reside in its meaning which can be unknown to the reader.",
"In Example 8.3, it would be more useful to point to the definition of refugees .",
"The complex word is part of a predicate with arguments that are not accessible to our model.",
"In Example 8.4, the intended meaning of adoption , human adoption, is hard to capture in the vicinity of the complex word.",
"Finally, in some cases, our annotators were quite strict in admitting a substitute.",
"In Example 8.2, for example, cost merit would not be syntactically correct but cost merits would be acceptable.",
"We present a novel model for simplification that first identifies complex words in text, and then ranks lexical simplification candidates according to their adequacy in these specific contexts.",
"We perform experiments showing that our model makes correct simplification suggestions 35% of the time as measured by top-1 accuracy (versus 20% of the time for the best baseline), and produces a good substitution in its top-5 predictions 60% of the time (versus 44% for the best base-line).",
"We perform a detailed error analysis that suggests future improvements, e.g. not replacing words within collocations like elementary school , and extending the context model to include the arguments of words that are going to be simplified.",
"Achieving high performance on single words is crucial for any system that hopes to adequately holistically simplify a text.",
"Our methods can also be extended to the phrase level.",
"SimplePPDB contains phrasal simplification rules, as well as lexical simplification rules.",
"We can assign a vector representation to phrases to be used by the AddCos model, by applying a vector composition method to the vectors of individual words in the phrase.",
"We plan to extend our method in this direction in future work.",
"Although our system outperforms simpler baselines on both tasks, the performance of the overall system is relatively low.",
"The filtering mechanisms we have experimented with up to now in order to make high-confidence predictions, increased the quality of the proposed substitutions but signifi-cantly decreased the coverage.",
"We will explore other ways for promoting high-quality substitutions without hurting the overall coverage of the system in the future.",
"The AddCos implementation we used in this work does not rely on syntactic annotations and can be easily applied to new languages.",
"In future work, we plan to experiment with syntactic substitution models and with syntax-based word embeddings like the ones used in the initial AddCos implementation (Melamud et al., 2015).",
"We expect syntactic information to further enhance the quality of the proposed substitutions, ensuring the functional similarity of the lexical substitutions to the target word.",
"Furthermore, we intend to integrate lexical and syntactic simplification, both crucial steps towards text simplification.",
"We release the data that we collected, which is of higher quality than the data used in previous shared tasks on Complex Word Identification.",
"We also release our software for performing context-aware paraphrase substitutions.",
"The dataset and the code can be found at https://rekriz11.github.io 9 Acknowledgements We would like to thank the anonymous reviewers for their helpful comments and feedback on this work, and Anne Cocos for sharing with us her implementation of the AddCos model with PPDB paraphrase substitutes.",
"This material is based in part on research sponsored by DARPA under grant number FA8750-13-2-0017 (the DEFT program) and HR0011-15-C-0115 (the LORELEI program).",
"The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes.",
"The views and conclusions contained in this publication are those of the authors and should not be interpreted as representing official policies or endorsements of DARPA and the U.S. Government.",
"The work has also been supported by the French National Research Agency under project ANR-16-CE33-0013.",
"Finally, we gratefully acknowledge the support of NSF-SBIR grant 1456186."
]
| [
"abstain",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"objective",
"method",
"method",
"method",
"objective",
"result",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"objective",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"objective",
"abstain",
"objective",
"abstain",
"method",
"objective",
"result",
"objective",
"objective",
"objective",
"method",
"objective",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
]
|
[
"Multi-encoder models are a broad family of context-aware neural machine translation systems that aim to improve translation quality by encoding document-level contextual information alongside the current sentence.",
"The context encoding is undertaken by contextual parameters , trained on document-level data.",
"In this work, we discuss the difficulty of training these parameters effectively, due to the sparsity of the words in need of context (i.e., the training signal), and their relevant context.",
"We propose to pre-train the contextual parameters over split sentence pairs, which makes an efficient use of the available data for two reasons.",
"Firstly, it increases the contextual training signal by breaking intra-sentential syntactic relations, and thus pushing the model to search the context for disambiguating clues more frequently.",
"Secondly, it eases the retrieval of relevant context, since context segments become shorter.",
"We propose four different splitting methods, and evaluate our approach with BLEU and contrastive test sets.",
"Results show that it consistently improves learning of contextual parameters, both in low and high resource settings.",
"Neural machine translation (NMT) has seen substantial improvements in recent years, fostered by the advent of the Transformer model (Vaswani et al., 2017).",
"A remaining challenge for modern machine translation (MT) is the ability to contextualize translation of the current sentence with other sentences in the document (Lubli et al., 2018).",
"For this reason, contextual NMT has recently triggered a lot of attention and many approaches have been proposed in the literature.",
"A common taxonomy (Kim et al., 2019; Li et al., 2020) divides them in two broad categories: single-encoder (con-catenation) approaches (Tiedemann and Scherrer, 2017; Agrawal et al., 2018; Ma et al., 2020; Zhang et al., 2020) and multi-encoder approaches (Jean et al., 2017; Tu et al., 2017; Bawden et al., 2018; Miculicich et al., 2018; Voita et al., 2018; Maruf et al., 2019; Zheng et al., 2020).",
"Multi-encoder models are more flexible and can be more efficient than concatenation approaches, but they have been criticized as being mere regularization methods (Kim et al., 2019; Li et al., 2020).",
"In some cases, they have even been shown to perform worse than sentence-level systems on discourse-aware targeted test suites (Lopes et al., 2020).",
"In this work, we address this criticism by showing that training multi-encoder models is difficult because of two reasons:",
"(i) the sparsity of contextual training signal , i.e. the signal that pushes systems to translate in a context-aware fashion, which comes from the words that need context to be correctly translated;",
"(ii) the sparsity of relevant context words, the ones needed to disambiguate translation.",
"A trivial way to improve context-aware learning is by increasing the amount of document-level training data.",
"Large document-level parallel corpora are not always available, but some works have proposed data augmentation techniques to remedy scarcity (Sugiyama and Yoshinaga, 2019; Stojanovski et al., 2020; Huo et al., 2020).",
"However, as we will show in our experimental section, this solution is not efficient and often sub-optimal.",
"We therefore introduce a novel pre-training strategy, divide and rule (d&r) , that is based on a simple and yet powerful technique to augment the contextual training signal and to ease learning efficiently: splitting parallel sentences in segments (see Figure 1).",
"Simply put, feeding a context-aware model with a sequence of incomplete, shorter, consecutive segments, forces it to look for context (i.e., surrounding segments) more frequently, and makes it easier to retrieve relevant context because segments are shorter.",
"This results in faster and improved learning.",
"We pre-train multi-encoder models on split datasets and evaluate them in two ways: BLEU score, and 4557 S i, 1 He said that it was a project of peace S i, 2 and unity and that it brought people together .",
"T i, 1 Il disait que c' tait un projet de paix T i, 2 et d' unit et qu' il runissait les gens .",
"S j, 1 I think single-cell organisms are S j, 2 possible within two years .",
"T j, 1 Je pense que les organismes unicellulaires T j, 2 sont possibles dans 2 ans .",
"Our main contributions are the following:",
"(i) we show that context-aware multi-encoder models need to be trained carefully, because the contextual training signal is sparse, as well as the context elements useful for contextualization;",
"(ii) we propose the d&r pre-training strategy, which fosters training of contextual parameters by splitting sentences into segments, with four splitting variants;",
"(iii) we support this strategy with an analysis of the impact of splitting on the distribution of discourse phenomena;",
"(iv) we demonstrate that this strategy is both effective and efficient, as it allows multi-encoder models to learn better and faster than by simply increasing the training data.",
"The most straightforward approach to context-aware NMT consists in concatenating the context to the current sentence before feeding it to the standard encoder-decoder architecture (Tiede-mann and Scherrer, 2017; Agrawal et al., 2018; Junczys-Dowmunt, 2019; Ma et al., 2020; Zhang et al., 2020).",
"A special token is introduced to mark the boundaries between sentences.",
"Generation can then follow two strategies: the many-to-many strategy consists in translating all the source sentences, and then discarding contextual sentences; the many-to-one strategy consists in translating the current sentence only.",
"The modeling capacity of concatenation methods is limited to few sentences because the complexity of attention scales quadratically with sentence length, although some recent works try to solve this constraint (Tay et al., 2020).",
"Multi-encoder models couple a self-standing sentence-level NMT system, with parameters S , with additional parameters for modeling the context either on source side, target side, or both.",
"We refer to these parameters as the contextual parameters C .",
"The full context-aware architecture has parameters = [ S ; C ] .",
"Most of the multi-encoder models can be described as instances of two architectural families (Kim et al., 2019), that only differ in the way the encoded representations of the context and the current sentence are integrated.",
"Outside integration.",
"In this approach, depicted in Figure 2, the encoded representations are merged outside the decoder (Maruf et al., 2018; Voita et al., 2018; Zhang et al., 2018; Miculicich et al., 2018; Maruf et al., 2019; Zheng et al., 2020).",
"This can happen in different ways, such as by simple concatenation of the encodings, or with a gated sum.",
"Inside integration.",
"Here the decoder attends to the context representations directly, using its internal representation of the decoded history as query (Tu et al., 2018; Kuang et al., 2018; Bawden et al., 2018; Voita et al., 2019b; Tan et al., 2019).",
"Many of these works found it useful to share parameters of current-sentence and context encoders (Voita et al., 2018; Li et al., 2020).",
"In this way, the amount of contextual parameters to learn, | C | , and the computational cost are drastically reduced.",
"Shared representation can also be cached to be re-used and further processed by contextual parameters without the need of re-encoding sentences from scratch, which represents an advantage with respect to single-encoder approaches.",
"Most of the approaches proposed in the literature focus on a few previous sentences, where most of the relevant context is concentrated.",
"Two-step training.",
"Multi-encoder models are commonly trained following a two-step strategy (Tu et al., 2018; Zhang et al., 2018; Miculicich et al., 2018; Maruf and Haffari, 2018; Li et al., 2020).",
"The first step consists in training S independently on a sentence-level parallel corpus CS .",
"Secondarily, contextual parameters C are trained on a document-level parallel corpus CD , while fine-tuning or freezing S .",
"Note that CS can also include sentences from CD .",
"Novel MT systems are usually evaluated by computing BLEU (Papineni et al., 2002) on the test data.",
"However, BLEU is ill-equipped to capture the improvements achieved by context-aware MT (Hard-meier, 2012), because contextualization can improve the translation of only a small fraction of the words in a document, while most of the words can be correctly translated without knowing the context.",
"For instance, only a fraction of the anaphoric pronouns in a document has its nominal antecedent outside its own sentence.",
"However, despite being sparse, these few cases strongly impact the quality of translation (Lubli et al., 2018; Popescu-Belis, 2019).",
"Consequently, a number of discourse-targeted test sets and automatic metrics have been proposed to measure improvements in context-aware MT (Maruf et al., 2021), the most widely adopted ones being contrastive test sets.",
"Contrastive test sets (Bawden et al., 2018; Mller et al., 2018; Voita et al., 2019a) consist of a number of source sentences, each paired with a correct translation and some incorrect ones.",
"Models are assessed on their ability to rank the correct translation first.",
"In many cases, this can be identified only by looking at context, which is provided for both source and target sides.",
"Therefore, the ranking accuracy reflects the context-modeling ability of the evaluated translation system.",
"Some works criticized multi-encoder methods (Kim et al., 2019; Li et al., 2020), arguing that they do not improve sentence-level baselines in terms of BLEU when the baseline is well regularized.",
"When there are improvements, it is argued that the context-encoder simply works as a noise-generator that makes training more robust, and the improvements are not due to better context-modeling.",
"Along this path, Lopes et al. (2020) showed that multi-encoder architectures struggle to model contextual information, and even deteriorate the performance of a sentence-level baseline on contrastive test sets.",
"In fact, many proponents of multi-encoder models only show BLEU improvements, without providing any kind of targeted evaluation.",
"This doesn't allow a direct evaluation of their context-modeling capability.",
"We posit that training the contextual parameters of multi-encoder models is non-trivial because of two challenges:",
"(i) the sparsity of the training signal, which comes from the words that need context to be correctly translated (most of the words of a sentence can be translated without context);",
"(ii) the sparsity of context words that are useful for contextualization (most of the context is useless).",
"As such, missing the right experimental setting can lead to unsuccessful training and unconvincing results.",
"More data?",
"A trivial way to offset sparsity is to increase the volume of training data.",
"In fact, existing works that report strong results with targeted evaluation train their contextual parameters with millions of document-level sentence pairs (Baw-den et al., 2018; Mller et al., 2018; Voita et al., 2019b; Zheng et al., 2020; Wong et al., 2020; Kang et al., 2020).",
"In contrast, many works in the literature train models with the TED talks' subtitles released by the IWSLT shared tasks (Cettolo et al., 2012), which only consist of a couple of hundred thousand parallel sentences.",
"In the experimental section, we will show that IWSLT's subtitles are not sufficient to effectively train multi-encoder models.",
"It follows that one can not make fair comparisons between alternative architectures in such experimental settings.",
"On the other hand, we will give an empirical confirmation to the intuition that increasing the volume of training data helps learning contextual parameters.",
"However, increasing the amount of training data is an inefficient solution, and one that is not always feasible: large document-level training sets may not be available in many languages.",
"In the following section, we propose a pretraining solution that makes an efficient use of the available data for learning contextual-parameters effectively.",
"One way to simulate document-level data is to split sentences in two or more segments (Luong et al., 2016).",
"In this way intra-sentential syntactic relations are broken, and a word previously disam-4559 Algorithm 1: Split parallel corpus 1: input: Parallel corpus C , minimum source length l min , function wheresplit() 2: for i = 1 , . . . , |C| do 3: if len ( S i ) l min then 4: m S , m T = wheresplit( S i , T i , ... ) 5: S i, 1 = S i<m S and S i, 2 = S i m S 6: T i, 1 = T i<m T and T i, 2 = T i m T 7: end if 8: end for 9: return Split corpus CD biguated by looking at its neighbours in the sentence, now requires contextual information from the other segment in order to be correctly translated.",
"Moreover, splitting sentences increases the concentration of relevant words within the context segment, as we will show in Section 4.2.",
"Within the framework of MT, if we split the source sentence, its corresponding reference has to be split too.",
"The proposed approach, divide and rule ( d&r ), consists in pre-training the model on a dataset CD that results from splitting all the sentences of a parallel corpus C that have at least l min tokens, as described by Algorithm 1.",
"Each source-side sentence S i , with index i = 1 , ..., |C| , is split into S i, 1 and S i, 2 .",
"Its corresponding reference T i is split into T i, 1 and T i, 2 .",
"The resulting corpus is a document-level parallel corpus CD , such that, if the original corpus C was itself document-level, then CD keeps the same document boundaries as C .",
"Figure 1 illustrates two examples of parallel sentences that are split in the middle.",
"In both examples, a context-aware system needs to look at S i, 1 for translating S i, 2 correctly, i.e. to look at past context.",
"In the first one, the English neuter pronoun it\" could be translated into il\" or elle\", according to the gender of its antecedent (there is no singular neuter 3rd-person in French). The antecedent a project\", which is in the previous segment, allows to disambiguate it into il\". In the second example, the adjective possible can be correctly translated into its plural version possibles by looking back at the noun it refers to: organisms.",
"In Algoritm 1, the wheresplit function returns the token indices m S and m T of S i and T i , where the sentence is split.",
"In this work, we propose and experiment with four variants of this function.",
"Middle-split.",
"The simplest strategy is to split both the source and the target in the middle.",
"In this case, wheresplit = middlesplit( S i , T i ) returns m S = (cid:98) len ( S i ) / 2 (cid:99) and m T = (cid:98) len ( T i ) / 2 (cid:99) .",
"Following this method, it can happen that S i,j and T i,j , with j = 1 , 2 , are not parallel, as illustrated in the second example of Figure 1.",
"The verb are belongs to S i, 1 , but its translation sont does not belong to its corresponding reference segment T i, 1 .",
"In other words, sometimes the splitting can separate a set of words from their reference, which end up in the other segment.",
"Clearly, this method requires the two languages not to excessively diverge in terms of word order, to avoid too large mismatches between S i,j and T i,j , with j = 1 , 2 .",
"Aligned-split.",
"As a solution to the misalignment problem between source and target segments, we can calculate word alignments A i , and use them to inform our splitting strategy by setting wheresplit = alignedsplit( S i , T i , A i ) , where alignedsplit splits each sentence close to the middle, while avoiding to separate aligned words in different segments.",
"Synt-split .",
"The objective of splitting being to break intra-sentential syntactic and semantic relations in order to force the model to exploit the context more frequently, we can run an NLP toolkit over the training set to retrieve relations L (e.g. syntactic dependencies or coreferences), and then by defining wheresplit = syntsplit( S i , T i , L i ) so that it splits sentences close to the middle, while ensuring that at least a relation is broken whenever possible.",
"Since not all relations raise translation ambiguities when broken, one can choose which of them must be prioritized; in this work we chose pronominal coreferences.",
"Multi-split.",
"The aforementioned methods can be extended to splitting sentences in more than two segments.",
"The more we split sentences the more likely it is that context is needed for each segment, hence increasing training signal for contextual parameters.",
"In Section 6.3, we present an empirical comparison between the four splitting methods.",
"We refer to Appendix A and to our implementation 1 for further details.",
"To give an explicit picture of how and why sentence splitting helps learning contextual parameters, we",
"processed the source training data of IWSLT17 with CoreNLP (Manning et al., 2014) and we computed some statistics on coreference chains and dependency parse trees, before and after applying the middle-split method.",
"Statistics show how splitting the sentences of a document helps in two ways: More cases.",
"Splitting generates new cases that require context for disambiguation, making training signal more abundant.",
"When syntactic dependencies are split in two segments, the model needs to access the context for reconstructing the syntactic structure of the source sentence and correctly translate it, as shown in Figure 1.",
"In order to have an idea of the magnitude of this effect, we calculated the percentage of the sentences where the splitting method breaks at least one syntactic dependency between the main verb of the sentence (the root) and :",
"(i) the subject or object (18.1% of the sentences);",
"(ii) any complement (9.5%);",
"(iii) any modifier (9.3%).",
"If we consider all the dependencies with the root, except punctuations, we find that in 84.8% of the sentences at least a syntactic dependency is broken.",
"Given such high proportion, the middle-split variant is in fact a good approximation of a syntactically supported splitting approach.",
"These cases add up to the many other cases of broken relations, such as coreferences, which make the overall contextual training signal more abundant.",
"Denser cases.",
"The splitting also has the effect of shortening the average length of text sequences, which eases the job of context-aware systems because they have to attend to fewer words while looking for context.",
"In Figure 3, we show how many antecedents of an anaphoric pronoun are present in the data at a given distance d , expressed as number of sentences from the current one for original data, and number of segments for split data.",
"d = 0 means that both the pronoun and its antecedent are in the same sentence (or segment); d = 1 means that the antecedent is in previous sentence (or seg-ment), and so on.",
"We show statistics up to d = 3 , which is the maximum context distance that we experiment with.",
"The absolute number of antecedents is divided by the average length of a sentence or segment.",
"The resulting bar plot shows that splitting sentences into segments makes pronominal antecedents more dense in the set of context tokens that the model is attending, which fosters the learning of contextual parameters.",
"The same effect applies to the other discourse phenomena that require contextual disambiguation.",
"2 5 Experimental setup 5.1 Data We conduct experiments for three language pairs: English Russian/German/French, on different domains.",
"Following Kim et al. (2019), we pre-train sentence-level baselines on large sentence-level parallel data to make them as robust as possible.",
"In particular, we employ data released by Voita et al. (2019b) for En Ru (6.0M sentences from OpenSubtitles2018 (Lison et al., 2018)), data from the WMT17 3 news translation shared task for En De ( 5.2M sentences), and data from WMT14 4 for En Fr ( 35.8M sentences).",
"We train the contextual parameters of context-aware models in two settings, while freezing the rest of the parameters: High resource data.",
"For En Ru, it consists of the document-level data released by Voita et al. (2019b).",
"For the other two language pairs, we build the training set by assembling",
"(i) News-Commentary-v12 for En De and News-Commentary-v9 for En Fr;",
"(ii) Europarl-v7 for En De/Fr;",
"(iii) TED talks subtitles released by IWSLT17 (Cettolo et al., 2012) for En De/Fr.",
"Low resource data.",
"For En Ru, it consists of a random subset of the high resource documents, amounting to 1/10th of its total.",
"For En De/Fr, we use IWSLT17's TED talks alone.",
"The resulting size of the two training settings after pre-processing is reported in Table 1.",
"In the 2 More details are available in Appendix B, along with the same statistics for Opensubtitles2018.",
"case of En De/Fr, baselines and context-aware models that were trained on high resources are also fine-tuned on IWSLT17, so that both high and low resource settings can be bench-marked on the IWSLT17's test set 2015.",
"Test-sets 2011-2014 are used as development set.",
"For En Ru, we use the dev and test sets provided by Voita et al. (2019b).",
"5 5.2 Evaluation Besides evaluating average translation quality with BLEU (Papineni et al., 2002), 6 we employ three contrastive test suites for the evaluation of the translation of discourse phenomena.",
"7 En Ru EllipsisVP (Voita et al., 2019b).",
"Consisting of 500 examples from OpenSubtitles2018, each containing multiple contrastive hypotheses to evaluate the translation of verb phrase ellipses.",
"Source sentences contain an auxiliary verb (e.g. \"do\") and an omitted main verb, which can be imputed thanks to one of the three context sentences.",
"Voita et al. (2019b) proposed test sets for the evaluation of other discourse phenomena, but we do not use them because they are conceived for systems using target-side context too.",
"En De ContraPro (Mller et al., 2018).",
"A large-scale test set from OpenSubtitles2018 (Li-son et al., 2018), that measures translation accuracy of the English anaphoric pronoun it into the corresponding German translations er , sie or es .",
"Examples are balanced across the three pronoun classes (4,000 examples each).",
"Each example requires identification of the pronominal antecedent, either in the source or target side, that can be found in the current sentence or any of the previous ones.",
"En Fr ContraPro (Lopes et al., 2020).",
"A large-scale test set from OpenSubtitles2018, completely analogous to the previous one but focused on the translation of two English pronouns: it and 5 We report in Appendix C a re-cap of the datasets used and details about pre-processing.",
"Transformer-base by Vaswani et al. (2017).",
"K1 .",
"A context aware multi-encoder architecture with outside integration (see Section 2.2), that encodes a single past source sentence as context.",
"K3 .",
"A context aware multi-encoder architecture with outside integration , that encodes three past source sentences as context.",
"8 For both K1 and K3 , sentence-level parameters S follow the Transformer-base configuration (hid-den size of 512, feed forward size of 2048, 6 layers, 8 attention heads, total of 60.7M parameters), while contextual parameters C follow hierarchical architecture with source-side encoder proposed by Miculicich et al. (2018) (hidden size of 512, feed forward size of 2048, 8 attention heads, total of 4.7M parameters).",
"9 Context-aware models are trained following the two-step strategy described in Section 2.2.",
"Sentence-level parameters S of both K1 and K3 are initialized with K0 and freezed.",
"This has the advantage of saving time and computation, since only a small fraction of parameters ( C ) is trained (4.7M over a total of 65.2M).",
"In this section we provide evidence about the difficulty of training contextual parameters on document-level data.",
"In the first block of lines of Table 2, after the results of the sentence-level baseline K0 , we report performance of context-aware models trained on original document-level data, comparing low and high resource settings.",
"When trained on low resources , models display good BLEU on the test set, generally without relevant degradation with respect to K0 , or even with some improvements.",
"However, such marginal fluctuations in BLEU are difficult to interpret, as they do not necessarily correspond to better or worse translation (Freitag et al., 2020).",
"Accuracy on the contrastive test sets also increases marginally over baseline, if at all, for En De/Fr.",
"8 Although the splitting does not increase the number of inter-segment phenomena for d > 1 , it strengthens the signal by making it more dense (see Section 4.2).",
"Thus, K3 and any wider-context model can profit from the proposed approach.",
"9 Details can be found in Appendix C 4562 En De En Fr En Ru Avg.",
"K1 even shows a slight degradation of performance over the sentence-level baseline for En Fr.",
"These results highlight the struggle of contextual parameters to learn an appropriate use of context, other than acting as mere regularizers, as it was suggested by Kim et al. (2019) and Li et al. (2020).",
"On Russian instead, models display some improvements w.r.t. K0 .",
"This aligns with our expectations, since En Ru Low Res has a volume of inter-sentential discourse phenomena (such as coreferences) that is comparable with En De/Fr Low Res, but sentences are 2.5x shorter.",
"10 In other words, the double challenge of sparsity is mitigated on this corpus.",
"When trained on high resources , systems show substantial improvements in their context-modeling capabilities, on all language pairs.",
"Instead, BLEU improves of a few decimal points only, showing its limits to measure improvements in context-aware translation.",
"These results confirm the intuition discussed in Section 3: increasing the volume of data is a trivial solution to mitigate sparsity.",
"In this section, we show that the proposed pretraining strategy is a more efficient answer to the double challenge of sparsity than simply adding more data, and one that allows improvements when resources are abundant too.",
"The second block of Table 2 displays performance of models that have undergone d&r pre-training on the same document-level data as the models in the previous block, but where sentences were split in two segments following the middle-split method with l min = 7 (see 4.1).",
"During d&r pre-training, K1 and K3 encode one and three past segments (instead of 10 See Table 1; more details can be found in Appendix B sentences), respectively.",
"After d&r pre-training, models have been tuned and evaluated on original, non-split data.",
"The pre-training proves to be very effective, as all models show strong improvements in terms of accuracy on the test suites, with the sole exception of K1-d&r on En Fr High Res.",
"The average improvement is of +10.79 accuracy points on Low Res, +8.49 on High Res, showing that d&r brings strong improvements even when data are abundant.",
"Interestingly, improvements are not uniformly distributed across language pairs and domains: +17.20 on average for En Ru, +8.67 for En De, +3.09 for En Fr.",
"In terms of BLEU instead, we keep seeing minor fluctuations.",
"This confirms that, while context-aware translation improves dramatically, the average translation quality measured with BLEU stays more or less constant.",
"11 It is now evident that a proper assessment of multi-encoder approaches can not be undergone without careful training of contextual parameters that targets the problem of sparsity.",
"Efficiency .",
"A comparison between d&r models trained on Low Res against models trained on High Res without d&r shows another quality of the d&r pre-training strategy: efficiency.",
"The same context-aware models achieve superior performances with 1/10th of the document-level data and a much shorter training time (last column).",
"Following Section 4.1, we study the impact of using a different splitting method other than middlesplit .",
"All the variants are applied to the En De/Fr 11 To verify that the improvements on test suites after d&r pre-training really come from a better use of context, we present in Appendix D an analysis of pronoun translation by antecedent distance, and an ablation study in which we test models on ContraPro with inconsistent context.",
"low resource setting (IWSLT), with l min = 7 , and the d&r pre-trained models are evaluated on ContraPro.",
"The aligned-split method is based on alignments learned with fast_align (Dyer et al., 2013), while for the synt-split method we retrieve intra-sentential pronominal coreferences with CoreNLP (Manning et al., 2014), and we try to split them wherever present in a sentence-pair.",
"We split sentences as close to the middle as possible, while attempting to break the maximum number of coreferences.",
"12 Finally, for the multi-split method, we split sentence-pairs in a half for len ( S i ) 7 , and also in three segments of identical size for len ( S i ) 15 .",
"The performance differences between models pre-trained with middle-split and the other variants are reported in Table 3.",
"As we can see, splitting variants allow small improvements in 7 cases out of 10, although variations are marginal: the simple middle-split method seems to be already close to optimal.",
"This observation can be explained by multiple elements.",
"Firstly, middle-split produces segment pairs that are already well aligned: most of the source and target segments are aligned with the exception of one or two words, and the fact of having only a few misplaced words might act as a regularization factor.",
"Secondly, middle-split breaks a syntactic relation for the vast majority of sentences already, as explained in Section 4.2, which means that improvements achieved with syntactically driven splitting can only be marginal.",
"Thirdly, splitting in more than one segment can be beneficial in some cases, because it allows to break more syntactic relations and increase density of signal, but it also increases the risk of misalignment between source and tar-12 More sophisticated synt-split methods could be devised, targeting other discourse phenomena, or several of them at the same time, with different degrees of priority.",
"get, and might make the task too hard.",
"Finally, tools like fast_align and CoreNLP are characterized by a non-negligible language-dependent error rate, which affects the performance of the methods.",
"In conclusion, d&r pre-training with middle-split seems to be the most convenient alternative for most use-cases because of its efficacy, its simplicity and its language-independence.",
"Even though middle-split relies on word order similarity between source and target languages, we argue that the required degree of similarity is met by a large number of language pairs, in the order of millions.",
"In fact, there are around 4,000 written languages in the world (Eberhard et al., 2021), and most of them can be grouped in a few types with similar word orders, as shown by the ample literature on word order typologies (Tomlin, 2014; Dryer and Haspelmath, 2013).",
"The primary order of interest is the constituent order , concerning the relative order of subject (S), object (O) and verb (V) in a clause.",
"There are seven possible language types with respect to the constituent order (Dryer, 2013c): SOV, SVO, VSO, VOS, OVS, OSV, NDO (non-dominant or-der).",
"Tomlin (2014) estimates that more than 40% of the world languages belong to the SOV type (languages adopting the SOV order), another 40% belong to the SVO type, while almost 10% of languages adopt VSO order.",
"The other types are rarer.",
"As we have shown above, the middle-split method is beneficial both in the case of language pairs of the same type, that deploy the same constituent order, like En-Fr/Ru, which all adopt SVO order, as well as for language pairs that belong to different types, as for En-De, where English is SVO and German is NDO, deploying both SOV and SVO according to the use cases (Dryer, 2013c).",
"Similar observations also apply when we look at other word order categories.",
"For instance, when looking at the order of modifiers or adverbials, languages can be clustered in a few types too, where the wide majority of languages belong to the biggest or second biggest type (Dryer, 2013b,a).",
"Therefore, we believe that our method can be beneficial for millions of language pairs, including many low resource languages belonging not only to same word order types, but also slightly different ones (as in the case of SOV and SVO).",
"For a wider contextualization of our results, we report in the first block of Table 4 some experimental results by Lopes et al. (2020), who trained and evaluated various context-aware approaches on the same low-resource setting (IWSLT17), adopting the very same experimental setup as ours.",
"Specifi-cally, they trained and evaluated: K0 : a baseline like ours; Zhang2018 : a multi-encoder model that encodes three past source sentences, with both inside integration (see 2.2) and outside integration (Zhang et al., 2018); Tu2018 : a multi-encoder model that encodes all the past context (at any distance) with a caching system with inside integration in the decoder, both on the source and the target side (Tu et al., 2018); Concat21 : a many-to-one (2-to-1) single-encoder approach (see 2.1) that exploits contextual information from one past source sentence; Concat22 : a many-to-many (2-to-2) single-encoder approach that exploits contextual information from one past sentence, both on the source and the target side; Multi-encoder models ( Zhang2018 and Tu2018 ) perform poorly or even lag behind K0 , confirming the difficulty of multi-encoder models to learn contextualization on low resources and without any help against the problem of sparsity.",
"Instead, concatenation approaches are stronger, likely because they do not have extra contextual parameters to train and simply finetune the same sentence-level Transformer architecture on the context-aware task.",
"This makes them less affected by the problem of sparsity.",
"Our splitting strategy proves to be very effective, since both d&r pre-trained models outperform Concat21 using the same amount of training data.",
"Moreover, K1/3 d&r beat the strong Concat22 on En Fr, which has the non-negligible advantage of using also the target-side context along the source-side.",
"This benchmarking show that multi-encoder models are a viable solution for context-aware NMT, but that they need to be carefully (pre)-trained to harness their capabilities.",
"We leave to future works a more detailed comparison between single-encoder and multi-encoder approaches, as well as between d&r and other recently proposed pre-training strategies for context-aware models (e.g., Fernandes et al. (2021)).",
"Multi-encoder models are a broad family of context-aware NMT models.",
"In this work we have discussed the difficulty of training contextual parameters due to the sparsity of the words in need of context, and the sparsity of their relevant context.",
"We have proposed a pre-training approach called divide and rule , based on splitting the training sentences, with four variants.",
"After having analysed the implications of splitting on the distribution of discourse phenomena within the training data, we have shown that d&r allows to learn contextual parameters better and faster than by simply increasing training data.",
"We have also shown that the simplest and language independent splitting variant, middlesplit , is a strong baseline that can be easily applied for pre-training any multi-encoder NMT model.",
"We thank the anonymous reviewers for their insightful comments.",
"This work has been partially supported by the Multidisciplinary Institute in Artificial Intelligence MIAI@Grenoble Alpes (ANR-19-P3IA-0003), and it was granted access to the HPC resources of IDRIS under the allocation 2020-101501 made by GENCI."
]
| [
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"method",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"method",
"method",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"result",
"result",
"other",
"other"
]
|
[
"In this paper, we firstly empirically find that existing models struggle to handle hard mentions due to their insufficient contexts, which consequently limits their overall typing performance.",
"To this end, we propose to exploit sibling mentions for enhancing the mention representations.",
"Specifically, we present two different metrics for sibling selection and employ an attentive graph neural network to aggregate information from sibling mentions.",
"The proposed graph model is scalable in that unseen test mentions are allowed to be added as new nodes for inference.",
"Exhaustive experiments demonstrate the effectiveness of our sibling learning strategy, where our model outperforms ten strong baselines.",
"Moreover, our experiments indeed prove the superiority of sibling mentions in helping clarify the types for hard mentions.",
"Fine-Grained Entity Typing (FGET) aims to assign one or more fine-grained types to an entity mention given its context.",
"For instance, the mention Steve Jobs should be classified as Person and Entrepreneur under the context Steve Jobs cofounded Apple ....",
"Many tasks have witnessed the importance of FGET, such as relation extraction (Jiang et al., 2020b; Chu et al., 2020; Jiang et al., 2020a; Cheng et al., 2021), entity linking (Onoe and Durrett, 2020), and other tasks (Jiang et al., 2020c; Zhang et al., 2020b; Liu et al., 2021b).",
"It is challenging to learn effective representations for contextualized mentions 1 in many information extraction tasks (Gao et al., 2022), espeEqual Contribution Corresponding Authors 1 To simplify the statement, in the rest of this paper, the term mention is referred to as the contextualized mention, i.e., a mention accompanied with its context.",
"cially in FGET, since the representations are required to well distinguish fine-grained types with similar but different semantics.",
"Noticeable efforts have been made to learn type-aware representations for mentions (Ren et al., 2016; Xin et al., 2018; Choi et al., 2018; Zhang et al., 2018; Lin and Ji, 2019; Abhishek et al., 2017; Xu and Barbosa, 2018; Ali et al., 2020; Chen et al., 2021) and sig-nificant progress has been achieved.",
"However, as supported by our empirical experiments, existing SOTA models perform poorly on a certain number of hard mentions, leading to limited overall performance.",
"The main reasons are the following challenges.",
"First, the structure of some contexts surrounding the hard mentions are inherently too complex to extract informative features for identifying entity types.",
"Second, the contexts of some hard mentions are ambiguous and thus it is insufficient to handle these mentions by learning from their contexts only.",
"In this paper, we show that representation learning of such hard mentions can be well handled by learning informative knowledge from their sibling mentions .",
"Sibling mentions refer to the mentions that potentially share the same or semantically similar types (e.g., country and nation ) with the target mention.",
"We illustrate how sibling mentions assist classifying hard mentions in Figure 1.",
"Intuitively, the context of the target mention Sharp is ambiguous and insufficient for inferring the ground-truth types (i.e., organization , company , and tech company ), since both a person and a company can sign a deal with Qualcomm.",
"Fortunately, the sibling mentions provide rich information that works as an important supplement for the target mention Sharp .",
"By aggregating the supplementary information from siblings, it is promising to learn effective representations with less ambiguity for hard target mentions.",
"To utilize sibling mentions, we model FGET as a heterogeneous graph learning problem.",
"The graph is composed of two kinds of nodes, namely the mentions and the types.",
"Besides, there are three kinds of edges connecting the nodes as shown in the left part of Figure 1, which represent the sibling relationship between mentions, the hierarchical relationship between types, and the isLabel relationship between mentions and types, respectively.",
"The sibling relationship is considered as the most important part in our graph.",
"For detecting it, we propose two similarity metrics, based on which we design an effective sibling selection algorithm.",
"Upon the constructed nodes and edges, we employ an attentive graph neural module to learn their representations.",
"Particularly, the representations of mention nodes are enriched by aggregating the information from their sibling and type neighbors.",
"It is also noteworthy that, during inference stage, our graph model is scalable to include the unseen test mentions as new nodes and connect them with their existing sibling mention nodes in the graph to derive reliable representations for predictions.",
"Extensive experiments are conducted to verify the effectiveness of our model.",
"Our experimental results demonstrate that our model outperforms several strong baselines on the standard test sets with a large margin.",
"Moreover, our model is indeed able to well handle hard mentions with the help from sibling mentions.",
"We summarize our contributions as follows: We are the first to point out a bottleneck issue suffered by existing SOTA models, i.e., they perform poorly on a certain number of hard mentions, and we quantitatively analyze its influence on typing accuracy via measuring hard mentions by entropy.",
"We are the first to exploit sibling information for mention representation learning in FGET.",
"E m , E y and E m,y are obtained as follows: E m = { ( m i , m j ) | m i , m j V m , isSib ( m i , m j ) = 1 } (1) E y = { ( y i , y j ) | y i , y j V y , isA ( y i , y j ) = 1 } (2) 2077 E m,y = { ( m i , y j ) | m i V m , y j V y , isLabel ( m i , y j ) = 1 } (3) where isA ( y i , y j ) = 1 indicates y j is the parent or child type of y i in the type hierarchy 2 , and isLabel ( m i , y j ) = 1 means mention m i is labeled with the type y j in the training set.",
"We design two effective metrics for sibling detection and propose a scalable graph model to take advantages of sibling mentions.",
"Given a mention m and the type set Y , an FGET model needs to predict the correct types Y m ( Y m Y ) for m based on its context.",
"In this paper, mention representations are learned and refined with the help of sibling mentions and ground-truth types.",
"To achieve it, we propose a heterogeneous graph model enhanced by sibling mentions for FGET, as illustrated in Figure 1.",
"First, a mention-type graph G is constructed from training samples (Sec 3).",
"Then, the features for mentions and types are learned by an attentive graph neural module upon G (Sec 4).",
"During inference stage (Sec 5), we add test mentions into graph G by connecting them to their sibling mentions in the training set.",
"By aggregating sibling information, the representations of test mentions are generated and used for type prediction.",
"Consider graph G = ( V m , V y , E m , E y , E m,y ) , where V m and V y are the set of mention nodes and type nodes, respectively.",
"E m is the set of edges between the target mentions and their sibling mentions, while E y is the set of edges between types.",
"E m,y denotes the edges connecting the target mentions and the ground-truth types.",
"Since type hierarchy and the ground-truth types of mentions are available in the training set, V m , V y , E y and E m,y can be easily derived.",
"isSib ( m i , m j ) = 1 means m j is the sibling mention of m i , which will be discussed in Sec 3.2.",
"The key to construct E m is to define isSib ( m i , m j ) , i.e., the criterion for judging whether m j is the sibling of m i .",
"We design two metrics to detect the sibling relationships between mentions, named (unsupervised) word distribution-based and (super-vised) typing distribution-based metrics.",
"Word distribution-based metric The basic as-sumption for this metric is that mentions sharing more contextual words tend to have more similar ground-truth types.",
"We use TF-IDF to encode mentions as sparse feature vectors.",
"Then the sibling similarity between any two mentions is measured by the cosine similarity of their vectors.",
"Typing distribution-based metric In this metric, we first derive the prior score distributions over the type set Y for all the mentions in the dataset from an extra base model (Lin and Ji, 2019) trained on the same dataset.",
"Then the sibling mentions are selected by their cosine similarities to the target mention based on the score distributions.",
"Sibling mention selection Given one of the metrics above, we obtain the sibling mentions according to Algorithm 1.",
"Note that for each target mention m i V m , we first select a subset V (cid:48) m from V m and only calculate the similarities between m i and the mentions m j V (cid:48) m .",
"The contexts of mentions from V (cid:48) m share at least one word with that of the target mention and |V (cid:48) m | (cid:28) |V m | , which greatly reduces time complexity.",
"Then, based on the similarity scores, we choose the top-K most similar mentions V (cid:48) m,K as the siblings for m i and let isSib ( m i , m j ) = 1 for each m j V (cid:48) m,K .",
"Be aware that, by definition, the sibling relationship is directed, i.e., isSib ( m j , m i ) = 1 does not ensure isSib ( m i , m j ) = 1 holds.",
"2 The edges between y i and its parent or child type y j are directed, as detailed in",
"Eq.(5) Algorithm 1: Sibling mention selection Input : the set of mention nodes V m 1 for m i , m j V m do 2 isSib ( m i , m j ) 0 3 end 4 for m i V m do 5 (cid:46) select a candidate set V (cid:48) m from V m 6 for m j V (cid:48) m do 7 (cid:46) compute similarity sim ( m i , m j ) 8 end 9 (cid:46) select the topK similar mentions V (cid:48) m,K from V (cid:48) m 10 for m j V (cid:48) m,K do 11 isSib ( m i , m j ) 1 12 end 13 end 4 Graph-based Typing Model 4.1 Attentive Graph Neural Module We employs graph neural networks (GNNs) with L layers (Velickovic et al., 2018; Xu et al., 2019) to aggregate the information of sibling mentions and types for learning mention representations.",
"At the first layer of G , the embedding of each type y i Y (denoted by y (1) i R d r ) is randomly initialized.",
"In contrast, to capture the rich features from contexts, the initial embeddings for mentions are derived by a parameterized mention encoder g ( ) , i.e., m (1) i = g ( m i ; M ) R d r (details in Sec 4.2).",
"Given the initial mention and type embeddings (i.e., m (1) i and y (1) i ), the graph module iteratively updates them to obtain m ( l +1) i and y ( l +1) i .",
"Update of y ( l +1) i In the l -th ( l = 1 , ..., L 1 ) layer, the updating formula for type embedding y ( l +1) i R d r is: y ( l +1) i = f 0 (cid:32) (cid:88) y k Y yi ( l ) i,k f 1 ( y ( l ) k ) + f 1 ( y ( l ) i ) (cid:33) , (4) where f 0 and f 1 are linear layers with ReLU activation.",
"Y y i denotes the type neighbors for y i in graph G , which are the parent or child types of y i in the type hierarchy.",
"( l ) i,k is the attention weight from type y i to y k defined as ( l ) i,k = (cid:40) (cid:0) y ( l ) (cid:62) i W ( l ) 1 y ( l ) k (cid:1) , y k is a child type ; (cid:0) y ( l ) (cid:62) k W ( l ) 1 y ( l ) i (cid:1) , y k is a parent type , (5) 2078 W ( l ) 1 R d r d r is the weight matrix to model the parent-child relationship.",
"Note that",
"Eq.(4) does not involve mention embeddings and only focuses on learning the hierarchical structure of types.",
"The interaction between types and mentions will be modeled by",
"Eq.(6) during the update process of mention embeddings.",
"(cid:33) ,",
"4.4 Loss Function The loss over m i is computed as: (cid:96) i = |Y| (cid:88) k =1 (cid:0) ik log p i [ k ]+(1 ik ) log(1 p i [ k ]) (cid:1) (9) where ik { 0 , 1 } indicates whether y k is the ground-truth type of m i in the training set.",
"The overall loss is the average over all the mentions, i.e., L = 1 |V m | (cid:80) i (cid:96) i .",
"where M m i and Y m i are the sibling and type neighbors 3 of m i in graph G .",
"( l ) i,j and ( l ) i,k are the attention weights from m i to mention m j and type y k in the l -th layer, respectively.",
"Specifically, ( l ) i,j = (cid:0) m ( l ) (cid:62) i W ( l ) 2 m ( l ) j (cid:1) , ( l ) i,k = (cid:0) m ( l ) (cid:62) i W ( l ) 3 y ( l ) k (cid:1) , (7) W ( l ) 2 , W ( l ) 3 R d r d r are learnable parameters.",
"f 2 , f 3 , f 4 are linear layers with ReLU activation.",
"Here, we use the attention mechanism to distinguish informative neighbors.",
"Besides, the update process of target mentions involves both the sibling and type neighbors, whose representations are also updated at the same.",
"In this way, the learned representations for both mentions and types are more consistent and thus more reliable for prediction.",
"The mention encoder uses the backbone from Lin and Ji (2019).",
"Given a mention, we first encode the mention span and the surrounding context as the weighted sum of their ELMo (Peters et al., 2018) word representations respectively.",
"Then, the uni-fied feature vector for the mention is derived by concatenating both representations.",
"Given a mention m i , the predicted score distribution p i R |Y| over the type set Y is computed as:",
"p i = (cid:0) Y ( L ) W 4 m ( L ) i + W 5 m ( L ) i (cid:1) , (8) where Y ( L ) = (cid:104) y ( L ) 1 , y ( L ) 2 , ..., y ( L ) |Y| (cid:105) R |Y| d r , y ( L ) i and m ( L ) i are the type and mention embeddings in the L -th layer in GNN.",
"W 4 R d r d r and 3 We define that M m i contains m i itself, thus the selfconnections are taken into account during graph learning.",
"W 5 R |Y| d r are learnable parameters.",
"p i [ k ] (the k -th element in p i ) denotes the predicted probability for type y k .",
"The representation m ( L ) i incorporates the information from ground-truth type neighbors",
"(Eq.(6)).",
"However, it is then used for predicting the ground-truth types in turn",
"(Eq.(8)).",
"The setting that Y m i contains all the ground-truth types will inevitably degenerate the model to just focus on the type neighbors while totally ignore the mention neighbors.",
"To overcome this, each neighboring type in Y m i is randomly discarded with a certain probability .",
"In this way, the prediction of discarded type will force the model to learn from the sibling mentions rather than directly from type neighbors.",
"In the following, we describe the prediction process for test mentions.",
"Step 1 : Given a batch of n test mentions, we first obtain their sibling mentions.",
"To be specific, for each test mention m t , we select a candidate set V (cid:48) m from the training mentions V m .",
"Then, the cosine similarity is computed between m t and each m i in V (cid:48) m , based on which the top K mentions are selected as siblings (see Sec 3.2).",
"Step 2 : We add the test mentions as nodes into the mention-type graph G , where the test mentions are connected to their sibling mentions selected at Step 1.",
"Note that, in the new graph, test mentions have no type neighbors since their ground-truth types are not available.",
"Besides, there is no edge between any two test mentions in the new graph.",
"Step 3 : Following",
"Eq.(6), the representations of test mentions { m t } are updated by aggregating the embeddings for their sibling mentions.",
"Note that Y m t is empty, so no information from the ground-truth types are involved.",
"Through layers of updates, the final representations { m ( L ) t } are obtained.",
"Step 4 : Based on the mention embedding m ( L t and the type embeddings Y ( L ) , we predict the type score distribution for m t by",
"Eq.(8).",
"We conclude that, (1) our graph module is scalable to add arbitrary number of unseen test mentions as new nodes to the existing graph to derive their representations.",
"By contrast, many popular graph settings (Kipf and Welling, 2017; Velickovic et al., 2018; Wang et al., 2019) fail to extend to new nodes.",
"(2) Since the embeddings for sibling mentions have been well learnt during training, the only need is to compute the embeddings for test mentions for prediction, which are derived simultaneously during graph inference with high efficiency.",
"We evaluate the proposed model on two widely-used datasets: OntoNotes and BBN.",
"OntoNotes The original OntoNotes dataset is annotated by distant supervision (Gillick et al., 2014).",
"The training, development and test samples in OntoNotes are about 251K, 2K and 9K, respectively.",
"We also conduct experiments on the augmented version 4 (Choi et al., 2018) with 793K training samples 5 .",
"The above two versions share the same test set and development set, as well as the same type set of size 89.",
"BBN Different from OntoNotes, BBN is manually annotated (Weischedel and Brunstein, 2005).",
"The training, development and test set contain about 84K, 2K and 14K samples respectively, and the type set contains 47 type in total.",
"Our model is implemented based on the PyTorch Geometric package (Fey and Lenssen, 2019).",
"In the main experiments (Sec 6.4), we obtain the sibling mentions according to the typing distribution-based metric described in Sec 3.2.",
"We conduct hyperparameter search on the development set and the optimal settings are presented in Appendix A. Following the previous works (Ling and Weld, 2012; Ren et al., 2016; Chen et al., 2019), we report the performance in terms of strict accuracy (Acc), macro-average F1 score (Ma-F1) and micro-average F1 score (Mi-F1).",
"To guarantee the relia-4 http://nlp.cs.washington.edu/entity_type 5 We use the open-sourced version, which is a subset of the dataset reported in Choi et al. (2018).",
"We compare our proposed model with several state-of-the-art FGET models: (1) AFET (Ren et al., 2016); (2) AAA (Abhishek et al., 2017); (3) NFETC (Xu and Barbosa, 2018); (4) NEURAL (Shimaoka et al., 2017); (5) ACT (Zhang et al., 2018); (6) Lin and Ji (2019); (7) Chen et al. (2020); (8) LABELGCN (Xiong et al., 2019); (9) Choi et al. (2018); (10) Ali et al. (2020).",
"Note that Lin and Ji (2019) is considered as an important baseline in our experiments and is marked with (cid:70) in Table 1-3, since we use it as the base model to derive the prior typing distributions for sibling selection (Sec 3.2).",
"Table 1, 2 and 3 illustrate the experimental results on the original and the augmented OntoNotes, as well as the BBN dataset.",
"Analysis The results demonstrate that learning from sibling mentions helps our model outperform most baselines across the benchmarks.",
"The detailed analysis is presented as follows: (1) We select sibling mentions according to the typing distribution from Lin and Ji (2019).",
"We observe that, after aggregating sibling information through the attentive graph neural module (Sec 4.1), our model significantly outperforms Lin and Ji (2019) on both the original OntoNotes and the BBN dataset.",
"When trained on the augmented OntoNotes of the same size, our model increases the accuracy score by more than 5% over Lin and Ji (2019) (cid:70) .",
"Compared with Lin and Ji (2019) which utilizes the full 3M augmented OntoNotes for training, our model still maintains a comparable performance and even improves the accuracy score by about 2%.",
"(2) Many previous works have demonstrated the effectiveness of modeling type hierarchy for entity typing (Ren et al., 2016; Xu and Barbosa, 2018; Xiong et al., 2019; Chen et al., 2020).",
"As a comparison, our model also considers the hierarchical information of types and incorporates it in a natural way (Sec 4.1).",
"From the results, we conclude that learning jointly from type hierarchy and sibling mentions can remarkably improve the typing performance.",
"(3) The attention mechanism plays an important role in our graph module and some of the baselines (Ren et al., 2016; Abhishek et al., 2017; Xu and 2080 Barbosa, 2018).",
"It not only helps identify the informative features from neighbors but also helps alleviate noise from the training data constructed by distant supervision (e.g., OntoNotes).",
"The results reveal that our graph-based solution is more effective than the existing solutions.",
"In Sec 3.2, we propose two similarity metrics to discover sibling relationships in graph G , and abbreviate them as: Word-based and Typing-based metrics.",
"Here, we provide two additional metrics for more detailed analysis: the Gold typing-based and the Random-based metrics, which are two extreme variations of the typing-based metrics.",
"Under the gold typing-based metric, the siblings are selected by the gold typing distribution, where each dimension is 0 or 1 according to the ground-truth types of the mention.",
"In this way, candidate mentions that share more ground-truth types with the target mention will have larger cosine similarity and thus be chosen as the siblings with a higher probability.",
"On the contrary, under the random-based metric, siblings are selected at random.",
"Since the type set is large, the siblings are more likely to be irrelevant with the target mention and may contain much noise.",
"Measuring sibling quality Intuitively, different similarity metrics will affect the quality of siblings.",
"To quantify this effect, we measure the sibling quality for the test mentions V (cid:48) in the original OntoNotes and define the metrics as follows.",
"For each mention m i V (cid:48) , denote its ground-truth types as Y m i and sibling mentions in graph G (defined in Sec 3.1) as M m i .",
"Further, for M m i , we denote their ground-truth types as YM i , i.e., YM i = (cid:91) m j M mi \\{ m i } Y m j .",
"Similar to the definitions of Precision, Recall and F1, we define Purity, Coverage and Quality to measure the sibling quality of V (cid:48) : Purity = 1 |V (cid:48) | (cid:88) m i V (cid:48) = |Y m i YM i | |Y M i | Coverage = 1 |V (cid:48) | (cid:88) m i V (cid:48) |Y m i YM i | |Y m i | Quality = 2 Coverage Purity Coverage + Purity (11) Results The results are presented in Table",
"4. In general, the model performance is closely related to the sibling quality.",
"Besides, the typing-based metric performs better than the word-based metric.",
"This indicates that the the continuous type-level probability distribution is more reliable for sibling selection than the discrete word-level distribution.",
"The scores from the gold typing-based and the random-based metrics reveal the upper bound and the lower bound of the scores for the typing-based metric.",
"On the one hand, the quality of the siblings selected by the gold typing-based metric is much higher than those by other methods, with the Coverage up to 97 .",
"6% .",
"Meanwhile, its corresponding model also outperforms the other three by a large margin.",
"Note that the typing performance in this scenario is limited by the annotation 2081 Macro Micro Metrics Purity Coverage Quality P R F1 P R F1 Random-based 9.6 71.3 16.9 65.1 56.4 60.1 65.1 43.5 52.2 Word-based 12.2 75.6 21.0 82.7 69.7 75.3 82.1 60.4 69.7 Typing-based 13.1 82.3 22.5 83.3 71.4 76.5 83.3 61.9 71.0 Gold typing-based 21.8 97.6 35.7 96.1 80.0 86.9 94.9 69.2 80.0 Table 4: Comparison among different sibling selection metrics.",
"Note that the sibling quality scores of Gold typing-based do not reach 1, since the sibling mentions are only selected from a subset V (cid:48) m (see Sec 3.2) for time efficiency, which are not guaranteed to have exactly the same ground-truth types with the target mention.",
"quality to some extent.",
"Since OntoNotes is annotated by distant-supervision, the scores for the gold typing-based metric could not reach higher due to the label noise of the siblings.",
"On the other hand, a distinct drop of the scores is observed with the random-based metric.",
"This is reasonable since the randomly selected siblings contain much noisy information, which is helpless and even harmful for typing of the target mention.",
"It can be concluded from the above observations that there is still much room to improve the sibling quality as well as the typing performance of the sibling-enhanced model.",
"The model performance is sensitive to the size of selected sibling mentions for a target mention in the graph G .",
"Denote the sibling size as K , following the default hyper-parameter settings, we train our model on the original OntoNotes under different K { 0 , 5 , 10 , 15 } using the typing-based sibling selection metric.",
"The corresponding sibling quality and model performance are reported in Table",
"5. We observe that the best scores are obtained with the top 5 sibling mentions.",
"When K = 0 , the graph only contains the self-connections from the target mentions to themselves.",
"Without the additional information from siblings, the Macro F1 score decreases by 2 .",
"1% , which indicates the effectiveness of sibling mentions for improving the typing performance of our model.",
"When K (cid:54) = 0 , the Coverage score goes up while the Purity and Quality scores go down as K ranges from 5 to 15 .",
"Meanwhile, the typing performance decreases as K increases.",
"It suggests that, for OntoNotes, a properly smaller sibling size is a trade-off choice for the model to use siblings with higher quality and thus achieve better typing performance.",
"We randomly discard some type neighbors with a dropout probability during training (Sec 4.5), which forces the model to learn from the sibling mentions other than the ground-truth types.",
"Table 6 shows the results under different values of on the original OntoNotes dataset.",
"Generally, the model achieves better performance with larger .",
"This indicates discarding a large proportion of ground-truth types is beneficial for learning from sibling mentions.",
"Besides, it also narrows the difference between training and test settings where the test mentions do not have ground-truth types as neighbors.",
"The best performance is achieved when equals around 0 .",
"7 .",
"However, dropping all the type neighbors (i.e., = 1 ) will block the interaction between the type and mention representations in the graph, which may slightly damage the performance.",
"To select sibling mentions, we first derive the prior typing distribution from the base model (Lin and Ji, 2019) as described in Sec 3.2.",
"During experiments, we observe that the contextual information 2082 for some mentions are insufficient or too complex, which makes the base model confused on these mentions.",
"Entropy measures the uncertainty of a probability distribution.",
"Thus, we quantify the difficulty of mentions by the entropy of their corresponding prior typing distributions and define the mentions with the top500 highest entropy values as hard mentions , which account for about 5% of the whole mentions .",
"Table 7 compares the performance of our model and the base model on both the whole mentions and the hard mentions from the test dataset of the original OntoNotes.",
"we see that both models perform worse on the hard mentions than on the whole mentions.",
"Besides, except for the superiority of our model regarding the Acc, Ma-F1 and Mi-F1 scores, it also achieves a lower entropy value than the base model especially on the hard mentions.",
"This indicates the information from siblings makes the output type distributions more concentrated and therefore increases the confidence for model predictions.",
"To further provide an intuitive understanding about how our model benefits from sibling mentions, we present an example in Table 8.",
"As expected, the retrieved siblings based on the metric defined in Sec 3.2 share similar ground-truth types with the target mention.",
"This verifies the effectiveness of our sibling selection algorithm.",
"Moreover, we observe that the siblings even help predict the correct but out-of-gold-set types for the target mention in this case.",
"Although the annotated types for the target mention [ GM officials ] only contains /person in the test set.",
"The sibling mentions still provide a strong evidence for our model to also predict /per-son/title as a possible type for the target mention.",
"FGET is an important task for the downstream NLP tasks and many efforts have been make in improving its performance (Zhang et al., 2020a; Liu et al., 2021a).",
"Early works in FGET (Ling and Weld, 2012; Shimaoka et al., 2016) mainly focus Target mention : [ GM officials ] told workers late last week of the following moves: production of full-sized vans will be consolidated into a single plant in Flint, Mich.",
"Ground-truth : /person Prediction from our model : /person, /person/title Sibling 1 : It's been a steadily improving relationship., says the [ president ].",
"Sibling 2 : Apart from those two actions, Mr.Sikes and the three other [ commissioners ] said they expect to reexamine how AT&T is regulated since competition has increased.",
"Ground-truth : /person, /person/title Sibling 3 : HUD Secretary [ Jack Kemp ] backed an unsuccessful effort to strike such language last week, but received little support from the White House Ground-truth : /person, /person/artist, /person/artist/actor, /person/artist/author, /person/political_figure Table 8: An example to illustrate the relationship between the target mention and the sibling mentions from the Original OntoNotes.",
"on feature extraction for mentions, which do not consider label noise introduced by distant supervision (Gillick et al., 2014; Choi et al., 2018; Li et al., 2020).",
"Recent years have witnessed an increasing number of researchers being dedicated to data denoising.",
"A popular solution (Ren et al., 2016; Abhishek et al., 2017; Xu and Barbosa, 2018; Ali et al., 2020) is to design loss functions for the clean and noisy parts of the training data separately.",
"Nevertheless, Zhang et al. (2020c) proposes an automatic relabeling framework to estimate the pseudo-truth label distribution of each sample, which treats the noisy and clean data uniformly.",
"Besides, Chen et al. (2019) groups mentions of the same type into a compact cluster to improve the robustness of the model.",
"Ali et al. (2020) refines noisy representations by corpus-level contextual clues.",
"Onoe and Durrett (2019) introduces two additional models to delete the samples that are too noisy to be useful, and repair noisy labels for the retained examples.",
"In addition, there are some notable work which tries to build FGET with limited resources (Qian et al., 2021).",
"Modeling the type hierarchy is another important topic in FGET.",
"Prior solutions (Shimaoka et al., 2017) introduce a one-hot matrix to encode the hierarchy.",
"Xu and Barbosa (2018) proposes a hierarchy-aware loss function.",
"Recently, graph-based methods have been proven to be powerful in many NLP tasks (Kipf and Welling, 2017; Liang et al., 2021; Xu et al., 2019; Liang et al., 2022).",
"Using graphs 2083 to model the type hierarchy in FGET is a natural idea.",
"Jin et al. (2019) models the potential type correlations for in-knowledge-base entities via hierarchical multi graph convolutional networks (GCNs).",
"Further, Xiong et al. (2019) extends GCNs to a vast number of free-form types.",
"Chen et al. (2020) designs a multi-level learning-to-rank loss to leverage hierarchical information.",
"Recently, Onoe et al. (2021) models the mention and type representations in a box space instead of the traditional vector space.",
"In this paper, we firstly point out that SOTA typing models suffer from a bottleneck issue, i.e., they perform poorly on a certain number of hard mentions, which leads to their limited overall performance.",
"To this end, we propose to exploit sibling information for mention representation learning and define two metrics for detecting sibling relationship between mentions.",
"Further, we model sibling learning as a graph learning problem.",
"Our model is scalable in that, once trained, it can generate sibling-aware representations for previously unseen mentions ef-ficiently during inference stage.",
"Extensive experiments show that the proposed model indeed handles hard mentions well and thereby yields better overall performance than SOTA baseline models.",
"This work was partially supported by the National Natural Science Foundation of China (61876053, 62006062, 62176076), the Shenzhen Foundational Research Funding (JCYJ20200109113441941, JCYJ20210324115614039), Joint Lab of Lab of HITSZ and China Merchants Securities."
]
| [
"result",
"objective",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"objective",
"method",
"abstain",
"objective",
"result",
"objective",
"abstain",
"objective",
"objective",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"objective",
"abstain",
"method",
"abstain",
"other"
]
|
[
"Pre-trained language model representations have been successful in a wide range of language understanding tasks.",
"In this paper, we examine different strategies to integrate pre-trained representations into sequence to sequence models and apply it to neural machine translation and abstractive summarization.",
"We find that pre-trained representations are most effective when added to the encoder network which slows inference by only 14%.",
"Our experiments in machine translation show gains of up to 5.3 BLEU in a simulated resource-poor setup.",
"While returns diminish with more labeled data, we still observe improvements when millions of sentence-pairs are available.",
"Finally, on abstractive summarization we achieve a new state of the art on the full text version of CNN-DailyMail.",
"1 1 Introduction Pre-training of language models has been shown to provide large improvements for a range of language understanding tasks (Peters et al., 2018; Radford et al., 2018; Phang et al., 2018; Devlin et al., 2018).",
"The key idea is to train a large generative model on vast corpora and use the resulting representations on tasks for which only limited amounts of labeled data is available.",
"Pre-training of sequence to sequence models has been previously investigated for text classification (Dai and Le, 2015) but not for text generation.",
"In neural machine translation, there has been work on transferring representations from high-resource language pairs to low-resource settings (Zoph et al., 2016).",
"In this paper, we apply pre-trained representations from language models to language generaEqual contribution.",
"1 Code and pre-trained models are available at https://github.com/pytorch/fairseq/tree/bi_trans_lm/examples/pretraining tion tasks that can be modeled by sequence to sequence architectures.",
"Previous work on integrating language models with sequence to sequence models focused on the decoder network and added language model representations right before the output of the decoder (Gulcehre et al., 2015).",
"We extend their study by investigating several other strategies such as inputting ELMo-style representations (Peters et al., 2018) or fine-tuning the language model ( 2).",
"Our experiments rely on strong transformer-based language models trained on up to six billion tokens ( 3).",
"We present a detailed study of various strategies in different simulated labeled training data scenarios and observe the largest improvements in low-resource settings but gains of over 1 BLEU are still possible when five million sentence-pairs are available.",
"The most successful strategy to integrate pre-trained representations is as input to the encoder network ( 4).",
"We consider augmenting a standard sequence to sequence model with pre-trained representations following an ELMo-style regime ( 2.1) as well as by fine-tuning the language model ( 2.2).",
"The ELMo approach of Peters et al. (2018) forms contextualized word embeddings based on language model representations without adjusting the actual language model parameters.",
"Specifi-cally, the ELMo module contains a set of parameters 1 . . . L , to form a linear combination of the L layers of the language model: ELMo = (cid:80) Li =0 1 Z exp( i ) h k where is a learned scalar, Z is a constant to normalize the exp( i ) to sum to one and h k is the output of the k -th language model layer; the module also considers the input word embeddings of the language model.",
"We also apply layer normalization (Ba et al., 2016) to each h k before computing ELMo vectors.",
"We experiment with an ELMo module to input contextualized embeddings either to the encoder ( SRC-ELMO ) or the decoder ( TGT-ELMO ).",
"This provides word representations specific to the current input sentence and these representations have been trained on much more data than is available for the text generation task.",
"Fine-tuning the pre-trained representations adjusts the language model parameters by the learning signal of the end-task (Radford et al., 2018; Devlin et al., 2018).",
"We replace learned input word embeddings in the encoder network with the output of the language model ( SRC-FT ).",
"Specifically, we use the language model representation of the layer before the softmax and feed it to the encoder.",
"We also add dropout to the language model output.",
"Tuning separate learning rates for the language model and the sequence to sequence model may lead to better performance but we leave this to future work.",
"However, we do tune the number of encoder blocks N as we found this important to obtain good accuracy for this setting.",
"We apply the same strategy to the decoder: we input language model representations to the decoder network and fine-tune the language model when training the sequence to sequence model ( TGT-FT ).",
"Pre-training.",
"We train language models on two languages: One model is estimated on the German newscrawl distributed by WMT'18 comprising 260M sentences or 6B tokens.",
"Another model is trained on the English newscrawl data comprising 193M sentences or 5B tokens.",
"We learn a joint Byte-Pair-Encoding (BPE; Sennrich et al., 2016) vocabulary of 37K types on the German and English newscrawl and train the language models with this vocabulary.",
"Machine translation.",
"We consider two benchmarks: Most experiments are run on the WMT'18 English-German (en-de) news translation task and we validate our findings on the WMT'18 English-Turkish (en-tr) news task.",
"For WMT'18 English-German, the training corpus consists of all available bitext excluding the ParaCrawl corpus and we remove sentences longer than 250 tokens as well as sentence-pairs with a source/target length ratio exceeding 1.5.",
"This results in 5.18M sentence pairs.",
"We tokenize all data with the Moses tok-enizer (Koehn et al., 2007) and apply the BPE vocabulary learned on the monolingual corpora.",
"For WMT'18 English-Turkish, we use all of the available bitext comprising 208K sentence-pairs without any filtering.",
"We develop on newstest2017 and test on newstest2018.",
"For en-tr we only experiment with adding representations to the encoder and therefore apply the language model vocabulary to the source side.",
"For the target vocabulary we learn a BPE code with 32K merge operations on the Turkish side of the bitext.",
"Both datasets are evaluated in terms of case-sensitive de-tokenized BLEU (Papineni et al., 2002; Post, 2018).",
"2 Summarization.",
"We consider the CNN-DailyMail abstractive document summarization task comprising over 280K news articles paired with multi-sentence summaries.",
"CNN-DailyMail is a widely used dataset for abstractive text summarization.",
"Following (See et al., 2017), we report results on the non-anonymized version of CNN-DailyMail rather than the entity-anonymized version (Hermann et al., 2015; Nallapati et al., 2016) because the language model was trained on full text.",
"Articles are truncated to 400 tokens (See et al., 2017) and we use a BPE vocabulary of 32K types (Fan et al., 2017).",
"We evaluate in terms of F1-Rouge, that is Rouge-1, Rouge-2 and Rouge-L (Lin, 2004).",
"3 3.2 Language model pre-training We consider two types of architectures: a bidirectional language model to augment the sequence to sequence encoder and a uni-directional model to augment the decoder.",
"Both use self-attention (Vaswani et al., 2017) and the unidirectional model contains N = 12 transformer blocks, followed by a word classifier to predict the next word on the right.",
"The bi-directional model solves a cloze-style token prediction task at training time (Baevski et al., 2019).",
"The model consists of two towers, the forward tower operates left-to-right and the tower operating right-to-left as backward tower; each tower contains N = 12 trans-2 sacreBLEU signatures: BLEU+case.mixed+lang.en{ de,tr } +numrefs.1+smooth.exp+test.wmt18+tok.13a +version.1.2.1 3 We use the following parameters for ROUGE-1.5.5.pl : -m -a -n 2 160K 320K 640K 1280K 2560K 5186K 1 0 1 2 3 4 5 6 Bitext tokens BLEU d e lt a w r t b a s e li n e SHARED SRC-ELMO SRC-FT TGT-ELMO TGT-FT SRC-ELMO + SHDEMB Figure 1: BLEU difference to a bitext-only baseline when adding pre-trained language model representations to a neural machine translation model in different simulated bitext settings.",
"former blocks.",
"The forward and backward representations are combined via a self-attention module and the output of this module is used to predict the token at position i .",
"The model has access to the entire input surrounding the current target token.",
"Models use the standard settings for the Big Transformer (Vaswani et al., 2017).",
"The bi-directional model contains 353M parameters and the unidirectional model 190M parameters.",
"Both models were trained for 1M steps using Nesterov's accelerated gradient (Sutskever et al., 2013) with momentum 0 .",
"99 following Baevski and Auli (2018).",
"The learning rate is linearly warmed up from 10 7 to 1 for 16K steps and then annealed using a co-sine learning rate schedule with a single phase to 0.0001 (Loshchilov and Hutter, 2016).",
"We train on 32 Nvidia V100 SXM2 GPUs and use the NCCL2 library as well as the torch distributed package for inter-GPU communication.",
"Training relies on 16-bit floating point operations (Ott et al., 2018) and it took six days for the bi-directional model and four days for the uni-directional model.",
"We use the transformer implementation of the fairseq toolkit (Ott et al., 2019).",
"The WMT en-de and en-tr experiments are based on the Big Transformer sequence to sequence architecture with 6 blocks in the encoder and decoder.",
"For abstractive summarization we use a base transformer model (Vaswani et al., 2017).",
"We tune dropout values of between 0.1 and 0.4 on the validation set.",
"Models are optimized with Adam (Kingma and Ba, 2015) using 1 = 0 .",
"9 , 2 = 0 .",
"98 , and (cid:15) = 1 e 8 and we use the same learning rate schedule as Vaswani et al. (2017); we perform 10K-200K depending on bitext size.",
"All models use label smoothing with a uniform prior distribution over the vocabulary (cid:15) = 0 .",
"1 (Szegedy et al., 2015; Pereyra et al., 2017).",
"We run experiments on 8 GPUs and generate translations with a beam of size 5.",
"We first present a comparison of the various strategies in different simulated parallel corpus size settings.",
"For each experiment, we tune the dropout applied to the language model representations, and we reduce the number of optimizer steps for smaller bitext setups as models converge faster; all other hyper-parameters are equal between setups.",
"Our baseline is a Big Transformer model and we also consider a variant where we share token embeddings between the encoder and decoder ( SHARED ; Inan et al., 2016; Press & Wolf, 2016).",
"Figure 1 shows results averaged over six test sets relative to the baseline which does not share source and target embeddings (Appendix A shows a detailed breakdown).",
"SHARED performs very well with little labeled data but the gains erode to practically zero in large bitext settings.",
"Pre-trained 160K 640K 5186K baseline 21.4 33.1 40.1 SRC-ELMO 26.6 35.6 41.8 SRC-FT 24.3 34.9 40.8 TGT-ELMO 21.3 31.9 40.5 TGT-FT 24.2 31.4 38.8 SRC-ELMO + SHDEMB 29.0 36.2 41.8 Table 1: BLEU on newstest2018 of WMT English-German in three simulated bitext size scenarios.",
"language model representations are most effective in low bitext setups.",
"The best performing strategy is ELMo embeddings input to the encoder ( SRCELMO ).",
"This improves the baseline by 3.8 BLEU in the 160K bitext setting and it still improves the 5.2M setting by over 1 BLEU.",
"We further improve SRC-ELMO by sharing learned word representations in the decoder by tying input and output embeddings ( SRCELMO + SHDEMB ).",
"This configuration performs even better than SRC-ELMO with a gain of 5.3 BLEU in the 160K setup.",
"Sharing decoder embeddings is equally applicable to SRC-FT .",
"Language model representations are much less effective in the decoder: TGT-FT improves the 160K bitext setup but yields no improvements thereafter and TGT-ELMO performs even worse.",
"We conjecture that pre-trained representations give much easier wins in the encoder.",
"Table 1 shows additional results on newstest2018.",
"Pre-trained representations mostly impacts the training time of the sequence to sequence model (see Appendix B): SRC-ELMO slows throughput during training by about 5.3x and SRC-FT is even slower because of the need to backpropa-gate through the LM for fine-tuning (9.2x).",
"However, inference is only 12-14% slower than the baseline when adding pre-trained embeddings to the encoder ( SRC-ELMO , SRC-FT ).",
"This is because the LM computation can be paralelized for ROUGE 1 2 L Lead-3 40.34 17.70 36.57 See et al. (2017) 39.53 17.28 36.38 Gehrmann et al. (2018) 41.22 18.68 38.34 baseline 40.07 17.61 36.78 SRC-ELMO + SHDEMB 41.56 18.94 38.47 Table 3: Abstractive summarization results on CNN-DailyMail.",
"all input tokens.",
"Inference is much slower when adding representations to the decoder because the LM needs to be invoked repeatedly.",
"Our current implementation does not cache LM operations for the previous state and can be made much faster.",
"The baseline uses a BPE vocabulary estimated on the language model corpora ( 3).",
"Appendix A shows that this vocabulary actually leads to sligtly better performance than a joint BPE code learned on the bitext as is usual.",
"Next, we validate our findings on the WMT'18 English-Turkish task for which the bitext is truly limited (208K sentence-pairs).",
"We use the language model vocab for the the English side of the bitext and a BPE vocabulary learned on the Turkish side.",
"Table 2 shows that ELMo embeddings for the encoder improve English-Turkish translation.",
"Following See et al. (2017), we experiment on the non-anonymized version of CNN-DailyMail.",
"When generating summaries, we follow standard practice of tuning the maximum output length and disallow repeating the same trigram (Paulus et al., 2017; Fan et al., 2017).",
"For this task we train language model representations on the combination of newscrawl and the CNN-DailyMail training data.",
"Table 3 shows that pre-trained embeddings can significantly improve on top of a strong baseline transformer.",
"We also compare to Gehrmann et al. (2018) who use a task-specific architecture compared to our generic sequence to sequence baseline.",
"Pre-trained representations are complementary to their method.",
"We presented an analysis of different strategies to add pre-trained language model representations to sequence to sequence models for neural machine",
"translation and abstractive document summarization.",
"Adding pre-trained representations is very effective for the encoder network and while returns diminish when more labeled data becomes available, we still observe improvements when millions of examples are available.",
"In future research we will investigate ways to improve the decoder with pre-trained representations."
]
| [
"abstain",
"method",
"result",
"result",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"result",
"objective"
]
|
[
"Graph neural networks have triggered a resurgence of graph-based text classification methods, defining today's state of the art.",
"We show that a wide multi-layer perceptron (MLP) using a Bag-of-Words (BoW) outperforms the recent graph-based models TextGCN and HeteGCN in an inductive text classification setting and is comparable with HyperGAT.",
"Moreover, we fine-tune a sequence-based BERT and a lightweight DistilBERT model, which both outperform all state-of-the-art models.",
"These results question the importance of synthetic graphs used in modern text classifiers.",
"In terms of efficiency, DistilBERT is still twice as large as our BoW-based wide MLP, while graph-based models like TextGCN require setting up an O ( N 2 ) graph, where N is the vocabulary plus corpus size.",
"Finally, since Transformers need to compute O ( L 2 ) attention weights with sequence length L , the MLP models show higher training and inference speeds on datasets with long sequences.",
"Text categorization is the task of assigning topical categories to text units such as documents, social media postings, or news articles.",
"Research on text categorization is a very active field as just the sheer amount of new methods in recent surveys shows (Bayer et al., 2021; Li et al., 2020; Zhou et al., 2020; Kowsari et al., 2019; Kadhim, 2019).",
"There are approaches based on a Bag of Words (BoW) that perform text categorization purely on the basis of a multiset of tokens.",
"Among them are Deep Averaging Networks (DAN) (Iyyer et al., 2015), a deep Multi-Layer Perceptron (MLP) model with n layers that relies on averaging the BoW, Simple Word Embedding Models (SWEM) (Shen et al., 2018) that explores different pooling strategies for pretrained word embeddings, and fastText (Bojanowski et al., 2017), which uses a linear layer on top of pretrained word embeddings.",
"These models count the occurrence of all tokens in the input sequence, while disregarding word position and order, and then rely on word embeddings and fully connected feedforward layer(s).",
"We call these BoW-based models .",
"Among the most popular recent methods for text categorization are graph-based models such as TextGCN (Yao et al., 2019) that first induce a synthetic word-document co-occurence graph over the corpus and subsequently apply a graph neural network (GNN) to perform the classification task.",
"Besides TextGCN, there are follow-up works like HeteGCN (Ragesh et al., 2021), TensorGCN (Liu et al., 2020), and HyperGAT (Ding et al., 2020), which we collectively call graph-based models .",
"Finally, there is the well-known Transformer (Vaswani et al., 2017) universe with models such as BERT (Devlin et al., 2019) and its size-reduced variants such as DistilBERT (Sanh et al., 2019).",
"Here, the input is a (fixed-length) sequence of tokens, which is then fed into multiple layers of self-attention.",
"Lightweight versions such as DistilBERT and others (Tay et al., 2020; Fournier et al., 2021) use less parameters but operate on the same type of input.",
"Together with recurrent models such as LSTMs, we call these sequence-based models .",
"In this paper, we hypothesize that text categorization can be very well conducted by simple but effective BoW-based models.",
"We investigate this research question in three steps: First, we conduct an in-depth analysis of the literature.",
"We review the key research in the field of text categorization.",
"From this analysis, we derive the different families of methods, the established benchmark datasets, and identify the top performing methods.",
"We decide for which models we report numbers from the literature and which models we run on our own.",
"Overall, we compare 16 different methods from the families of BoW-based models (8 methods), sequence-based models (3 methods), and graph-based models (5 methods).",
"We run our own experi-4038 ments for 7 of these methods on 5 text categorization datasets, while we report the results from the literature for the remaining methods.",
"The result is surprising: Our own BoW-based MLP, called the WideMLP, with only one wide hidden layer, outperforms many of the recent graph-based models for inductive text categorization (Yao et al., 2019; Liu et al., 2020; Ragesh et al., 2021).",
"Moreover, we did not find any reported scores for BERT-based methods from the sequence-based family.",
"Thus, we fine-tuned our own BERT (Devlin et al., 2019) and DistilBERT (Sanh et al., 2019).",
"These models set a new state of the art.",
"On a meta-level, our study shows that MLPs have largely been ignored as competitor methods in experiments.",
"It seems as if MLPs have been forgotten as baseline in the literature, which instead is focusing mostly on other advanced Deep Learning architectures.",
"Considering strong baselines is, however, an important means to argue about true scientific advancement (Shen et al., 2018; Dacrema et al., 2019).",
"Simple models are also often preferred in industry due to lower operational and maintenance costs.",
"Below, we introduce our methodology and results from the literature study.",
"Subsequently, we introduce the families of models in Section",
"3. Thereafter, we describe the experimental procedure in Section",
"4. We present the results of our experiments in Section 5 and discuss our findings in Section 6, before we conclude.",
"Methodology In a first step, we have analyzed recent surveys on text categorization and comparison studies (Minaee et al., 2021; Bayer et al., 2021; Li et al., 2020; Zhou et al., 2020; Kowsari et al., 2019; Kadhim, 2019; Galke et al., 2017; Zhang et al., 2016).",
"These cover the range from shallow to deep classification models.",
"Second, we have screened for literature in key NLP and AI venues.",
"Finally, we have complemented our search by checking results and papers on paperswithcode.com.",
"On the basis of this input, we have determined three families of methods and benchmark datasets (see Table 2).",
"We focus our analysis on identifying models per family showing strong performance and select the methods to include in our study.",
"For all models, we have verified that the same train-test split is used.",
"We check whether modified versions of the datasets have been used (e. g., fewer classes), to avoid bias and wrongfully giving advantages.",
"BoW-based Models Classical machine learning models that operate on a BoW-based input are extensively discussed in two surveys (Kowsari et al., 2019; Kadhim, 2019) and other comparison studies (Galke et al., 2017).",
"Iyyer et al. (2015) proposed DAN, which combine word embeddings and deep feedforward networks.",
"It is an MLP with 1-6 hidden layers, non-linear activation, dropout, and Ada-Grad as optimization method.",
"The results suggest to use pretrained embeddings such as GloVe (Pen-nington et al., 2014) over a randomly initialized neural bag of-words (Kalchbrenner et al., 2014) as input.",
"In fastText (Bojanowski et al., 2017; Joulin et al., 2017) a linear layer on top of pretrained embeddings is used for classification.",
"Furthermore, Shen et al. (2018) explore embedding pooling variants and find that SWEM can rival approaches based on recurrent (RNN) and convolutional neural networks (CNN).",
"We consider fastText, SWEM, and a DAN-like deeper MLP in our comparison.",
"Note that those approaches that rely on logistic regression on top of pretrained word embeddings, e.",
"g., fastText, share a similar architecture as an MLP with one hidden layer.",
"However, the standard training protocol involves pretraining the word embedding on large amounts of unlabeled text and then freezing the word embeddings while training the logistic regression (Mikolov et al., 2013).",
"Graph-based Models Using graphs induced from text for the task of text categorization has a long history in the community.",
"An early work is the term co-occurrence graph of the KeyGraph algorithm (Ohsawa et al., 1998).",
"The graph is split into segments, representing the key concepts in the document.",
"Co-occurence graphs have also been used for automatic keyword extraction such as in RAKE (Rose et al., 2010) and can be also used for classification (Zhang et al., 2021).",
"Modern approaches exploit this idea in combination with graph neural networks (GNN) (Hamil-ton, 2020).",
"Examples of GNN-based methods operating on a word-document co-occurence graph are TextGCN (Yao et al., 2019) and its successor TensorGCN (Liu et al., 2020) as well as HeteGCN (Ragesh et al., 2021), HyperGAT (Ding et al., 2020), and DADGNN (Liu et al., 2020).",
"We briefly discuss these models: In TextGCN, the authors set up a graph based on word-word connections given by window-based pointwise mutual information and word-document TF-IDF scores.",
"They use a one-hot encoding as node features and apply a 4039 two-layer graph convolutional network (Kipf and Welling, 2017) on the graph to carry out the node classification task.",
"HeteGCN combines ideas from Predictive Text Embedding (Tang et al., 2015) and TextGCN and split the adjacency matrix into its word-document and word-word sub-matrices and fuse the different layers' representations when required.",
"TensorGCN uses multiple ways of converting text data into graph data including a semantic graph created with an LSTM, a syntactic graph created by dependency parsing, and a sequential graph based on word co-occurrence.",
"HyperGAT extended the idea of text-induced graphs for text classification to hypergraphs.",
"The model uses graph attention and two kinds of hyperedges.",
"Sequential hyperedges represent the relation between sentences and their words.",
"Semantic hyperedges for word-word connections are derived from topic models (Blei et al., 2001).",
"Finally, DADGNN is a graph-based approach that uses attention diffusion and decoupling techniques to tackle oversmoothing of the GNN and to be able to stack more layers.",
"In TextGCN's original transductive formulation, the entire graph including the test set needs to be known for training.",
"This may be prohibitive in practical applications as each batch of new documents would require retraining the model.",
"When these methods are adapted for inductive learning, where the test set is unseen, they achieve notably lower scores (Ragesh et al., 2021).",
"GNNs for text classification use corpus statistics, e.",
"g., pointwise mutual information (PMI), to connect related words in a graph (Yao et al., 2019).",
"When these were omitted, the GNNs would collapse to bag-of-words MLPs.",
"Thus, GNNs have access to more information than BoW-MLPs.",
"GloVe (Pennington et al., 2014) also captures PMI corpus statistics, which is why we include an MLP on GloVe input representations.",
"Sequence models: RNN and CNN Recurrent neural networks (RNN) are a natural choice for any NLP task.",
"However, it turned out to be challenging to find numbers reported on text categorization in the literature that can be used as references.",
"The bidirectional LSTM with two-dimensional max pooling BLSTM-2DCNN (Zhou et al., 2016) has been applied on a stripped-down to 4 classes version of the 20ng dataset.",
"Thus, the high score of 96 .",
"5 reported for 4ng cannot be compared with papers applied on the full 20ng dataset.",
"Also Text-RCNN (Lai et al., 2015), a model combining recurrence and convolution uses only the 4 major categories in the 20ng dataset.",
"The results of Text-RCNN are identical with BLSTM-2DCNN.",
"For the MR dataset, BLSTM-2DCNN provides no information on the specific split of the dataset.",
"RNN-Capsule (Wang et al., 2018) is a sentiment analysis method reaching an accuracy of 83 .",
"8 on the MR dataset, but with a different train-test split.",
"Lyu and Liu (2020) combine a 2D-CNN with bidirectional RNN.",
"Another work applying a combination of a convolutional layer and an LSTM layer is by Wang et al. (2019b).",
"The authors experiment with five English and two Chinese datasets, which are not in the set of representative datasets we identified.",
"The authors report that their approach outperforms existing models like fastText on two of the five English datasets and both Chinese datasets.",
"Sequence models: Transformers Surprisingly, only few works consider Transformer models for text categorization.",
"A recent work shows that BERT outperforms classic TF-IDF BoW approaches on English, Chinese, and Portuguese text classification datasets (Gonzlez-Carvajal and Garrido-Merchn, 2020).",
"We have not found any results of transformer-based models reported on those text categorization datasets that are commonly used in the graph-based approaches.",
"Therefore, we fine-tune BERT (Devlin et al., 2019) and DistilBERT (Sanh et al., 2019) on those datasets ourselves.",
"BERT is a large pretrained language model on the basis of Transformers.",
"DistilBERT (Sanh et al., 2019) is a distilled version of BERT with 40% reduced parameters while retaining 97% of BERT's language understanding capabilities.",
"TinyBERT (Jiao et al., 2020) and Mo-bileBERT (Sun et al., 2020) would be similarly suitable alternatives, among others.",
"We chose DistilBERT because it can be fine-tuned independently from the BERT teacher.",
"Its inference times are 60% faster than BERT, which makes it more likely to be reusable by labs with limited resources.",
"Summary From our literature survey, we see that all recent methods are based on graphs.",
"BoW-based methods are hardly found in experiments, while, likewise surprisingly, Transformer-based sequence models are extremely scarce in the literature on topical text categorization.",
"The recent surveys on text categorization include both classical and Deep Learning models, but none considered a simple MLP except for the inclusion of DAN (Iyyer et al., 2015) in Li et al. (2020).",
"We formally introduce the three families of models for text categorization, namely the BoW-based, graph-based, and sequence-based models.",
"Table 1 summarizes the key properties of the approaches: whether they require a synthetic graph, whether word position is reflected in the model, whether the model can deal with arbitrary length text, and whether the model is capable of inductive learning.",
"Under pure BoW-based text categorization, we denote approaches that are not order-aware and operate only on the multiset of words from the input document.",
"Given paired training examples ( x , y ) D , each consisting of a bag-of-words x R n vocab and a class label y Y , the goal is to learn a generalizable function y = f (BoW) ( x ) with parameters such that arg max( y ) preferably equals the true label y for input x .",
"As BoW-based model, we consider a one hidden layer WideMLP (i. e., two layers in total).",
"We experiment with pure BoW, TF-IDF weighted, and averaged GloVe input representations.",
"We also use a two hidden layers WideMLP-2.",
"We list the numbers for fastText, SWEM, and logistic regression from Ding et al. (2020) in our comparison.",
"Graph-based text categorization approaches first set up a synthetic graph on the basis of the text corpus D in the form of an adjacency matrix A := make-graph( D ) .",
"For instance, in TextGCN the graph is set up in two parts: word-word connections are modeled by pointwise mutual information and word-document edges resemble that the word occurs in the document.",
"Then, a parameterized function f (graph) ( X , A ) is learned that uses the graph as input, where X are the node features.",
"The graph is composed of word and document nodes, each receiving its own embedding (by setting X = I ).",
"In inductive learning, however, there is no embedding of the test documents.",
"Note that the graph-based approaches from the current literature such as TextGCN also disregard word order, similar to the BoW-based models described above.",
"A detailed discussion of the connection between TextGCN and MLP is provided in Appendix B. We consider top performing graph-based models from the literature, namely TextGCN along with its successors HeteGCN, TensorGCN, HyperGAT, DADGNN, as well as simplified GCN (SGC) (Wu et al., 2019).",
"We do not run our own experiments for the graph-based models but rely on the original work and extensive studies by Ding et al. (2020) and Ragesh et al. (2021).",
"We consider RNNs, LSTMs, and Transformers as sequence-based models.",
"These models are aware of the order of the words in the input text in the sense that they are able to exploit word order information.",
"Thus, the key difference to the BoW-based and graph-based families is that the word order is reflected by sequence-based model.",
"The model signature is y = f (sequence) ( (cid:104) x 1 , x 2 , . . . , x k (cid:105) ) , where k is the (maximum) sequence length.",
"Word position is modeled by a dedicated positional encoding.",
"For instance, in BERT each position is associated with an embedding vector that is added to the word embedding at input level.",
"For the sequence-based models, we run our own experiments with BERT and DistilBERT, while reporting the scores of a pretrained LSTM from Ding et al. (2020) for comparison.",
"We use the same datasets and train-test split as in TextGCN (Yao et al., 2019).",
"Those datasets are 20ng, R8, R52, ohsumed, and MR. Twenty 4041 Newsgroups (20ng) 1 (bydate version) contains long posts categorized into 20 newsgroups.",
"The mean sequence length is 551 words with a standard deviation (SD) of 2,047.",
"R8 and R52 are subsets of the Reuters 21578 news dataset with 8 and 52 classes, respectively.",
"The mean sequence length and SD is 119 128 words for R8, and 126 133 words for R52.",
"Ohsumed 2 is a corpus of medical abstracts from the MEDLINE database that are categorized into diseases (one per abstract).",
"The mean sequence length is 285 123 words.",
"Movie Reviews (MR) 3 (Pang and Lee, 2005), split by Tang et al. (2015), is a binary sentiment analysis dataset on sentence level (mean sequence length and SD: 25 11 ).",
"Table 2 shows the dataset characteristics.",
"In the transductive setup, as used in TextGCN, the test documents are visible and actually used for the preprocessing step.",
"In the inductive setting, the test documents remain unseen until test time (i. e., they are not available for preprocessing).",
"We report the scores of the graph-based models for both setups from the literature, where available.",
"BoW-based and sequence-based models are inherently inductive.",
"Ragesh et al. (2021) have evaluated a variant of TextGCN that is capable of inductive learning, which we include in our results, too.",
"We have extracted accuracy scores from the literature according to our systematic selection from Section 2.",
"Below, we provide a detailed description of the procedure for the models that we have run ourselves.",
"We borrow the tokenization strategy 1 http://qwone.com/~jason/20Newsgroups/ 2 http://disi.unitn.it/moschitti/ corpora.htm 3 https://www.cs.cornell.edu/people/ pabo/movie-review-data/ from BERT (Devlin et al., 2019) along with its uncased vocabulary.",
"The tokenizer relies primarily on WordPiece (Wu et al., 2016) for a high coverage while maintaining a small vocabulary.",
"Training our BoW-Models.",
"Our WideMLP has one hidden layer with 1,024 rectified linear units (one input-to-hidden and one hidden-to-output layer).",
"We apply dropout after each hidden layer, notably also after the initial embedding layer.",
"Only for GloVe+WideMLP, neither dropout nor ReLU is applied to the frozen pretrained embeddings but only on subsequent layers.",
"The variant WideMLP-2 has two ReLU-activated hidden layers (three layers in total) with 1 , 024 hidden units each.",
"While this might be overparameterized for single-label text classification tasks with few classes, we rely on recent findings that overparameterization leads to better generalization (Neyshabur et al., 2018; Nakkiran et al., 2020).",
"In pre-experiments, we realized that MLPs are not very sensitive to hyperparameter choices.",
"Therefore, we optimize cross-entropy with Adam (Kingma and Ba, 2015) and its default learning rate of 10 3 , a linearly decaying learning rate schedule and train for a high amount of steps (Nakkiran et al., 2020) (we use 100 epochs) with small batch sizes (we use 16) for sufficient stochasticity, along with a dropout ratio of 0 .",
"5 .",
"Fine-tuning our BERT models.",
"For BERT and DistilBERT, we fine-tune for 10 epochs with a linearly decaying learning rate of 5 10 5 and an effective batch size of 128 via gradient accumulation of 8 x 16 batches.",
"We truncate all inputs to 512 tokens.",
"To isolate the influence of word order on BERT's performance, we conduct two further ablations.",
"First, we set all position embeddings to zero and disable their gradient ( BERT w/o pos ids ).",
"By doing this, we force BERT to operate on a bag-of-words without any notion of word order or position.",
"Second, we shuffle each sequence to augment the training data.",
"We use this augmentation strategy to increase the number of training examples by a factor of two ( BERT w/ shuf. augm. ).",
"We report accuracy as evaluation metric, which is equivalent to Micro-F1 in single-label classification (see Appendix C).",
"We repeat all experiments five times with different random initialization of the parameters and report the mean and standard deviation of these five runs.",
"Table 3 shows the accuracy scores for the text categorization models on the five datasets.",
"All graph-based models in the transductive setting show similar accuracy scores (maximum difference is 2 points).",
"As expected, the scores decrease in the inductive setting up to a point where they are matched or even outperformed by our WideMLP.",
"In the inductive setting, the WideMLP models perform best among the BoW models, in particular, TFIDF+WideMLP and WideMLP on an unweighted BoW.",
"The best-performing graph-based model is HyperGAT, yet DADGNN has a slight advantage on R8, R52, and MR. For the sequence-based models, BERT attains the highest scores, closely followed by DistilBERT.",
"The strong performance of WideMLP rivals all graph-based techniques reported in the literature, in particular, the recently published graph-inducing methods.",
"MLP only falls behind HyperGAT, which relies on topic models to set up the graph.",
"Another observation is that 1 hidden layer (but wide) is sufficient for the tasks, as the scores for MLP variants with 2 hidden layers are lower.",
"We further observe that both pure BoW and TF-IDF weighted BoW lead to better results than approaches that exploit pretrained word embeddings such as GloVe-MLP, fastText, and SWEM.",
"With its immense pretraining, BERT yields the overall highest scores, closely followed by DistilBERT.",
"DistilBERT outperforms HyperGAT by 7 points on the MR dataset while being on par on the others.",
"BERT outperforms the strongest graph-based competitor, HyperGAT, by 8 points on MR, 1.5 points on ohsumed, 1 point on R52 and R8, and 0.5 points on 20ng.",
"Our results further confirm that position embeddings are important for BERT with a notable decrease when those are omitted.",
"Augmenting the data with shuffled sequences has led to neither a consistent decrease nor increase in performance.",
"Parameter Count of the Models Table 4 lists the parameter counts of the models.",
"Even though the MLP is fully-connected on top of a bag-of-words with the dimensionality of the vocabulary size, it has only half of the parameters as DistilBERT and a quarter of the parameters of BERT.",
"Using TF-IDF does not change the number of model parameters.",
"Due to the high vocabulary size, GloVe-based models have a high number of parameters, but the majority of those is frozen, i.",
"e., does not get gradient updates during training.",
"provide the total running times in Table 5 as observed while conducting the experiments on a single NVIDIA A100-SXM4-40GB card.",
"All WideMLP variants are an order of magnitude faster than DistilBERT when considering the average runtime per epoch .",
"DistilBERT is twice as fast as the original BERT.",
"The transformers are only faster than BoW models on the MR dataset.",
"This is because the sequences in the MR dataset are much shorter and less O ( L 2 ) attention weights have to be computed.",
"Key Insights Our experiments show that our MLP models using BoW outperform the recent graph-based models TextGCN and HeteGCN in an inductive text classification setting.",
"Furthermore, the MLP models are comparable to HyperGAT.",
"Only transformer-based BERT and DistilBERT models outperform our MLP and set a new state-of-the-art.",
"This result is important for two reasons: First, the strong performance of a pure BoW-MLP questions the added value of synthetic graphs in models like TextGCN to the text categorization task.",
"Only HyperGAT, which uses the expensive Latent Dirichlet Allocation for computing the graph, slightly outperforms our BoW-WideMLP in two out of five datasets.",
"Thus, we argue that using strong baseline models for text classification is important to assess the true scientific advancement (Dacrema et al., 2019).",
"Second, in contrast to conventional wisdom (Iyyer et al., 2015), we find that pretrained embeddings, e.",
"g., GloVe, can have a detrimental effect when compared to using an MLP with one wide hidden layer.",
"Such an MLP circumvents the bottleneck of the small dimensionality of word embeddings and has a higher capacity.",
"Furthermore, we experiment with more hidden layers (see WideMLP-2), but do not observe any improvement when the single hidden layer is sufficiently wide.",
"A possible explanation is that already a single hidden layer is sufficient to approximate any compact function to an arbitrary degree of accuracy depending on the width of the hidden layer (Cybenko, 1989).",
"ing. However, as our efficiency analysis shows, the MLPs require only a fraction of the parameters and are faster in their combined training and inference time except for the MR dataset.",
"The attention mechanism of (standard) Transformers is quadratic in the sequence length, which leads to slower processing of long sequences.",
"With larger batches, the speed of the MLP could be increased even further.",
"Detailed Discussion of Results Graph-based models come with high training costs, as not only the graph has to be first computed, but also a GNN has to be trained.",
"For standard GNN methods, the whole graph has to fit into the GPU memory and mini-batching is nontrivial, but possible with dedicated sampling techniques for GNNs (Fey et al., 2021).",
"Furthermore, the original TextGCN is inherently transductive, i.",
"e., it has to be retrained whenever new documents appear.",
"Strictly transductive models are effectively useless in practice (Lu et al., 2019) except for applications, in which a partially labeled corpus needs to be fully annotated.",
"However, recent extensions such as HeteGCN, HyperGAT, and DADGNN already relax this constraint and enable inductive learning.",
"Nevertheless, word-document graphs require O ( N 2 ) space, where N is the number of documents plus the vocabulary size, which is a hurdle for large-scale applications.",
"There are also tasks where the natural structure of the graph data provides more information than the mere text, e.",
"g., citations networks or connections in social graphs.",
"In such cases, the performance of graph neural networks is the state of the art (Kipf and Welling, 2017; Velickovic et al., 2018) and are superior to MLPs that use only the node features and not the graph structure (Shchur et al., 2018).",
"GNNs also find application in various NLP tasks, other than classification (Wu et al., 2021).",
"An interesting factor is the ability of the models to capture word order.",
"BoW models disregard word order entirely and yield good results, but still fall behind order-aware Transformer models.",
"In an extensive study, Conneau et al. (2018) have shown 4044 Table 5: Total runtime (training+inference).",
"that memorizing the word content (which words appear at all) is most indicative of downstream task performance.",
"Sinha et al. (2021) have experimented with pretraining BERT by disabling word order during pretraining and show that it makes surprisingly little difference for fine-tuning.",
"In their study, word order is preserved during fine-tuning.",
"In our experiments, we have conducted complementary experiments: we have used a BERT model that is pretrained with word order, but we have deactivated the position encoding during fine-tuning.",
"Our results show that there is a notable drop in performance but the model does not fail completely.",
"Other NLP tasks such as question answering (Ra-jpurkar et al., 2016) or natural language inference (Wang et al., 2019a) can also be regarded as text classification on a technical level.",
"Here, the positional information of the sequence is more important than for pure topical text categorization.",
"One can expect that BoW-based models perform worse than sequence-based models.",
"Generalizability We expect that similar observations would be made on other text classification datasets because we have already covered a range of different characteristics: long and short texts, topical categorization (20ng, Reuters, and Ohsumed) and sentiment prediction (MR) in the domains of forum postings, news, movie reviews, and medical abstracts.",
"Our results are in line with those from other fields, who have reported a resurgence of MLPs.",
"For example, in business prediction, an MLP baseline outperforms various other Deep Learning models (Venugopal et al., 2021; Yedida et al., 2021).",
"In computer vision, Tolstikhin et al. (2021) and Melas-Kyriazi (2021) proposed attention-free MLP models that are on par with the Vision Transformer (Dosovitskiy et al., 2021).",
"In natural language processing, Liu et al. (2021a) show similar results, while acknowledging that a small attention module is necessary for some tasks.",
"Threats to Validity We acknowledge that the experimental datasets are limited to English.",
"While word order is important in the English language, it is notable that methods that discard word order still work well for text categorization.",
"Another possible bias is the comparability of the results.",
"However, we carefully checked all relevant parameters such as the train/test split, the number of classes in the datasets, if datasets have been pre-processed in such a way that, e.",
"g., makes a task easier like reducing the number of classes, the training procedure, and the reported evaluation metrics.",
"Regarding our efficency analysis, we made sure to report numbers for the parameter count and a measure for the speed other than FLOPs, as recommended by Dehghani et al. (2021).",
"Since runtime is heavily dependant on training parameters such as batch size, we complement this with asymptotic complexity.",
"has an immediate impact on practitioners who seek to employ robust text categorization models in research projects and in industrial operational environments.",
"Furthermore, we advocate to use an MLP baseline in future text categorization research, for which we provide concrete guidelines in Appendix A. As future work, it would be interesting to analyze multi-label classification tasks and to compare with hierarchical text categorization methods (Peng et al., 2018; Xiao et al., 2019).",
"Another interesting yet challenging setting would be few-shot classification (Brown et al., 2020).",
"We argue that a wide multi-layer perceptron enhanced with today's best practices should be considered as a strong baseline for text classification tasks.",
"In fact, the experiments show that our WideMLP is oftentimes on-par or even better than recently proposed models that synthesize a graph structure from the text.",
"The source code is available online: https://github.com/lgalke/ text-clf-baselines",
"The focus of this work is text classification.",
"Potential risks that apply to text classification in general also apply to this work.",
"Nonetheless, we present alternatives to commonly used pretrained language models, which suffer from various sources of bias due to the large and poorly manageable data used for pretraining (Bender et al., 2021).",
"In contrast, the presented alternatives render full control over the training data and, thus, contribute to circumvent the biases otherwise introduced during pretraining."
]
| [
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"method",
"abstain",
"method",
"result",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"objective",
"other",
"abstain",
"method",
"method",
"abstain"
]
|
[
"Event coreference resolution is an important research problem with many applications.",
"Despite the recent remarkable success of pretrained language models, we argue that it is still highly beneficial to utilize symbolic features for the task.",
"However, as the input for coreference resolution typically comes from upstream components in the information extraction pipeline, the automatically extracted symbolic features can be noisy and contain errors.",
"Also, depending on the specific context, some features can be more informative than others.",
"Motivated by these observations, we propose a novel context-dependent gated module to adaptively control the information flows from the input symbolic features.",
"Combined with a simple noisy training method, our best models achieve state-of-the-art results on two datasets: ACE 2005 and KBP 2016.",
"1 1 Introduction Within-document event coreference resolution is the task of clustering event mentions in a text that refer to the same real-world events (Lu and Ng, 2018).",
"It is an important research problem, with many applications (Vanderwende et al., 2004; Ji and Grishman, 2011; Choubey et al., 2018).",
"Since the trigger of an event mention is typically the word or phrase that most clearly describes the event, virtually all previous approaches employ features related to event triggers in one form or another.",
"To achieve better performance, many methods also need to use a variety of additional symbolic features such as event types, attributes, and arguments (Chen et al., 2009; Chen and Ji, 2009; Zhang et al., 2015; Sammons et al., 2015; Lu and Ng, 2016; Chen and Ng, 2016; Duncan et al., 2017).",
"Previous neural methods (Nguyen et al., 2016; Choubey and Huang, 2017; Huang et al., 2019) also use noncontextual word embeddings such as word2vec 1 The code is publicly available at https://github.com/ laituan245/eventcoref .",
"(Mikolov et al., 2013) or GloVe (Pennington et al., 2014).",
"With the recent remarkable success of language models such as BERT (Devlin et al., 2019) and SpanBERT (Joshi et al., 2020), one natural question is whether we can simply use these models for coreference resolution without relying on any additional features.",
"We argue that it is still highly beneficial to utilize symbolic features, especially when they are clean and have complementary information.",
"Table 1 shows an example in the ACE 2005 dataset, where our baseline SpanBERT model incorrectly predicts the highlighted event mentions to be coreferential.",
"The event triggers are semantically similar, making it challenging for our model to distinguish.",
"However, notice that the event { head out } ev1 is mentioned as if it was a real occurrence, and so its modality attribute is ASSERTED (LDC, 2005).",
"In contrast, because of the phrase were set to, we can infer that the event { leave } ev2 did not actually happen (i.e., its modality attribute is OTHER).",
"Therefore, our model should be able to avoid the mistake if it utilizes additional symbolic features such as the modality attribute in this case.",
"There are several previous methods that use contextual embeddings together with type-based or argument-based information (Lu et al., 2020; Yu et al., 2020).",
"For example, Lu et al. (2020) proposes a new mechanism to better exploit event type information for coreference resolution.",
"Despite their impressive performance, these methods are specific to one particular type of additional information.",
"symbolic features into event coreference resolution.",
"Simply concatenating symbolic features with contextual embeddings is not optimal, since the features can be noisy and contain errors.",
"Also, depending on the context, some features can be more informative than others.",
"Therefore, we design a novel context-dependent gated module to extract information from the symbolic features selectively.",
"Combined with a simple regularization method that randomly adds noise into the features during training, our best models achieve state-of-the-art results on ACE 2005 (Walker et al., 2006) and KBP 2016 (Mitamura et al., 2016) datasets.",
"To the best of our knowledge, our work is the first to explicitly focus on dealing with various noisy symbolic features for event coreference resolution.",
"We focus on within-document event coreference resolution.",
"The input to our model is a document D consisting of n tokens and k (predicted) event mentions { m 1 , m 2 , . . . , m k } .",
"For each m i , we denote the start and end indices of its trigger by s i and e i respectively.",
"We assume the mentions are ordered based on s i (i.e., If i j then s i s j ).",
"We also assume each m i has K (predicted) categorical features { c (1) i , c (2) i , . . . , c ( K ) i } , with each c ( u ) i { 1 , 2 , . . . , N u } taking one of N u different discrete values.",
"Table 2 lists the symbolic features we consider in this work.",
"The definitions of the features and their possible values are in ACE and Rich ERE guidelines (LDC, 2005; Mitamura et al., 2016).",
"The accuracy scores of the symbolic feature predictors are also shown in Table 2.",
"We use OneIE (Lin et al., 2020) to identify event mentions along with their subtypes.",
"For other symbolic features, we train a joint classification model based on SpanBERT.",
"The appendix contains more details.",
"Given a document D , our model first forms a con-textualized representation for each input token using a Transformer encoder (Joshi et al., 2020).",
"Let X = ( x 1 , ..., x n ) be the output of the encoder, where x i R d .",
"Then, for each mention m i , its trigger's representation t i is defined as the average of its token embeddings: t i = e i (cid:88) j = s i x j e i s i + 1 (1) Dataset Features Acc.",
"Next, by using K trainable embedding matrices, we convert the symbolic features of m i into K vectors { h (1) i , h (2) i , . . . , h ( K ) i } , where h ( u ) i R l .",
"Given two event mentions m i and m j , we define their trigger-based pair representation as:",
"where FFNN t is a feedforward network mapping from R 3 d R p , and is element-wise multiplication.",
"Similarly, we can compute their feature-based pair representations { h (1) ij , h (2) ij , . . . , h ( K ) ij } as follows: h ( u ) ij = FFNN u (cid:0)(cid:2) h ( u ) i , h ( u ) j , h ( u ) i h ( u ) j (cid:3)(cid:1) (3) where u { 1 , 2 , . . . , K } , and FFNN u is a feedforward network mapping from R 3 l R p .",
"Now, the most straightforward way to build the final pair representation f ij of m i and m j is to simply concatenate the trigger-based representation and all the feature-based representations together: f ij = [ t ij , h (1) ij , h (2) ij , . . . , h ( K ) ij ] (4) However, this approach is not always optimal.",
"First, as the symbolic features are predicted, they can be noisy and contain errors.",
"The performance of most symbolic feature predictors is far from perfect (Table 2).",
"Also, depending on the specific context, some features can be more useful than others.",
"Inspired by studies on gated modules (Lin et al., 2019; Lai et al., 2019), we propose Context-Dependent Gated Module (CDGM), which uses a gating mechanism to extract information from the input symbolic features selectively (Figure 1).",
"Given two mentions m i and m j , we use their trigger feature vector t ij as the main controlling context to compute the filtered representation h ( u ) ij : h ( u ) ij = CDGM ( u ) (cid:0) t ij , h ( u ) ij (cid:1) (5) Figure 1: Overall architecture of our mention-pair encoder, which uses CDGMs to incorporate symbolic features.",
"where u { 1 , 2 , . . . , K } .",
"More specifically: g ( u ) ij = (cid:0) FFNN ( u ) g (cid:0)(cid:2) t ij , h ( u ) ij (cid:3)(cid:1)(cid:1) o ( u ) ij , p ( u ) ij = DECOMPOSE (cid:0) t ij , h ( u ) ij (cid:1) h ( u ) ij = g ( u ) ij o ( u ) ij + (cid:0) 1 g ( u ) ij (cid:1) p ( u ) ij (6) where denotes sigmoid function.",
"FFNN ( u ) g is a mapping from R 2 p R p .",
"At a high level, h ( u ) ij is decomposed into an orthogonal component and a parallel component, and h ( u ) ij is simply the fusion of these two components.",
"In order to find the optimal mixture, g ij is used to control the composition.",
"The decomposition unit is defined as: Parallel p ( u ) ij = h ( u ) ij t ij t ij t ij t ij Orthogonal o ( u ) ij = h ( u ) ij p ( u ) ij (7) where denotes dot product.",
"The parallel component p ( u ) ij is the projection of h ( u ) ij on t ij .",
"It can be viewed as containing information that is already part of t ij .",
"In contrast, o ( u ) ij is orthogonal to t ij , and so it can be viewed as containing new information.",
"Intuitively, when the original symbolic feature vector h ( u ) ij is very clean and has complementary information, we want to utilize the new information in o ( u ) ij (i.e., we want g ( u ) ij 1 ), and vice versa.",
"where FFNN a is a mapping from R ( K +1) p",
"2.4 Training and Inference Algorithm 1: Noise Addition for Symbolic Features Input: Document D Hyperparameters : { (cid:15) 1 , (cid:15) 2 , , (cid:15) K } for i = 1 . . . k do for u = 1 . . . K do With prob.",
"Training We use the same loss function as in (Lee et al., 2017).",
"Also, notice that the training accuracy of a feature predictor is typically much higher than its accuracy on the dev/test set (Table 2).",
"If we simply train our model without any regularization, our CDGMs will rarely come across noisy symbolic features during training.",
"Therefore, to encourage our CDGMs to actually learn to distill reliable signals, we also propose a simple but effective noisy training method.",
"Before passing a training data batch to the model, we randomly add noise to the predicted features.",
"More specifically, for each document D in the batch, we go through every symbolic feature of every event mention in D and consider sampling a new value for the feature.",
"The operation is described in Algorithm 1 (we use the same notations mentioned in Section 2.1).",
"{ (cid:15) 1 , (cid:15) 2 , , (cid:15) K } are hyperparameters determined by validation.",
"In general, the larger the discrepancy between the train and test accuracies, the larger (cid:15) .",
"ceding mentions or a dummy antecedent (cid:15) : a i Y ( i ) = { (cid:15), m 1 , m 2 . . . , m i 1 } .",
"Basically, a i = arg max j<i s ( i, j ) .",
"The dummy antecedent (cid:15) represents two possible cases: (1) m i is not actually an event mention (2) m i is indeed an event mention but it is not coreferent with any previous extracted mentions.",
"In addition, we fix s ( i, (cid:15) ) to be 0.",
"Data and Experiments Setup We evaluate our methods on two English datasets: ACE2005 (Walker et al., 2006) and KBP2016 (Ji et al., 2016; Mitamura et al., 2016).",
"We report results in terms of F1 scores obtained using the CoNLL and AVG metrics.",
"By definition, these metrics are the summary of other standard coreference metrics, including B 3 , MUC, CEAF e , and BLANC (Lu and Ng, 2018).",
"We use SpanBERT (spanbert-base-cased) as the Transformer encoder (Wolf et al., 2020a; Joshi et al., 2020).",
"More details about the datasets and hyperparameters are in the appendix.",
"We refer to models that use only trigger features as [Baseline].",
"In a baseline model, f ij is simply t ij (Eq. 2).",
"We refer to models that use only the simple concatenation strategy as [Simple] (Eq. 4), and models that use the simple concatenation strategy and the noisy training method as [Noise].",
"Table 3 and Table 4 show the overall end-to-end results on ACE2005 and KBP2016, respectively.",
"We ACE (Test Data) CoNLL AVG PAIREDRL (2020) 84.65 Baseline 81.62 81.49 Simple (All Features) 75.32 74.94 CDGM + Noise (All Features) 84.76 83.95 Table 5: Results on ACE 2005 using gold triggers and predicted symbolic features.",
"use OneIE (Lin et al., 2020) to extract event mentions and their types.",
"Other features are predicted by a simple Transformer model.",
"Overall, our full model outperforms the baseline model by a large margin and significantly outperforms state-of-the-art on KBP 2016.",
"Our ACE 2005 scores are not directly comparable with previous work, as Peng et al. (2016) conducted 10-fold cross-validation and essentially used more training data.",
"Nevertheless, the magnitude of the differences in scores between our best model and the state-of-the-art methods indicates the effectiveness of our methods.",
"Overall Results (on Ground-truth Triggers) The overall results on ACE 2005 using ground-truth triggers and predicted symbolic features are shown in Table 5.",
"The performance of our full model is comparable with previous state-of-the-art result in (Yu et al., 2020).",
"To better analyze the usefulness of symbolic features as well as the effectiveness of our methods, we also conduct experiments using ground-truth triggers and ground-truth symbolic features (Table 6).",
"First, when the symbolic features are clean, incorporating them using the simple concatenation strategy can already boost the performance significantly.",
"The symbolic features contain information complementary to that in the SpanBERT contextual embeddings.",
"Second, we also see that the noisy training method is not helpful when the symbolic features are clean.",
"Unlike other regularization methods such as dropout (Sri-vastava et al., 2014) and weight decay (Krogh and Hertz, 1992), the main role of our noisy training method is not to reduce overfitting in the traditional sense.",
"Its main function is to help CDGMs learn to distill reliable signals from noisy features.",
"Impact of Different Symbolic Features Table 7 shows the results of incorporating different types of symbolic features on the ACE 2005 dataset.",
"Overall, our methods consistently perform better than the simple concatenation strategy across all feature types.",
"The gains are also larger for more noisy features than clean features (feature prediction accuracies were shown in Table 2).",
"This suggests that our methods are particularly useful in situations where the symbolic features are noisy.",
"Comparison with Multi-Task Learning We also investigate whether we can incorporate symbolic semantics into coreference resolution by simply doing multi-task training.",
"We train our baseline model to jointly perform coreference resolution and symbolic feature prediction.",
"The test AVG score on ACE 2005 is only 56.5.",
"In contrast, our best model achieves an AVG score of 59.76 (Table 3).",
"Qualitative Examples Table 8 shows few examples from the ACE 2005 dataset that illustrate how incorporating symbolic features using our proposed methods can improve the performance of event conference resolution.",
"In each example, our baseline model incorrectly predicts the highlighted event mentions to be coreferential.",
"Remaining Challenges Previous studies suggest that there exist different types and degrees of event coreference (Recasens et al., 2011; Hovy et al., 2013).",
"Many methods (including ours) focus on the full strict coreference task, but other types of coreference such as partial coreference have remained underexplored.",
"Hovy et al. (2013) defines two core types of partial event coreference relations: subevent relations and membership relations.",
"Subevent relations form a stereotypical sequence of events, whereas membership relations represent instances of an event collection.",
"We leave tackling the partial coreference task to future work.",
"Several previous approaches to within-document event coreference resolution operate by first ap-...",
"plying a mention-pair model to compute pairwise distances between event mentions, and then they apply a clustering algorithm such as agglomerative clustering or spectral graph clustering (Chen et al., 2009; Chen and Ji, 2009; Chen and Ng, 2014; Nguyen et al., 2016; Huang et al., 2019).",
"In addition to trigger features, these methods use a variety of additional symbolic features such as event types, attributes, arguments, and distance.",
"These approaches do not use contextual embeddings such as BERT and SpanBERT (Devlin et al., 2019; Joshi et al., 2020).",
"Recently, there are several studies that use contextual embeddings together with type-based or argument-based information (Lu et al., 2020; Yu et al., 2020).",
"These methods design networks or mechanisms that are specific to only one type of symbolic features.",
"In contrast, our work is more general and can be effectively applied to a wide range of symbolic features.",
"In this work, we propose a novel gated module to incorporate symbolic semantics into event coreference resolution.",
"Combined with a simple noisy training technique, our best models achieve competitive results on ACE 2005 and KBP 2016.",
"In the future, we aim to extend our work to address more general problems such as cross-lingual cross-document coreference resolution.",
"This research is based upon work supported in part by U.S. DARPA KAIROS Program No.",
"FA8750-19-2-1004, U.S. DARPA AIDA Program No.",
"FA8750-18-2-0014, and Air Force No.",
"FA8650-17-C-7715.",
"The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA, or the U.S. Government.",
"The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein."
]
| [
"abstain",
"method",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"result",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"objective",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"objective",
"result",
"objective",
"other",
"other",
"other",
"other",
"other",
"other"
]
|
[
"Open relation extraction (OpenRE) aims to extract novel relation types from open-domain corpora, which plays an important role in completing the relation schemes of knowledge bases (KBs).",
"Most OpenRE methods cast different relation types in isolation without considering their hierarchical dependency.",
"We argue that OpenRE is inherently in close connection with relation hierarchies.",
"To address the bidirectional connections between OpenRE and relation hierarchy, we propose the task of open hierarchical relation extraction and present a novel OHRE framework for the task.",
"To effectively integrate hierarchy information into relation representations for better novel relation extraction, we propose a dynamic hierarchical triplet objective and hierarchical curriculum training paradigm.",
"We also present a top-down hierarchy expansion algorithm to add the extracted relations into existing hierarchies with reasonable interpretability.",
"Comprehensive experiments show that OHRE outperforms state-of-the-art models by a large margin on both relation clustering and hierarchy expansion.",
"The source code and experiment details of this paper can be obtained from https://github.com/thunlp/OHRE .",
"Open relation extraction (OpenRE) aims to extract novel relations types between entities from open-domain corpora, which plays an important role in completing the relation schemes of knowledge bases (KBs).",
"OpenRE models are mainly categorized into two groups, namely tagging-based and clustering-based methods.",
"Tagging-based methods consider OpenRE as a sequence labeling indicates equal contribution Corresponding author: Z.Liu ([email protected]) significant person relative father participant winner Training instances Representation Learning Test instances OHRE child spouse RelationClustering Novel relations Hierarchy Expansion Figure 1: The workflow of OHRE framework.",
"task, which extracts relational phrases from sentences (Banko et al., 2007; Cui et al., 2018).",
"In contrast, clustering-based methods aim to cluster relation instances into groups based on their semantic similarities, and regard each cluster as a relation (Yao et al., 2011; Wu et al., 2019).",
"However, most OpenRE models cast different relation types in isolation, without considering their rich hierarchical dependencies.",
"Hierarchical organization of relations has been shown to play a central role in the abstraction and generalization ability of human (Tenenbaum et al., 2011).",
"This hierarchical organization of relations also constitutes the foundation of most modern KBs (Auer et al., 2007; Bollacker et al., 2008).",
"Figure 1 illustrates an example of relation hierarchy in Wikidata (Vrandecic and Krtzsch, 2014).",
"Such relation hierarchies are crucial in establishing the relation schemes of KBs, and could also help users better understand and utilize relations in various downstream tasks.",
"expert knowledge and are time-consuming, given the usually large quantity of relations in existing hierarchy and the rapid emergence of novel relations in open domain corpora.",
"1 Since the ultimate goal of OpenRE is to automatically establish and maintain relation schemes for KBs, it is desirable to develop OpenRE methods that can directly add the extracted novel relations into the existing incomplete relation hierarchy.",
"Moreover, incorporating the hierarchical information of existing relations can also help OpenRE methods to model their interdependencies.",
"Such refined semantic connections among existing relations can provide transferable guidance to better extract new relations.",
"Given the inherent bidirectional connections between OpenRE and relation hierarchy, in this work, we aim to introduce relation hierarchy information to improve OpenRE performance, and directly add the extracted new relations into the existing hierarchy, which presents unique challenges.",
"We propose a novel framework OHRE to consider relation hierarchy in OpenRE.",
"The key intuition behind our framework is that distance between relations in hierarchy reflects their semantic similarity.",
"Therefore, nearby relations should share similar representations, and vice versa.",
"Figure 1 shows the framework of OHRE, which consists of two components: (1) In relation representation learning , we design a dynamic hierarchical triplet objective to integrate hierarchy information into relation representations.",
"We also present a hierarchical curriculum learning strategy for progressive and robust training.",
"(2) In relation hierarchy expansion , we first cluster instances into new relation prototypes and then conduct a top-down hierarchy expansion algorithm to locate new relations into hierarchy.",
"In this way, OHRE encodes hierarchical information into relation representations, which improves classical OpenRE and further enables hierarchy expansion.",
"To verify the effectiveness of hierarchical information and the proposed framework, we conduct experiments over two evaluations, including the classical relation clustering task and a novel hierarchy expansion task.",
"Experimental results on two real-world datasets show that our framework can bring significant improvements on the two tasks, even with partially available hierarchy from KBs.",
"The main contributions of this work are concluded as follows: (1) To the best of our knowl-1 E.g., the number of relations in Wikidata has grown to more than 8 , 000 in the last 6 years.",
"edge, we are the first to address bidirectional connections between OpenRE and relation hierarchy.",
"We propose a novel open hierarchical relation extraction task, which aims to provide new relations and their hierarchical structures simultaneously.",
"(2) We present a novel OHRE framework for the proposed task, which integrates hierarchical information into relation representations for better relation clustering, and directly expands existing relation hierarchies with a top-down algorithm.",
"(3) Comprehensive experiments on two real-world datasets demonstrate the effectiveness of OHRE on both relation clustering and hierarchy expansion.",
"Open Relation Extraction.",
"Recent years have witnessed an upsurge of interest in open relation extraction (OpenRE) that aims to identify new relations in unsupervised data.",
"Existing OpenRE methods can be divided into tagging-based methods and clustering-based methods.",
"Tagging-based methods seek to extract surface form of relational phrases from text in unsupervised (Banko et al., 2007; Banko and Etzioni, 2008), or supervised paradigms (Angeli et al., 2015; Cui et al., 2018; Stanovsky et al., 2018).",
"However, many relations cannot be explicitly represented as surface forms, and it is hard to align different relational tokens with the same meanings.",
"In contrast, traditional clustering-based OpenRE methods extract rich features of sentences and cluster features into novel relation types (Lin and Pan-tel, 2001; Yao et al., 2011, 2012; Elsahar et al., 2017).",
"Marcheggiani and Titov (2016) propose discrete-state variational autoencoder (VAE) that optimizes a relation classifier by reconstruction signals.",
"Simon et al. (2019) introduce skewness loss to enable stable training of VAE.",
"Hu et al. (2020) learn relation representations and clusters iteratively via self-training.",
"Wu et al. (2019) improve conventional unsupervised clustering-based methods by combining supervised and unsupervised data via siamese networks, and achieve state-of-the-art performance.",
"However, existing OpenRE methods cast different relation types in isolation without considering their rich hierarchical dependencies.",
"Hierarchy Information Exploitation.",
"Well-organized taxonomy and hierarchies can facilitate many downstream tasks.",
"Hierarchical information derived from concept ontologies can reveal semantic similarity (Leacock and Chodorow, 1998; Ponzetto and Strube, 2007), and is widely applied in enhancing classification models (Rousu et al., 2005; Weinberger and Chapelle, 2009) and knowledge representation learning models (Hu et al., 2015; Xie et al., 2016).",
"Similar to concept hierarchy, some recent works try to exploit semantic connections from relation hierarchy.",
"In the field of relation extraction, Han et al. (2018a) propose a hierarchical attention scheme to alleviate the noise in distant supervision.",
"Zhang et al. (2019) leverage implicit hierarchical knowledge from KBs and propose coarse-to-fine grained attention for long-tail relations.",
"However, these methods are designed to identify pre-defined relations, and cannot be applied to OpenRE that aims to discover novel relations in open-domain corpora.",
"We divide the open hierarchical relation extraction problem into two phases: (1) learning relation representations with hierarchical information and (2) clustering and linking novel relations to existing hierarchies.",
"Learning relation representation is fundamental to open hierarchical relation extraction.",
"We encode sentences into relation representations using a relation embedding encoder.",
"We assume existing relations are organized in hierarchies, which is common in most modern KBs.",
"Note that while Figure 1 shows one hierarchy tree, the relation hierarchies may contain multiple trees.",
"To fully utilize hierarchy information, we design a dynamic hierarchical triplet objective that integrates hierarchy information into relation representations, and hierarchical curriculum learning for robust model training.",
"Pairwise virtual adversarial training is also introduced to improve the representation generalization ability.",
"Relation Embedding Encoder.",
"We adopt CNN to encode sentences into relation representations.",
"Following previous works (Zeng et al., 2014), given a sentence s and target entity pair ( e h , e t ) , each word in the sentence is first transformed into input representations by the concatenation of word embedding and position embedding indicating the position of each entity.",
"Then the input representation is fed into a convolutional layer followed by a max-pooling layer and a fully-connected layer to obtain the relation representation v R d .",
"The relation representation is normalized by L2 norm, i.e., Relation EmbeddingEncoder Curriculum Learning Dynamic Margin !",
"After obtaining relation representations, we measure the similarity of two relation instances by the Euclidean distance between their representations:",
"Dynamic Hierarchical Triplet Loss.",
"To effectively integrate relation hierarchy information into relation representations, we propose a dynamic hierarchical triplet loss for instance representation learning.",
"Triplet loss is widely used in metric learning that encourages a static margin between different categories for distinguishment (Schroff et al., 2015).",
"We argue that good relation representations should also reflect hierarchical information, where relations with close semantics in hierarchy should share similar representations.",
"As the example shown in Figure 2, r 1 i and r 1 j should be closer than r 2 i and r 2 j in representation space, since r 1 i and r 1 j are close to each other in the relation hierarchy.",
"We design a hierarchical triplet objective with a dynamic margin which is determined by the distance between relations in hierarchy.",
"Specifically, the dynamic margin is conducted over the instances of the relations.",
"As shown in Figure 2, given two relations r i and r j sampled by hierarchical curriculum training strategy (which will be introduced later), we randomly sample two instances (namely anchor instance a and positive instance p ) from r i , and an instance (namely negative instance n ) from r j .",
"The hierarchical triplet objective requires model to distinguish the positive pair ( a , p ) from the negative pair ( a , n ) by a distance margin, which is dynamically determined by the length of the shortest path between r i and r j in the hierarchy as follows: L t = (cid:88) r i ,r j T max[0 , d ( v a , v p ) + d l ( r i , r j ) 1 + l ( r i , r j ) d ( v a , v n )] , (3) where d is a hyperparameter, l ( r i , r j ) is the length of the shortest path between r i and r j in the hierarchy, 2 and T is the curriculum training strategy that will be introduced later.",
"Intuitively, the margin increases with the length of the shortest path in the hierarchy, with a relative emphasis on distinguishing nearby relations.",
"Compared to the static margin in vanilla triplet loss, dynamic hierarchical margin can capture the semantic similarities of relations in the hierarchy, leading to representations that can serve not only novel relation clustering but also effective relation hierarchy expansion.",
"Hierarchical Curriculum Learning.",
"In addition to providing direct supervision for representation learning, relation hierarchy can also be useful in providing signals for robust model training.",
"We propose a hierarchical training paradigm, which is a curriculum learning strategy (Bengio et al., 2009) that enables progressive training.",
"The motivation is intuitive: In the early period of training, we choose relations that are easy to distinguish by the model, and gradually transfer to harder ones.",
"Specifically, we sample two relations from the same layer in hierarchy that share ancestor relations (i.e., the relations come from the same tree and are of the same depth), with a gradual transition from shallow to deep layers with respect to their common ancestor, as shown in Figure 2. The training procedure will lead the model to learn relations from coarse to fine grains, since the length of the shortest path between two relations in hierarchy gradually increases as the relation pair goes deeper.",
"3 In experiments, we find it beneficial to warm-up the training of OHRE under the hierarchical training paradigm, and then switch to two random relations in the later phase.",
"Pair-wise Virtual Adversarial Training.",
"Neural metric learning models may suffer from the over-fitting problem by learning very complex decision hyperplanes.",
"In our case, the problem is severe since relation hierarchies provide strong supervision to metric learning.",
"To address this issue, we 2 The margin is 1 if two relations come from different trees.",
"design pair-wise virtual adversarial training that smooths the representation space by penalizing sharp changes in the space.",
"Specifically, for each randomly sampled instance pair, we add worst-case perturbations, such that the distance between the relation pairs reaches the maximum changes.",
"We penalize the loss changes as follows: L v = (cid:88) v 1 ,v 2 (cid:107) d ( v 1 , v 2 ) d ( v 1 , v 2 ) (cid:107) 22 , (4) where v is obtained by adding the worst-case noise to v .",
"Pair-wise virtual adversarial training encourages smooth and robust metric space, thus improving the generalization ability of OpenRE models.",
"Unlike previous works that adopt virtual adversarial training in classification problems (Miyato et al., 2017; Wu et al., 2019), our pair-wise virtual adversarial training is based on distance in Euclidean space instead of classification probability distributions.",
"We refer readers to the appendix for more details about the pair-wise virtual adversarial training.",
"The final loss is defined as the addition of dynamic hierarchical triplet loss L t and pair-wise virtual adversarial loss L v : L = L t + v L v , (5) where v is a hyperparameter.",
"To expand the existing relation hierarchies, we first cluster novel relations in open-domain corpora based on instance representations, and then learn relation prototypes for both relations in the existing hierarchy and novel relations.",
"Finally, new relations are inserted into the existing relation hierarchy by a novel top-down hierarchy expansion algorithm based on relation prototypes.",
"The hierarchy expansion framework is designed based on two key assumptions: (1) A relation prototype is the aggregation of all instances belonging to itself and descendant relations.",
"(2) A relation prototype has the highest similarity with its parent relation prototype, and a lower similarity with its sibling relation prototypes.",
"The rationale of the assumptions is that the semantics of a relation is typically covered by its ancestors.",
"The assump-tion is also aligned with the intuition in relation representation learning, where a relation exhibits the highest similarity with its parent, due to the minimum shortest path length (i.e., the length is 1 ).",
"algorithm (Blondel et al., 2008).",
"Louvain detects communities in a graph by greedily merging data points to clusters based on modularity optimization, and has proven effective in OpenRE (Wu et al., 2019).",
"We construct a weighted undirected graph of the relation instances in the test set, where the connection weight between two instances is determined by the distance between their representations: w ( v 1 , v 2 ) = max[0 , 1 d ( v 1 , v 2 ))] .",
"In experiments, we observe that clusters containing very few instances are typically noisy outliers and are not proper to be regarded as novel relations, which is consistent with Wu et al. (2019).",
"Therefore, we merge instances in these clusters into their closest clusters, measured by the highest connection weight.",
"Then we learn relation prototypes for both relations in the existing hierarchy and novel relations based on the clusters.",
"We represent each relation prototype with instances, where the prototype of a novel relation consists of all its instances, and the prototype of an existing relation contains all instances from itself and all descendant relations.",
"Top-Down Hierarchy Expansion.",
"After obtaining relation prototypes, we link these extracted relations to existing hierarchy by a novel top-down hierarchy expansion algorithm.",
"Following the aforementioned assumptions, for each novel relation, the algorithm finds its parent with the highest similarity in a top-down paradigm.",
"Specifically, for each novel relation, starting from the existing root relations, we iteratively search the relation with the highest similarity in candidates layer by layer.",
"In each layer, the search candidates are obtained by the child relations of the search result in the previous layer.",
"The search process terminates if the similarity decreases compared to the previous layer.",
"The extracted relation will be inserted as the child of the most similar relation, or cast as a singleton if the highest similarity is lower than a threshold, where a higher expansion threshold will lead to more singleton relations.",
"The procedure is shown in Algorithm 1, and we refer readers to experiments for a detailed example.",
"In practice, the similarity between a novel relation and an existing relation is given by the average connection between their prototypes as follows: S ( r i , r j ) = (cid:80) v 1 P i (cid:80) v 2 P j w ( v 1 , v 2 ) | P i | | P j | (cid:113) 1 + | P sj | , (7) Algorithm 1 Top-Down Hierarchy Expansion Require: r : A novel relation Require: W : Expansion threshold 1: Init search candidates C = root relations of trees 2: Init highest similarity in previous layer W = 0 3: while C not empty do 4: Search relation c = arg max c CS ( r, c ) 5: if S ( r, c ) > W then 6: // Move to the next layer 7: Update highest similarity W = S ( r, c ) 8: Update search candidates C = children of c 9: else 10: Stop searching 11: if W W then 12: Expand r as child of c 13: else 14: Cast r as singleton relation where r i is a novel relation and r j is an existing relation, P i and P j are the corresponding relation prototypes, and | P sj | refers to the number of all descendant relations of r j .",
"In experiments, we find that relations containing more descendant relations in hierarchy tend to exhibit lower average connections with novel relations, due to the margins between the contained descendant relations.",
"By introducing (cid:113) 1 + | P sj | , we balance the connection strength and encourage the model to explore wider and deeper hierarchies.",
"The reason for expanding hierarchy with a top-down paradigm is threefold: (1) The coarse-to-fine-grained hierarchy expansion procedure is bio-plausible, as suggested by cognitive neuroscience studies (Tenenbaum et al., 2011).",
"(2) The decision making procedure following the existing hierarchy structure is interpretable.",
"(3) It can achieve better efficiency since the unlikely branches are pruned in the early search stage.",
"To verify the effectiveness of hierarchical information and OHRE, we conduct comprehensive experiments on relation clustering and hierarchy expansion on two real-world datasets.",
"We also conduct a detailed analysis of OHRE to provide a better understanding of our framework.",
"We refer readers to the appendix for more implementation details.",
"Following previous works (Wu et al., 2019; Hu et al., 2020), we evaluate our framework on FewRel (Han et al., 2018b) and New York Times Freebase (NYT-FB) dataset (Marcheggiani and",
"Titov, 2016).",
"However, the original random data splits are not suitable to benchmark open hierarchical relation extraction task, since the test sets do not well cover different topologies in relation hierarchy.",
"In the test sets, for a majority of relations, their parent relations are not labeled with sentences in the dataset, making them singleton relations.",
"It is desirable to include more diverse and challenging relations with complex topologies in the test sets.",
"Thus we re-split these two datasets to better approximate and provide benchmarks for real-world needs.",
"Considering applications where only incomplete relation hierarchies are available, we only use partial hierarchy from KBs, removing the hierarchy of relations beyond the train sets.",
"FewRel Hierarchy.",
"FewRel (Han et al., 2018b) is a supervised dataset created from Wikipedia and Wikidata.",
"Following Wu et al. (2019), the train set includes 64 relations where each relation has 700 instances.",
"The development set and test set share 16 relations, and each set has 1 , 600 instances.",
"We exchange relations from the original train and test set to include three relation typologies in test set: (1) single relation without a parent ( 6 relations), (2) relation with a parent in train set ( 8 relations), and (3) relation with a parent in test set ( 2 relations).",
"We call this dataset FewRel Hierarchy.",
"NYT-FB Hierarchy.",
"NYT-FB (Marcheggiani and Titov, 2016) is a distantly supervised dataset created from New York Times and Freebase.",
"Following Simon et al. (2019), we filter out sentences with non-binary relations.",
"The train set includes 212 relations with 33 , 992 instances.",
"The development set and test set share 50 relations, and have 3 , 835 and 3 , 858 instances respectively.",
"Each relation in development set and test set has at least 10 instances.",
"We call this dataset NYT-FB Hierarchy.",
"We introduce two task settings and corresponding evaluation metrics.",
"(1) Relation clustering setting is widely adopted in previous OpenRE works to evaluate the ability of clustering novel relations (Marcheggiani and Titov, 2016; Wu et al., 2019).",
"(2) We also design the hierarchy expansion setting to thoroughly test the ability of OpenRE models in expanding existing relation hierarchies.",
"Baselines.",
"We compare OHRE with state-of-the-art OpenRE baselines.",
"(1) Relational Siamese Network augmented with conditional entropy and virtual adversarial training ( RSN-CV ) (Wu et al., 2019) is the state-of-the-art OpenRE method that transfers relational knowledge from labeled data to discover relations in unlabeled data.",
"(2) SelfORE (Hu et al., 2020) utilizes self-training to iteratively learn relation representations and clusters.",
"(3) HAC with re-weighted word embeddings ( RW-HAC ) (Elsahar et al., 2017) is the state-of-the-art rich feature-based method.",
"RW-HAC first extracts rich features, such as entity types, then reduces feature dimension via principal component analysis, and finally clusters the features with HAC.",
"(4) Discrete-state variational autoencoder ( VAE ) (Elsa-har et al., 2017) optimizes a relations classifier via reconstruction signals, with rich features including dependency paths and POS tags.",
"Evaluation Metrics.",
"Following Wu et al. (2019); Hu et al. (2020), we adopt instance-level evaluation metrics to evaluate relation clustering, including B 3 (Bagga and Baldwin, 1998), V-measure (Rosen-berg and Hirschberg, 2007) and Adjusted Rand Index (ARI) (Hubert and Arabie, 1985).",
"We refer readers to the appendix for more detailed descriptions about the evaluation metrics.",
"In this setting, models are required to first cluster novel relations, and then further add the extracted relations into the existing hierarchy in train set.",
"Baselines.",
"To the best of our knowledge, there are no existing OpenRE methods designed to directly expand an existing relation hierarchy.",
"We design two strong baselines based on state-of-the-art OpenRE architectures.",
"(1) RW-HAC for hierarchy expansion ( RW-HAC-HE ) links each novel relation cluster given by RW-HAC to the existing relation cluster with the global highest the Ward's linkage score.",
"The novel relation will be a singleton if the highest score is less than a threshold.",
"(2) RSN-CV for hierarchy expansion ( RSN-CV-HE ) obtains clusters using RSN-CV, and links them to the hierarchy using our top-down expansion algorithm.",
"Here without confusion, we omit the -HE suffixes in model names in the experiment results.",
"Evaluation Metrics.",
"We adopt two metrics to evaluate on cluster-level (1) how well a predicted cluster matches the golden cluster by matching metric (Larsen and Aone, 1999), and (2) how well Dataset Model B 3 V-measure ARI F1 Prec.",
"the predicted cluster links to the golden position in hierarchy by taxonomy metric (Dellschaft and Staab, 2006).",
"We also report two overall evaluation metrics that consider both relation clustering and hierarchy expansion results.",
"Specifically, we report the arithmetic mean and harmonic mean of matching F1 and taxonomy F1.",
"Main Results.",
"Table 1 shows relation clustering results on two datasets, from which we observe that: (1) OHRE outperforms state-of-the-art models by a large margin, e.g., with 6 .",
"7% , 4 .",
"3% , 9 .",
"6% improvements in B 3 , V-measure, and ARI respectively on FewRel Hierarchy.",
"Compared with unsupervised methods, the performance gap is even greater, e.g., more than 30% in B 3 on FewRel Hierarchy.",
"This shows that OHRE can effectively leverage existing relation hierarchy for better novel relation clustering.",
"(2) The improvements of OHRE are consistent in both supervised FewRel Hierarchy dataset and distantly supervised NYT-FB Hierarchy dataset.",
"This indicates that the representation learning and relation clustering procedure of OHRE is robust to noisy relation labels and long-tail relations in different domains.",
"We note that although our model adopts CNN as the relation encoder, it outperforms SelfORE equipped with BERT (Devlin et al., 2019).",
"We expect it would be beneficial to enhance the relation representations in OHRE with pre-trained language models, and we leave it for future work.",
"Ablation Study.",
"We conduct ablations to investigate the contribution of different components, as shown in Table 2. For fair comparisons, we also ablate virtual adversarial training from RSN-CV (Wu et al., 2019).",
"Experimental results show that all components contribute to the final performance.",
"This shows that hierarchical information from existing relations can provide transferable guidance for novel relation clustering.",
"The performance drops most significantly when removing pair-wise virtual adversarial training, indicating the importance of space smoothing to the generalization of OHRE.",
"Main Results.",
"Table 3 shows the results of hierarchy expansion, from which we observe that: (1) OHRE outperforms strong baselines on hierarchy expansion.",
"Compared to baselines, OHRE achieves higher match F1, which indicates that relations extracted by OHRE can be better aligned with golden relations on cluster-level.",
"Moreover, the advantage in taxonomy F1 shows that OHRE can better add the extracted relations in the existing hierarchy.",
"The reasonable overall result shows the potential of OHRE in real-world open hierarchical relation extraction applications.",
"(2) We also conduct hierarchy expansion experiments with golden novel clusters.",
"However, experiment results show no obvious improvements for all models.",
"Particularly, we note that while RW-HAC and RSN-CV achieve seemingly reasonable performance, they always cast novel relation as a singleton and are unable to add the relation to the right place in hierarchy.",
"4 4 The proportion of singleton relations is 37 .",
"(b) OHRE first clusters novel relations from open-domain corpora, and learns relation prototypes.",
"(c) OHRE then expands relation hierarchy based on relation prototypes in a top-down paradigm.",
"This is because the inconsistent instance representations within each golden cluster will mislead the expansion procedure on cluster-level, which shows integrating hierarchy information into relation representations is of fundamental importance to hierarchy expansion.",
"Besides, the results also show the necessity of re-splitting FewRel to include more hierarchy topologies in test set for better benchmark.",
"Zoom-in Study.",
"To better understand the performance of models on hierarchy expansion, we divide the relations according to their hierarchy topologies and report the performance on FewRel Hierarchy.",
"Table 4 shows the results on three topologies, including (1) single relations without parents (sgl.), (2) relations with parents in train set (p-trn.), and (3) relations with parents in test set (p-tst.).",
"The results show that although models achieve reasonable performance on clustering in all three topologies, they struggle on hierarchy expansion, especially on relations with parents.",
"In comparison, OHRE can handle some relations with parents in train set.",
"However, there is still ample room for improvement.",
"This shows hierarchy expansion is challenging, and we leave further research for future work.",
"To intuitively show how OHRE expands an existing hierarchy with novel relations from open-domain corpora, we visualize the workflow of OHRE on relation composer , as shown in Figure 3. The average connection score increases as the expansion procedure",
"procedure progress from top to down in hierarchy.",
"The expansion procedure terminates when the connection score decreases.",
"The process is not only better aligned with real-world needs, but also provides better interpretability in decision making.",
"In this work, we make the first attempt to address bidirectional connections between OpenRE and relation hierarchy.",
"In the future, we believe the following directions worth exploring: (1) We use a heuristic method to add new relations into hierarchies based on local similarities between relations.",
"In future, more advanced methods can be designed to model the global interaction between new relations and hierarchy, and learn to effectively add the novel relations.",
"(2) We conduct relation representation learning and hierarchy expansion in a pipeline.",
"In the future, end-to-end models can be developed to jointly optimize these important phases for better open hierarchical relation extraction results.",
"This work is supported by the National Key Research and Development Program of China (No. 2020AAA0106501).",
"Yao is also supported by 2020 Tencent Rhino-Bird Elite Training Program."
]
| [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"objective",
"method",
"objective",
"abstain",
"objective",
"result",
"objective",
"objective",
"objective",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"method",
"abstain",
"other",
"other"
]
|
[
"Utilizing reviews to learn user and item representations is useful for recommender systems.",
"Existing methods usually merge all reviews from the same user or for the same item into a long document.",
"However, different reviews, sentences and even words usually have different informativeness for modeling users and items.",
"In this paper, we propose a hierarchical user and item representation model with three-tier attention to learn user and item representations from reviews for recommendation.",
"Our model contains three major components, i.e., a sentence encoder to learn sentence representations from words, a review encoder to learn review representations from sentences, and a user/item encoder to learn user/item representations from reviews.",
"In addition, we incorporate a three-tier attention network in our model to select important words, sentences and reviews.",
"Besides, we combine the user and item representations learned from the reviews with user and item embeddings based on IDs as the final representations to capture the latent factors of individual users and items.",
"Extensive experiments on four benchmark datasets validate the effectiveness of our approach.",
"Learning accurate user and item representations is very important for recommender systems (Tay et al., 2018).",
"Many of existing recommendation methods learn user and item representations based on the ratings that users gave to items (Koren et al., 2009; Mnih and Salakhutdinov, 2008).",
"For example, Koren et al. (2009) proposed a matrix factorization method based on SVD to learn latent representations of users and items from the rating matrix between users and items.",
"However, since the numbers of users and items in online platforms are usually huge, and the rating matrix between users and items is usually very sparse, it is quite diffiDefragand cleanup, then you have a great laptop!",
"Style: Laptop Only Verified Purchase I bought this laptop yesterday.",
"This is a great laptop if you immediately run maintenance checks on it (defrag, disk cleanup} and remove a little bloatware.",
"It is not a laptop to game with, but as a working/school laptop, you're getting a great bang for your buck.",
"Only giving it four stars just because of the above mentioned things I did afterward, but by no means is this a ho riblelaptop.",
"106 people found this helpful ***** MULTIMEDIA LAPTOP June 22, 2018 Style: Laptop Only Verified Purchase This Laptop Great!",
"One person found this helpful Figure 1: Two example reviews.",
"cult for those rating based recommendation methods to learn accurate user and item representations (Zheng et al., 2017; Tay et al., 2018).",
"Luckily, in many online platforms such as Amazon and IMDB, there are rich reviews written by the users to express their opinions on items.",
"These reviews can provide rich information of items.",
"For example, if sentences like bad battery life and battery capacity is low frequently appear in the reviews of a smartphone, then we can infer the performance of this item in battery life is not good.",
"The reviews also contain rich information of users.",
"For example, if a user frequently mentions the price is too high and very expensive in his/her reviews for different items, then we can infer this user may be sensitive to price.",
"Thus, these reviews can help enhance the learning of user and item representations especially when ratings are sparse, which is beneficial for improving the performance of recommender systems (Zheng et al., 2017).",
"Utilizing reviews to learn user and item representations for recommendation has attracted increasing attentions (Zheng et al., 2017; Catherine and Cohen, 2017).",
"For example, Zheng et al. (2017) proposed a DeepCoNN method to learn the representations of users and items from reviews using convolutional neural networks (CNN), and achieved huge improvement in recommendation performance.",
"These methods usually concatenate the reviews from the same user or the same item into a long document.",
"However, different reviews usually have different informativeness in representing users and items.",
"For example, in Fig. 1 the first review is much more informative than the second one.",
"Distinguishing informative reviews from noisy ones can help learn more accurate user and item representations.",
"In addition, different sentences in the same review may also have different informativeness.",
"For example, in Fig. 1 the sentence it is not a laptop to game with contains more important information than I bought this laptop yesterday.",
"Besides, different words in the same sentence may also have different importance.",
"For example, in this is a great laptop if you ... the word great is more important than you in modeling this item.",
"In this paper, we propose a hierarchical user and item representation model with three-tier attention ( HUITA ) to learn informative user and item representations from reviews for recommendation.",
"In our approach, the hierarchical user and item representation model contains three major components, i.e., a sentence encoder to learn sentence representations from words, a review encoder to learn review representations from sentences, and a user/item encoder to learn user/item representations from the all reviews posted by this user or for this item.",
"In addition, we propose to incorporate a three-tier attention network into our model to select important words, sentences and reviews to learn more informative user and item representations.",
"Besides, we combine the user and item representations learned from the reviews with the user and item embeddings based on their IDs as the final representations to capture the latent factors of each individual users and items.",
"We conduct extensive experiments on four benchmark datasets.",
"The results show our approach can effectively improve the performance of recommendation and outperform many baseline methods.",
"Learning user and item representations from reviews for recommendation has attracted many attentions (McAuley and Leskovec, 2013; Ling et al., 2014; Bao et al., 2014; Zhang et al., 2014; Diao et al., 2014; He et al., 2015; Tan et al., 2016; Ren et al., 2017).",
"Many of the existing methods focus on extracting topics from reviews to model users and items.",
"For example, McAuley and Leskovec (2013) proposed a Hidden Factors as Topics (HFT) method to use the topic modeling technique LDA to discover the latent aspects of users and items from the reviews.",
"Ling et al. (2014) proposed a Ratings Meet Reviews (RMR) method to enhance the representations of users and items by extracting topics from review texts and aligning the dimensions of these topics with the latent user representations obtained from the rating matrix using matrix factorization.",
"Bao et al. (2014) proposed a TopicMF approach to jointly model user and item representations using rating scores via matrix factorization and using review texts via non-negative matrix factorization (NMF) to obtain topics.",
"However, these methods only extract the topic information from reviews, and a large amount of important semantic information is not captured.",
"In addition, these methods are usually based on topic models and cannot effectively model the contexts and orders of words in reviews, both of which are important for inferring user preferences and item properties.",
"In recent years, several deep learning based methods have been proposed to learn user and item representations from reviews for recommendation (Zhang et al., 2016; Zheng et al., 2017; Catherine and Cohen, 2017; Seo et al., 2017b,a; Chen et al., 2018; Tay et al., 2018).",
"For example, Zheng et al. (2017) proposed a DeepCoNN method which uses CNN to learn representations of users and items from their reviews.",
"Catherine and Cohen (2017) proposed a TransNets method to learn user and item representations from reviews using CNN and regularize these representations to be close to the representations of the review written by the target user to the target item.",
"Seo et al. (2017b) proposed to learn user and item representations via CNN network as well as attention network over word embeddings.",
"These methods concatenate all the reviews from the same user or for the same item into a long document, and cannot distinguish informative reviews from noisy ones.",
"Chen et al. (2018) proposed to model the usefulness of reviews using review-level attention to enhance the learning of user and item representations.",
"However, their method regards each review as a long sentence, and cannot distinguish informative sentences and words from less informative ones.",
"Different from the aforementioned methods, in our approach we propose a hierarchical framework to learn user and item representations from reviews for recommendation.",
"Our model first learns sentence representations from words, then learns review representations from their sentences, and finally learns user/item representations from their reviews.",
"Our model also contains a three-tier attention network to jointly select important words, sentences and reviews to learn more informative user and item representations.",
"Experiments on benchmark datasets validate the advantage of our approach over existing methods in recommendation.",
"In this section, we introduce our HUITA approach to learn user and item representations from reviews for recommendation.",
"The architecture of our approach is shown in Fig. 2.",
"There are three major modules in our approach.",
"The first one is sentence encoder which learns representations of sentences from words.",
"The second one is review encoder which learns representations of reviews from sentences.",
"And the third one is user/item encoder , which learns the representations of users and items from their reviews.",
"Next we introduce each module in detail.",
"The sentence encoder module is used to learn representations of sentences from words.",
"According to Fig. 2, there are three layers in this module.",
"The first layer is word embedding.",
"It is used to convert a sequence of words into a sequence of low-dimensional dense vectors which contain semantic information of these words.",
"Denote a sentence s contains M words [ w 1 , w 2 , ..., w M ] .",
"Through the word embedding layer the sentence s is transformed into a vector sequence [ e 1 , e 2 , ..., e M ] using a word embedding matrix E RV D , where V and D represent the vocabulary size and the word embedding dimension, respectively.",
"The word embedding matrix E is initialized using pretrained word embeddings, and fine-tuned during model training.",
"The second layer is a convolutional neural network (CNN).",
"CNN is an effective neural architecture for capturing local information (LeCun et al., 2015).",
"We employ a word-level CNN to capture the local contexts of words to learn their contextual representations.",
"Denote c w i as the contextual representation of the word w i , which is computed as follows: c wi = ReLU( U w e ( i K w ):( i + K w ) + b w ) , (1) where e ( i K w ):( i + K w ) is the concatenation of the word embedding vectors from the position i K w to i + K w .",
"U w RN w (2 K w +1) D and b w RN w are the parameters of the filters in CNN network, where N w is the number of CNN filters and 2 K w + 1 is the window size.",
"ReLU is the nonlinear activation function (Glorot et al., 2011).",
"The output of the CNN layer is a sequence of contextual word representations [ c w 1 , c w 2 , ..., c wM ] .",
"The third layer is a word-level attention network.",
"Different words in the same sentence may have different informativeness for modeling users and items.",
"For example, in the sentence The laptop I bought yesterday is too heavy, the word heavy is more informative than the word yes-terday in representing this laptop.",
"Thus, we use a word-level attention network to help our model select and attend to important words based on their contextual representations to build more informative sentence representations for user and item modeling.",
"The attention weight of the i th word in the sentence s is computed as follows: a wi = tanh( v w c wi + b w ) , (2) wi = exp( a wi ) (cid:80) Mj =1 exp( a wj ) , (3) where v w RN w and b w R are the parameters in the attention network.",
"i indicates the relative importance of the i th word evaluated by the attention network.",
"The final representation of the sentence s is the summation of the contextual word representations weighted by their attention weights as follows: s = M (cid:88) i =1 wi c wi .",
"The review encoder module aims to build the representations of each review based on the representation of sentences in these reviews.",
"There are two major layers in the review encoder module.",
"The first layer is a sentence-level CNN network.",
"Neighboring sentences usually have some relatedness with each other.",
"For example, in a laptop review It is not a laptop to game with. But as a working laptop, you will get a great bang for your buck, the two neighboring sentences have close relatedness and they both describe the performance of the laptop in different scenarios.",
"Figure 2: The framework of our HUITA approach for recommendation.",
"Thus, we employ a sentence-level CNN network to learn the contextual sentence representations by capturing the local contexts of sentences.",
"Denote a review r contains N sentences [ s 1 , s 2 , ..., s N ] .",
"Denote the contextual representation of sentence s i as c si , which is computed as follows: c si = ReLU( U s s ( i K s ):( i + K s ) + b s ) , (5) where U s RN s (2 K s +1) N w and b s RN s are parameters of the sentence-level CNN filters.",
"s ( i K s ):( i + K s ) is the concatenation of sentence representation vectors from position i K s to i + K s .",
"N s is the number of filters in sentence CNN network and 2 K s + 1 is the window size.",
"The second layer is a sentence-level attention network.",
"Different sentences in a review may have different informativeness for modeling users and items.",
"For example, the sentence it is not a laptop to game with is more informative than the sentence I bought this laptop yesterday in learning the representation of this laptop.",
"Thus, we use sentence-level attention network to help our model select and attend to important sentences to learn more informative review representations.",
"The attention weight of sentence s i in the review r is formulated as follows: a si = tanh( v s c si + b s ) , (6) si = exp( a si ) (cid:80) Nj =1 exp( a sj ) , (7) where v s RN s and b s R are the parameters of the attention network.",
"The final contextual representation of the review r is the summation of the contextual representations of sentences weighted by their attention weights, which is formulated as: r = N (cid:88) i =1 si c si .",
"The user/item encoder module is used to build the representations of users or items based on the representations of their reviews.",
"Different reviews usually have different informativeness in modeling users or items.",
"For example, in Fig. 1, the first review contains much more information of the laptop than the second review, and should has more contributions in building the representation of this laptop.",
"Thus, we use a review-level attention network to distinguish informative reviews from less informative ones.",
"Denote a user u has P reviews [ r 1 , r 2 , ...., r P ] .",
"Then the attention weight of the review r i is computed as follows: a ri = tanh( v r r i + b r ) , (9) ri = exp( a ri ) (cid:80) Pj =1 exp( a rj ) , (10) where v r RN s and b r R are the parameters of the review-level attention network.",
"The user representation learned from the reviews is the summation of the contextual representations of reviews weighted by their attention weights: u r = P (cid:88) i =1 ri r i .",
"Although the user representation u r learned from reviews contain rich information of users, there are some latent characteristics of users which are not described in their reviews but can be inferred from the rating patterns.",
"Thus, we also represent users using the embedding of their IDs to capture the latent factors of users, which are motivated by traditional recommendation methods (Koren et al., 2009).",
"The final representation of user u is the concatenation of the user representation u r learned from reviews and the user embedding u d inferred from user ID, as follows: u = [ u r , u d ] .",
"The representations of items can be computed in a similar way.",
"Denote the representation of item t learned from reviews as t r , and the item embedding inferred from item ID as t d .",
"Then the final representation of this item is as follows: t = [ t r , t d ] .",
"In recommender systems the recommendations are made based on the predicted ratings that a user will give to an item.",
"In our HUITA approach, the rating score of a user-item pair is predicted based on the representations of users and items as follows: y = ReLU( w T ( u (cid:12) t ) + b ) , (14) where (cid:12) is item-wise dot product, w and b are parameters in the rating prediction layer.",
"In the model training stage, we optimize the model parameters to minimize the difference between gold rating and predicted ratings.",
"We use the mean squared error as the loss function: L = 1 NP NP (cid:88) i =1 ( y i y i ) 2 , (15) where NP denotes the number of user-item pairs in training data, y i and y i are the predicted rating score and the gold rating score respectively of the i th user-item pair.",
"We conducted experiments on four widely used benchmark datasets in different domains to evaluate the effectiveness of our approach.",
"Following (Chen et al., 2018), we used three datasets from the Amazon collection 1 (He and McAuley, 2016), i.e., Toys and Games , Kindle Store , and Movies and TV .",
"Another dataset is from Yelp Challenge 2017 2 (denoted as Yelp 2017 ), which is a large-scale restaurant review dataset.",
"Following (Chen et al., 2018), we only kept the users and items which have at least 5 reviews.",
"The detailed statistics of the four datasets are summarized in Table 1.",
"The ratings in these datasets are in [1, 5].",
"In our experiments, the dimension of word embeddings was set to 300.",
"We used the pre-trained Google embedding (Mikolov et al., 2013) to initialize the word embedding matrix.",
"The word-level CNN has 200 filters and their window size is 3. The sentence-level CNN has 100 filters with window size of 3. We applied dropout strategy (Srivastava et al., 2014) to each layer of our model to mitigate overfitting.",
"The dropout rate was set to 0.2.",
"Adam (Kingma and Ba, 2014) was used as the optimization algorithm.",
"The batch size was set to 20.",
"We randomly selected 80% of the user-item pairs in each dataset for training, 10% for validation and 10% for test.",
"All the hyperpa-rameters were selected according to the validation set.",
"We independently repeated each experiment for 5 times and reported the average performance in Root Mean Square Error (RMSE).",
"We evaluate the performance of our approach by comparing it with several baseline methods.",
"The methods to be compared include: PMF : Probabilistic Matrix Factorization, which models users and items based on 1 http://jmcauley.ucsd.edu/data/amazon 2 https://www.yelp.com/dataset challenge PMF NMF SVD++ HFT DeepCoNN Attn+CNN NARRE HUITA Rating score (cid:88) (cid:88) (cid:88) (cid:88) (cid:88) (cid:88) (cid:88) (cid:88) Review text (cid:88) (cid:88) (cid:88) (cid:88) (cid:88) Word context & order (cid:88) (cid:88) (cid:88) (cid:88) Review attention (cid:88) (cid:88) Word/sentence attention (cid:88) * (cid:88) Table 2: Information used in different methods.",
"ratings via matrix factorization (Mnih and Salakhutdinov, 2008).",
"NMF : Non-negative Matrix Factorization for recommendation based on rating scores (Lee and Seung, 2001).",
"SVD++ : The recommendation method based on rating matrix via SVD and similarities between items (Koren, 2008).",
"HFT : Hidden Factor as Topic (HFT), a method to combine reviews with ratings via LDA (McAuley and Leskovec, 2013).",
"DeepCoNN : Deep Cooperative Neural Networks, a neural method to jointly model users and items from their reviews via CNN (Zheng et al., 2017).",
"Attn+CNN : Attention-based CNN, which uses both CNN and attention over word embeddings to learn user and item representation from reviews (Seo et al., 2017b).",
"NARRE : Neural Attentional Rating Regression with Review-level Explanations, which uses attention mechanism to model the informativeness of reviews for recommendation (Chen et al., 2018).",
"HUITA : our proposed hierarchical user and item representation approach with three-tier attention for recommendation with reviews.",
"In Table 2, we show a simple comparison of different methods in terms of the information considered in each method.",
"Traditional recommendation methods such as PMF , NMF and SVD are solely based on rating scores, and other methods HFT , DeepCoNN , Attn+CNN , NARRE and HUITA can exploit both rating scores and reviews for recommendation.",
"Among the latter methods, HFT is based on topic models and cannot capture the contexts and orders of words.",
"DeepCoNN and Attn+CNN simply concatenate reviews into a long document, and cannot model the informativeness of different reviews.",
"Although NARRE can model review helpfulness via attention, it simply merges all sentences in a review together, and does not model the informativeness of different sentences and words.",
"Different from these methods, our HUITA approach learns user and item representations from reviews in a hierarchical manner, and uses a three-tier attention network to select and attend to important words, sentences and reviews.",
"The results of different methods are shown in Table 3. We have several observations from the results.",
"First, the methods which exploit reviews (i.e., HFT , DeepCoNN , Attn+CNN , NARRE and HUITA ) usually perform better than the methods only based on rating scores (i.e., PMF , NMF and SVD++ ).",
"It validates reviews can provide rich information of user preferences and item properties, and is important to learn informative user and item representations and can benefit recommendation.",
"Second, among the method which can exploit reviews, the neural network based methods (e.g., DeepCoNN , Attn+CNN , NARRE and HUITA ) usually outperform the HFT method which is based on topic models.",
"This is probably because in HFT the reviews are represented using bag-of-words features, and the contextual information and the orders of words are lost.",
"This result validates the neural network based method can better capture the semantic information in reviews to model users and items for recommendation.",
"Third, the methods considering review helpfulness (i.e., NARRE ) and word importance (i.e., Attn+CNN ) usually outperform DeepCoNN .",
"This result implies that different words and different reviews have different importance for modeling users and items from reviews.",
"Distinguishing important reviews and words from less important ones is beneficial to learn more accurate user and item representations for recommendation.",
"Fourth, our approach can consistently outperform all the baseline methods compared here.",
"This is because different from baseline methods such as Attn+CNN and DeepCoNN which merge all reviews into a long document and NARRE which merges all sentences into a long sentence, our HUITA approach learns user and item representations in a hierarchical manner.",
"HUITA first learns sentence representations from words, then learns review representations from sentences, and finally learns user/item representations from reviews.",
"Besides, our approach incorporates a three-tier attention network to jointly select and attend to important words, sentences and reviews.",
"Thus, our approach can learn more informative user and item representations from reviews for recommendation.",
"In this section, we conducted experiments to explore the effectiveness of the three-tier attention network in our approach.",
"We compare three variants of our model by removing one kind of attention each time to evaluate its contribution to the performance.",
"The results are shown in Table 4. According to Table 4, the word-level attention can effectively improve the performance of our approach.",
"This is because different words in reviews have different importance in modeling users and items.",
"Therefore, recognizing and highlighting the important words using the word-level attention network can help learn more informative sentence representations.",
"In addition, the sentence-level attention is also useful.",
"This may be because different sentences have different informativeness.",
"For example, in a laptop review the sentence this laptop is expensive is more informative than I bought this laptop yesterday in representing this laptop.",
"The sentence-level attention network can help to select important sentences to build review representations.",
"Besides, the review-level attention is also useful in our HUITA approach.",
"This is because different reviews have different informativeness in representing users and items.",
"And distinguishing informative reviews from the less informative ones can help learn more accurate representations of users and items.",
"Moreover, combining all the three levels of attentions can further improve the performance of our approach, which validates the effectiveness of our three-tier attention architecture.",
"In this section, we conducted several case studies to further explore whether our approach can select informative words, sentences and reviews to learn informative user and item representations for recommendation.",
"First, we want to explore the effectiveness of the wordand sentence-level attention networks.",
"The visualization of the attention weights in the wordand sentence-level attention networks is shown in Fig. 3. From Fig. 3 we can see that our word-level attention network can effectively select and attend to important words.",
"For example, in Fig.",
"3(a) the words Good, quality and recommend are assigned higher attention weights than bought and dad, since Good, quality and recommend can better model the properties of the film.",
"In addition, our model can I bought this for my father.",
"I purchased this book on a whim.",
"I never cared for short stories when I was younger but I'm always willing to give something a try again, if I didn't have a horrible experience before.",
"(b) Kindle Store Figure 3: Visualization of attention weights in two randomly selected reviews from the Movies and TV and Kindle Store datasets respectively.",
"Red boxes to the left of the reviews represent sentence-level attention weights, and blue boxes on the individual words represent word-level attention weights.",
"Darker color represents higher attention weights.",
"effectively select informative sentences using the sentence-level attention network.",
"For example, in Fig.",
"3(b) the sentence From reading I found short stories are still not my style is assigned high attention weight since it is informative for representing this user and is important for recommendation, while the sentence I purchased this book on a whim has low attention weight since it contains limited information of users and items.",
"Thus, these results validate that our approach is effective in selecting informative words and sentences in reviews for recommendation through the wordand sentence-level attention networks.",
"Second, we want to explore the effectiveness of the review-level attention in our HUITA approach.",
"The visualization of the review-level attention weights is shown in Fig. 4. From Fig. 4 we can see that our approach can effectively select and attend to informative reviews.",
"For example, the second review in Fig.",
"4(a) is assigned high attention weight by our approach since it reveals rich information of user preferences.",
"However, the first review in Fig.",
"4(a) receives low attention weight since it contain limited information of users.",
"Thus, these results validate the effectiveness of our approach in selecting informative reviews to learn more accurate representations of users and items from reviews for recommendation.",
"In this paper, we propose a hierarchical user and item representation model with three-tier attention to learn user and item representations from reviews for recommendation.",
"In our approach, we use a sentence encoder to learn sentence representations from words, a review encoder to learn review representations from sentences, and a user/item encoder to learn user/item representations from reviews.",
"In addition, we incorporate a three-tier attention network into our model to select and attend to informative words, sentences and reviews to learn more accurate representations of users and items.",
"Besides, we combine the user and item representations learned from the reviews with the embeddings of user and item IDs as the final representations of users and items to capture the latent factors of individual users and items.",
"The experiments on four benchmark datasets validate that our approach can effectively improve the performance of recommendation and consistently outperform many baseline methods.",
"This work was supported by the National Key Research and Development Program of China under Grant number 2018YFC1604002, and the National Natural Science Foundation of China under Grant numbers U1836204, U1705261, U1636113, U1536201, and U1536207."
]
| [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"method",
"method",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"objective",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"objective",
"method",
"abstain",
"method",
"result",
"other"
]
|
[
"Neural abstractive summarization models are flexible and can produce coherent summaries, but they are sometimes unfaithful and can be difficult to control.",
"While previous studies attempt to provide different types of guidance to control the output and increase faithfulness, it is not clear how these strategies compare and contrast to each other.",
"In this paper, we propose a general and extensible guided summarization framework ( GSum ) that can effectively take different kinds of external guidance as input, and we perform experiments across several different varieties.",
"Experiments demonstrate that this model is effective, achieving state-of-the-art performance according to ROUGE on 4 popular summarization datasets when using highlighted sentences as guidance.",
"In addition, we show that our guided model can generate more faithful summaries and demonstrate how different types of guidance generate qualitatively different summaries, lending a degree of controllability to the learned models.",
"1 1 Introduction Modern techniques for text summarization generally can be categorized as either extractive methods (Nallapati et al., 2017; Narayan et al., 2018b; Zhou et al., 2018), which identify the most suitable words or sentences from the input document and concatenate them to form a summary, or abstractive methods (Rush et al., 2015; Chopra et al., 2016; Nallapati et al., 2016; Paulus et al., 2018), which generate summaries freely and are able to produce novel words and sentences.",
"Compared with extractive algorithms, abstractive algorithms are more flexible, making them more likely to produce fluent and coherent summaries.",
"However, the unconstrained nature of abstractive summarization can also result in problems.",
"First, it can result 1 Code is available at https://github.com/ neulab/guided_summarization .",
"in unfaithful summaries (Kryscinski et al., 2019), containing factual errors as well as hallucinated content.",
"Second, it can be difficult to control the content of summaries; it is hard to pick in advance which aspects of the original content an abstractive system may touch upon.",
"To address the issues, we propose methods for guided neural abstractive summarization: methods that provide various types of guidance signals that",
"1) constrain the summary so that the output content will deviate less from the source document;",
"2) allow for controllability through provision of user-specified inputs.",
"There have been some previous methods for guiding neural abstractive summarization models.",
"For example, Kikuchi et al. (2016) specify the length of abstractive summaries, Li et al. (2018) provide models with keywords to prevent the model from missing key information, and Cao et al. (2018) propose models that retrieve and reference relevant summaries from the training set.",
"While these methods have demonstrated improvements in summarization quality and controllability, each focuses on one particular type of guidance it remains unclear which is better and whether they are complementary to each other.",
"In this paper, we propose a general and extensible guided summarization framework that can take different kinds of external guidance as inWork Guidance Form Tokens Triples Sentences Summaries Kikuchi et al. (2016) (cid:51) (length tokens) (cid:55) (cid:55) (cid:55) Cao et al. (2018) (cid:55) (cid:55) (cid:55) (cid:51) (retrieved sums.) Li et al. (2018) (cid:51) (keywords) (cid:55) (cid:55) (cid:55) Liu et al. (2018a) (cid:55) (cid:55) (cid:51) (highlighted sents.) (cid:55) Liu et al. (2018b) (cid:51) (length tokens) (cid:55) (cid:55) (cid:55) Fan et al. (2018) (cid:51) (length, entity, style tokens) (cid:55) (cid:55) (cid:55) Zhu et al. (2020) (cid:55) (cid:51) (relations) (cid:55) (cid:55) Jin et al. (2020) (cid:55) (cid:51) (relations) (cid:55) (cid:55) Saito et al. (2020) (cid:51) (keywords) (cid:55) (cid:51) (highlighted sents.) (cid:55) Ours (cid:51) (keywords) (cid:51) (relations) (cid:51) (highlighted sents.) (cid:51) (retrieved sums.) Table 1: A comparison of different guided neural abstractive summarization models.",
"put.",
"Like most recent summarization models, our model is based on neural encoder-decoders, instantiated with contextualized pretrained language models, including BERT (Devlin et al., 2019) and BART (Lewis et al., 2020).",
"With this as a strong starting point, we make modifications allowing the model to attend to both the source documents and the guidance signals when generating outputs.",
"As shown in Figure 1, we can provide automatically extracted or user-specified guidance to the model during test time to constrain the model output.",
"At training time, to encourage the model to pay close attention to the guidance, we propose to use an oracle to select informative guidance signals a simple modification that nonetheless proved essential in effective learning of our guided summarization models.",
"Using this framework, we investigate four types of guidance signals: (1) highlighted sentences in the source document, (2) keywords, (3) salient relational triples in the form of (subject, relation, object), and (4) retrieved summaries.",
"We evaluate our methods on 6 popular summarization benchmarks.",
"Our best model, using highlighted sentences as guidance, can achieve state-of-the-art performance on 4 out of the 6 datasets, including 1.28/0.79/1.13 ROUGE-1/2/L improvements over previous state-of-the-art model on the widely-used CNN/DM dataset.",
"In addition, we perform in-depth analyses of different guidance signals and demonstrate that they are complementary to each other in that there is potential to aggregate their outputs together and obtain further improvements.",
"An analysis of the results also reveals that our guided models can generate more faithful summaries and more novel words.",
"Finally, we demonstrate that we can control the output by providing user-specified guidance signals, with different provided signals resulting in qualitatively different summaries.",
"Neural abstractive summarization typically takes a source document x consisting of multiple sentences x 1 , , x | x | , runs them through an encoder to generate representations, and passes them to a decoder that outputs the summary y one target word at a time.",
"Model parameters are trained to maximize the conditional likelihood of the outputs in a parallel training corpus (cid:104)X , Y(cid:105) : arg max (cid:88) (cid:104) x i , y i (cid:105)(cid:104)X , Y(cid:105) log p ( y i | x i ; ) .",
"Several techniques have been proposed to improve the model architecture.",
"For example, models of copying (Gu et al., 2016; See et al., 2017; Gehrmann et al., 2018) allow words to be copied directly from the input to the output, and models of coverage discourage the model from generating repetitive words (See et al., 2017).",
"Within this overall framework, the types of information that go into g and the method for incorporating this information into the model may vary.",
"While there are early attempts at non-neural guided models (Owczarzak and Dang, 2010; Genest and La-palme, 2012), here we focus on neural approaches Input Embedding FeedForward Add & Norm Self Attention Add & Norm N enc Input Embedding FeedForward Add & Norm Self Attention Add & Norm N enc Source Document GuidanceSignal shared shared shared shared FeedForward Add & Norm Self Attention Add & Norm FeedForward Add & Norm Self Attention Add & Norm Output Embedding FeedForward Add & Norm Self Attention Add & Norm N dec Add & Norm Cross Attention Add & Norm Cross Attention Output Probabilities Softmax Linear shared Figure 2: General framework of our model.",
"and summarize recent work in Table 1.",
"For example, Li et al. (2018) first generate a set of keywords , which are then incorporated into the generation process by an attention mechanism.",
"Cao et al. (2018) propose to search the training corpus and retrieve datapoint (cid:104) x j , y j (cid:105) whose input document x j is most relevant to the current input x , and treat y j as a candidate template to guide the summarization process.",
"Besides, Jin et al. (2020) and Zhu et al. (2020) extract relational triples in the form of (subject, relation, object) from source documents and represent them by graph neural networks.",
"The decoders then attend to the extracted relations to generate faithful summaries.",
"A concurrent work by Saito et al. (2020) propose to extract keywords or highlighted sentences using saliency models and feed them to summarization models.",
"There are also works on controlling the summary length (Kikuchi et al., 2016; Liu et al., 2018b) and styles (Fan et al., 2018) by explicitly feeding the desired features to the model.",
"In addition, Liu et al. (2018a) and Chen and Bansal (2018) follow a two-stage paradigm, in which a subset of the source document { x i 1 , , x i n } will first be selected by a pretrained extractor as highlighted sentences and then be fed into the model encoder in the second stage with the rest of the text discarded.",
"Figure 2 illustrates the general framework of our proposed method.",
"We feed both the source documents and various types of guidance signals to the model.",
"Specifically, we experiment with guidance signals including highlighted sentences, keywords, relations, and retrieved summaries, although the framework is general and could be expanded to other varieties of guidance as well.",
"We adopt the Transformer model (Vaswani et al., 2017) as our backbone architecture, instantiated with BERT or BART, which can be separated into the encoder and decoder components.",
"Similar to the Transformer model, each of our encoders is composed of N enc + 1 layers, with each encoding layer containing both a self-attention block and a feed-forward block: x = LN ( x + SELFATTN ( x )) , x = LN ( x + FEEDFORWARD ( x )) , where LN denotes layer normalization.",
"Note the source document and guidance signal do not interact with each other during encoding.",
"We share the parameters of the bottom N enc layers and the word embedding layers between the two encoders, because",
"1) this can reduce the computation and memory requirements;",
"2) we conjecture that the differences between source documents and guidance signals should be high-level, which are captured at top layers of the encoders.",
"Different from the standard Transformer, our decoder has to attend to both the source document and guidance signal instead of just one input.",
"Concretely, our decoder is composed of N dec identical layers, with each layer containing four blocks.",
"After the self-attention block, the decoder will first attend to the guidance signals and generate the corresponding representations, and hence the guidance signal will inform the decoder which part of the source documents should be focused on.",
"Then, the decoder will attend to the whole source document based on the guidance-aware representations.",
"Finally, the output representation will be fed into the feed-forward block: y = LN ( y + SELFATTN ( y )) , y = LN ( y + CROSSATTN ( y , g )) , y = LN ( y + CROSSATTN ( y , x )) , y = LN ( y + FEEDFORWARD ( y )) .",
"Ideally, the second cross-attention block allows the model to fill in the details of the input guidance signal, such as finding the name of an entity by searching through co-reference chains.",
"Before delving into the specifics of the types of guidance signal we used, we first note an important detail in training our model.",
"At test time, there are two ways we can define the guidance signal:",
"1) manual definition where an interested user de-fines the guidance signal g by hand, and",
"2) automatic prediction where an automated system is used to infer the guidance signal g from input x .",
"We demonstrate results for both in experiments.",
"At training time, it is often prohibitively expensive to obtain manual guidance.",
"Hence, we focus on two varieties of generating them:",
"1) automatic prediction using x as detailed above, and",
"2) oracle extraction where we use both x and y to deduce a value g that is most likely useful in generating y .",
"Theoretically, automatic prediction has the advantage of matching the training and testing conditions of a system that will also receive automatic predictions at test time.",
"However, as we will show in experiments, the use of oracle guidance has a large advantage of generating guidance signals that are highly informative, thus encouraging the model to pay more attention to them at test time.",
"With this in mind, we describe the four varieties of guidance signal we experiment with, along with their automatic and oracle extraction methods.",
"Highlighted Sentences.",
"The success of extractive approaches have demonstrated that we can extract a subset of sentences { x i 1 , , x i n } from the source document and concatenate them to form a summary.",
"Inspired by this, we explicitly inform our model which subset of source sentences should be highlighted using extractive models.",
"We perform oracle extraction using a greedy search algorithm (Nallapati et al., 2017; Liu and Lapata, 2019) to find a set of sentences in the source document that have the highest ROUGE scores with the reference (detailed in Appendix) and treat these as our guidance g .",
"At test time, we use pretrained extractive summarization models (BertExt (Liu and Lapata, 2019) or MatchSum (Zhong et al., 2020) in our experiments) to perform automatic prediction.",
"occur in an actual summary, which could distract the model from focusing on the desired aspects of the input.",
"Therefore, we also try to feed our model with a set of individual keywords { w 1 , . . . , w n } from the source document.",
"For oracle extraction, we first use the greedy search algorithm mentioned above to select a subset of input sentences, then use TextRank (Mihalcea and Tarau, 2004) to extract keywords from these sentences.",
"We also filter the keywords that are not in the target summary.",
"The remaining keywords are then fed to our models.",
"For automatic prediction, we use another neural model (BertAbs (Liu and Lapata, 2019) in the experiments) to predict the keywords in the target summary.",
"Relations.",
"Relations are typically represented in the form of relational triples, with each triple containing a subject, a relation, and an object.",
"For example, Barack Obama was born in Hawaii will create a triple (Barack Obama, was born in, Hawaii) .",
"For oracle extraction, we first use Stanford Ope-nIE (Angeli et al., 2015) to extract relational triples from the source document.",
"Similar to how we select highlighted sentences, we then greedily select a set of relations that have the highest ROUGE score with the reference, which are then flattened and treated as guidance.",
"For automatic prediction, we use another neural model (similarly, BertAbs) to predict the relation triples on the target side.",
"Retrieved Summaries.",
"Intuitively, gold summaries of similar documents with the input can provide a reference point to guide the summarization.",
"Therefore, we also try to retrieve relevant summaries from the training data (cid:104)X , Y(cid:105) .",
"For oracle extraction, we directly retrieve five datapoints {(cid:104) x 1 , y 1 (cid:105) , . . . , (cid:104) x 5 , y 5 (cid:105)} from training data whose summaries y i are most similar to the target summary y using Elastic Search.",
"2 For automatic prediction at test time, we retrieve five datapoints whose source documents x i are most similar to each input source document x instead.",
"We experiment on 6 datasets (statistics in Table 2):",
"Reddit (Kim et al., 2019) is a highly abstractive dataset and we use its TIFU-long version.",
"XSum (Narayan et al., 2018a) is an abstractive dataset that contains one-sentence summaries of online articles from BBC.",
"PubMed (Cohan et al., 2018) is relatively extractive and is collected from scientific papers.",
"CNN/DM (Hermann et al., 2015; Nallapati et al., 2016) is a widely-used summarization dataset consisting of news articles and associated highlights as summaries.",
"We use its non-anonymized version.",
"WikiHow (Koupaee and Wang, 2018) is extracted from an online knowledge base and requires high level of abstraction.",
"New York Times (NYT) (Sandhaus, 2008) is a dataset that consists of news articles and their associated summaries.",
"3 We follow Kedzie et al. (2018) to preprocess and split the dataset.",
"Our baselines include the following models: BertExt (Liu and Lapata, 2019) is an extractive model whose parameters are initialized with BERT (Devlin et al., 2019).",
"BertAbs (Liu and Lapata, 2019) is an abstractive model with encoder initialized with BERT and trained with a different optimizer than its decoder.",
"MatchSum (Zhong et al., 2020) is an extractive model that reranks the candidate summaries produced by BertExt and achieves state-of-the-art extractive results on various summarization datasets.",
"BART (Lewis et al., 2020) is an state-of-the-art abstractive summarization model pretrained with a denoising autoencoding objective.",
"We build our models based on both BertAbs and BART, and follow their hyperparameter settings to train our summarizers.",
"For our model built on BertAbs, there are 13 encoding layers, with the top layer randomly initialized and separately trained 3 https://catalog.ldc.upenn.edu/ LDC2008T19 Model Guide R-1 R-2 R-L BertExt (Base) -43.25 20.24 39.63 BertAbs -41.72 19.39 38.76 BertAbs (Ours) -41.58 18.99 38.56 Ours BertAbs + Sentence Auto.",
"between the two encoders.",
"For our model built on BART, there are 24 encoding layers, with the top layer initialized with pretrained parameters yet separately trained between the two encoders.",
"The first cross-attention block of the decoder is randomly initialized whereas the second cross-attention block is initialized with pretrained parameters.",
"BertAbs is used to predict guidance signals of relations and keywords during test time.",
"Unless otherwise stated, we use oracle extractions at training time.",
"We first compare different kinds of guidance signals on the CNN/DM dataset using BertAbs, then evaluate the best guidance on the other five datasets using both BertAbs and BART.",
"Performance of Different Guidance Signals.",
"As shown in Table 3, if we feed the model with automatically constructed signals, feeding either highlighted sentences or keywords can outperform the abstractive summarization baseline by a large margin.",
"Especially, feeding highlighted sentences can outperform the best baseline by more than 1 Model R-1 R-2 R-L Oracle 55.76 33.22 51.83 Extractive BertExt (Base) 43.25 20.24 39.63 BertExt (Large) 43.85 20.34 39.90 MatchSum 44.41 20.86 40.55 Abstractive BertAbs 41.72 19.39 38.76 BertAbs (Ours) 41.58 18.99 38.56 BertExtAbs 42.13 19.60 39.18 BART 44.16 21.28 40.90 BART (Ours) 44.66 21.53 41.35 Ours BertAbs + BertExt 43.78 20.66 40.66 BART + MatchSum 45.94 22.32 42.48 Table 4: Comparisons with state-of-the-art models on CNN/DM.",
"ROUGE-L point.",
"Using relations or retrieved summaries as guidance will not improve the baseline performance, likely because it is hard to predict these signals during test time.",
"If we use an oracle to select the guidance signals, all varieties of guidance can improve the baseline performance significantly, with the best-performing model achieving a ROUGE-1 score of 55.18.",
"The results indicate that",
"1) the model performance has the potential to be further improved given a better guidance prediction model;",
"2) the model does learn to depend on the guidance signals.",
"Comparisons with State of the Art.",
"We then try to build our model on the state-of-the-art model, using highlighted sentences as guidance as it achieves the best performance on CNN/DM.",
"First, we build our model on BART and train it with oracle-extracted highlighted sentences as guidance.",
"Then, we use MatchSum to predict the guidance at test time.",
"From Table 4, we can see that our model can achieve over 1 ROUGE-1/L point improvements compared with the state-of-the-art models, indicating the effectiveness of the proposed methods.",
"Performance on Other Datasets.",
"We report the performance of the highlighted sentence model on all the other five datasets in Table 5.",
"Generally, the model works better when the dataset is more extractive.",
"For abstractive datasets such as Reddit and XSum, our model cannot achieve performance increases when the abstractive summarization base-1-grams 2-grams 3-grams 0 10 20 30 N o v e l n g r a m s % 1-grams 2-grams 3-grams 0 1 2 R ec a ll o f N o v e l n g r a m s % BertAbs Sentence Keyword Relation Retrieve Figure 3: Our model can generate more novel words and achieve higher recall of novel words in the gold reference compared with baseline.",
"line is already rather strong.",
"For extractive datasets such as PubMed and NYT, on the other hand, our model can achieve some improvements over the baselines even though the abstractive baseline outperforms the extractive oracle model in some cases.",
"We perform extensive analyses on CNN/DM to gain insights into our (BERT-based) models.",
"Unless otherwise stated, we use oracle extractions at training time and automatic prediction at test time.",
"Novel n -grams.",
"While we sometimes provide information extracted from the source document as guidance signals, it is unclear whether the model will over-fit to and regurgitate this guidance, or still generate novel expressions.",
"To measure this, we count the number of novel n -grams in the output summaries, namely n -grams that do not appear in the source document.",
"As shown in Figure 3, all of our guided models in fact generate more novel n grams than the baseline, likely because at training time the model is trained to compress and paraphrase the extracted information from the source document into the gold summary.",
"In addition, our models cover more novel n -grams that are in the gold reference than baseline.",
"The results indicate that our guided models can indeed generate novel expressions, and are not referencing the input guidance too strongly.",
"Complementarity of Different Guidance Signals.",
"While some guidance signals achieve worse performance than others, it is still possible to aggregate their outputs and obtain better performance if their outputs are diverse and they complement each-other.",
"To verify this hypothesis, we try to select the best output of the four guidance signals for each test datapoint and investigate if we can aggregate their best outputs and achieve better performance.",
"score of each output of the four guidance signals and pick the best one.",
"As shown in Table 6, despite the fact that the highlighted sentence signal achieves the best overall performance, it still under-performs one of the other three varieties of guidance more than 60% of the time.",
"In addition, by aggregating their best outputs together, we can achieve a ROUGE-1/L point of 48.30/45.15, which significantly outperforms any single guided model.",
"Further, we try to aggregate these guidance signals in a pairwise manner, and Table 7 demonstrates that each guidance signal is complementary to each other to some extent.",
"Thus, we can safely conclude that each type of guidance signal has its own merits and one promising direction is to utilize a system combination method such as Hong et al. (2015) to aggregate the results together.",
"Controllability.",
"It is also of interest what effect this guidance has on the model outputs qualitatively.",
"We sample several generated outputs (Table 8) and find that different provided signals can result in different outputs.",
"Especially, for our sentence-guided model, providing the model with by running tissue paper over his son seth makes him sleep enables the model to generate the exact same sentence, and when the model is fed with one grateful viewer of the video commented... , it will generate one viewer commented... .",
"The examples demonstrate that our model can generate summaries mostly faithful to the guidance signals while also performing abstraction.",
"Faithfulness of Generated Summaries.",
"We also evaluate whether our generated summaries are faithful to the source document.",
"We randomly sample 100 datapoints from the test set and ask 3 people from Amazon Mechanical Turk to evaluate their factual correctness.",
"Each person gives a score between 1 and 3, with 3 being perfectly faithful to the source document.",
"Table 9 shows that our guided model can generate more faithful summaries compared with the baseline.",
"Necessity of Using Oracles During Training.",
"As mentioned previously, we use an oracle to select guidance signals during training.",
"In this part, we investigate if we can provide automatically constructed guidance to the model during training as well.",
"Table 10 shows that this methodology will lead to significantly worse performance.",
"We con-Model Guidance Output Ref. nathan dailo has found a way to get his son to sleep in 42 seconds.",
"jecture that this is because when the relevancy between guidance and reference is weakened, the model will not learn to depend on the guidance signals and thus the model will be reduced to the original abstractive summarization baseline.",
"We propose a general framework for guided neural summarization, using which we investigate four types of guidance signals and achieve state-of-the-art performance on various popular datasets.",
"We demonstrate the complementarity of the four guid-Train Test R-1 R-2 R-L Oracle Auto 43.78 20.66 40.66 Oracle 55.18 32.54 52.06 Auto Auto 41.61 19.04 38.65 Oracle 43.07 20.79 40.13 Table 10: Using automatically constructed guidance during training degrades the performance significantly.",
"ance signals, and find that our models can generate more novel words and more faithful summaries.",
"We also show that we can control the output by providing user-specified guidance signals.",
"Given the generality of our framework, this opens the possibility for several future research directions including",
"1) developing strategies to ensemble models under different guidance signals;",
"2) incorporating sophisticated techniques such as copy or coverage over the source document, the guidance signal, or both; and",
"3) experimenting with other kinds of guidance signals such as salient elementary discourse units.",
"We thank Shruti Rijhwani, Yiran Chen, Jiacheng Xu and anonymous reviewers for valuable feedback and helpful suggestions.",
"This work was supported in part by a grant under the Northrop Grumman SO-TERIA project and the Air Force Research Laboratory under agreement number FA8750-19-2-0200.",
"The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon.",
"The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Air Force Research Laboratory or the U.S. Government."
]
| [
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"result",
"objective",
"objective",
"method",
"result",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other"
]
|
[
"The ability to learn from large unlabeled corpora has allowed neural language models to advance the frontier in natural language understanding.",
"However, existing self-supervision techniques operate at the word form level, which serves as a surrogate for the underlying semantic content.",
"This paper proposes a method to employ weak-supervision directly at the word sense level.",
"Our model, named SenseBERT, is pre-trained to predict not only the masked words but also their WordNet supersenses.",
"Accordingly, we attain a lexical-semantic level language model, without the use of human annotation.",
"SenseBERT achieves sig-nificantly improved lexical understanding, as we demonstrate by experimenting on SemEval Word Sense Disambiguation, and by attaining a state of the art result on the Word in Context' task.",
"Neural language models have recently undergone a qualitative leap forward, pushing the state of the art on various NLP tasks.",
"Together with advances in network architecture (Vaswani et al., 2017), the use of self-supervision has proven to be central to these achievements, as it allows the network to learn from massive amounts of unannotated text.",
"The self-supervision strategy employed in BERT (Devlin et al., 2019) involves masking some of the words in an input sentence, and then training the model to predict them given their context.",
"Other proposed approaches for self-supervised objectives, including unidirectional (Radford et al., 2019), permutational (Yang et al., 2019), or word insertion-based (Chan et al., 2019) methods, operate similarly, over words.",
"However, since a given word form can possess multiple meanings ( e.g. , the word bass' can refer to a fish, a guitar, a type of singer, etc. ), the word itself is merely a surrogate of its actual meaning in a given context, referred to as its sense .",
"Indeed, the word-form level is viewed as a surface level which often introduces challenging ambiguity (Navigli, 2009).",
"In this paper, we bring forth a novel methodology for applying weak-supervision directly on the level of a word's meaning.",
"By infusing word-sense information into BERT's pre-training signal, we explicitely expose the model to lexical semantics when learning from a large unannotated corpus.",
"We call the resultant sense-informed model SenseBERT .",
"Specifically, we add a masked-word sense prediction task as an auxiliary task in BERT's pre-training.",
"Thereby, jointly with the standard word-form level language model, we train a semantic-level language model that predicts the missing word's meaning.",
"Our method does not require sense-annotated data; self-supervised learning from unannotated text is facilitated by using WordNet (Miller, 1998), an expert constructed inventory of word senses, as weak supervision.",
"We focus on a coarse-grained variant of a word's sense, referred to as its WordNet supersense , in order to mitigate an identified brittleness of fine-grained word-sense systems, caused by arbitrary sense granularity, blurriness, and general subjectiveness (Kilgarriff, 1997; Schneider, 2014).",
"WordNet lexicographers organize all word senses into 45 supersense categories, 26 of which are for nouns, 15 for verbs, 3 for adjectives and 1 for adverbs (see full supersense table in the supplementary materi-als).",
"Disambiguating a word's supersense has been widely studied as a fundamental lexical categorization task (Ciaramita and Johnson, 2003; Basile, 2012; Schneider and Smith, 2015).",
"We employ the masked word's allowed supersenses list from WordNet as a set of possible labels for the sense prediction task.",
"The labeling of words with a single supersense ( e.g. , sword' has only the supersense noun.artifact) is straightforward: We train the network to predict this supersense given the masked word's context.",
"As for words with multiple supersenses ( e.g. , bass' can be: noun.food, noun.animal, noun.artifact, noun.person, etc. ), we train the model to predict any of these senses, leading to a simple yet effective soft-labeling scheme.",
"We show that SenseBERT BASE outscores both BERTBASE and BERTLARGE by a large margin on a supersense variant of the SemEval Word Sense Disambiguation (WSD) data set standardized in Raganato et al. (2017).",
"Notably, SenseBERT receives competitive results on this task without fune-tuning, i.e. , when training a linear classifier over the pretrained embeddings, which serves as a tes-tament for its self-acquisition of lexical semantics.",
"Furthermore, we show that SenseBERT BASE surpasses BERTLARGE in the Word in Context (WiC) task (Pilehvar and Camacho-Collados, 2019) from the SuperGLUE benchmark (Wang et al., 2019), which directly depends on word-supersense awareness.",
"A single SenseBERT LARGE model achieves state of the art performance on WiC with a score of 72 .",
"14 , improving the score of BERTLARGE by 2 .",
"5 points.",
"Neural network based word embeddings first appeared as a static mapping (non-contextualized), where every word is represented by a constant pretrained embedding (Mikolov et al., 2013; Pennington et al., 2014).",
"Such embeddings were shown to contain some amount of word-sense information (Iacobacci et al., 2016; Yuan et al., 2016; Arora et al., 2018; Le et al., 2018).",
"Additionally, sense embeddings computed for each word sense in the word-sense inventory (e.g. WordNet) have been employed, relying on hypernymity relations (Rothe and Schutze, 2015) or the gloss for each sense (Chen et al., 2014).",
"These approaches rely on static word embeddings and require a large amount of annotated data per word sense.",
"The introduction of contextualized word embeddings (Peters et al., 2018), for which a given word's embedding is context-dependent rather than precomputed, has brought forth a promising prospect for sense-aware word embeddings.",
"Indeed, visualizations in Reif et al. (2019) show that sense sensitive clusters form in BERT's word embedding space.",
"Nevertheless, we identify a clear gap in this abilty.",
"We show that a vanilla BERT model trained with the current word-level self-supervision, burdened with the implicit task of disambiguating word meanings, often fails to grasp lexical semantics, exhibiting high supersense misclassi-fication rates.",
"Our suggested weakly-supervised word-sense signal allows SenseBERT to signifi-cantly bridge this gap.",
"Moreover, SenseBERT exhibits an improvement in lexical semantics ability (reflected by the Word in Context task score) even when compared to models with WordNet infused linguistic knowledge.",
"Specifically we compare to Peters et al. (2019) who re-contextualize word embeddings via a word-to-entity attention mechanism (where entities are WordNet lemmas and synsets), and to Loureiro and Jorge (2019) which construct sense embeddings from BERT's word embeddings and use the WordNet graph to enhance coverage (see quantitative comparison in table 3).",
"In this section, we present our proposed method for integrating word sense-information within SenseBERT's pre-training.",
"We start by describing the vanilla BERT architecture in subsection 3.1.",
"We conceptually divide it into an internal transformer encoder and an external mapping W which translates the observed vocabulary space into and out of the transformer encoder space [see illustration in figure",
"1(a)].",
"In the subsequent subsections, we frame our contribution to the vanilla BERT architecture as an addition of a parallel external mapping to the words supersenses space, denoted S [see illustration in figure",
"1(b)].",
"Specifically, in section 3.2 we describe the loss function used for learning S in parallel to W , effectively implementing word-form and word-sense multi-task learning in the pre-training stage.",
"Then, in section 3.3 we describe our methodology for adding supersense information in S to the initial Transformer embedding, in parallel to word-level information added by W .",
"In section 3.4 we address the issue of supersense prediction for out-of-vocabulary words, and in section 3.5 we describe our modification of BERT's masking strategy, prioritizing single-supersensed words which carry a clearer semantic signal.",
"The input to BERT is a sequence of words { x ( j ) { 0 , 1 } DW } Nj =1 where 15% of the words are re-+",
"placed by a [MASK] token (see treatment of sub-word tokanization in section 3.4).",
"Here N is the input sentence length, DW is the word vocabulary size, and x ( j ) is a 1-hot vector corresponding to the j th input word.",
"For every masked word, the output of the pretraining task is a word-score vector y words RDW containing the per-word score.",
"BERT's architecture can be decomposed to (1) an internal Transformer encoder architecture (Vaswani et al., 2017) wrapped by (2) an external mapping to the word vocabulary space, denoted by W .",
"1 The Transformer encoder operates over a sequence of word embeddings v ( j ) input R d , where d is the Transformer encoder's hidden dimension.",
"These are passed through multiple attention-based Transformer layers, producing a new sequence of contextualized embeddings at each layer.",
"The Transformer encoder output is the final sequence of contextualized word embeddings v ( j ) output R d .",
"The external mapping W R d DW is effectively a translation between the external word vocabulary dimension and the internal Transformer dimension.",
"Original words in the input sentence are translated into the Transformer block by applying this mapping (and adding positional encoding vectors p ( j ) R d ): v ( j ) input = W x ( j ) + p ( j ) (1) 1 For clarity, we omit a description of the Next Sentence Prediction task which we employ as in Devlin et al. (2019).",
"The word-score vector for a masked word at position j is extracted from the Transformer encoder output by applying the transpose: y words = W (cid:62) v ( j ) output [see illustration in figure",
"1(a)].",
"The use of the same matrix W as the mapping in and out of the transformer encoder space is referred to as weight tying (Inan et al., 2017; Press and Wolf, 2017).",
"Given a masked word in position j , BERT's original masked-word prediction pre-training task is to have the softmax of the word-score vector y words = W (cid:62) v ( j ) output get as close as possible to a 1-hot vector corresponding to the masked word.",
"This is done by minimizing the cross-entropy loss between the softmax of the word-score vector and a 1-hot vector corresponding to the masked word: LLM = log p ( w | context ) , (2) where w is the masked word, the context is composed of the rest of the input sequence, and the probability is computed by: p ( w | context ) = exp (cid:0) y words w (cid:1) (cid:80) w (cid:48) exp (cid:0) y words w (cid:48) (cid:1) , (3) where y words w denotes the w th entry of the word-score vector.",
"Jointly with the above procedure for training the word-level language model of SenseBERT, we train the model to predict the supersense of every masked word, thereby training a semantic-level language model.",
"This is done by adding a parallel external mapping to the words supersenses space, denoted S R d DS [see illustration in figure",
"1(b)], where DS = 45 is the size of supersenses vocabulary.",
"Ideally, the objective is to have the softmax of the sense-score vector y senses RDS := S (cid:62) v ( j ) output get as close as possible to a 1-hot vector corresponding to the word's supersense in the given context.",
"For each word w in our vocabulary, we employ the WordNet word-sense inventory for constructing A ( w ) , the set of its allowed supersenses.",
"Specifically, we apply a WordNet Lemmatizer on w , extract the different synsets that are mapped to the lemmatized word in WordNet, and define A ( w ) as the union of supersenses coupled to each of these synsets.",
"As exceptions, we set A ( w ) = for the following:",
"(i) short words (up to 3 characters), since they are often treated as abbreviations,",
"(ii) stop words, as WordNet does not contain their main synset (e.g. he' is either the element helium or the hebrew language according to WordNet), and",
"(iii) tokens that represent part-of-word (see section 3.4 for further discussion on these tokens).",
"Given the above construction, we employ a combination of two loss terms for the supersense-level language model.",
"The following allowed-senses term maximizes the probability that the predicted sense is in the set of allowed supersenses of the masked word w : L allowedSLM = log p ( s A ( w ) | context ) = log (cid:88) s A ( w ) p ( s | context ) , (4) where the probability for a supersense s is given by: p ( s | context ) = exp( y senses s ) (cid:80) s (cid:48) exp( y senses s (cid:48) ) .",
"(5) The soft-labeling scheme given above, which treats all the allowed supersenses of the masked word equally, introduces noise to the supersense labels.",
"We expect that encountering many contexts in a sufficiently large corpus will reinforce the correct labels whereas the signal of incorrect labels will diminish.",
"To illustrate this, consider the following examples for the food context: 1. This bass is delicious (supersenses: noun.food, noun.artifact, etc. ) 2. This chocolate is delicious (supersenses: noun.food, noun.attribute, etc. ) 3. This pickle is delicious (supersenses: noun.food, noun.state, etc. ) Masking the marked word in each of the examples results in three identical input sequences, each with a different sets of labels.",
"The ground truth label, noun.food, appears in all cases, so that its probability in contexts indicating food is increased whereas the signals supporting other labels cancel out.",
"While L allowedSLM pushes the network in the right direction, minimizing this loss could result in the network becoming overconfident in predicting a strict subset of the allowed senses for a given word, i.e., a collapse of the prediction distribution.",
"This is especially acute in the early stages of the training procedure, when the network could converge to the noisy signal of the soft-labeling scheme.",
"To mitigate this issue, the following regularization term is added to the loss, which encourages a uniform prediction distribution over the allowed supersenses: L regSLM = (cid:88) s A ( w ) 1 | A ( w ) | log p ( s | context ) , (6) i.e. , a cross-entropy loss with a uniform distribution over the allowed supersenses.",
"Overall, jointly with the regular word level language model trained with the loss in eq.",
"2, we train the semantic level language model with a combined loss of the form: LSLM = L allowedSLM + L reg SLM .",
"Though in principle two different matrices could have been used for converting in and out of the Tranformer encoder, the BERT architecture employs the same mapping W .",
"This approach, referred to as weight tying, was shown to yield theoretical and pracrical benefits (Inan et al., 2017; Press and Wolf, 2017).",
"Intuitively, constructing the Transformer encoder's input embeddings from the same mapping with which the scores are computed improves their quality as it makes the input more sensitive to the training signal.",
"We follow this approach, and insert our newly proposed semantic-level language model matrix S in the input in addition to W [as depicted in figure",
"1(b)], such that the input vector to the Transformer encoder (eq. 1) is modified to obey: v ( j ) input = ( W + SM ) x ( j ) + p ( j ) , (8) where p ( j ) are the regular positional embeddings as used in BERT, and M RDS DW is a static 0/1 matrix converting between words and their allowed WordNet supersenses A ( w ) (see construction details above).",
"The above strategy for constructing v ( j ) input allows for the semantic level vectors in S to come into play and shape the input embeddings even for words which are rarely observed in the training corpus.",
"For such a word, the corresponding row in W is potentially less informative, since due to the low word frequency the model did not have sufficient chance to adequately learn it.",
"However, since the model learns a representation of its supersense, the corresponding row in S is informative of the semantic category of the word.",
"Therefore, the input embedding in eq.",
"8 can potentially help the model to elicit meaningful information even when the masked word is rare, allowing for better exploitation of the training corpus.",
"At the pre-processing stage, when an out-of-vocabulary (OOV) word is encountered in the corpus, it is divided into several in-vocabulary sub-word tokens.",
"For the self-supervised word prediction task (eq. 2) masked sub-word tokens are straightforwardly predicted as described in section 3.1.",
"In contrast, word-sense supervision is only meaningful at the word level.",
"We compare two alternatives for dealing with tokenized OOV words for the supersense prediction task (eq. 7).",
"In the first alternative, called 60 K vocabulary , we augment BERT's original 30 K-token vocabulary (which roughly contained the most frequent words) with additional 30K new words, chosen according to their frequency in Wikipedia.",
"This vocabulary increase allows us to see more of the corpus as whole words for which supersense prediction is a meaningful operation.",
"Additionally, in accordance with the discussion in the previous subsection, our sense-aware input embedding mechanism can help the model extract more information from lower-frequency words.",
"For the cases where a sub-word token is chosen for masking, we only propagate the regular word level loss and do not train the supersense prediction task.",
"The above addition to the vocabulary results in an increase of approximately 23 M parameters over the 110 M parameters of BERTBASE and an increase of approximately 30 M parameters over the 340 M parameters of BERTLARGE (due to different embedding dimensions d = 768 and d = 1024 , respec-tively).",
"It is worth noting that similar vocabulary sizes in leading models have not resulted in increased sense awareness, as reflected for example in the WiC task results (Liu et al., 2019).",
"As a second alternative, referred to as average embedding , we employ BERT's regular 30 K-token",
"vocabulary and employ a whole-word-masking strategy.",
"Accordingly, all of the tokens of a tokenized OOV word are masked together.",
"In this case, we train the supersense prediction task to predict the WordNet supersenses of this word from the average of the output embeddings at the location of the masked sub-words tokens.",
"Words that have a single supersense are good anchors for obtaining an unambiguous semantic signal.",
"These words teach the model to accurately map contexts to supersenses, such that it is then able to make correct context-based predictions even when a masked word has several supersenses.",
"We therefore favor such words in the masking strategy, choosing 50% of the single-supersensed words in each input sequence to be masked.",
"We stop if 40% of the overall 15% masking budget is filled with single-supersensed words (this rarly happens), and in any case we randomize the choice of the remaining words to complete this budget.",
"As in the original BERT, 1 out of 10 words chosen for masking is shown to the model as itself rather than replaced with [MASK].",
"A SenseBERT pretrained as described in section 3 (with training hyperparameters as in Devlin et al. (2019)), has an immediate non-trivial bi-product.",
"The pre-trained mapping to the supersenses space, denoted S , acts as an additional head predicting a word's supersense given context [see figure",
"1(b)].",
"We thereby effectively attain a semantic-level lan-SenseBERT BASE SemEval-SS Fine-tuned 30 K no OOV 81.9 30 K average OOV 82.7 60 K no OOV 83 Table 1: Testing variants for predicting supersenses of rare words during SenseBERT's pretraining, as described in section 5.1.",
"guage model that predicts the missing word's meaning jointly with the standard word-form level language model.",
"We illustrate the resultant mapping in figure 2, showing a UMAP dimensionality reduction (McInnes et al., 2018) of the rows of S , which corresponds to the different supersenses.",
"A clear clustering according to the supersense part-of-speech is apparent in figure",
"2(a).",
"We further identify finer-grained semantic clusters, as shown for example in figure",
"2(b) and given in more detail in the supplementary materials.",
"SenseBERT's semantic language model allows predicting a distribution over supersenses rather than over words in a masked position.",
"Figure",
"3(a) shows the supersense probabilities assigned by SenseBERT in several contexts, demonstrating the model's ability to assign semantically meaningful categories to the masked position.",
"Finally, we demonstrate that SenseBERT enjoys",
"Figure",
"3(b) shows example sentences and their supersense prediction by the pretrained model.",
"Where a vanilla BERT would see only the words of the sentence Dan cooked a bass on the grill, SenseBERT would also have access to the supersense abstraction: [Person] [created] [food] on the [artifact].",
"This sense-level perspective can help the model extract more knowledge from every training example, and to generalize semantically similar notions which do not share the same phrasing.",
"In this section, we present quantitative evaluations of SenseBERT, pre-trained as described in section 3. We test the model's performance on a supersense-based variant of the SemEval WSD test sets standardized in Raganato et al. (2017), and on the Word in Context (WiC) task (Pilehvar and Camacho-Collados, 2019) (included in the recently introduced SuperGLUE benchmark (Wang et al., 2019)), both directly relying on the network's ability to perform lexical semantic categorization.",
"We first report a comparison of the two methods described in section 3.4 for predicting the supersenses of rare words which do not appear in BERT's original vocabulary.",
"The first 60 K vocabulary method enriches the vocabulary and the second average embedding method predicts a supersense from the average embeddings of the sub-word tokens comprising an OOV word.",
"During fine-tuning, when encountering an OOV word we predict the supersenses from the rightmost sub-word token in the 60 K vocabulary method and from the average of the sub-word tokens in the average embedding method.",
"As shown in table 1, both methods perform comparably on the SemEval supersense disambiguation task (see following subsection), yielding an improvement over the baseline of learning supersense information only for whole words in BERT's original 30 K-token vocabulary.",
"We continue with the 60 K-token vocabulary for the rest of the experiments, but note the average embedding option as a viable competitor for predicting word-level semantics.",
"We test SenseBERT on a Word Supersense Disambiguation task, a coarse grained variant of the common WSD task.",
"We use SemCor (Miller et al., 1993) as our training dataset ( 226 , 036 annotated examples), and the SenseEval (Edmonds and Cotton, 2001; Snyder and Palmer, 2004) / SemEval (Pradhan et al., 2007; Navigli et al., 2013; Moro and Navigli, 2015) suite for evaluation (over-all 7253 annotated examples), following Raganato et al. (2017).",
"For each word in both training and test sets, we change its fine-grained sense label to its corresponding WordNet supersense, and therefore train the network to predict a given word's supersense.",
"We name this Supersense disambiguation task SemEval-SS.",
"See figure",
"4(a) for an example SemEval-SS Frozen SemEval-SS Fine-tuned Word in Context BERTBASE 65.1 79.2 BERTLARGE 67.3 81.1 69.6 SenseBERT BASE 75.6 83.0 70.3 SenseBERT LARGE 79.5 83.7 72.1 Table 2: Results on a supersense variant of the SemEval WSD test set standardized in Raganato et al. (2017), which we denote SemEval-SS, and on the Word in Context (WiC) dataset (Pilehvar and Camacho-Collados, 2019) included in the recently introduced SuperGLUE benchmark (Wang et al., 2019).",
"We show results on the SemEval-SS task for two different training schemes.",
"In the first, we trained a linear classifier over the frozen' output embeddings of the examined model we do not change the the trained SenseBERT's parameters in this scheme.",
"This Frozen setting is a test for the amount of basic lexical semantics readily present in the pre-trained model, easily extricable by further downstream tasks (reminiscent of the semantic probes employed in Hewitt and Manning (2019); Reif et al. (2019).",
"In the second training scheme we fine-tuned the examined model on the task, allowing its parameters to change during training (see full training details in the supplementary materials).",
"Results attained by employing this training method reflect the model's potential to acquire word-supersense information given its pre-training.",
"Table 2 shows a comparison between vanilla BERT and SenseBERT on the supersense disambiguation task.",
"Our semantic level pretraining signal clearly yields embeddings with enhanced word-meaning awareness, relative to embeddings trained with BERT's vanilla word-level signal.",
"SenseBERT BASE improves the score of BERTBASE in the Frozen setting by over 10 points and SenseBERT LARGE improves that of BERTLARGE by over 12 points, demonstrating competitive results even without fine-tuning.",
"In the setting of model fine-tuning, we see a clear demonstration of the model's ability to learn word-level semantics, as SenseBERT BASE surpasses the score of BERTLARGE by 2 points.",
"We test our model on the recently introduced WiC binary classification task.",
"Each instance in WiC has a target word w for which two contexts are provided, each invoking a specific meaning of w .",
"The task is to determine whether the occurrences of w in the two contexts share the same meaning or not, clearly requiring an ability to identify the word's semantic category.",
"The WiC task is defined over supersenses (Pilehvar and Camacho-Collados, 2019) the negative examples include a word used in two different supersenses and the positive ones include a word used in the same supersense.",
"See figure",
"4(b) for an example from this data set.",
"Results on the WiC task comparing SenseBERT to vanilla BERT are shown in table 2. SenseBERT BASE surpasses a larger vanilla model, BERTLARGE .",
"As shown in table 3, a single SenseBERT LARGE model achieves the state of the art score in this task, demonstrating unprecedented lexical semantic awareness.",
"The General Language Understanding Evaluation (GLUE; Wang et al. (2018)) benchmark is a popular testbed for language understanding models.",
"It consists of 9 different NLP tasks, covering different linguistic phenomena.",
"We evaluate our model on GLUE, in order to verify that SenseBERT gains its lexical semantic knowledge without compromising performance on other downstream tasks.",
"Due to slight differences in the data used for pretraining BERT and SenseBERT (BookCorpus is not publicly available), we trained a BERTBASE model with the same data used for our models.",
"BERTBASE and SenseBERT BASE were both finetuned using the exact same procedures and hyperparameters.",
"The results are presented in table 4. Indeed, SenseBERT performs on par with BERT, achieving an overall score of 77.9, compared to 77.5 achieved by BERTBASE .",
"We introduce lexical semantic information into a neural language model's pre-training objective.",
"This results in a boosted word-level semantic awareness of the resultant model, named SenseBERT, which considerably outperforms a vanilla BERT on a SemEval based Supersense Disambiguation task and achieves state of the art results on the Word in Context task.",
"This improvement was obtained without human annotation, but rather by harnessing an external linguistic knowledge source.",
"Our work indicates that semantic signals extending beyond the lexical level can be similarly introduced at the pre-training stage, allowing the network to elicit further insight without human supervision.",
"We acknowledge useful comments and assistance from our colleagues at AI21 Labs.",
"We would also like to thank the anonymous reviewers for their valuable feedback."
]
| [
"abstain",
"abstain",
"objective",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"other",
"abstain",
"objective",
"abstain",
"abstain",
"other",
"objective",
"other",
"method",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"method",
"method",
"abstain",
"other",
"other",
"other",
"method",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"other",
"other"
]
|
[
"Language models (LMs) must be both safe and equitable to be responsibly deployed in practice.",
"With safety in mind, numerous detoxification techniques (e.g., Dathathri et al. 2020; Krause et al. 2020) have been proposed to mitigate toxic LM generations.",
"In this work, we show that these detoxification techniques hurt equity: they decrease the utility of LMs on language used by marginalized groups (e.g., African-American English and minority identity mentions).",
"In particular, we perform automatic and human evaluations of text generation quality when LMs are conditioned on inputs with different dialects and group identi-fiers.",
"We find that detoxification makes LMs more brittle to distribution shift, especially on language used by marginalized groups.",
"We identify that these failures stem from detoxification methods exploiting spurious correlations in toxicity datasets.",
"Overall, our results highlight the tension between the controllability and distributional robustness of LMs.",
"Recent neural language models (LMs) have shown enormous improvements in text generation abilities.",
"A key factor behind these improvements is large training corpora that are collected from online sources (Radford et al., 2019).",
"Unfortunately, because such corpora are too large to filter granularly (Roller et al., 2020), they inevitably contain so-called toxic examples: undesirable language such as expletives, slurs, or other offensive and threatening speech.",
"When trained on such data, LMs inevitably learn to generate toxic text (Hen-derson et al., 2018; Wallace et al., 2019).",
"To address this issue, recent work has turned towards detoxifying LMs: reducing toxic generations without affecting perplexity or generation quality on nontoxic inputs.",
"Existing detoxification strategies involve techniques such as finetuning LMs on nontoxic data (Gehman et al., 2020) or incorporating a toxicity discriminator during decoding (Dathathri et al., 2020).",
"Our evaluation of these techniques shows that they are indeed effective at mitigating toxicity, but at what cost?",
"We demonstrate that detoxification can hurt LM utility on language used by minority groups.",
"Concretely, we evaluate detoxified LMs on text with minority identity mentions (e.g., words such as gay or Muslim) and surface markers of African-American English (Green, 2002, AAE).",
"We first show that, compared to text containing White-Aligned English (WAE), detoxification causes a disproportionately large increase in LM perplexity on text with AAE and minority identity mentions.",
"Moreover, increasing the strength of detoxification amplifies this bias.",
"The same trends hold when evaluating the text generation quality of LMs using crowdworkers.",
"When conditioned on WAE text, detoxified LMs can roughly maintain the topic, fluency, and style of an input prompt.",
"However, generation quality deteriorates when models are conditioned on AAE text, i.e., detoxification hurts an LMs' ability to understand and complete AAE text.",
"We identify that these failures are due to the use of biased toxic classification data.",
"In particular, toxicity datasets often contain spurious correlations between the toxic label and the presence of AAE and minority identity mentions (Sap et al., 2019).",
"These correlations cause detoxification techniques to steer generations away from AAE and minority identity mentions because they often consider these aspects of language to be toxic.",
"We conclude by outlining concrete harms and possible solutions to these biases.",
"With regard to harms, we argue that biased systems force marginalized users to code-switch or hide their identity and that these systems can contribute to social stigmas.",
"For solutions, we discuss improved procedures for data annotation and model training that may help debias detoxification techniques.",
"The goal of detoxification is to mitigate the frequency of toxic generations (also called hate speech or offensive language) without affecting an LM's utility or generation quality on nontoxic inputs.",
"We detoxify models using controllable generation techniques that steer outputs away from toxicity.",
"Following past work (Gehman et al., 2020; Xu et al., 2020), we use four techniques that provide state-of-the-art levels of detoxification.",
"DAPT We consider domain-adaptive pretraining (Gururangan et al., 2020, DAPT), i.e., finetuning LMs on nontoxic data.",
"This technique aims to erase an LM's knowledge of toxicity via catastrophic forgetting (McCloskey and Cohen, 1989).",
"PPLM We consider plug and play language models (Dathathri et al., 2020, PPLM).",
"Here, we first train a toxicity classifier using the hidden states of the LM as features.",
"At generation time, the LM's hidden states are iteratively updated using a gradient from the toxicity classifier.",
"GeDi We consider GeDi (Krause et al., 2020), which combines the probabilities from the LM with the probabilities from a second, smaller LM that is trained on nontoxic data (Krause et al., 2020).",
"We finetune GPT-2 small (Radford et al., 2019) for the second LM.",
"Filtering Finally, we consider output filtering, where we generate a fixed number of times (we use 10) from the LM and return the least toxic generation according to a toxicity classifier.",
"We reuse the same toxicity classifier from PPLM.",
"We use GPT-2 medium (Radford et al., 2019) as the base LM for all detoxification techniques.",
"We use the hyperparameters from the original papers for each technique, except we generate using top-k sampling (Fan et al., 2018) with k = 50 for all methods to enable a fair comparison.",
"For training data, we use the commonly-studied English Jigsaw Civil Comments dataset.",
"1 We remove examples where between 10% and 50% of the annotations are the toxic label (i.e., examples with low inter-annotator agreement).",
"We publicly release our code.",
"2 3 Detoxifying LMs Introduces Biases In this section, we evaluate the detoxification methods and show that they introduce biases into LMs that may harm marginalized groups.",
"1 https://www.kaggle.com/c/ jigsaw-unintended-bias-in-toxicity-classification 2 https://github.com/albertkx/detoxifying-lms/ Detoxification Topicality Fluency Style 0% 10% 20% 30% 40% 50% 60% 70% 80% P e r ce n t P r e f e rr e d O v e r GPT2 1 1 1 1 1 1 1 1 0 0 0 0 0 0 1 1 0 0 0 0 1 0 1 0 0 0 0 0 1 0 1 0 DAPT WAE DAPT AAE PPLM WAE PPLM AAE GeDi WAE GeDi AAE Filtering WAE Filtering AAE Figure 3: We use the detoxified LMs to generate completions of WAE or AAE prompts.",
"We first perform intrinsic evaluations of each detoxification technique by computing the perplexity of detoxified models on various datasets.",
"Note that we are not generating from the LM in this evaluation.",
"3 White-Aligned English Perplexity We first evaluate the perplexity on White-Aligned English (WAE) text that is either toxic or nontoxic.",
"We use WAE tweets from Groenwold et al. (2020).",
"4 The detoxification techniques are effective at removing toxicity: the perplexity on toxic data increases substantially (Figure 1, toxic evaluation set).",
"All techniques also cause a (smaller) increase in the perplexity on nontoxic WAE tweets, which shows that detoxification comes at some cost to the LM's utility.",
"Part of this increase likely results from distribution shift: the detoxification methods are trained on comments data, but our evaluation sets come from Twitter.",
"Identity Mentions and AAE Perplexity We next evaluate the perplexity of the detoxified LMs on nontoxic language that may be used by marginalized groups.",
"Concretely, we use text that contains minority identity mentions (e.g., words such as gay or Muslim) or surface markers of African-American English (Green, 2002, AAE).",
"We form two evaluation sets using tweets.",
"First, we collect tweets from the Twitter API that contain specific 3 The filtering detoxification method has the same perplexity as the baseline LM because it is applied post-decoding.",
"We do not report it here.",
"For GeDi, we set to 0 .",
"3 because the default value of 30 results in nearly infinite perplexities.",
"4 We split this data into toxic and nontoxic sets by scoring the WAE-AAE pairs using the Perspective API at https:// www.perspectiveapi.com/ .",
"identity mentions.",
"5 Second, we use the nontoxic data from Groenwold et al. (2020), which are the AAE equivalents of the nontoxic WAE tweets we used for the previous evaluation.",
"We find that there is a disproportionately large increase in LM perplexity on the AAE and minority identity mention tweets (Figure 1, AAE and identity mentions).",
"For example, when using PPLM, the perplexity increases by a factor of 2.1 on nontoxic WAE data and a factor of 4.3 on minority identity mention data.",
"Stronger Detoxification Amplifies Biases We also find that stronger detoxification amplifies the gap in perplexity between text with WAE and text with AAE or minority identity mentions.",
"This occurs for all detoxification techniques, for example, in Figure 2 we vary a parameter in GeDi that increases the degree of detoxification ( ).",
"As more detoxification is applied, the ratio of AAE perplexity to WAE perplexity increases dramatically, reaching upwards of 400.",
"As an extrinsic evaluation, we measure the generation quality of each detoxification method using crowdworkers on Amazon Mechanical Turk.",
"We provide a short prompt as input to the detoxified LMs and then generate 30 additional tokens.",
"For the prompts, we tokenize the aforementioned AAE and WAE tweets and extract the first half of each tweet.",
"We sample 50 prompts from each set of tweets, producing 100 total prompts.",
"Annota-5 See Appendix A for our word list.",
"tors are shown the prompt and asked to select the better of two model-generated continuations: one from the baseline GPT-2 model and one from a randomly selected detoxification technique. They evaluate the model continuations based on toxicity and three measures of generation quality: topicality, fluency, and style. See Appendix B for screen-shots of the setup (including concrete definitions of topicality, fluency, and style). Each example is evaluated by three different crowdworkers.",
"Figure 3 shows the results split by WAE and AAE prompts, and Table 1 shows examples of generations. All detoxification methods generate less toxicity than the baseline GPT-2 model. 6 However, this detoxification typically comes at a degradation in generation quality. For example, more than 80% of annotators found GeDi less topical than the GPT-2 baseline, and all of the techniques except DAPT were rated as less fluent. 7",
"Worse yet, when models are conditioned on AAE texts (hatched bars in Figure 3), the generation quality is consistently lower across all metrics. The drop is most significant in topicality, where all detoxified models prefer to change the topic when asked to generate text conditioned on AAE prompts (e.g., GeDi was preferred only half as often for topicality on AAE prompts than on WAE prompts).",
"In this section, we explain why detoxification causes the utility of LMs to degrade on text that contains AAE and minority identity mentions. First, note that all detoxification techniques make use of labeled toxic/nontoxic data. For example, DAPT uses this data directly: it finetunes the LM on nontoxic examples. PPLM, GeDi, and Filtering use this data indirectly: they train a classifier or LM on the toxicity data and then incorporate this model into the LM's decoding strategy.",
"Unfortunately, there are spurious correlations between the toxic label and the presence of AAE and minority identity mentions (Sap et al., 2019; Dixon et al., 2018). These correlations arise from annotation and sampling biases. Annotation bias occurs because crowdworkers are often unfamiliar with AAE and consequently misjudge it as toxic (Sap et al., 2019). Sampling bias occurs because many toxic comments are directed towards marginalized groups (RWJF, 2017). The result of these two biases is that text which contains AAE and minority identity mentions is labeled as toxic at disproportionately high rates (Sap et al., 2019).",
"Detoxification techniques inherit these undesirable biases. For example, DAPT will train LMs to not only forget toxicity but also forget AAE and minority identity mentions. Similarly, the discriminators used by PPLM, GeDi, and Filtering will guide the generated text away from AAE and identity mentions because the discriminators typically consider such text as toxic (Dixon et al., 2018; Sap et al., 2019; Oliva et al., 2020). Also note that in all of the above cases, increasing the detoxification",
"detoxification strength (e.g., longer finetuning for DAPT or higher for GeDi) exacerbates these problems.",
"In our experiments, we test multiple detoxification methods to show that this bias is not linked to a specific technique, but instead to the process of detoxification in the presence of biased supervised data. In fact, other controllable generation techniques, including prompts (Wallace et al., 2019; Sheng et al., 2020; Shin et al., 2020) or conditional LMs (Keskar et al., 2019) will likely exhibit the same type of biases.",
"Our results demonstrate that the current state of detoxification poses representational harms (Blod-gett et al., 2020) to minority groups. We discuss the concrete impacts of these harms below.",
"In-group Harms Detoxified LMs are deployed in downstream NLP systems in which they directly engage with end users. In addition to LMs not being able to generate minority identity mentions and minority dialects, our results suggest that detoxified LMs also struggle to understand these aspects of language. This could lead to scenarios where end users who are AAE speakers must code-switch to WAE to ensure that NLP systems work effectively for them. Aside from being an annoyance, this is also a microaggression that poses psychological harms and may discourage AAE speakers from engaging with NLP systems whatsoever.",
"Stigmatization of Language Detoxified models also have a propensity to avoid certain topics, e.g., mentioning a minority identity term. As a practical example, the (detoxified) Microsoft Zo chatbot was capable of discussing Christianity but could not discuss Islam (Stuart-Ulin, 2018). Failures like these further two types of stigma. First, having one's identity silenced by an NLP system can lead to self-stigmatization and long-term health consequences. Second, a lack of informed, conscious discussion on topics of identity or dialect can magnify existing societal stigmas. For example, aligning an LM solely with WAE stigmatizes AAE as incorrect or bad English (Flores and Rosa, 2015).",
"In the technology industry, this can perpetuate a dangerous expectation that AAE users are not consumers who matter, stymieing progress on equitable NLP systems.",
"Biases Are Not Limited to Detoxification Although we have focused on problems with detoxification in this paper, similar failures will occur whenever controllable generation methods are used.",
"For example, a common goal is to control the sentiment of generated text (Dathathri et al., 2020; Krause et al., 2020).",
"Unfortunately, since sentiment datasets are often biased against certain racial groups (Kiritchenko and Mohammad, 2018), controlling the sentiment of text will also affect which races are discussed.",
"The harms that we have identified occur largely due to spurious correlations in toxicity datasets.",
"A natural direction for future work is to thus improve datasets, for example, by changing the annotation procedure (Sap et al., 2019) or labeling scheme (Kennedy et al., 2020; Sap et al., 2020).",
"Unfortunately, this can also make collecting annotations more expensive.",
"As an alternative or in addition to higher quality data, there is growing interest in training accurate models in the presence of biased data (Oren et al., 2019; Clark et al., 2019).",
"Unfortunately, state-of-the-art debiasing methods are still far from perfect (Zhou et al., 2021).",
"We plan to explore new methods for debiasing both datasets and models in future work."
]
| [
"abstain",
"abstain",
"result",
"method",
"result",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"other",
"other",
"other",
"abstain",
"other",
"abstain",
"other",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
]
|
[
"Cross-domain sentiment analysis has received significant attention in recent years, prompted by the need to combat the domain gap between different applications that make use of sentiment analysis.",
"In this paper, we take a novel perspective on this task by exploring the role of external commonsense knowledge.",
"We introduce a new framework, KinGDOM , which utilizes the ConceptNet knowledge graph to enrich the semantics of a document by providing both domain-specific and domain-general background concepts.",
"These concepts are learned by training a graph convolutional autoencoder that leverages inter-domain concepts in a domain-invariant manner.",
"Conditioning a popular domain-adversarial baseline method with these learned concepts helps improve its performance over state-of-the-art approaches, demonstrating the efficacy of our proposed framework.",
"Sentiment Analysis (SA) is a popular NLP task used in many applications (Zhang et al., 2018).",
"Current models trained for this task, however, cannot be reliably deployed due to the distributional mismatch between the training and evaluation domains (Daume III and Marcu, 2006).",
"Domain adaptation , a case of transductive transfer learning, is a widely studied field of research that can be effectively used to tackle this problem (Wilson and Cook, 2018).",
"Research in the field of cross-domain SA has proposed diverse approaches, which include learning domain-specific sentiment words/lexicons (Sarma et al., 2018; Hamilton et al., 2016b), co-occurrence based learning (Blitzer et al., 2007a), domain-adversarial learning (Ganin et al., 2016), among sketch Source domain: Books Target domain: Electronics T y p e O f R e l a t e dT o R e v i e w D o c u m e n t s wallpaper design C o n ce p t n e t Conceptual bridge source-specific domain-general target-specific Concepts : draw RelatedTo screen saver T y p e O f Figure 1 : ConceptNet provides networks with background concepts that enhance their semantic understanding.",
"For example, for a target sentence from electronics domain, The software came with decent screen savers , comprising domain-specific terms like screen saver or wallpaper , ConceptNet helps connecting them to general concepts like design , thus allowing a network better understand their meaning.",
"Furthermore, inter-domain conceptual bridge can also be established to connect source and target domains ( wallpaper sketch have similar conceptual notions under the link design ).",
"others.",
"In this work, we adopt the domain-adversarial framework and attempt to improve it further by infusing commonsense knowledge using ConceptNet a large-scale knowledge graph (Speer et al., 2017).",
"Augmenting neural models with external knowledge bases (KB) has shown benefits across a range of NLP applications (Peters et al., 2019; Li et al., 2019; IV et al., 2019; liu et al., 2019; Bi et al., 2019).",
"Despite their popularity, efforts to incorporate KBs into the domain-adaptation framework has been sporadic (Wang et al., 2008; Xiang et al., 2010).",
"To this end, we identify multiple advantages of using commonsense KBs for domain adaptation.",
"First, KBs help in grounding text to real entities, factual knowledge, and commonsense concepts.",
"Commonsense KBs, in particular, provide a rich source of background conceptsrelated by commonsense linkswhich can enhance the semantics of a piece of text by providing both domain-specific and domain-general concepts (Yang et al., 2019; Zhong et al., 2019; Agarwal et al., 2015; Zhong et al., 2019) (see Fig. 1).",
"For cross-domain SA, word polarities might vary among different domains.",
"For example, heavy can be a positive feature for a truck, but a negative feature for a smartphone.",
"It is, however, difficult to assign contextual-polarities solely from data, especially when there is no supervision (Boia et al., 2014).",
"In this domain-specific scenario, commonsense knowledge provides a dynamic way to enhance the context and help models understand sentiment-bearing terms and opinion targets through its structural relations (Cambria et al., 2018).",
"They also often aid in unearthing implicitly expressed sentiment (Balahur et al., 2011).",
"Second, domains often share relations through latent semantic concepts (Kim et al., 2017a).",
"For example, notions of wallpaper (from electronics) and sketch (from books) can be associated via related concepts such as design (see Fig. 1).",
"Multi-relational KBs provide a natural way to leverage such inter-domain relationships.",
"These connections can help models understand target-specific terms by associating to known domain-general or even source-specific concepts.",
"Following these intuitions, we propose a two-step modular framework, KinGDOM ( K nowledge-G uided Dom ain adaptation), which utilizes commonsense KB for domain adaptation.",
"KinGDOM first trains a shared graph autoencoder using a graph convolution network (GCN) on ConceptNet, so as to learn: 1) inter-domain conceptual links through multiple inference steps across neighboring concepts; and 2) domain-invariant concept representations due to shared autoencoding.",
"It then extracts document-specific sub-graph embeddings and feeds them to a popular domain-adversarial model DANN (Ganin et al., 2016).",
"Additionally, we also train a shared autoencoder on these extracted graph embeddings to promote further domain-invariance (Glorot et al., 2011).",
"Our main contributions in this work are: 1. We propose KinGDOM , a domain-adversarial framework that uses an external KB (Concept-Net) for unsupervised domain adaptation.",
"KinGDOM learns domain-invariant features of KB concepts using a graph autoencoding strategy.",
"2. We demonstrate, through experiments, that KinGDOM surpasses state-of-the-art methods on the Amazon-reviews dataset (Blitzer et al., 2007b), thus validating our claim that external knowledge can aid the task of cross-domain SA.",
"In the remaining paper, 2 explains related works and compares KinGDOM to them; 3 presents task definition and preliminaries; 4 introduces our proposed framework, KinGDOM ; 5 discusses experimental setup followed by results and extensive analyses in 6; finally, 7 concludes this paper.",
"Domain adaptation methods can be broadly categorized into three approaches:",
"a) instance-selection (Jiang and Zhai, 2007; Chen et al., 2011; Cao et al., 2018),",
"b) self-labeling (He and Zhou, 2011) and",
"c) representation learning (Glorot et al., 2011; Chen et al., 2012; Tzeng et al., 2014).",
"Our focus is on the third category which has emerged as a popular approach in this deep representation learning era (Ruder, 2019; Poria et al., 2020).",
"Domain-adversarial Training.",
"Our work deals with domain-adversarial approaches (Kouw and Loog, 2019), where we extend DANN Ganin et al. (2016).",
"Despite its popularity, DANN cannot model domain-specific information (e.g. indicators of tasty , delicious for kitchen domain) (Peng et al., 2018b).",
"Rectifications include shared-private encoders that model both domain-invariant and specific features (Li et al., 2012; Bousmalis et al., 2016a; Kim et al., 2017b; Chang et al., 2019), using adversarial and orthogonality losses (Liu et al., 2017; Li et al., 2018).",
"Although we do not use private encoders, we posit that our model is capable of capturing domain-specificity via the sentence-specific concept graph.",
"Also, our approach is flex-ible enough to be adapted to the setup of shared-private encoders.",
"External Knowledge.",
"Use of external knowledge has been explored in both inductive and transductive settings (Banerjee, 2007; Deng et al., 2018).",
"Few works have explored external knowledge in domain adaptation based on Wikipedia as auxiliary information, using co-clustering (Wang et al., 2008) and semi-supervised learning (SSL) (Xiang et al., 2010).",
"SSL has also been explored by Alam et al. (2018) in the Twitter domain.",
"Although we share a similar motivation, there exist crucial differences.",
"Primarily, we learn graph embeddings at the concept level, not across complete instances.",
"Also, we do not classify each concept node in the graph, which renders SSL inapplicable to our setup.",
"Domain Adaptation on Graphs.",
"With the ad-vent of graph neural networks, graph-based methods have become a new trend (Ghosal et al., 2019) in diverse NLP tasks such as emotion recognition in conversations (Poria et al., 2019).",
"Graph-based domain adaptation is categorized based on the availability of cross-domain connections.",
"For domain-exclusive graphs, approaches include SSL with GCNs (Shen and Chung, 2019) and domain-adversarial learning (Dai et al., 2019).",
"For cross-domain connected graphs, co-regularized training (Ni et al., 2018) and joint-embedding (Xu et al., 2017) have been explored.",
"We also utilize GCNs to learn node representations in our cross-domain ConceptNet graph.",
"However, rather than using explicit divergence measures or domain-adversarial losses for domain invariance, we uniquely adopt a shared-autoencoder strategy on GCNs.",
"Such ideas have been explored in vector-based approaches (Glorot et al., 2011; Chen et al., 2012).",
"Sentiment Analysis.",
"One line of work models domain-dependent word embeddings (Sarma et al., 2018; Shi et al., 2018; K Sarma et al., 2019) or domain-specific sentiment lexicons (Hamilton et al., 2016a), while others attempt to learn representations based on co-occurrences of domain-specific with domain-independent terms (Blitzer et al., 2007a; Pan et al., 2010; Sharma et al., 2018).",
"Our work is related to approaches that address domain-specificity in the target domain (Peng et al., 2018b; Bhatt et al., 2015).",
"Works like Liu et al. (2018) attempts to model target-specificity by mapping domain-general information to domain-specific representations by using domain descriptor vectors.",
"In contrast, we address relating domain-specific terms by modeling their relations with the other terms in knowledge bases like ConceptNet.",
"Domain adaptation deals with the training of models that can perform inference reliably in multiple domains.",
"Across domains, it is assumed that the feature and label spaces are the same but with discrepancies in their feature distributions.",
"In our setup, we consider two domains: source D s and target domain D t with different marginal data distributions, i.e., PD s ( x ) PD t ( x ) .",
"This scenario, also known as the covariate shift (Elsahar and Galle, 2019), is predominant in SA applications and arises primarily with shifts in topics causing a difference in vocabulary usage and their corresponding semantic and sentiment associations.",
"We account for unsupervised domain adaptation, where we are provided with labeled instances from the source domain D ls = {( x i , y i )} N s i = 1 and unlabeled instances from the target domain D ut = {( x i )} N t i = 1 .",
"1 This is a realistic setting as curating annotations for the target domain is often expensive as well as time consuming.",
"Given this setup, our goal is to train a classifier that can achieve good classification performance on the target domain.",
"We base our framework on the domain-adversarial neural network (DANN) proposed by Ganin et al. (2016).",
"DANN learns a shared mapping of both source and target domain instances M ( x s / t ) such that a classifier C trained for the source domain can be directly applied for the target domain.",
"Training of C is performed using the cross-entropy loss: L cls = E ( x s ,y s ) ( K k = 1 1 [ k = y s ] log C ( M ( x s ))) , where K is the number of labels.",
"Adversarial Loss.",
"The core idea of DANN is to reduce domain gap by learning common representations that are indistinguishable to a domain discriminator.",
"To learn a domain-invariant mapping, DANN uses an adversarial discriminator D adv with parameters D , whose job is to distinguish between source and target instances, M ( x s ) vs. M ( x t ) .",
"It is trained using the cross-entropy loss: L adv D = E x s ( log D adv ( M ( x s ))) E x t ( log ( 1 D adv ( M ( x t )))) .",
"The mapping function then learns domain invariance by pitting against the discriminator in a minimax optimization with loss L adv M = L adv D (Tzeng et al., 2017).",
"This setup forces the features to become discriminative to the main 1 For our case, each instance is a review document domain-specific concepts source target domain-general concepts } D o m a i n a gg r e g a t e d G r a p h C o n c e p t n e t Filtering with seed concepts R-GCND i s t M u l t E d g e L o ss encoder decoder B a g o f w o r d s Task Classifier C Domaindiscriminator D adv Commonsense Graph Feature Extraction GCN autoencoder gradient-reversal Review Document Step 1: Knowledge Graph Training Step 2: Domain-adversarial Training Graph Feature Reconstructor D recon x cg x graph feature encoder M cls adv D recon adv M = adv D z grp DANN e n c o d e r M z dann Figure 2 : Illustration of KinGDOM : Step 1 uses GCN to learn concept representations.",
"Step 2 feeds concept features to DANN.",
"Table 1 : Top-10 relations of G based on frequency.",
"Top relations for each domain are also mentioned.",
"learning task and indistinguishable across domains.",
"The point estimates of the parameters are decided at a saddle point using the minimax objective: = arg min M,C max D (L cls + L adv D ) , where is a hyper-parameter.",
"The minimax objective is realized by reversing the gradients of L adv D when back-propagating through M .",
"KinGDOM aims to improve the DANN approach by leveraging an external knowledge source i.e., ConceptNet.",
"Such a knowledge base is particularly useful for domain adaptation as it contains both domain specific and domain general knowledge.",
"Unlike traditional word embeddings and semantic knowledge graphs (e.g. WordNet), ConceptNet is unique as it contains commonsense related information.",
"We posit that both these properties of ConceptNet will be highly useful for domain adaptation.",
"KinGDOM follows a two-step approach described below: Step 1: This step deals with training a domain-aggregated sub-graph of ConceptNet.",
"In particular, it involves:",
"a) Creating a sub-graph of ConceptNet based on all domains (4.1).",
"b) Training a graph-convolutional autoencoder to learn concept embeddings (Schlichtkrull et al., 2018) (4.2).",
"Step 2: After the graph autoencoder is trained,",
"a) we extract and pool document-relevant features from the trained graph for each instance in the dataset (4.3).",
"b) The corresponding graph feature vector is then fed into the DANN architecture for adversarial training (Ganin et al., 2016).",
"To further enforce domain invariance, we also introduce a shared autoencoder to reconstruct the graph features (4.4).",
"We construct our domain-aggregated graph from ConceptNet (Speer et al., 2017).",
"First, we introduce the following notation: the ConceptNet graph is represented as a directed labeled graph G = (V , E , R) , with concepts/nodes 2 v i V and labeled edges ( v i , r ij , v j ) E , where r ij R is the relation type of the edge between v i and v j .",
"The concepts in ConceptNet are unigram words or n-gram phrases.",
"For instance one such triplet from ConceptNet is [ baking-oven , AtLocation , kitchen ].",
"ConceptNet has approximately 34 million edges, from which we first extract a subset of edges.",
"From the training documents of all domains in our dataset, we first extract the set of all the unique nouns, adjectives, and adverbs.",
"3 These extracted words are treated as the seeds that we use to fil-ter ConceptNet into a sub-graph.",
"In particular, we extract all the triplets from G which are within a distance of 1 to any of those seed concepts, resulting in a sub-graph G = (V , E , R ) , with approximately 356 k nodes and 900 k edges.",
"This sub-graph would thus contain concepts across all domains along with inter-concept links.",
"Looking at the sub-graph G from the lens of each domain, we can observe the top-10 relations within the domain in Table 1. 4.2 Step 1b) Knowledge Graph Pre-training To utilize G in our task, we first need to compute a representation of its nodes.",
"We do this by training a graph autoencoder model to perform link prediction.",
"The model takes as input an incomplete set of edges E from E in G and then assign scores to possible edges ( c 1 , r, c 2 ) , determining how likely are these edges to be in E .",
"Following Schlichtkrull et al. (2018), our graph autoencoder model consists of: a R-GCN entity encoder and a DistMult scoring decoder.",
"Encoder Module.",
"We employ the Relational Graph Convolutional Network (R-GCN) encoder from Schlichtkrull et al. (2018) as our graph encoder network.",
"The power of this model comes from its ability to accumulate relational evidence in multiple inference steps from the local neighborhood around a given concept.",
"The neighborhood-based convolutional feature transformation process always ensures that distinct domains are connected 2 We use node , concept , and entity interchangeably 3 We use the Spacy POS Tagger: https://spacy.io/ usage/linguistic-features#pos-tagging via underlying concepts and influence each other to create enriched domain-aggregated feature vectors.",
"Precisely, our encoder module consists of two R-GCN encoders stacked upon one another.",
"The initial concept feature vector g i is initialized randomly and thereafter transformed into the domain-aggregated feature vector h i R d using the two-step graph convolution process.",
"The transformation process is detailed below: f ( x i , l ) = ( r R j N ri 1 c i,r W ( l ) r x j + W ( l ) 0 x i ) , h i = h ( 2 ) i = f ( h ( 1 ) i , 2 ) ; h ( 1 ) i = f ( g i 1 ) , where N ri denotes the neighbouring concepts of concept i under relation r R ; c i,r is a normalization constant which either can be set in advance, such that, c i,r = N ri , or can be learned in a gradient-based learning setup.",
"is an activation function such as ReLU, and W ( 1 / 2 ) r , W ( 1 / 2 ) 0 are learnable parameters of the transformation.",
"This stack of transformations effectively accumulates the normalized sum of the local neighborhood i.e. the neighborhood information for each concept in the graph.",
"The self-connection ensures self-dependent feature transformation.",
"Decoder Module.",
"DistMult factorization (Yang et al., 2014) is used as the scoring function.",
"For a triplet ( c i , r, c j ) , the score s is obtained as follows: s ( c i , r, c j ) = ( h Tc i R r h c j ) , where is the logistic function; h c i , h c j R d are the R-GCN encoded feature vectors for concepts c i , c j .",
"Each relation r R is also associated with a diagonal matrix R r R d d .",
"Training.",
"We train our graph autoencoder model using negative sampling (Schlichtkrull et al., 2018).",
"For triplets in E (positive samples), we create an equal number of negative samples by randomly corrupting the positive triplets.",
"The corruption is performed by randomly modifying either one of the constituting concepts or the relation, creating the overall set of samples denoted by T .",
"The task is set as a binary classification between the positive/negative triplets, where the model is trained with the standard cross-entropy loss: LG = 1 2 E ( c i ,r,c j ,y )T ( y log s ( c i , r, c j )+ ( 1 y ) log ( 1 s ( c i , r, c j ))) .",
"Once we train the autoencoder graph model, it will ensure that target domain-specific concepts (crucial for KG) can possibly be explained via domain-general concepts and further via inter-domain knowledge.",
"In other words, the encoded node representations h i will capture commonsense graph information in the form of domain-specific and domain-general features and thus will be effective for the downstream task when there is a distributional shift during evaluation.",
"The trained graph autoencoder model as explained in the previous section 4.2, can be used for feature extraction.",
"We now describe the methodology to extract the document-specific commonsense graph features for a particular document x : 1) The first step is to extract the set of all unique nouns, adjectives, and adverbs present in the document.",
"We call this set W .",
"2) Next, we extract a subgraph from G , where we take all triplets for which both the constituting nodes are either in W or are within the vicinity of radius 1 of any of the words in W .",
"We call this graph G W .",
"3) We then make a forward pass of G W through the encoder of the pre-trained graph autoencoder model.",
"This results in feature vectors h j for all unique nodes j in G W .",
"4) Finally, we average over the feature vectors h j for all unique nodes in G W , to obtain the commonsense graph features x cg for document x .",
"We surmise that since most documents will have both domain-specific and domain-general words in W , x cg will inherently capture the commonsense information likely to be helpful during domain adaptation.",
"We feed the commonsense graph feature x cg pooled from G W for document x (4.3) into the DANN architecture (see 3.2).",
"We proceed by learning a encoder function for the graph vector z grp = M G ( x cg ) and combine its representation with the DANN encoder z dann = M M ( x ) to get the final feature representation [ z dann ; z grp ] , of the document x .",
"Here, [ a ; b ] represents concatenation.",
"The task classifier C and domain-discriminator D adv now takes this modified representation, [ z dann ; z grp ] , as its input instead of only z dann .",
"To further enforce domain-invariance into the encoded graph representation z grp , we consider it as a hidden code in a traditional autoencoder and consequently add a shared decoder D recon (with parameters R ) with a reconstruction loss (mean-squared error): L recon ( X s , X t ) = L recon ( X s ) + L recon ( X t ) , s.t. L recon = E x cg ( D recon ( z grp ) x cg 22 ) .",
"We hypothesize that if R can reconstruct graph features for both domains, then it would ensure stronger domain invariance constraints in z grp .",
"The final optimization of this domain-adversarial setup is based on the minimax objective: = arg min G,M,C,R max D (L cls + L adv D + L recon ) , where and are hyper-parameters.",
"We consider the Amazon-reviews benchmark dataset for domain adaptation in SA (Blitzer et al., 2007b).",
"This corpus consists of Amazon product reviews and ranges across four domains: Books, DVDs, Electronics, and Kitchen appliances.",
"Each review is associated with a rating denoting its sentiment polarity.",
"Reviews with rating up to 3 stars are considered to contain negative sentiment and 4 or 5 stars as positive sentiment.",
"The dataset follows a balanced distribution between both labels yielding 2 k unlabelled training instances for each domain.",
"Testing contains 3 k 6 k samples for evaluation.",
"We follow similar pre-processing as bone by Ganin et al. (2016); Ruder and Plank (2018) where each review is encoded into a 5000 -dimensional tf-idf weighted bag-of-words (BOW) feature vector of unigrams and bigrams.",
"We follow Ganin et al. (2016) in training our network.",
"Our neural layers i.e., DANN encoder ( M ), graph feature encoder ( M ), graph feature reconstructor ( D recon ), task classifier ( C ) and domain discriminator ( D adv ) are implemented with 100 dimensional fully connected layers.",
"We use a cyclic as per (Ganin et al., 2016) and = 1 after validating with { 0 .",
"5 , 1 , 2 } .",
"25% dropout is used in 70 77 84 91 B -> D K -> D E -> D E -> B K -> B D -> B B -> E K -> E D -> E B -> K D -> K E -> K Avg.",
"Figure 3 : Results of DANN vs DANN+ vs KinGDOM across different target domains.",
"Best viewed in colour.",
"the fully connected layers and the model is trained with Adam (Kingma and Ba, 2015) optimizer.",
"In this paper, to inspect the role of external commonsense knowledge and analyze the improvement in performance it brings, we intentionally use BOW features and compare them against other baseline models that also use BOW features.",
"This issue has also been addressed by Poria et al. (2020).",
"The flexibility of KinGDOM allows other approaches, such as mSDA, CNN, etc. to be easily incorporated in it, which we plan to analyze in the future.",
"We compare KinGDOM with the following unsupervised domain adaptation baseline methods: DANN (Ganin et al., 2016) is a domain-adversarial method, based on which we develop KinGDOM (3.2); DANN+ The DANN model where we use an Adam optimizer instead of the original SGD optimizer.",
"The network architecture and the rest of the hyperparameters are kept same; Variational Fair Autoencoder ( VFAE ) (Louizos et al., 2015) learns latent representations independent from sensitive domain knowledge, while retaining enough task information by using a MMD-based loss; Central Moment Discrepancy ( CMD ) (Zellinger et al., 2017) is a regularization method which minimizes the difference between feature representations by utilizing equivalent representation of probability distributions by moment sequences; Asym (Saito et al., 2017) is the asymmetric tri-training framework that uses three neural networks asymmetrically for domain adaptation; MT-Tri (Ruder and Plank, 2018) is similar to Asym , but uses multi-task learning; Domain Separation Networks ( DSN ) (Bousmalis et al., 2016b) learns to extract shared and private components of each domain.",
"As per Peng et al. (2018a), it stands as the present state-of-the-art method for unsupervised domain adaptation; Task Refinement Learning ( TRL ) (Ziser and Reichart, 2019) Task Refinement Learning is an unsupervised domain adaptation framework which iteratively trains a Pivot Based Language Model to gradually increase the information exposed about each pivot; TAT (Liu et al., 2019) is the transferable adversarial training setup to generate examples which helps in modelling the domain shift.",
"TAT adversari-ally trains classifiers to make consistent predictions over these transferable examples; CoCMD (Peng et al., 2018a) is a co-training method based on the CMD regularizer which trains a classifier on simultaneously extracted domain specific and invariant features.",
"CoCOMD, however, is SSL-based as it uses labeled data from the target domain.",
"Although it falls outside the regime of unsupervised domain adaptation, we report its results to provide a full picture to the reader.",
"As mentioned in 5.3, we reimplemented the baseline DANN model using Adam optimizer and observed that its results has been notably under-reported in many of the unsupervised domain adaptation literature for sentiment analysis (see Table 2).",
"In the original DANN implementation (Ganin et al., 2016), Stochastic Gradient Descent (SGD) was used as the optimizer.",
"However, in DANN+, using Adam optimizer leads to substantial performance jump that outperforms many of the recent advanced domain adaptation methods CMD (Zellinger et al., 2017), VFAE (Louizos et al., 2015), ASym (Saito et al., 2017), and MT-Tri (Ruder and Plank, 2018).",
"We compare the performance of KinGDOM with its base models DANN and DANN+.",
"As observed in Fig. 3, KinGDOM surpasses DANN+ by 1.4% which asserts the improvement in domain-invariance due to the incorporation of external com-Source Target DANN ( 5k ) DANN + ( 5k ) VFAE ( 5k ) CMD ( 5k ) A s y m ( 5k ) MTT r i ( 5k ) TRL * ( 5k ) DSN ( 5k ) C o CMD * ( 5k ) K i n GDOM ( 5 k ) DANN + ( 30k ) TAT ( 30k ) K i n GDOM ( 30 k ) B D 78.4 82.6 79.9 80.5 80.7 81.2 82.2 82.8 83.1 83.1 84.7 84.5 85.0 B E 73.3 79.9 79.2 78.7 79.8 78.0 -81.9 83.0 82.2 83.0 80.1 83.9 B K 77.9 81.8 81.6 81.3 82.5 78.8 82.7 84.4 85.3 85.0 84.0 83.6 86.6 D B 72.3 80.3 75.5 79.5 73.2 77.1 -80.1 81.8 81.4 82.7 81.9 82.7 D E 75.4 79.9 78.6 79.7 77.0 81.0 -81.4 83.4 81.7 83.4 81.9 83.9 D K 78.3 83.0 82.2 83.0 82.5 79.5 -83.3 85.5 84.6 85.3 84.0 87.1 E B 71.3 74.9 72.7 74.4 73.2 73.5 -75.1 76.9 76.9 77.1 83.2 78.4 E D 73.8 78.6 76.5 76.3 72.9 75.4 75.8 77.1 78.3 78.8 79.6 77.9 80.3 E K 85.4 88.6 85.0 86.0 86.9 87.2 -87.2 87.3 88.4 89.0 90.0 89.4 K B 70.9 75.9 72.0 75.6 72.5 73.8 72.1 76.4 77.2 78.2 77.1 75.8 80.0 K D 74.0 79.2 73.3 77.5 74.9 77.8 -78.0 79.6 80.7 81.3 77.7 82.3 K E 84.3 86.9 83.8 85.4 84.6 86.0 -86.7 87.2 87.4 88.0 88.2 88.6 Avg.",
"Table 2 : Comparison with different baseline and state-of-the-art models (5.3).",
"TRL* reported results on four combinations.",
"CoCMD* is a semi-supervised domain adaptation method.",
"DSN is the current state-of-the-art for unsupervised domain adaptation on the Amazon reviews dataset.",
"Scores for MT-Tri are extrapolated from the graphs illustrated in Ruder and Plank (2018).",
"Note: B : Books, D : DVD, E :Electronics, and K : Kitchen domains.",
"5k, 30k signify 5000 and 30,000 dimensional BOW features.",
"Next, we look at Table 2 where comparisons are made with other baselines, including the state-of-the-art DSN approach.",
"As observed, KinGDOM outperforms DSN in all the task scenarios, indicating the efficacy of our approach.",
"Blitzer et al. (2007b), in their original work, noted that domain transfer across the two groups of DVD , Books and Electronics , Kitchen is particularly challenging.",
"Interestingly, in our results, we observe the highest gains when the source and target domains are from these separate groups (e.g., Kitchen DVD, Kitchen Books, Electronics Books).",
"In Table 2, we also compare KinGDOM against CoCMD and TAT.",
"Although CoCMD is a semi-supervised method, KinGDOM surpasses its performance in several of the twelve domain-pair combinations and matches its overall result without using any labelled samples from the target domain.",
"TAT is the state-of-the-art method for unsupervised domain adaptation in the Amazon reviews dataset when used with 30,000 Bag-Of-Words (BOW) features.",
"Interestingly, KinGDOM used with 5000 BOW features can match TAT with 30,000 BOW features and outperforms TAT by around 1.6% overall when used with the same 30,000 BOW features.",
"The reimplementation of DANN DANN+ with 30,000 BOW also surpasses the result of TAT by 0.5%.",
"The results indicate that external knowledge, when added to a simple architecture such as DANN, can surpass sophisticated state-of-the-art models, such as DSN and TAT.",
"Our primary intention to utilize DANN as the base model is to highlight the role of knowledge base infusion in domain adaptation, devoid of sophisticated models, and complex neural maneuvering.",
"Nevertheless, the flexibility of KinGDOM allows it to be associated with advanced models too (e.g., DSN, TAT), which we believe could perform even better.",
"We intend to analyze this in the future.",
"We further analyze our framework and challenge our design choices.",
"Specifically, we consider three variants of our architecture based on alternative ways to condition DANN with the graph features.",
"Each of these variants reveals important clues regarding the invariance properties and task appropriateness of z grp .",
"Variant 1 denotes separate decoders D recon for source and target domains.",
"In Variant 2 , domain classifier D adv takes only z dann as input whereas the sentiment classifier C takes the concatenated feature [ z dann ; z grp ] .",
"Finally, in Variant 3 , D adv takes input [ z dann ; z grp ] whereas C only takes z dann .",
"As seen in Fig. 4, all the variants perform worse than KinGDOM .",
"For Variant 1, the performance drop indicates that having a shared decoder D recon in KinGDOM facilitates 74 76 78 80 82 Books Dvd 80.9 78.8 77.2 75.1 79.9 77 80.2 77.9 80.3 78 Variant 1 Variant 2 Variant 3 Glove-DANN KinGDOM 75 78 82 85 88 Electronics Kitchen 86 83.8 82.4 77.1 82.2 81 84.9 83.1 85.5 82.4 Figure 4 : Average accuracy ( % ) on target domains across different variants defined in 6.1.",
"learning invariant representations and helps target domain classification.",
"For Variant 2, removal of z grp from domain classifier diminishes the domain-invariance capabilities, thus making the domain classifier stronger and leading to a drop in sentiment classification performance.",
"For Variant 3, removal of z grp from sentiment classifier C degrades the performance.",
"This indicates that in KinGDOM , z grp contain task appropriate features retrieved from external knowledge (see 1).",
"Besides ablations, we also look at alternatives to the knowledge graph and bag-of-words representation used for the documents.",
"For the former, we consider replacing ConceptNet with WordNet (Fell-baum, 2010), which is a lexical knowledge graph with conceptual-semantic and lexical connections.",
"We find the performance of KinGDOM with WordNet to be 1% worse than ConceptNet in terms of average accuracy score.",
"This indicates the compatibility of ConceptNet with our framework.",
"However, the competitive performance with WordNet also suggests the usability of our framework with any structural resource comprising inter-domain connections.",
"For the latter, we use Glove-averaged embeddings with DANN.",
"Glove is a popular word embedding method which captures semantics using co-occurrence statistics (Pennington et al., 2014).",
"Results in Fig. 4 show that using only Glove does not provide the amount of conceptual semantics available in ConceptNet.",
"We delve further into our results and qualitatively analyze KinGDOM .",
"We look at a particular test document from DVD domain, for which KinGDOM predicts the correct sentiment, both when the target domain: DVD s o u r ce d o m a i n : E l ec t r o n i c s s o u r ce d o m a i n : B oo k s CGI film graphic graphics card computer graphic graphic novel writing R e l a t e d To S y n o n y m R e l a t e d T o R e l a t e d To R e l a t e d To R e l a t e d T o Figure 5 : Domain-general term graphic bridges the commonsense knowledge between domain-specific terms in Electronics, Books and DVD.",
"source domain is Electronics and also Books.",
"In similar settings, DANN mispredicts the same document.",
"Looking at the corresponding document-specific sub-graph for this document, we observe conceptual links to both domain-general concepts and domain-specific concepts from the source domain.",
"In Fig. 5, we can see the domain-specific terms CGI and film to be related to the general concept graphic which is further linked to domain-specific concepts like graphics card , writing , etc. from Electronics, Books, respectively.",
"This example shows how KinGDOM might use these additional concepts to enhance the semantics as required for sentiment prediction.",
"In this paper, we explored the role of external commonsense knowledge for domain adaptation.",
"We introduced a domain-adversarial framework called KinGDOM , which relies on an external commonsense KB (ConceptNet) to perform unsupervised domain adaptation.",
"We showed that we can learn domain-invariant features for the concepts in the KB by using a graph convolutional autoencoder.",
"Using the standard Amazon benchmark for domain adaption in sentiment analysis, we showed that our framework exceeds the performance of previously proposed methods for the same task.",
"Our experiments demonstrate the usefulness of external knowledge for the task of cross-domain sentiment analysis.",
"Our code is publicly available at https://github.com/declare-lab/kingdom .",
"This research is supported by A*STAR under its RIE 2020 Advanced Manufacturing and Engineering (AME) programmatic grant, Award No.",
"-A19E2b0098."
]
| [
"abstain",
"objective",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"objective",
"objective",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"abstain",
"other",
"abstain",
"other",
"other",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"other",
"other",
"other",
"abstain",
"other",
"objective",
"other",
"other",
"method",
"other",
"method",
"other",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"other",
"other",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"result",
"result",
"abstain",
"objective",
"method",
"result",
"objective",
"objective",
"other",
"other",
"other"
]
|
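The excerpt above describes KinGDOM's central design choice: both the sentiment classifier C and the adversarial domain classifier D adv consume the concatenated feature [ z dann ; z grp ], and the variants ablate where z grp enters. Below is a minimal PyTorch sketch of that configuration, our own illustration rather than the authors' released code: the class names (ToyKingdomDANN, GradReverse), all dimensions, and the random inputs are assumptions, and in the actual framework z grp would come from the graph convolutional autoencoder over the ConceptNet subgraph.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Gradient-reversal layer: identity on the forward pass,
    negated (scaled) gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class ToyKingdomDANN(nn.Module):
    """DANN-style model where both heads see [z_dann; z_grp] (hypothetical dims)."""
    def __init__(self, bow_dim=5000, graph_dim=100, hidden=256):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(bow_dim, hidden), nn.ReLU())  # -> z_dann
        self.sentiment = nn.Linear(hidden + graph_dim, 2)  # task classifier C
        self.domain = nn.Linear(hidden + graph_dim, 2)     # domain classifier D_adv

    def forward(self, bow, z_grp, lambd=1.0):
        z_dann = self.feature(bow)
        z = torch.cat([z_dann, z_grp], dim=-1)             # [z_dann; z_grp]
        sent_logits = self.sentiment(z)
        dom_logits = self.domain(GradReverse.apply(z, lambd))  # adversarial branch
        return sent_logits, dom_logits

# Toy usage with random stand-ins for BOW document features and graph features.
model = ToyKingdomDANN()
bow, z_grp = torch.randn(4, 5000), torch.randn(4, 100)
sent_logits, dom_logits = model(bow, z_grp)
```

Variant 2 and Variant 3 of the ablation correspond to feeding only z dann to one of the two heads, which in this sketch amounts to changing a single torch.cat call.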
[
"Generalization and reliability of multilingual translation often highly depend on the amount of available parallel data for each language pair of interest.",
"In this paper, we focus on zero-shot generalizationa challenging setup that tests models on translation directions they have not been optimized for at training time.",
"To solve the problem, we",
"(i) reformulate multilingual translation as probabilistic inference,",
"(ii) define the notion of zero-shot consistency and show why standard training often results in models unsuitable for zero-shot tasks, and",
"(iii) introduce a consistent agreement-based training method that encourages the model to produce equivalent translations of parallel sentences in auxiliary languages.",
"We test our multilingual NMT models on multiple public zero-shot translation benchmarks (IWSLT17, UN corpus, Europarl) and show that agreement-based learning often results in 2-3 BLEU zero-shot improvement over strong baselines without any loss in performance on supervised translation directions.",
"Machine translation (MT) has made remarkable advances with the advent of deep learning approaches (Bojar et al., 2016; Wu et al., 2016; Crego et al., 2016; Junczys-Dowmunt et al., 2016).",
"The progress was largely driven by the encoder-decoder framework (Sutskever et al., 2014; Cho et al., 2014) and typically supplemented with an attention mechanism (Bahdanau et al., 2014; Luong et al., 2015b).",
"Compared to the traditional phrase-based systems (Koehn, 2009), neural machine translation (NMT) requires large amounts of data in order to reach high performance (Koehn and Knowles, 2017).",
"Using NMT in a multilingual setting exacerbates the problem by the fact that given k languages Work done at Google.",
"Figure 1 : Agreement-based training of a multilingual NMT.",
"At training time, given English-French ( En Fr ) and English-German ( En De ) parallel sentences, the model not only is trained to translate between the pair but also to agree on translations into a third language.",
"In an effort to address the problem, different multilingual NMT approaches have been proposed recently.",
"Luong et al. (2015a); Firat et al. (2016a) proposed to use O ( k ) encoders/decoders that are then intermixed to translate between language pairs.",
"Johnson et al. (2016) proposed to use a single model and prepend special symbols to the source text to indicate the target language, which has later been extended to other text preprocessing approaches (Ha et al., 2017) as well as language-conditional parameter generation for encoders and decoders of a single model (Platanios et al., 2018).",
"Johnson et al. (2016) also show that a single multilingual system could potentially enable zero-shot translation, i.e. , it can translate between language pairs not seen in training.",
"For example, given 3 languagesGerman ( De ), English ( En ), and French ( Fr )and training parallel data only for ( De , En ) and ( En , Fr ), at test time, the system could additionally translate between ( De , Fr ).",
"Zero-shot translation is an important problem.",
"Solving the problem could significantly improve data efficiencya single multilingual model would be able to generalize and translate between any of the O ( k 2 ) language pairs after being trained only on O ( k ) parallel corpora.",
"However, performance on zero-shot tasks is often unstable and significantly lags behind the supervised directions.",
"Moreover, attempts to improve zero-shot performance by fine-tuning (Firat et al., 2016b; Sestorain et al., 2018) may negatively impact other directions.",
"In this work, we take a different approach and aim to improve the training procedure of Johnson et al. (2016).",
"First, we analyze multilingual translation problem from a probabilistic perspective and define the notion of zero-shot consistency that gives insights as to why the vanilla training method may not yield models with good zero-shot performance.",
"Next, we propose a novel training objective and a modified learning algorithm that achieves consistency via agreement-based learning (Liang et al., 2006, 2008) and improves zero-shot translation.",
"Our training procedure encourages the model to produce equivalent translations of parallel training sentences into an auxiliary language (Figure 1) and is provably zero-shot consistent.",
"In addition, we make a simple change to the neural decoder to make the agreement losses fully differentiable.",
"We conduct experiments on IWSLT17 (Mauro et al., 2017), UN corpus (Ziemski et al., 2016), and Europarl (Koehn, 2017), carefully removing complete pivots from the training corpora.",
"Agreement-based learning results in up to +3 BLEU zero-shot improvement over the baseline, compares favorably (up to +2.4 BLEU) to other approaches in the literature (Cheng et al., 2017; Sestorain et al., 2018), is competitive with pivoting, and does not lose in performance on supervised directions.",
"A simple (and yet effective) baseline for zero-shot translation is pivoting that chain-translates, first to a pivot language, then to a target (Cohn and Lapata, 2007; Wu and Wang, 2007; Utiyama and Isahara, 2007).",
"Despite being a pipeline, pivoting gets better as the supervised models improve, which makes it a strong baseline in the zero-shot setting.",
"Cheng et al. (2017) proposed a joint pivoting learning strategy that leads to further improvements.",
"Lu et al. (2018) and Arivazhagan et al. (2018) proposed different techniques to obtain neural in-terlingual representations that are passed to the decoder.",
"Sestorain et al. (2018) proposed another fine-tuning technique that uses dual learning (He et al., 2016), where a language model is used to provide a signal for fine-tuning zero-shot directions.",
"Another family of approaches is based on distillation (Hinton et al., 2014; Kim and Rush, 2016).",
"Along these lines, Firat et al. (2016b) proposed to fine tune a multilingual model to a specified zero-shot-direction with pseudo-parallel data and Chen et al. (2017) proposed a teacher-student framework.",
"While this can yield solid performance improvements, it also adds multi-staging overhead and often does not preserve performance of a single model on the supervised directions.",
"We note that our approach (and agreement-based learning in general) is somewhat similar to distillation at training time, which has been explored for large-scale single-task prediction problems (Anil et al., 2018).",
"A setting harder than zero-shot is that of fully unsupervised translation (Ravi and Knight, 2011; Artetxe et al., 2017; Lample et al., 2017, 2018) in which no parallel data is available for training.",
"The ideas proposed in these works ( e.g. , bilingual dictionaries (Conneau et al., 2017), backtranslation (Sen-nrich et al., 2015a) and language models (He et al., 2016)) are complementary to our approach, which encourages agreement among different translation directions in the zero-shot multilingual setting.",
"We start by establishing more formal notation and briefly reviewing some background on encoder-decoder multilingual machine translation from a probabilistic perspective.",
"Languages.",
"We assume that we are given a collection of k languages, L 1 , . . . , L k , that share a common vocabulary, V .",
"A language, L i , is defined by the marginal probability P ( x i ) it assigns to sentences ( i.e. , sequences of tokens from the vocab-ulary), denoted x i := ( x 1 , . . . , x l ) , where l is the length of the sequence.",
"All languages together define a joint probability distribution, P ( x 1 , . . . , x k ) , over k -tuples of equivalent sentences .",
"Corpora.",
"While each sentence may have an equivalent representation in all languages, we assume that we have access to only partial sets of equivalent sentences, which form corpora .",
"In this work, we consider bilingual corpora, denoted C ij , that contain pairs of sentences sampled from P ( x i , x j ) and monolingual corpora, denoted C i , that contain sentences sampled from P ( x i ) .",
"Figure 2 : Translation graph: Languages (nodes), parallel corpora (solid edges), and zero-shot directions (dotted edges).",
"Translation.",
"Finally, we define a translation task from language L i to L j as learning to model the conditional distribution P ( x j | x i ) .",
"The set of k languages along with translation tasks can be represented as a directed graph G ( V , E ) with a set of k nodes, V , that represent languages and edges, E , that indicate translation directions.",
"We further distinguish between two disjoint subsets of edges:",
"(i) supervised edges, E s , for which we have parallel data, and",
"(ii) zero-shot edges, E 0 , that correspond to zero-shot translation tasks.",
"Figure 2 presents an example translation graph with supervised edges ( En Es , En Fr , En Ru ) and zero-shot edges ( Es Fr , Es Ru , Fr Ru ).",
"We will use this graph as our running example.",
"First, consider a purely bilingual setting, where we learn to translate from a source language, L s , to a target language, L t .",
"We can train a translation model by optimizing the conditional log-likelihood of the bilingual data under the model: := arg max (cid:88) C st log P ( x t | x s ) (1) where are the estimated parameters of the model.",
"where f enc ( x s ) is the encoder that maps a source sequence to a sequence of latent representations, u , and the decoder defines P dec ( x t | u ) .",
"1 Note that u is usually deterministic with respect to x s and accurate representation of the conditional distribution highly depends on the decoder.",
"In neural machine translation, the exact forms of encoder and decoder are specified using RNNs (Sutskever et al., 2014), 1 Slightly abusing the notation, we use to denote all parameters of the model: embeddings, encoder, and decoder.",
"CNNs (Gehring et al., 2016), and attention (Bah-danau et al., 2014; Vaswani et al., 2017) as building blocks.",
"The decoding distribution, P dec ( x t | u ) , is typically modeled autoregressively.",
"In the multilingual setting, we would like to learn to translate in all directions having access to only few parallel bilingual corpora.",
"In other words, we would like to learn a collection of models, { P ( x j | x i ) } i,j E .",
"We can assume that models are independent and choose to learn them by maximizing the following objective: L ind ( ) = (cid:88) i,j E s (cid:88) ( x i , x j ) C ij log P ( x j | x i ) (3) In the statistics literature, this estimation approach is called maximum composite likelihood (Besag, 1975; Lindsay, 1988) as it composes the objective out of (sometimes weighted) terms that represent conditional sub-likelihoods (in our example, P ( x j | x i ) ).",
"Composite likelihoods are easy to construct and tractable to optimize as they do not require representing the full likelihood, which would involve integrating out variables unobserved in the data (see Appendix A.1).",
"Johnson et al. (2016) proposed to train a multilingual NMT systems by optimizing a composite likelihood objective (3) while representing all conditional distributions, P ( x j | x i ) , with a shared encoder and decoder and using language tags, l t , to distinguish between translation directions: P ( x t | x s ) = P dec ( x t | u st = f enc ( x s , l t )) (4) This approach has numerous advantages including:",
"(a) simplicity of training and the architecture (by slightly changing the training data, we convert a bilingual NMT into a multilingual one),",
"(b) sharing parameters of the model between different translation tasks that may lead to better and more robust representations.",
"Johnson et al. (2016) also show that resulting models seem to exhibit some degree of zero-shot generalization enabled by parameter sharing.",
"However, since we lack data for zero-shot directions, composite likelihood (3) misses the terms that correspond to the zero-shot models, and hence has no statistical guarantees for performance on zero-shot tasks.",
"2 2 In fact, since the objective (3) assumes that the models are independent, plausible zero-shot performance would be more indicative of the limited capacity of the model or artifacts in the data ( e.g. , presence of multi-parallel sentences) rather than zero-shot generalization.",
"Multilingual MT systems can be evaluated in terms of zero-shot performance , or quality of translation along the directions they have not been optimized for ( e.g. , due to lack of data).",
"We formally define zero-shot generalization via consistency.",
"Definition 1 (Expected Zero-shot Consistency) Let E s and E 0 be supervised and zero-shot tasks, respectively.",
"Let (cid:96) ( ) be a non-negative loss function and M be a model with maximum expected supervised loss bounded by some > 0 : max ( i,j ) E s E x i , x j [ (cid:96) ( M )] < We call M zero-shot consistent with respect to (cid:96) ( ) if for some ( ) > 0 max ( i,j ) E 0 E x i , x j [ (cid:96) ( M )] < ( ) , where ( ) 0 as 0 .",
"In other words, we say that a machine translation system is zero-shot consistent if low error on supervised tasks implies a low error on zero-shot tasks in expectation ( i.e. , the system generalizes).",
"We also note that our notion of consistency somewhat resembles error bounds in the domain adaptation literature (Ben-David et al., 2010).",
"In practice, it is attractive to have MT systems that are guaranteed to exhibit zero-shot generalization since the access to parallel data is always limited and training is computationally expensive.",
"While the training method of Johnson et al. (2016) does not have guarantees, we show that our proposed approach is provably zero-shot consistent.",
"We propose a new training objective for multilingual NMT architectures with shared encoders and decoders that avoids the limitations of pure composite likelihoods.",
"Our method is based on the idea of agreement-based learning initially proposed for learning consistent alignments in phrase-based statistical machine translation (SMT) systems (Liang et al., 2006, 2008).",
"In terms of the final objective function, the method ends up being reminiscent of distillation (Kim and Rush, 2016), but suitable for joint multilingual training.",
"To introduce agreement-based objective, we use the graph from Figure 2 that defines translation tasks between 4 languages ( En , Es , Fr , Ru ).",
"In particular, consider the composite likelihood objective (3) for a pair of En Fr sentences, ( x En , x Fr ) : L ind EnFr ( ) (5) = log [ P ( x Fr | x En ) P ( x En | x Fr )] = log (cid:88) z (cid:48) Es , z (cid:48) Ru P (cid:0) x Fr , z (cid:48) Es , z (cid:48) Ru | x En (cid:1) (cid:88) z (cid:48)(cid:48) Es , z (cid:48)(cid:48) Ru P (cid:0) x En , z (cid:48)(cid:48) Es , z (cid:48)(cid:48) Ru | x Fr (cid:1) where we introduced latent translations into Spanish ( Es ) and Russian ( Ru ) and marginalized them out (by virtually summing over all sequences in the corresponding languages).",
"Again, note that this objective assumes independence of En Fr and Fr En models.",
"Following Liang et al. (2008), we propose to tie together the single prime and the double prime latent variables, z Es and z Ru , to encourage agreement between P ( x En , z Es , z Ru | x Fr ) and P ( x Fr , z Es , z Ru | x En ) on the latent translations.",
"We interchange the sum and the product operations inside the log in (5), denote z := ( z Es , z Ru ) to simplify notation, and arrive at the following new objective function: L agree EnFr ( ) := (6) log (cid:88) z P ( x Fr , z | x En ) P ( x En , z | x Fr ) Next, we factorize each term as: P ( x , z | y ) = P ( x | z , y ) P ( z | y ) Assuming P ( x Fr | z , x En ) P ( x Fr | x En ) , 3 the objective (6) decomposes into two terms: L agree EnFr ( ) (7) log P ( x Fr | x En ) + log P ( x En | x Fr ) (cid:124) (cid:123)(cid:122) (cid:125) composite likelihood terms + log (cid:88) z P ( z | x En ) P ( z | x Fr ) (cid:124) (cid:123)(cid:122) (cid:125) agreement term 3 This means that it is sufficient to condition on a sentence in one of the languages to determine probability of a translation in any other language.",
"We call the expression given in (7) agreement-based likelihood .",
"Intuitively, this objective is the likelihood of observing parallel sentences ( x En , x Fr ) and having sub-models P ( z | x En ) and P ( z | x Fr ) agree on all translations into Es and Ru at the same time.",
"Lower bound.",
"Summation in the agreement term over z ( i.e. , over possible translations into Es and Ru in our case) is intractable.",
"Switching back from z to ( z Es , z Ru ) notation and using Jensen's inequality, we lower bound it with cross-entropy: 4 log (cid:88) z P ( z | x En ) P ( z | x Fr ) E z Es | x En [log P ( z Es | x Fr )] + (8) E z Ru | x En [log P ( z Ru | x Fr )] We can estimate the expectations in the lower bound on the agreement terms by sampling z Es P ( z Es | x En ) and z Ru P ( z Ru | x En ) .",
"In practice, instead of sampling we use greedy, continuous decoding (with a fixed maximum sequence length) that also makes z Es and z Ru differentiable with respect to parameters of the model.",
"We argue that models produced by maximizing agreement-based likelihood (7) are zero-shot consistent.",
"Informally, consider again our running example from Figure 2. Given a pair of parallel sentences in ( En , Fr ) , agreement loss encourages translations from En to { Es , Ru } and translations from Fr to { Es , Ru } to coincide.",
"Note that En { Es , Fr , Ru } are supervised directions.",
"Therefore, agreement ensures that translations along the zero-shot edges in the graph match supervised translations.",
"Formally, we state it as: Theorem 2 (Agreement Zero-shot Consistency) Let L 1 , L 2 , and L 3 be a collection of languages with L 1 L 2 and L 2 L 3 be supervised while L 1 L 3 be a zero-shot direction.",
"Let P ( x j | x i ) be sub-models represented by a multilingual MT system.",
"If the expected agreement-based loss, E x 1 , x 2 , x 3 [ L agree12 ( ) + L agree23 ( )] , is bounded by some > 0 , then, under some mild technical assumptions on the true distribution of the equivalent translations, the zero-shot cross-entropy 4 Note that expectations in (8) are conditional on x En .",
"loss is bounded as follows:",
"For discussion of the assumptions and details on the proof of the bound, see Appendix A.2.",
"Note that Theorem 2 is straightforward to extend from triplets of languages to arbitrary connected graphs, as given in the following corollary.",
"Corollary 3 Agreement-based learning yields zero shot consistent MT models (with respect to the cross entropy loss) for arbitrary translation graphs as long as supervised directions span the graph.",
"Alternative ways to ensure consistency.",
"Note that there are other ways to ensure zero-shot consistency, e.g. , by fine-tuning or post-processing a trained multilingual model.",
"For instance, pivoting through an intermediate language is also zero-shot consistent, but the proof requires stronger assumptions about the quality of the supervised source-pivot model.",
"5 Similarly, using model distillation (Kim and Rush, 2016; Chen et al., 2017) would be also provably consistent under the same assumptions as given in Theorem 2, but for a single, pre-selected zero-shot direction.",
"Note that our proposed agreement-based learning framework is provably consistent for all zero-shot directions and does not require any post-processing.",
"For discussion of the alternative approaches and consistency proof for pivoting, see Appendix A.3.",
"5 Intuitively, we have to assume that source-pivot model does not assign high probabilities to unlikely translations as the pivot-target model may react to those unpredictably.",
"Figure 3 : A. Computation graph for the encoder.",
"The representations depend on the input sequence and the target language tag.",
"B. Computation graph for the agreement loss.",
"First, encode source and target sequences with the auxiliary language tags.",
"Next, decode z Es from both x En and x Fr using continuous greedy decoder.",
"Finally, evaluate log probabilities, log P ( z Es ( x En ) | x Fr ) and log P ( z Es ( x Fr ) | x En ) , and compute a sample estimate of the agreement loss.",
"Having derived a new objective function (7), we can now learn consistent multilingual NMT models using stochastic gradient method with a couple of extra tricks (Algorithm 1).",
"The computation graph for the agreement loss is given in Figure 3. Subsampling auxiliary languages.",
"Computing agreement over all languages for each pair of sentences at training time would be quite computationally expensive (to agree on k translations, we would need to encode-decode the source and target sequences k times each).",
"However, since the agreement lower bound is a sum over expectations (8), we can approximate it by subsampling: at each training step (and for each sample in the mini-batch), we pick an auxiliary language uniformly at random and compute stochastic approximation of the agreement lower bound (8) for that language only.",
"This stochastic approximation is simple, unbiased, and reduces per step computational overhead for the agreement term from O ( k ) to O (1) .",
"6 Overview of the agreement loss computation.",
"Given a pair of parallel sentences, x En and x Fr , and an auxiliary language, say Es , an estimate of the lower bound on the agreement term (8) is computed as follows.",
"First, we concatenate Es language tags to both x En and x Fr and encode the sequences so that both can be translated into Es (the encoding 6 In practice, note that there is still a constant factor overhead due to extra encoding-decoding steps to/from auxiliary languages, which is about 4 when training on a single GPU. Parallelizing the model across multiple GPUs would easily compensate this overhead. process is depicted in Figure 3A).",
"Next, we decode each of the encoded sentences and obtain auxiliary translations, z Es ( x En ) and z Es ( x Fr ) , depicted as blue blocks in Figure 3B.",
"Note that we now can treat pairs ( x Fr , z Es ( x En )) and ( x En , z Es ( x Fr )) as new parallel data for En Es and Fr Es .",
"| using encoding-decoding with teacher forcing (same way as typically done for the supervised directions).",
"Crucially, note that z Es ( x En ) corresponds to a supervised direction, En Es , while z Es ( x Fr ) corresponds to zero-shot, Fr Es .",
"We want each of the components to",
"(i) improve the zero-shot direction while",
"(ii) minimally affecting the supervised direction.",
"To achieve",
"(i), we use continuous decoding, and for",
"(ii) we use stop-gradient-based protection of the supervised directions.",
"Both techniques are described below.",
"Greedy continuous decoding.",
"In order to make z Es ( x En ) and z Es ( x Fr ) differentiable with respect to (hence, continuous decoding), at each decoding step t , we treat the output of the RNN, h t , as the key and use dot-product attention over the embedding vocabulary, V , to construct z t Es : z t Es := softmax (cid:110) ( h t ) (cid:62) V (cid:111) V (10) In other words, auxiliary translations, z Es ( x En ) and z Es ( x Fr ) , are fixed length sequences of differentiable embeddings computed in a greedy fashion.",
"Protecting supervised directions.",
"Algorithm 1 scales agreement losses by a small coefficient .",
"We found experimentally that training could be sensitive to this hyperparameter since the agreement loss also affects the supervised sub-models.",
"For example, agreement of En Es (supervised) and Fr Es (zero-shot) may push the former towards a worse translation, especially at the beginning of training.",
"To stabilize training, we apply the stop gradient operator to the log probabilities and samples produced by the supervised sub-models before computing the agreement terms (9), to zero-out the corresponding gradient updates.",
"We evaluate agreement-based training against baselines from the literature on three public datasets that have multi-parallel evaluation data that allows assessing zero-shot performance.",
"We report results in terms of the BLEU score (Papineni et al., 2002) that was computed using mteval-v13a.perl .",
"UN corpus.",
"Following the setup introduced in Sestorain et al. (2018), we use two datasets, UNcorpus-1 and UNcorpus-2 , derived from the United Nations Parallel Corpus (Ziemski et al., 2016).",
"UNcorpus-1 consists of data in 3 languages, En , Es , Fr , where UNcorpus-2 has Ru as the 4th language.",
"For training, we use parallel corpora between En and the rest of the languages, each about 1M sentences, sub-sampled from the official training data in a way that ensures no multi-parallel training data.",
"The dev and test sets contain 4,000 sentences and are all multi-parallel.",
"Europarl v7 7 .",
"We consider the following languages: De , En , Es , Fr .",
"For training, we use parallel data between En and the rest of the languages (about 1M sentences per corpus), preprocessed to avoid multi-parallel sentences, as was also done by Cheng et al. (2017) and Chen et al. (2017) and described below.",
"The dev and test sets contain 2,000 multi-parallel sentences.",
"IWSLT17 8 .",
"We use data from the official multilingual task: 5 languages ( De , En , It , Nl , Ro ), 20 translation tasks of which 4 zero-shot ( De Nl and It Ro ) and the rest 16 supervised.",
"Note that this dataset has a significant 7 http://www.statmt.org/europarl/ 8 https://sites.google.com/site/ iwsltevaluation2017/TED-tasks overlap between parallel corpora in the supervised directions (up to 100K sentence pairs per direc-tion).",
"This implicitly makes the dataset multi-parallel and defeats the purpose of zero-shot evaluation (Dabre et al., 2017).",
"To avoid spurious effects, we also derived IWSLT17 (cid:63) dataset from the original one by restricting supervised data to only En { De , Nl , It , Ro } and removing overlapping pivoting sentences.",
"We report results on both the official and preprocessed datasets.",
"Preprocessing.",
"To properly evaluate systems in terms of zero-shot generalization, we preprocess Europarl and IWSLT (cid:63) to avoid multi-lingual parallel sentences of the form source-pivot-target , where source-target is a zero-shot direction.",
"To do so, we follow Cheng et al. (2017); Chen et al. (2017) and randomly split the overlapping pivot sentences of the original source-pivot and pivot-target corpora into two parts and merge them separately with the non-overlapping parts for each pair.",
"Along with each parallel training sentence, we save information about source and target tags, after which all the data is combined and shuffled.",
"Finally, we use a shared multilingual subword vocabulary (Sennrich et al., 2015b) on the training data (with 32K merge ops), separately for each dataset.",
"Data statistics are provided in Appendix A.5.",
"Models.",
"We use a smaller version of the GNMT architecture (Wu et al., 2016) in all our experiments: 512-dimensional embeddings (separate for source and target sides), 2 bidirectional LSTM layers of 512 units each for encoding, and GNMT-style, 4-layer, 512-unit LSMT decoder with residual connections from the 2nd layer onward.",
"Training.",
"We trained the above model using the standard method of Johnson et al. (2016) and using our proposed agreement-based training (Algo-rithm 1).",
"In both cases, the model was optimized using Adafactor (Shazeer and Stern, 2018) on a machine with 4 P100 GPUs for up to 500K steps, with early stopping on the dev set.",
"Evaluation.",
"We focus our evaluation mainly on zero-shot performance of the following methods:",
"(a) Basic , which stands for directly evaluating a multilingual GNMT model after standard training (Johnson et al., 2016).",
"Table 1 : Results on UNCorpus-1.",
"Table 2 : Results on UNCorpus-2.",
"(b) Pivot , which performs pivoting-based inference using a multilingual GNMT model (after standard training); often regarded as gold-standard.",
"(c) Agree , which applies a multilingual GNMT model trained with agreement losses directly to zero-shot directions.",
"To ensure a fair comparison in terms of model capacity, all the techniques above use the same multilingual GNMT architecture described in the previous section.",
"All other results provided in the tables are as reported in the literature.",
"Implementation.",
"All our methods were implemented using TensorFlow (Abadi et al., 2016) on top of tensor2tensor library (Vaswani et al., 2018).",
"Our code will be made publicly available.",
"9 6.3 Results on UN Corpus and Europarl UN Corpus.",
"Tables 1 and 2 show results on the UNCorpus datasets.",
"Our approach consistently outperforms Basic and Dual-0 , despite the latter being trained with additional monolingual data (Sestorain et al., 2018).",
"We see that models trained with agreement perform comparably to Pivot , outperforming it in some cases, e.g. , when the target is Russian, perhaps because it is quite 9 www.cs.cmu.edu/mshediva/code/ Previous work Our baselines Soft Distill Basic Pivot Agree En Es 34.69 34.69 33.80 En De 23.06 23.06 22.44 En Fr 31.40 33.87 33.87 32.55 Es En 31.96 34.77 34.77 34.53 De En 26.55 29.06 29.06 29.07 Fr En 33.67 33.67 33.30 Supervised (avg.) 31.52 31.52 30.95 Es De 18.23 20.14 20.70 De Es 20.28 26.50 22.45 Es Fr 30.57 33.86 27.99 32.56 30.94 Fr Es 27.12 32.96 29.91 De Fr 23.79 27.03 21.36 25.67 24.45 Fr De 18.57 19.86 19.15 Zero-shot (avg.) 22.25 26.28 24.60 Soft pivoting (Cheng et al., 2017).",
"Table 4 : Results on the official IWSLT17 multilingual task.",
"Table 5 : Results on our proposed IWSLT17 (cid:63)",
"Furthermore, unlike Dual-0 , Agree maintains high performance in the supervised directions (within 1 BLEU point compared to Basic ), indicating that our agreement-based approach is effective as a part of a single multilingual system.",
"Europarl.",
"Table 3 shows the results on the Europarl corpus.",
"On this dataset, our approach consistently outperforms Basic by 2-3 BLEU points but lags a bit behind Pivot on average (except on Es De where it is better).",
"Cheng et al. (2017) 10 and Chen et al. (2017) have reported zero-resource results on a subset of these directions and our approach outperforms the former but not the latter on these pairs.",
"Note that both Cheng et al. (2017) and Chen et al. (2017) train separate models for each language pair and the approach of Chen et al. (2017) would require training O ( k 2 ) models to encompass all the pairs.",
"In contrast, we use a single multilingual architecture which has more limited model capacity (although in theory, our approach is also compatible with using separate models for each direction).",
"10 We only show their best zero-resource result in the table since some of their methods require direct parallel data.",
"Figure 4 : BLEU on the dev set for Agree and the baselines trained on smaller subsets of the Europarl corpus.",
"Table 4 presents results on the original IWSLT17 task.",
"We note that because of the large amount of data overlap and presence of many supervised translation pairs (16) the vanilla training method (Johnson et al., 2016) achieves very high zero shot performance, even outperforming Pivot .",
"While our approach gives small gains over these baselines, we believe the dataset's pecularities make it not reliable for evaluating zero-shot generalization.",
"On the other hand, on our proposed preprocessed IWSLT17 (cid:63) that eliminates the overlap and reduces the number of supervised directions (8), there is a considerable gap between the supervised and zero-shot performance of Basic .",
"Agree performs better than Basic and is slightly worse than Pivot .",
"To better understand the dynamics of different methods in the small data regime, we also trained all our methods on subsets of the Europarl for 200K steps and evaluated on the dev set.",
"The training set size varied from 50 to 450K parallel sentences.",
"From Figure 4, Basic tends to perform extremely poorly while Agree is the most robust (also in terms of variance across zero-shot directions).",
"We see that Agree generally upper-bounds Pivot , except for the ( Es , Fr ) pair, perhaps due to fewer cascading errors along these directions.",
"In this work, we studied zero-shot generalization in the context of multilingual neural machine translation.",
"First, we introduced the concept of zero-shot consistency that implies generalization.",
"Next, we proposed a provably consistent agreement-based learning approach for zero-shot translation.",
"Empirical results on three datasets showed that agreement-based learning results in up to +3 BLEU zero-shot improvement over the Johnson et al. (2016) baseline, compares favorably to other approaches in the literature (Cheng et al., 2017; Sestorain et al., 2018), is competitive with pivoting, and does not lose in performance on supervised directions.",
"We believe that the theory and methodology behind agreement-based learning could be useful beyond translation, especially in multi-modal settings.",
"For instance, it could be applied to tasks such as cross-lingual natural language inference (Conneau et al., 2018), style-transfer (Shen et al., 2017; Fu et al., 2017; Prabhumoye et al., 2018), or multilingual image or video captioning.",
"Another interesting future direction would be to explore different hand-engineered or learned data representations, which one could use to encourage models to agree on during training ( e.g. , make translation models agree on latent semantic parses, summaries, or potentially other data representations available at training time).",
"We thank Ian Tenney and Anthony Platanios for many insightful discussions, Emily Pitler for the helpful comments on the early draft of the paper, and anonymous reviewers for careful reading and useful feedback."
]
| [
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"abstain",
"method",
"other",
"method",
"other",
"other",
"other",
"method",
"method",
"other",
"other",
"method",
"other",
"abstain",
"abstain",
"other",
"other",
"method",
"objective",
"method",
"other",
"other",
"method",
"other",
"other",
"abstain",
"abstain",
"other",
"other",
"other",
"method",
"other",
"other",
"abstain",
"other",
"other",
"method",
"other",
"other",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"objective",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"other"
]
|
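The greedy continuous decoding step (Eq. 10) and the stop-gradient protection of supervised directions described in the row above lend themselves to a compact illustration. The following PyTorch sketch is our own toy rendering under assumed shapes, not the paper's implementation: h_en and h_fr stand in for decoder states of the En-to-Es and Fr-to-Es runs of Figure 3B, and the scoring of the resulting soft translations (Eqs. 8-9) is only indicated in a comment.

```python
import torch
import torch.nn.functional as F

def greedy_continuous_decode(h_seq, V):
    """Eq. (10): z_t := softmax(h_t^T V) V. Each step emits a convex
    combination of embedding rows, so the soft translation z remains
    differentiable with respect to the model parameters."""
    probs = F.softmax(h_seq @ V.T, dim=-1)  # (T, vocab): attention over the vocabulary
    return probs @ V                        # (T, d): soft token embeddings

# Toy shapes (assumptions): a shared embedding table and decoder states (keys).
vocab, d, T = 32, 8, 5
V = torch.randn(vocab, d, requires_grad=True)
h_en = torch.randn(T, d, requires_grad=True)  # decoder states for En -> Es (supervised)
h_fr = torch.randn(T, d, requires_grad=True)  # decoder states for Fr -> Es (zero-shot)

# The supervised side is protected with stop-gradient (detach); the
# zero-shot side stays differentiable so the agreement term can update it.
z_es_from_en = greedy_continuous_decode(h_en, V).detach()
z_es_from_fr = greedy_continuous_decode(h_fr, V)
# Each soft translation would then be scored under the opposite-direction
# sub-model with teacher forcing to form the sampled lower bound of Eq. (8).
```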
[
"This paper studies the relative importance of attention heads in Transformer-based models to aid their interpretability in cross-lingual and multi-lingual tasks.",
"Prior research has found that only a few attention heads are important in each mono-lingual Natural Language Processing (NLP) task and pruning the remaining heads leads to comparable or improved performance of the model.",
"However, the impact of pruning attention heads is not yet clear in cross-lingual and multi-lingual tasks.",
"Through extensive experiments, we show that (1) pruning a number of attention heads in a multilingual Transformer-based model has, in general, positive effects on its performance in cross-lingual and multi-lingual tasks and (2) the attention heads to be pruned can be ranked using gradients and identified with a few trial experiments.",
"Our experiments focus on sequence labeling tasks, with potential applicability on other cross-lingual and multi-lingual tasks.",
"For comprehensiveness, we examine two pre-trained multi-lingual models, namely multi-lingual BERT (mBERT) and XLM-R, on three tasks across 9 languages each.",
"We also discuss the validity of our findings and their extensibility to truly resource-scarce languages and other task settings.",
"Prior research on mono-lingual Transformer-based (Vaswani et al., 2017) models reveals that a subset of their attention heads makes key contributions to each task, and the models perform comparably well (Voita et al., 2019; Michel et al., 2019) or even better (Kovaleva et al., 2019) with the remaining heads pruned 1 .",
"While multi-lingual TransformerEqual contribution.",
"Work done when interning at the Minds, Machines, and Society Lab at Dartmouth College.",
"1 We regard single-source machine translation as a monolingual task since the inputs to the models are mono-lingual.",
"based models, e.g. mBERT (Devlin et al., 2019) and XLM-R (Conneau et al., 2020), are widely applied in cross-lingual and multi-lingual NLP tasks 2 (Wang et al., 2019; Keung et al., 2019; Eskander et al., 2020), no attempt has been made to extend the findings on the aforementioned mono-lingual research to this context.",
"In this paper, we explore the roles of attention heads in cross-lingual and multi-lingual tasks for two reasons.",
"First, better understanding and interpretability of Transformer-based models leads to efficient model designs and parameter tuning.",
"Second, head-pruning makes Transformer-based models more applicable to truly resource-scarce languages if it does not negatively affect model performance significantly.",
"The biggest challenge we face when studying the roles of attention heads in cross-lingual and multi-lingual tasks is locating the heads to prune.",
"Existing research has shown that each attention head is specialized to extract a collection of linguistic features, e.g., the middle layers of BERT mainly extract syntactic features (Vig and Belinkov, 2019; Hewitt and Manning, 2019) and the fourth head on the fifth layer of BERT greatly contributes to the coreference resolution task (Clark et al., 2019).",
"Thus, we hypothesize that important feature extractors for a task should be shared across languages and the remaining heads can be pruned.",
"We evaluate two approaches used to rank attention heads, the first of which is layer-wise relevance propagation (LRP, Ding et al. (2017)).",
"Voita et al. (2019) interpreted the adaptation of LRP in Transformer-based models on machine translation.",
"Motivated by Feng et al. (2018) and Serrano and Smith (2019), we design a second ranking method based on gradients since the gradients on each attention head 2 We define a cross-lingual task as a task whose test set is in a different language from its training set.",
"A multi-lingual task is a task whose training set is multi-lingual and the languages of its test set belong to the languages of the training set.",
"We study the effects of pruning attention heads on three sequence labeling tasks, namely part-of-speech tagging (POS), named entity recognition (NER), and slot filling (SF).",
"We focus on sequence labeling tasks since they are more difficult to annotate than documentor sentence-level classification datasets and require more treatment in cross-lingual and multi-lingual research.",
"We choose POS and NER datasets in 9 languages, where English (EN), Chinese (ZH), and Arabic (AR) are candidate source languages.",
"The MultiAtis++ corpus (Xu et al., 2020) is used in the SF evaluations with EN as the source language.",
"We do not include syntactic chunking and semantic role labeling tasks due to lack of availability of manually written and annotated corpora.",
"In these experiments, we rank attention heads based only on the source language(s) to ensure the extensibility of the learned knowledge to cross-lingual tasks and resource-poor languages.",
"In our preliminary experiments comparing the gradient-based method and LRP, the average F1 score improvements on NER with mBERT are 0.69 (cross-lingual) and 0.24 (multi-lingual) for LRP and 0.81 (cross-lingual) and 0.31 (multi-lingual) for the gradient-based method, though both methods rank attention heads similarly.",
"Thus we choose the gradient-based method to rank attention heads in all our experiments.",
"Our evaluations confirm that only a subset of attention heads in each Transformer-based model makes key contributions to each cross-lingual or multi-lingual task and that these heads are shared across languages.",
"Performance of models generally drop when the highest-ranked or randomly selected heads are pruned, validating the head rankings generated by our gradient-based method.",
"We also observe performance improvements on tasks with multiple source languages by pruning attention heads.",
"Our findings potentially apply to truly resource-scarce languages since we show that the models perform better with attention heads pruned when fewer training instances are available in the target languages.",
"The contributions of this paper are three-fold: We explore the roles of attention heads in multilingual Transformer-based models and find that pruning certain heads leads to comparable or better performance in cross-lingual and multilingual sequence labeling tasks.",
"We adapt a gradient-based method to locate atten-LC Language Family Training Size POS NER EN IE, Germanic 12,543 14,987 DE IE, Germanic 13,814 12,705 NL IE, Germanic 12,264 15,806 AR Afro-Asiatic, Semitic 6,075 1,329 HE Afro-Asiatic, Semitic 5,241 2,785 ZH Sino-Tibetan 3,997 20,905 JA Japanese 7,027 800 UR IE, Indic 4,043 289,741 FA IE, Iranian 4,798 18,463 Table 1: Details of POS and NER datasets in our experiments.",
"tion heads that can be pruned without exhaustive experiments on all possible combinations.",
"We show the correctness, robustness, and extensibility of the findings and our head ranking method under a wide range of settings through comprehensive experiments.",
"We use human-written and manually annotated datasets in experiments to avoid noise from machine translation and automatic label projection.",
"We choose POS and NER datasets in 9 languages, namely EN, ZH, AR, Hebrew (HE), Japanese (JA), Persian (FA), German (DE), Dutch (NL), and Urdu (UR).",
"As Table 1 shows, these languages fall in diverse language families and the datasets are very different in size.",
"EN, ZH, and AR are used as candidate source languages since they are resource-rich in many NLP tasks.",
"Our POS datasets are all from Universal Dependencies (UD) v2.7 3 .",
"These datasets are labeled with a common label set containing 17 POS tags.",
"For NER, we use NL, EN, and DE datasets from CoNLL-2002 and 2003 challenges (Tjong Kim Sang, 2002; Tjong Kim Sang and De Meulder, 2003).",
"Additionally, we use the People's Daily dataset 4 , iob2corpus 5 , AQMAR (Mohit et al., 2012), ArmanPerosNERCorpus (Poostchi et al., 2016), MK-PUCIT (Kanwal et al., 2020), and a news-based NER dataset (Mordecai and Elhadad, 2012) for the languages CN, JA, AR, FA, UR, and 3 http://universaldependencies.org/ 4 http://github.com/OYE93/Chinese-NLP-C orpus/tree/master/NER/People'sDaily 5 http://github.com/Hironsan/IOB2Corpus HE, respectively.",
"Since the NER datasets are individually constructed in each language, their label sets do not fully agree.",
"As there are four NE types (PER, ORG, LOC, MISC) in the three source-language datasets, we merge other NE types into the MISC class to allow cross-lingual evaluations.",
"We evaluate SF models on MultiAtis++ with EN as the source language and Spanish (ES), Portuguese (PT), DE, French (FR), ZH, JA, Hindi (HI), and Turkish (TR) as target languages.",
"There are 71 slot types in the TR dataset, 75 in the HI dataset, and 84 in the other datasets.",
"We do not use the intent labels in our evaluations since we study only sequence labeling tasks.",
"Thus our results are not directly comparable with Xu et al. (2020).",
"Here, we introduce the gradient-based method we use in the experiments to rank the attention heads.",
"Feng et al. (2018) claim that gradients measure the importance of features to predictions.",
"Since each head functions similarly as a standalone feature extractor in a Transformer-based model, we use gradients to approximate the importance of the feature set extracted by each head and rank the heads accordingly.",
"Michel et al. (2019) determine importance of heads with accumulated gradients at each head in a training epoch.",
"Different from their approach, we fine-tune the model on the training set and rank the heads using gradients on the development set to ensure that the head importance rankings are not significantly correlated with the training instances in one source language.",
"Specifically, our method generates head rankings for each language in three steps: (1) We fine-tune a Transformer-based model on a mono-lingual task for three epochs.",
"(2) We re-run the fine-tuned model on the development partition of the dataset with back-propagation but not parameter updates to obtain gradients.",
"(3) We sum up the absolute gradients on each head, layer-wise normalize the accumulated gradients, and scale them into the range [0, 1] globally.",
"We show Spearman's rank correlation coefficients (Spearman's ) between head rankings of each language pair generated by our method on POS, NER, and SF in Figure 1.",
"The highest-ranked heads largely overlap in all three tasks, while the rankings of unimportant heads vary more in mBERT than XLM-R.",
"After ranking the attention heads, we fine-tune the model, with the lowest-ranked head in the source language pruned.",
"We keep increasing the number of heads to prune until it reaches a preset limit or when the performance starts to drop.",
"We limit the number of trials to 12 since the models mostly show improved performance within 12 attempts 6 .",
"This section displays and explains experimental results on cross-lingual and multi-lingual POS, NER, and SF tasks.",
"Training sets in target languages are not used to train the model under the cross-lingual setting.",
"Our experiments are based on the Hugging-face (Wolf et al., 2020) implementations of mBERT 6 On average 7.52 and 6.58 heads are pruned for POS, 7.54 and 7.28 heads for NER, and 6.19 and 6.31 heads for SF, respectively in mBERT and XLM-R models.",
"and XLM-R.",
"Specifically, we use the pre-trained bert-base-multilingual-cased and xlm-roberta-base models for their comparable model sizes.",
"The models are fine-tuned for 3 epochs with a learning rate of 5e-5 in all the experiments.",
"We use the official dataset splits and load training instances with sequential data samplers, so the reported evaluation scores are robust to randomness.",
"Table 2 shows the evaluation scores on POS with three source language choices.",
"In the majority (88 out of 96 pairs) of experiments, pruning up to 12 attention heads improves mBERT and XLM-R performance.",
"Results are comparable in the other 8 experiments with and without head pruning.",
"Average F-1 score improvements are 0.91 for mBERT and 1.78 for XLM-R in cross-lingual tasks, and 0.15 for mBERT and 0.17 for XLM-R in multilingual tasks.",
"These results support that pruning heads generally has positive effects on model performance in cross-lingual and multi-lingual tasks, and that our method correctly ranks the heads.",
"Consistent with Conneau et al. (2020), XLM-R usually outperforms mBERT, with exceptions in cross-lingual experiments where ZH and JA datasets are involved.",
"Word segmentation in ZH and JA is different from the other languages we choose, e.g. words are not separated by white spaces and unpaired adjacent word pieces often make up a new word.",
"As XLM-R applies the SentencePiece tokenization method (Kudo and Richardson, 2018), it is more likely to detect wrong word boundaries and make improper predictions than mBERT in cross-lingual experiments involving ZH or JA datasets.",
"We note that the performance improvements are solid regardless of the SL TL mBERT XLM-R Unpruned Pruned Unpruned Pruned CrLing MulLing CrLing MulLing CrLing MulLing CrLing MulLing EN ZH 47.64 93.24 51.61 93.71 29.97 90.99 32.33 91.11 AR 38.81 70.55 38.93 73.32 41.21 71.77 43.78 74.28 FA 40.12 96.70 39.81 96.97 54.90 96.62 55.72 96.98 DE 56.43 79.11 58.27 79.19 63.71 82.31 66.48 83.10 HE 46.92 89.18 46.55 88.49 56.96 88.02 56.87 89.67 JA 42.45 84.91 44.14 84.34 33.87 81.48 37.88 82.35 NL 64.51 84.90 65.56 85.17 77.15 90.21 77.66 90.38 UR 37.34 99.29 40.60 99.22 58.25 99.15 58.68 99.07 ZH EN 38.58 87.65 41.40 87.99 56.40 90.72 58.55 91.05 AR 36.43 72.27 36.99 72.86 34.31 74.84 36.11 75.68 FA 45.68 96.21 46.57 96.23 51.60 95.63 51.51 95.66 DE 29.07 79.04 33.81 78.67 56.22 82.33 55.51 82.54 HE 47.14 88.20 47.68 89.35 48.52 85.95 48.94 87.79 JA 49.21 82.02 51.69 83.20 46.18 80.19 47.06 82.63 NL 29.75 84.61 31.46 85.28 49.59 89.56 52.27 90.56 UR 44.61 99.26 46.33 99.28 48.98 98.99 55.95 99.10 AR EN 19.29 87.86 20.07 87.82 51.33 90.37 51.00 91.01 ZH 41.70 93.46 40.43 93.54 25.78 90.51 31.03 91.00 FA 46.57 96.82 46.87 96.87 53.35 96.55 52.60 96.74 DE 24.47 75.78 25.62 78.04 50.87 82.63 50.00 82.73 HE 47.15 86.77 46.72 87.64 49.52 87.37 50.85 89.28 JA 41.49 79.90 42.11 83.17 36.98 81.72 38.87 80.92 NL 26.00 84.83 26.34 85.24 49.27 90.73 48.87 91.11 UR 46.47 99.26 45.66 99.31 48.48 99.10 53.51 99.15 Table 3: F-1 scores of mBERT and XLM on NER.",
"source language selection and severe differences of training data sizes in EN, ZH, and AR.",
"This demonstrates the correctness of the head rankings our method generates and that the important attention heads for a task are almost language invariant.",
"We also examine to what extent the score improvements are affected by the relationships between source and target languages, e.g. language families, URIEL language distance scores (Littell et al., 2017), and the similarity of the head ranking matrices.",
"There are three non-exclusive clusters of language families (containing more than one language) in our choice of languages, namely Indo-European (IE), Germanic, and Semitic languages.",
"Average score improvements between models with and without head pruning are 0.40 (IE), 0.16 (Ger-manic), and 0.91 (Semitic) for mBERT and 0.19 (IE), 0.18 (Germanic), and 0.19 (Semitic) for XLM-R.",
"In comparison, the overall average score improvements are 0.53 for mBERT and 0.97 for XLM-R.",
"Despite the generally higher performance of models when the source and target languages are in the same family, the score improvements by pruning heads are not necessarily associated with language families.",
"Additionally, we use Spearman's to measure the correlations between improved F-1 scores and URIEL language distances.",
"The correlation scores are 0.11 (cross-lingual) and 0.12 (multi-lingual) for mBERT, and -0.40 (cross-lingual) and 0.23 (multi-lingual) for XLM-R.",
"Similarly, the Spearman's between score improvements and similarities in head ranking matrices shown in Figure 1 are -0.34 (cross-lingual) and 0.25 (multi-lingual) for mBERT, and -0.52 (cross-lingual) and -0.10 (multi-lingual) for XLM-R.",
"This indicate that except in the cross-lingual XLM-R model which faces word segmentation issues on ZH or JA experiments, pruning attention heads SL TL mBERT XLM-R Unpruned Pruned Unpruned Pruned CrLing MulLing CrLing MulLing CrLing MulLing CrLing MulLing EN ZH 69.83 94.11 71.84 94.25 62.58 93.97 67.98 94.29 DE 60.69 94.60 66.97 94.95 82.85 94.81 83.50 95.35 HI 44.28 85.93 45.84 87.08 58.32 86.72 66.39 87.16 FR 60.44 93.96 67.13 94.18 76.53 93.51 77.59 93.77 ES 72.27 87.71 73.96 88.17 81.70 89.10 81.88 88.83 JA 68.28 93.73 68.32 93.78 32.39 93.65 36.68 93.71 PT 59.37 90.83 63.23 90.82 77.42 90.76 77.54 91.24 TR 28.11 83.41 32.21 84.31 45.91 83.20 52.64 84.30 EN 95.43 95.27 94.59 94.87 Table 4: Slot F-1 scores on the MultiAtis++ corpus.",
"improves model performance regardless of the distances between source and target languages.",
"Thus our findings are potentially applicable to all cross-lingual and multi-lingual POS tasks.",
"As Table 3 shows, pruning attention heads generally has positive effects on our cross-lingual and multi-lingual NER models.",
"Even in the multilingual AR-UR experiment where the full mBERT model achieves an F-1 score of 99.26, the score is raised to 99.31 by pruning heads.",
"Scores are comparable with and without head pruning in the 19 cases where model performances are not improved.",
"This also lends support to the specialized role of important attention heads and the consistency of head rankings across languages.",
"In NER experiments, performance drops mostly happen when the source and target languages are from different families.",
"This is likely caused by the difference between named entity (NE) representations across language families.",
"We show in Section 5.2 that the gap is largely bridged when a language from the same family as the target language is added to the source languages.",
"Average score improvements are comparable on mBERT (0.81 under cross-lingual and 0.31 under multi-lingual settings) and XLM-R (1.08 under cross-lingual and 0.67 under multi-lingual settings) in NER experiments.",
"The results indicate that the performance improvements introduced by head-pruning are not sensitive to the pre-training corpora of models.",
"The correlations between F-1 score improvements and URIEL language distances are small, with Spearman's of -0.05 (cross-lingual) and -0.27 (multi-lingual) for mBERT and 0.10 (cross-lingual) and 0.12 (multi-lingual) for XLM-R.",
"Similarities between head ranking matrices do not greatly affect score improvements either, the Spearman's of which are -0.08 (cross-lingual) and 0.06 (multi-lingual) for mBERT and 0.05 (cross-lingual) and 0.12 (multi-lingual) for XLM-R.",
"The findings in POS and NER experiments are consistent, supporting our hypothesis that important heads for a task are shared by arbitrary source-target language selections.",
"We report SF evaluation results in Table",
"4. In 31 out of 34 pairs of experiments, pruning up to 12 heads results in performance improvements, while the scores are comparable in the other three cases.",
"These results agree with those in POS and NER experiments, showing that only a subset of heads in each model makes key contributions to cross-lingual or multi-lingual tasks.",
"We also evaluate the correlations between score changes and the closeness of source and target languages.",
"In terms of URIEL language distances, the Spearman's are 0.69 (cross-lingual) and 0.14 (multi-lingual) for mBERT and -0.59 (cross-lingual) and 0.14 (multi-lingual) for XLM-R.",
"The coefficients are -0.25 (cross-lingual) and -0.73 (multi-lingual) for mBERT and -0.70 (cross-lingual) and -0.14 (multi-lingual) between score improvements and similarities in head ranking matrices.",
"While these coefficients are generally higher than those in POS and NER evaluations, their p-values are also high (0.55 to 0.74), indicating the correlations between the score changes and source-NER Max-Pruning Rand-Pruning TL CrLing MulLing CrLing MulLing ZH -1.74 +0.08 -2.44 +0.26 AR -3.17 -2.42 -2.09 -0.43 DE +0.88 -0.62 +0.57 -0.38 NL -2.76 -0.23 +0.29 +0.36 FA -0.86 -0.31 -2.52 -0.74 HE -2.50 -2.15 -0.49 -4.21 JA -1.48 -1.08 -2.65 -2.40 UR -0.15 -0.10 -0.60 -0.12 POS Max-Mask Rand-Mask TL CrLing MulLing CrLing MulLing ZH +0.03 -0.39 -0.14 -0.20 AR -0.65 -0.04 -0.66 -0.12 DE -0.64 -0.04 -0.64 -0.14 NL -0.13 -0.13 -0.11 -0.16 FA -0.75 -0.03 -0.53 -0.25 HE -1.27 -0.28 -1.06 +0.05 JA -22.29 -0.05 -1.23 -0.05 UR -1.78 -0.11 -0.77 -0.07 Table 5: F-1 score differences from the full mBERT model on NER (upper) and POS (lower) by pruning highest ranked (Max-Pruning) or random (Rand-Pruning) heads in the ranking matrices.",
"target language closeness are not statistically sig-nificant.",
"7 5 Discussions In this section, we perform case studies to confirm the validity of our head ranking method.",
"We also illustrate the extensibility of the knowledge we learn from the main experiments to a wider range of settings, e.g. when the training dataset is limited in size or constructed over multiple source languages.",
"We evaluate the correctness of our head ranking method through comparisons between results in Tables 2 and 3 and those produced by pruning (1) randomly sampled heads and (2) highest ranked heads.",
"Specifically, we repeat the head-pruning experiments with mBERT on NER and POS using 7 The p-values for all the other Spearman's we report are lower than 0.01, showing that those correlation scores are statistically significant.",
"EN as the source language and display the score differences from the the full models in Table",
"5. Same as in the main experiments, we pick the best score from pruning 1 to 12 heads in each experiment.",
"A random seed of 42 is used for sampling attention heads to prune under the random sampling setting.",
"In 14 out of 16 NER experiments, pruning the heads ranked highest by our method results in noticeable performance drops compared to the full model.",
"Consistently, pruning the highest-ranked attention heads harms the performance of mBERT in 15 out of 16 POS experiments.",
"Though score changes are slightly positive for cross-lingual ENDE and multi-lingual EN-ZH NER tasks and in the cross-lingual EN-ZH POS experiment, improvements introduced by pruning lowest-ranked heads are more significant, as Table 2 and Table 3 show.",
"Pruning random attention heads also has mainly negative effects on the performance of mBERT.",
"These results indicate that while pruning attention heads potentially boosts the performance of models, reasonably choosing the heads to prune is important.",
"Our gradient-based method properly ranks the heads by their priority to prune.",
"Training cross-lingual models on multiple source languages is a practical way to improve their performance, due to enlarged training data size and supervision from source-target languages closer to each other (Wu et al., 2020; Moon et al., 2019; Chen et al., 2019; Rahimi et al., 2019; Tackstrom, 2012).",
"We also explore the effects of pruning attention heads under the multi-source settings.",
"In this section, we experiment with mBERT on EN, DE, AR, HE, and ZH datasets for both NER and POS tasks.",
"These languages fall into three mutually exclusive language families, enabling our analysis on the influence of training cross-lingual models with source languages belonging to the same family as the target language.",
"Similar to related research, the model is fine-tuned on the concatenation of training datasets in all the languages but the one on which the model is tested.",
"Since the head ranking matrices are not identical across languages, we design three heuristics to rank the heads in the multi-source experiments.",
"The first method merges the head ranking matrices of all the source languages into one matrix and re-generates the rankings.",
"The second method ranks the attention heads after summing up the head ranking 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 target data usage 68.8 70.6 72.4 74.2 76.0 77.8 F1 prunedunpruned",
"matrices.",
"We also examine the efficacy of pruning heads based on the head rankings from a single language.",
"For this heuristic, we run experiments using the head ranking matrix from each language and report the highest score.",
"We refer to the three heuristics as MD, SD, and EC, respectively.",
"Table 6 displays the results.",
"We note that in the NER evaluations, the performance of mBERT on all the languages but ZH are higher than those in the single-source experiments.",
"This supports our hypothesis that supervision from languages in the same family as the target language helps improve model performance.",
"Different from NER, the evaluation results on POS are not much higher than the single-source evaluation scores, implying that syntactic features are more consistent across languages than appearances of named entities.",
"However, it is consistent on both tasks that pruning attention heads brings performance boosts to all the multi-source experiments.",
"While the EC heuristic provides the largest improvement margin in 3 out of 5 experiments, it requires a lot more trial experiments.",
"MD and SD perform comparably well in most cases so they are also promising heuristics for ranking attention heads under the multi-source setting.",
"The results support that pruning attention heads is beneficial to Transformer-based models in cross-lingual tasks even if the training dataset is already large and diverse in languages.",
"While the languages we use in the main experiments are not truly resource-poor, we examine our findings when training sets in the target languages are smaller.",
"We design experiments under the multilingual setting with subsampled training datasets in target languages.",
"Specifically, we randomly divide the training set of each target language into 10 disjoint subsets and compare model performance, with and without head pruning, using 1 to 9 sub-0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 target data usage 94.0 94.4 94.8 95.2 95.6 96.0 F1 prunedunpruned",
"sets.",
"We do not use 0 or 10 subsets since they correspond to cross-lingual and fully multi-lingual settings, respectively.",
"We run the evaluations on NER and POS tasks.",
"These datasets vary greatly in size, allowing us to validate our findings on target-language datasets with as few as 80 training examples.",
"The UR NER dataset is excluded from this case study since its training set is overly large.",
"We note that the score differences with and without head pruning are, in the main experiments, consistent for all the choices of models and source languages.",
"Thus, we only display the mBERT performance with EN as the source language on NER in Figure 2 and that on POS in Figure 3.",
"The evaluation results are consistent with those in our main experiments, where the model with up to 12 attention heads pruned generally outperforms the full mBERT model.",
"This further supports our hypothesis that pruning lower-ranked attention heads has positive effects on the performance of Transformer-based models in truly resource-scarce languages.",
"It is also worth noting that pruning attention heads often causes the mBERT model to reach peak evaluation scores with less training data in the target language.",
"For example, in the EN-JA NER experiments, the full model achieves the highest F-1 score when all the 800 training instances in the JA dataset are used while the model with heads pruned achieves a comparable score with 20% less data.",
"This suggests that pruning attention heads makes deep Transformer-based models easier to train with less training data and thus more applicable to truly resource-poor languages.",
"This paper studied the contributions of attention heads in Transformer-based models.",
"Past research has shown that in mono-lingual tasks, pruning a large number of attention heads can achieve comparable or higher performance than the full models.",
"However, we were the first to extend these findings to cross-lingual and multi-lingual sequence labeling tasks.",
"Using a gradient-based method, we identified the heads to prune and showed that pruning attention heads generally has positive effects on mBERT and XLM-R performances.",
"Additional case studies empirically demonstrated the validity of our findings and showed further extensibility of them to a wider range of task settings.",
"In addition to better understanding of Transformer-based models under crossand multi-lingual settings, our findings can be applied to existing models to achieve better performance with reduced training data and resource consumption.",
"Future work could include improving model interpretability in other cross-lingual and multi-lingual tasks, e.g. XNLI (Conneau et al., 2018) and other passage-level classification tasks."
]
| [
"method",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"result",
"objective",
"result",
"abstain"
]
|
[
"Recently, the retrieval models based on dense representations have been gradually applied in the first stage of the document retrieval tasks, showing better performance than traditional sparse vector space models.",
"To obtain high efficiency, the basic structure of these models is Bi-encoder in most cases.",
"However, this simple structure may cause serious information loss during the encoding of documents since the queries are agnostic.",
"To address this problem, we design a method to mimic the queries on each of the documents by an iterative clustering process and represent the documents by multiple pseudo queries (i.e., the cluster cen-troids).",
"To boost the retrieval process using approximate nearest neighbor search library, we also optimize the matching function with a two-step score calculation procedure.",
"Experimental results on several popular ranking and QA datasets show that our model can achieve state-of-the-art results.",
"Given a query and a collection of documents, the document retrieval task is to rank the documents based on their relevance with the query.",
"To retrieve the target documents efficiently, most existing work adopts a two-stage fashion which retrieves a subset of candidate documents from the whole corpus by a recall model and then re-rank them by a sophisticated ranking model.",
"In the first stage, many approaches use traditional information retrieval methods including BM25 based on sparse bag-of-word representation.",
"Since the recall of the first-stage model determines the upper bound of the ranking quality, there is lots of work focusing on improving the recall performance(Dai and Callan, 2019; Nogueira et al., 2020; Nogueira and Lin, 2020).",
"In contrast to the sparse representations, dense representations encoding semantic information can enhance the retrieval performance by overcoming the limitations like term mismatching.",
"They are usually produced by neural encoders whose parameters are learnable.",
"Recently, inspired by the great success of pre-trained language models like BERT/RoBERTa(Devlin et al., 2018; Liu et al., 2019) in NLP applications, the dense passage retriever is proposed which encodes the documents by fine-tuning the huge language models (Karpukhin et al., 2020) and achieves state-of-the-art results benefiting from their powerful contextual semantic representative ability.",
"Following the typical fine-tuning paradigm on many NLP tasks(Devlin et al., 2018), a BERT encoder usually takes the concatenation of the query and document text as input and performs a full self-attention across the input tokens.",
"Such architecture is called Cross-encoder (Humeau et al., 2019).",
"Although it can achieve better performance than other architectures, it is infeasible in the recall stage since it needs to recompute the representation of each document in the corpus once a new query is provided.",
"In contrast, Bi-encoder(Humeau et al., 2019) encodes the queries and documents separately and computes the matching scores between their dense representations.",
"Since the documents in the corpus keep unchanged most of the time, the representation of the documents can be stored in advance for future use.",
"With the help of Approximate Nearest Neighbor (ANN) search approaches(Johnson et al., 2017), the retrieval process can be further boosted.",
"Although gaining retrieval efficiency, Bi-encoder sacrifices retrieval accuracy comparing to the Cross-encoder.",
"To enrich the representations of the documents produced by Bi-encoder, some researchers extend the original Bi-encoder by employing more delicate structures like later-interaction(Khattab and Zaharia, 2020), Poly-Encoder(Humeau et al., 2019), multi-embedding model(Luan et al., 2020).",
"Increasing a little computational overhead, these models can gain much improvement of the encoding quality while remaining the fast retrieval characteristic of Bi-encoder.",
"Similar to these models, we focus on improving the effectiveness of Bi-encoder.",
"In this work, we think that the limitation of the Bi-encoder origins from the agnostic nature of query when encoding the documents independently, i.e., the encoder cannot know what query could be potentially answered by the input document.",
"As it is very common that a document with hundreds of tokens contains several distinct topics, some important semantic information might be easily missed or biased by each other without knowing the query.",
"To alleviate the query agnostic problem, we propose a novel approach that mimics multiple potential queries corresponding to the input document and we call them pseudo query embeddings.",
"Ideally, each of the pseudo query embeddings corresponds to a semantic salient fragment in the document which is similar to a semantic cluster of the document.",
"Thus, we implement the process by a clustering algorithm (i.e., K-means in this work) and regard the cluster centroids as the pseudo query embeddings.",
"We generate and store all of the embeddings in an offline manner, thereby not only improving the encoding quality but also remaining the online computation unchanged.",
"During the inference, the multiple pseudo query embeddings should be first aggregated through a softmax function and then the relevance score with the query embedding is computed.",
"Unfortunately, directly applying softmax aggregation is not supported in the existing ANN search library.",
"Thus, we first filter some documents in which all of the embeddings have low relevance scores and then perform the whole aggregation and score function using the filtered embeddings.",
"We propose a novel approach to represent the document with multiple pseudo query embeddings which are generated by a clustering process.",
"We modify the embedding aggregation during the inference in order to directly utilize the off-the-shelf ANN search library.",
"and OpenQA datasets.",
"Experimental results show that our approach achieves state-of-the-art retrieval performance while still remaining efficient computation.",
"An in-depth analysis on gradients shows how the cluster centroids improve the performance.",
"In this section, we will review the existing work related with the first-stage retrieval.",
"According to the representations of text, the first stage retrieval approaches can be classified into two categories.",
"One is based on the high-dimensional sparse representation and the other is based on the low-dimensional continuous representation.",
"Traditional sparse vector space models weight the terms by their frequency information.",
"In last few years, some researchers intend to weight the document and query terms adaptively by a neural network which could leverage some semantical information (Dehghani et al., 2017; Zheng and Callan, 2015).",
"Recently, a trend of leveraging the deep pre-trained language models to weight or augment the document/query terms is emerged.",
"DeepCT(Dai and Callan, 2019) uses BERT to learn the term importance and weight all of the terms.",
"DocT5query(Nogueira and Lin, 2020) augments the document with possible query terms which are generated by a sequence-to-sequence model.",
"In contrast, the dense retrieval approaches map the text to continuous vectors which are mostly generated by neural networks.",
"Models like DSSM(Huang et al., 2013),CLSM(Shen et al., 2014), DESM(Mitra et al., 2016) encode the query and document using their n-gram features or word embeddings independently and then compute their similarities.",
"Recently, the dense retrieval approaches also tend to make use of the pre-trained language models.",
"Sentence-BERT(Reimers and Gurevych, 2019) is a typical Bi-encoder model which encodes the text using BERT and calculates the similarity scores by the combination of several basic operations.",
"Inspired by the interaction-based neural re-rankers, Khattab and Zaharia(2020) propose a later-interaction mechanism.",
"Later on, some variants(Gao et al., 2020; Chen et al., 2020) are proposed.",
"Xiong et",
"al.(2020) identify that the negative samples during training may not be representative, lowering the training difficulty.",
"Therefore, they propose a model to construct hard negative samples dynamically during training.",
"Comparing to existing work, our work serves the first stage of document retrieval and presents a new method to generate document representations which borrows the clustering technique to generate pseudo query embeddings from documents.",
"In this section, we introduce the original Bi-encoder architecture and several existing variants.",
"Then, we present our model in detail and describe the similarities and differences between our model and those Bi-encoder variants.",
"Independent Aggregator We start with a Bi-encoder using BERT as its backbone neural network as shown in Figure 1.",
"(a).",
"Given a query with n tokens and a document with m tokens, a typical Bi-encoder encodes the query and the document separately, producing query token embeddings { q i } ni =1 R n h and document token embeddings { d i } mi =1 R m h which are the hidden states of the last layer in most cases.",
"Next, a module is needed to compute the matching score by aggregating the generated query and document representations.",
"We call it Aggregator in the following sections.",
"The simplest aggregator is the independent aggregator shown in Figure 1",
"(b).",
"This aggregator uses a pooler to reduce the query and document token embeddings to fixed-length embeddings e q and e d respectively and then calculates the score by dot product/Euclidean distance between them.",
"For example, Karpukhin et",
"al.(2020) directly adopt the embedding of the [CLS] token.",
"RepBERT(Zhan et al., 2020) leverages the mean value of the encoded embeddings.",
"Although efficient to compute, compressing m or n ( m, n >> 1 ) embeddings to one may lose information.",
"Late Interaction Aggregator Col-BERT model(Khattab and Zaharia, 2020) employs a late interaction paradigm to reduce the loss of information.",
"As shown in Figure 1",
"(c), the model preserves all of the document token embeddings { d i } mi =1 in the cache until a new query is given.",
"It then computes token-wise matching scores using all of the document and query embeddings.",
"The final matching score is generated by pooling the m n scores.",
"This model preserves document semantics as much as possible and leaves the full query-document interaction during the inference.",
"Experimental results show that Col-BERT is highly effective, improving the accuracy in a large margin.",
"However, the time complexity of the score computation arises from constant O (1) to quadratic O ( mn ) .",
"Meanwhile, Lin et",
"al.(2020) point out that the storage space occupation also arises rapidly along with the length of documents since Col-BERT needs to store all of the embeddings.",
"Semi-interactive Aggregator Figure",
"1(d) shows another kind of aggregator which compresses the document token embeddings to a constant number k much smaller than the document length m ( k << m ).",
"Since there are multiple but not all document token embeddings participating the interaction with query, we call the aggregator as a semi-interactive aggregator.",
"(Humeau et al., 2019; Luan et al., 2020) adopt this aggregator in their model.",
"Specifically, Poly-Encoder(learnt-k) (Humeau et al., 2019) model employs k learnable code-vectors as the parameters and attend them with all of the document token embeddings { d i } mi =1 , representing global features of the document.",
"Besides, Poly-Encoder(first-k) (Humeau et al., 2019) and ME-BERT(Luan et al., 2020) both adopt the first k document token embeddings as the compressed document representation.",
"Obviously, the semi-interactive aggregator further makes time/space complexity and accuracy trade-offs over the independent aggregator and late interaction aggregator.",
"However, there still exists some problem when applying current compressing strategies in the document retrieval task, which we would point out in the next section.",
"The primary limitation of Bi-encoder is that we cannot know which part of the document would be asked during the encoding process.",
"Preserving multiple semantic representations has been proved effective in the variants of Bi-encoder.",
"However, existing models are still not perfect, leading to expensive computation or underfit problem.",
"In this work, we intend to improve the semantic representations by mimicing the real matching process using the documents alone, generating a constant number of pseudo query embeddings.",
"In this way, the model can preserve self-adaptive document embeddings representing different semantics.",
"Actually, the whole procedure is analogous to the steps of the K-means clustering algorithm and the Figure 1: Bi-encoder and different aggregators cluster centroids are treated as the pseudo query embeddings.",
"In the following, we will interpret the approach using the K-means algorithm in detail.",
"Firstly, following the semi-interactive aggregator, we feed the document tokens into BERT and use the last layer hidden states as the document token embeddings { d i } mi =1 .",
"Next, we perform K-means algorithm on these token embeddings.",
"The K-means algorithm mainly contains two iterative steps: assignment step and update step.",
"These two steps are performed alternatively until the convergence condition is satisfied.",
"The assignment step can be expressed by the following equation.",
"s ti = argmin j (cid:107) d i c tj (cid:107) 2 i { 1 , ..., m } , j { 1 , ..., k } (1) where c tj is the j -th cluster centroid (we assume there are up to k clusters) when the algorithm is executing at the t -th time step.",
"s ti represents the nearest cluster to the i -th embedding d i considering the Euclidean distance.",
"After the assignment step, the algorithm updates each of the cluster centroid according to the cluster assignment of each embedding.",
"The update step is shown as Eq.",
"2. c t +1 j = 1 (cid:80) mi =1 1( s ti = j ) (cid:88) { i | s ti = j } d i (2) If we treat each centroid of cluster c tj as a query embedding, Eq.",
"1 can be interpreted as the similarity computation between the document and several queries, determining which of the queries can be answered by the i -th token embedding.",
"Thus, the cluster centroid c tj plays a similar role as query and we name it pseudo query embedding.",
"Next, the embeddings belong to one cluster compose the new pseudo query embedding by Eq.",
"2. As the two steps alternatively iterate, the query embeddings that can be answered by the document are explored.",
"Since this process only involves the documents, we can save the embeddings in memory and retrieve them using the real queries which are desired to be resolved.",
"Since the pseudo query embeddings contain the underlying information of the document that real queries may ask, we use the the pseudo query embeddings as the compressed document embeddings (i.e., the embeddings output by a compressor, as shown in Figure",
"1(d)).",
"In the inference stage, we compute the similarity between the pseudo query embeddings { c j } kj =1 and the real query embeddings { q i } ni =1 which can be formulated by the following equations.",
"Eq.",
"3 means that we pool the query embeddings into a fixed-length embedding e q .",
"Currently, we select the embedding of [CLS] token as e q .",
"As the query is much shorter than the document and usually represents one concrete meaning, we assume this compression will not lose much information.",
"In Eq.",
"4, we compute the similarity between the e q and c j following a softmax normalization.",
"Then, using the normalized scores as weights, the final document embedding e d is a weighted sum of the document embeddings, as shown in Eq.",
"5. At last, the matching score is computed by the dot product between e q and e d .",
"Comparing with existing work, we find that the Poly-Encoder(learnt-k) (Humeau et al., 2019) is equivalent to learning multiple fixed global pseudo query embeddings { c j } kj =1 across all of the documents.",
"That model treats the pseudo query embeddings as learnable parameters which are kept fixed during the inference.",
"It uses the linear combinations of document token embeddings { d i } mi =1 as the compressed document embeddings, taking similarity scores between { d i } mi =1 and { c j } kj =1 as the combination weights.",
"Conversely, the Poly-Encoder(first-k) (Humeau et al., 2019) and ME-BERT(Luan et al., 2020) use the first k document token embeddings as the pseudo query embeddings, i.e., { c j } kj =1 = { d i } ki =1 and adopt the pseudo query embeddings as compressed document embeddings.",
"In contrast to Poly-Encoder(learnt-k), they rely on dynamic pseudo query embeddings.",
"Experimental results on conversation datasets show Poly-Encoder(first-k) is better than the former.",
"However, only adopting the firstk document embeddings seems to be a coarse strategy since a lot of information may exist in the latter part of the document.",
"To this end, we present an approach which generates multiple adaptive semantic embeddings for each document by exploring all of the contents in the document.",
"The first-stage retrieval model should calculate the matching scores between the query and all of the documents in the collection.",
"Most existing dense retrieval work adopts Approximate Nearest Neighbor (ANN) searching methods to boost the retrieval process.",
"Faiss(Johnson et al., 2017) is one of the most popular ANN search libraries.",
"It first builds vector index offline and make an ANN vector search based on the index.",
"However, Faiss only supports basic similarity functions like the dot product/Euclidean distance other than the function listed in Eq.",
"4-Eq.",
"6. To boost in our method using Faiss, we build an index using all of the representations { c j } kj =1 of each document.",
"During inference, we firstly select the c j which has the highest dot product value with e q as the final document embedding e d and compute the matching score using Eq.",
"6 .",
"Since this operation only involves dot product, it can be accelerated by Faiss.",
"This operation equals to substitute a j with a j in Eq.",
"4. a j = 1( j = argmax i =1 ...k ( e q c i )) (7) As shown in Eq.",
"7, we use argmax operation instead of softmax.",
"Such substitution is reasonable since softmax is a derivative and smooth version of argmax (Goodfellow et al., 2016).",
"However, only one of the embeddings can pass the argmax function and participate the similarity computation which may impact the retrieval accuracy.",
"To make a trade-off, we firstly recall topR documents according to Eq.",
"7 and then calculate accurate scores as described in Eq.",
"4-Eq.",
"6 on the retrieved documents.",
"MS MARCO Dataset (Nguyen et al., 2016) is a large-scale ad-hoc text retrieval dataset built for two separate tasks: document ranking and passage ranking.",
"These two tasks are adopted in TREC 2019 Deep Learning Track(Craswell et al., 2020) where test sets are provided.",
"The document ranking task contains 3.2 million documents and 0.3 million queries.",
"The passage ranking task contains 8.8 million passages and 0.5 million queries.",
"The main difference between these two tasks exists in the text length, where the average length of the documents and passages are 1124 and 54, respectively.",
"Following most of the existing work, we use MRR to evaluate the development set of MS MARCO and use NDCG to evaluate the TREC test set.",
"OpenQA Dataset (Karpukhin et al., 2020) is designed for open domain question answering.",
"The authors collect about 21 million documents from Wikipedia as the document collection whose average length is 100.",
"They collect question-answer pairs from several existing QA datasets (e.g., Natural Questions, Trivia QA, SQuAD etc.).",
"Then, they select some documents that contain the answer text and have the highest BM25 scores with the queries, as the positive documents to the query.",
"Currently, the authors release the data of Natural Questions, Trivia QA and SQuAD.",
"For Natural Questions and Trivia QA, the test sets and development sets are available.",
"For SQuAD, only the development set is available.",
"We conduct experiments on this three datasets using top20/100 accuracy as the evaluating metric.",
"We initiate the encoder using a BERT base model.",
"Since the BERT base model could handle 512 tokens at most, we truncate each document up to 512 tokens as the input.",
"We set different cluster numbers according to the document length.",
"In the MS MARCO document ranking task, we set the cluster number to 8.",
"In other tasks, we set the cluster number to 4. More experiments about different cluster numbers are shown in the Section 4.5.",
"Since the initial states of the clusters in K-means may influence the performance a lot, we tried two setups: random initiation(i.e., select the hidden states randomly as the initial states) and equal-interval initiation (i.e., cut the documents into equal length intervals and select the cutting locations as the initial states) and find that the equal-interval initiation can outperforms the random initiation .",
"Therefore, we adopt equal-interval initiation in the following experiments.",
"We use AdamW as the optimizer and set the learning rate to 2e-6 and batch-size to 16.",
"During the training, we select one positive document and 4 negative documents for each of the queries.",
"To improve the training efficiency, we adopt the in-batch negatives technique(Karpukhin et al., 2020) which takes all other documents in the batch except the positive one as the negative documents for each query.",
"To reduce the discrepancy between the training and inference process, we also adopt the ANCE(Xiong et al., 2020) training paradigm which constructs new hard negative samples using the trained checkpoint of the models.",
"After encoding of the documents, we save them to an In-dexFlatIP index provided by Faiss which supports fast inner product calculation.",
"During the inference, we set the number of the documents retrieved by Faiss (i.e., R in Section 3.3) to 1000* k .",
"MS MARCO Since our goal is to improve the first-stage retrieval performance, we mainly compare our model with other first-stage retrieval models including: docT5Query(Nogueira and Lin, 2020), DeepCT(Dai and Callan, 2019), RepBERT(Zhan et al., 2020), ANCE (First-P)(Xiong et al., 2020), ME-BERT(Luan et al., 2020), ColBERT(Khattab and Zaharia, 2020).",
"Table 1 shows the results on the passage ranking task.",
"We can see that our model outperforms other models except the ColBERT.",
"However, our method is more efficient than ColBERT in terms of the time complexity ( O ( mn ) vs O ( kn ) , k << m ).",
"We think the margin is acceptable considering the trade-off between time and accuracy.",
"Comparing Models MRR@10 Recall@1k DeepCT 24.3 91.0 docT5Query 27.7 94.7 RepBERT 30.4 94.3 ANCE(First-P) 33.0 95.9 ME-BERT 33.4 ME-BERT+BM25 33.8 ColBERT 36.0 96.8 Ours 34.5 96.4 Table 1: Results on MS MARCO passage ranking dev set.",
"Noticing that ME-BERT adopts a BERT large encoder which has a more powerful language understanding ability than the BERT base encoder in our model, our proposed method is effective enough to bridging the gap.",
"Table 2 shows the results on the document ranking task.",
"Our model outperforms other models by a large margin.",
"That is probably because the average length of the documents is much longer than the length of passages and our method can make full use of aggregating the semantics of the whole document.",
"OpenQA As for the OpenQA dataset, we compare our model with the DPR model(Karpukhin et al., 2020) which is a typical Bi-encoder + independent aggregator structure.",
"Table 3 shows the result of the test set of Natural Questions and Trivia QA and the result of the development set of SQuAD.",
"We can see that our model is better than other models especially in the SQuAD dataset.",
"To explore the possible causal link between the performance and the characteristic of the datasets, we examine the questions corresponding to one document in the training set of different datasets, and find the average number of questions in Trivia QA, Natural Questions and SQuAD are 1.1, 1.4, and 2.7, respectively.",
"It means that the documents in SQuAD corresponds to more questions in comparison with Models Natural Questions Trivia QA SQuAD Top20 Top100 Top20 Top100 Top20 Top100 BM25 59.1 73.7 66.9 76.7 -DPR 78.4 85.4 79.4 85.0 76.4* 84.8* BM25+DPR 76.6 83.8 79.8 84.5 -ANCE 81.9 87.5 80.3 85.3 -Ours 82.3 88.2 80.5 85.8 80.5 88.6 Table 3: Results on the test sets of Natural Questions and Trivia QA and development set of SQuAD.",
"other datasets which may indicate that the passages in SQuAD contain more distinct information than other two datasets.",
"Thus, our method can take full advantage of aggregating different information into clusters.",
"We run our model on a single Nvidia Tesla V100 32GB GPU for the MS MARCO document retrieval task and record the time spent by each phase, as shown in Table 4. Leveraging the powerful parallel computation ability of GPU, the document can be quickly passed through the BERT encoder.",
"It is quite surprising that the K-means algorithm costs more time than BERT given that the time complexity of K-means is less than the deep Transformer in theory.",
"Presumably, this is because our K-means implementation includes a for-loop during the updating step which is not friendly for parallel computing.",
"This part can be optimized using a more parallel friendly implementation.",
"To retrieve documents for new queries, the queries should be firstly encoded.",
"The encoding of queries usually spends less time than the documents because the length is shorter.",
"Next, we record the retrieval time cost by each query with or without the help of the optimization mentioned in Section 3.3.",
"We can find that the optimization can accelerate the retrieval, saving non-trivial time, which confirms the effectiveness of the proposed optimization.",
"To compare our approach with other different aggregators, we also record the retrieval time using independent Models MRR@100 random init (k=4) 36.8 w/o ANCE (k=4) 37.3 w/o ANCE (k=8) 37.9 k=4 38.4 k=8 39.2 k=16 39.4 k=32 38.8 Table 5: Performance of the MS MARCO document ranking dev set under different model settings.",
"aggregator and late interaction aggregator.",
"We can see that our model spends an amount of time near to the independent aggregator and outperforms late interaction aggregator by a large margin.",
"We conduct ablation study on the development set of MS MARCO document ranking task.",
"The results are shown in Table 5. We firstly change the cluster initialization strategy to random.",
"Clearly, the performance drops dramatically since the training becomes unstable.",
"Next, we try to remove the ANCE training mechanism which alleviates the discrepancy between training and inference.",
"We can find that although the performance decreases, it can still outperform the ANCE and the ME-BERT model, showing the effectiveness of the method proposed in this paper.",
"Finally, we compare the performance under different number of clusters ( k = 4 , 8 , 16 , 32 ).",
"We find that the model achieves the best performance when k = 16 but the margin leading k = 8 is not significant.",
"Besides, when k = 32 , the performance drops by a large margin.",
"We infer the reason is that the documents do not have such a number of individual clusters.",
"As a result, the clustering algorithm is hard to converge.",
"Although the performance of the ranking metrics like MRR show the effectiveness of the our method, we still need an in-depth view of how the cluster centroid based embeddings improve the model",
"against other methods.",
"In this section, we try to show it by analyzing how the document embeddings affect the value of the loss function.",
"Given a query q and its relative document d , the training objective is to minimize the loss function in the following form: L = log softmax ( y d ) (8) where y d is computed as Eq.",
"6. Next, we can see how a single step of gradient descent alters the loss value by analyzing the gradient of the loss function with respect to the document embeddings.",
"For each document embedding c j , we have: (cid:79) d L d =( y d 1) e q (cid:79) e d (9) (cid:79) j e d = r ( c j ) (cid:79) c j (10) r ( c j ) =[1 + ( (cid:88) j (cid:48) (cid:54) = j a j (cid:48) ( e q c j e q c j (cid:48) ))] a j (11) where (cid:79) d L means the gradient of loss with respect to document d and (cid:79) j e d means the gradient of e d with respect to c j .",
"Details of the derivation are shown in the Appendix.",
"The absolute value of r ( c j ) can be interpreted as a weight of how much the c j can contribute to the loss value.",
"For example, if we feed the model with document embedding producing large positive r ( c j ) , a single gradient descent step would decrease the loss value faster than small r ( c j ) .",
"To verify whether the cluster centroids are more effective than other document embeddings, we compare our model on MS MARCO document ranking task with two other models: the first one adopts the first k token embeddings as the document embeddings like Poly-Encoder(first-k )(Humeau et al., 2019) and the second one adopts k randomly selected token embeddings as the document embeddings.",
"Other parts of the model remain unchanged.",
"Ideally, we expect (1) at least one of the document embeddings can match its relative query embedding and (2) multiple document embeddings can capture different semantic information of the document.",
"We use the max value of r ( c j ) among multiple document embeddings to evaluate (1) and use the variance of r ( c j ) among the multiple embeddings of the same document to evaluate (2).",
"We plot them during the training as shown in Figure 2. At the beginning of the training, the loss value, max ( r ( c j )) and var ( r ( c j )) of the models are relatively high and rapidly decrease.",
"When the decreasing of the loss slows down, our model can provide a much higher max ( r ( c j )) and lower loss.",
"Besides, var ( r ( c j )) of our model is also higher than others indicating the document embeddings are different with each other.",
"We infer that this is because the cluster algorithm expands the distance of the cluster centroids, i.e., c j and c (cid:48) j , making the embeddings more distinct with each other.",
"Assuming i = argmax j ( r ( c j )) , clustering produces larger r ( c i ) and lower r ( c i (cid:48) ) as shown in Eq.",
"11.",
"From Eq.",
"9-10, we can see that large r ( c i ) can amplify the impact of e q to c i making c i more approximate to e q .",
"Therefore, the gradient descent can do an accurate update for the specific document embedding c i towards e q while leaves c (cid:48) i (should represents information other than e q ) less changed.",
"As a result, the c i which is nearer to e q dominates the loss to reduce more than other models.",
"In this paper, we propose a method to improve the performance of the first-stage retrieval model which is based on Bi-encoder and semi-interactive aggregator.",
"Specifically, our method mimics the real queries by an iterative K-means clustering algorithm.",
"To accelerate the retrieval process, we also optimize the softmax matching function by filtering out some documents using argmax operation.",
"We conduct experiments on the MS MARCO and OpenQA datasets.",
"Through the analysis of the retrieval quality and efficiency, we can confirm the proposed approach is both effective and efficient."
]
| [
"abstain",
"abstain",
"abstain",
"method",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"objective",
"abstain",
"method",
"result",
"abstain",
"abstain",
"objective",
"objective",
"method",
"abstain",
"result",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"method",
"objective"
]
|
[
"We investigate subword information for Chinese word segmentation, by integrating sub word embeddings trained using byte-pair encoding into a Lattice LSTM (LaLSTM) network over a character sequence.",
"Experiments on standard benchmark show that subword information brings significant gains over strong character-based segmentation models.",
"To our knowledge, this is the first research on the effectiveness of subwords on neural word segmentation.",
"Chinese word segmentation (CWS) is a traditional NLP task (Sproat et al., 1996), the features for which have been a central research topic.",
"Statistical methods consider characters (Xue et al., 2003), subwords (Zhang et al., 2006), and words (Zhang and Clark, 2007) as input features.",
"Among these, both characters (Chen et al., 2015a) and words (Zhang et al., 2016; Cai and Zhao, 2016; Yang et al., 2017) have also shown useful in recent neural models.",
"However, how to utilize the subword features in neural networks has not been investigated yet.",
"In this paper, we fill this gap by proposing a subword-based neural word segmentor, by integrating two strands of works: the byte pair encoding (BPE) algorithm (Gage, 1994) and the lattice LSTM structure (Zhang and Yang, 2018).",
"The BPE algorithm constructs a subword list from raw data and lattice LSTM introduces subwords into character LSTM representation.",
"In particular, our baseline is a BiLSTM-CRF segmentor (Chen et al., 2015b) and we replace LSTM with lattice LSTM using subwords to encode character composition information.",
"Our code 1 is based on NCRF++ (Yang and Zhang, 2018).",
"Compared with character-based neural segmentors, our model can utilize abundant character combination (subword) information, which is effective to disambiguate characters.",
"For example, in Figure 1, the subword (Academy) ensures that the character means Academy(noun) rather than study(verb).",
"Compared with the word-based neural models (Zhang et al., 2016; Cai and Zhao, 2016), ambiguous subwords in a context can provide additional information for disambiguation.",
"For instance, the subword (Academy of Sciences) and (Academy) can be useful in determining the correct segmentation, which is /(Academy of Sciences/).",
"To our knowledge, we are the first to use subwords in a neural network segmentor.",
"We investigate the contributions of subword lexicons and their pretrained embeddings through controlled experiments.",
"Results on four benchmarks show that the proposed model can give comparable results with state-of-the-art models.",
"State-of-the-art statistical segmentors use either sequence labeling methods e.g. CRF (Lafferty et al., 2001) with character features (Peng et al., 2004; Zhao et al., 2006) or the transition-based models with word features (Zhang and Clark, 2007; Sun, 2010).",
"Neural segmentors (Chen et al., LSTM BLSTM ELSTM BLSTM MLSTM ELSTM BLSTM </E> E Cell Cell Cell Cell Cell CRF Layer LSTM Layer Unichar emb Bichar emb Word emb Figure 2: Models. Only forward LSTM is illustrated here. 2015a; Cai and Zhao, 2016) generally take the same framework except using neural networks as automatic feature extractor.",
"Lattice LSTM was proposed by Zhang and Yang (2018) for Chinese named entity recognition (NER).",
"It integrates the character sequence features and all lexicon word embeddings that match a character subsequence in the input into a sequence labeling model.",
"Zhu et al. (2016) proposed a DAG-structured LSTM structure which is similar to the lattice LSTM model but binarizing the paths in the merging process.",
"Chen et al. (2017) also built a DAG-LSTM structure for word segmentation but without memory cells.",
"Our model consistently gives better performance.",
"BPE is a data compression algorithm (Gage, 1994) which has been used in neural machine translation (NMT) by capturing the most frequent subwords instead of words (Sennrich et al., 2016).",
"Here we use it for collecting subwords in Chinese, similar to the use in Chinese NMT.",
"We take the state-of-the-art LSTM-CRF framework as our baseline.",
"For an input sentence with m characters s = c 1 , c 2 , . . . , c m , where c i denotes the i th character, the segmentor is to assign each character c i with a label l i .",
"Figure 2 shows the segmentor framework on input character sequence (Fellow of the Chinese Academy of Sciences), where the black part represents the baseline LSTM-CRF model and the red part shows the lattice structure.",
"As shown in Figure 2, for each input character c i the corresponding character unigram embeddings",
"and character bigram embeddings are represented as e c i and e c i c i +1 , respectively.",
"The character representation is calculated as following: x i = e c i e c i c i +1 , (1) where represents concatenate operation.",
"Unlike Zhang et al. (2016) which uses a window to strengthen the local features, or Zhou et al. (2017) which adds a non-linear layer before the LSTM layer, we feed { x 1 , x 2 , . . . , x m } into a bidirectional LSTM: h 1 , h 2 , . . . , h m = LST M ( x 1 , x 2 , . . . , x m ) h 1 , h 2 , . . . , h m = LST M ( x 1 , x 2 , . . . , x m ) , (2) where LST M and LST M represent the forward and backward LSTM, respectively.",
"The detailed equations are listed in Appendix .",
"The hidden vector of character c i is h i = h i h i (3) 3.2 Lattice LSTM The lattice LSTM adds shortcut paths (red part in Figure",
"2) to LSTM.",
"The input of the lattice LSTM model is character sequence and all subsequences which are matched words in a lexicon D , collected from BPE.",
"Following Zhang and Yang (2018), we use w b,e to represent the subsequence that has a start character index b and a end character index e , and the embeddings of the subsequence is represented as e w b,e .",
"During the forward lattice LSTM calculation, the cell in Figure 2 of a subsequence w b,e takes the hidden vector of the start character h b and the subsequence embeddings e w b,e as input, an extra Parameter Value Parameter Value char emb size 50 bigram emb size 50 word emb size 50 subword emb size 50 char dropout 0.5 lattice dropout 0.5 LSTM layer 1 LSTM hidden 200 learning rate lr 0.01 lr decay 0.05 Table 1: Hyper-parameter values.",
"LSTM cell is applied to calculate the memory vector of the sequence c w b,e : c w b,e = LST MCell ( h b , e w b,e ) , (4) where the LST MCell is a simplified LSTM unit which calculate the memory only.",
"The output memory vector c w b,e links to the end character c e to calculate its hidden vector h e .",
"For character with multiple memory cell inputs 2 , we assign a gate for each subsequence input to control its contribution.",
"The detailed equations are listed in Appendix .",
"The final output h i includes both the character sequence history information and all the matched subsequence information.",
"We use a standard CRF layer for inference (de-tails in Appendix ).",
"Viterbi (1967) is used to find the highest scored label sequence over the input.",
"During training, we choose sentence-level log-likelihood as the loss function.",
"where y i is the gold labels of sentence s i .",
"Data .",
"We evaluate our model on four standard Chinese word segmentation datasets: CTB6, PKU, MSR, and Weibo.",
"PKU and MSR are taken from the SIGHAN 2005 bake-off (Emerson, 2005) and Weibo dataset is the NLPCC 2016 shared task (Qiu et al., 2016), standard split are used.",
"We take CTB6 as the main dataset and split the train/dev/test following Zhang et al. (2016).",
"The statistics of the datasets are listed in Appendix .",
"Hyperparameters .",
"We keep the hyperparameters the same among all datasets.",
"Standard gradient descent (SGD) with a learning rate decay is used as the optimizer.",
"The embedding sizes of character unigram/bigram and subword are all 50.",
"Dropout (Srivastava et al., 2014) is used on both the character input and the subword input to prevent over-fitting.",
"Details are listed in Table 1.",
"Embeddings .",
"We take the same character unigram and bigram embeddings as Zhang et al. (2016), who pretrain embeddings using word2vec (Mikolov et al., 2013) on Chinese Gigaword 3 .",
"The vocabulary of subword is constructed with 200000 merge operations and the subword embeddings are also trained using word2vec (Heinzerling and Strube, 2018).",
"Trie (Fredkin, 1960) is used to accelerate lattice building.",
"All the embeddings are fine-tuned during training.",
"We perform experiments on the CTB6 development dataset to investigate the contribution of character bigram information and the subword information.",
"Figure 3 shows the iteration curve of F-scores against different numbers of training iterations with different character representations.",
"Bigram represents the model using both character unigram and bigram information (embed-ding concatenation).",
"Character bigram information can improve the baseline significantly.",
"When the LaLSTM+Subword structure is added, the model performance is further improved.",
"This shows that subword information has a great ability to disambiguate the characters.",
"Zhang and Yang (2018) observed that character bigram information has a negative effect in lattice LSTM on Chinese NER task, while we find a different result on Chinese word segmentation where character bigram information gives significant improvements in the lattice LSTM.",
"This is likely because character bigrams are informative but ambiguous.",
"They can provide more useful character disambiguation evidence in segmentation than in NER where lattice LSTM works well in disambiguating characters.",
"Table 2 shows the main results and the recent state-of-the-art neural CWS models.",
"Zhang et al. (2016) integrated both discrete features and neural features in a transition-based framework.",
"Xu and Sun (2016) proposed the dependency-based gated recursive neural network to utilize long distance dependencies.",
"Yang et al. (2017) utilized pretrained character representations from multitasks.",
"We examine their non-pretrained model performance for fair comparison.",
"Ma et al. (2018) built a bidirectional LSTM model with carefully hyperparame-ter selection.",
"These methods are orthogonal to and can be integrated into our lattice structure.",
"As shown in Table 2, the subword lattice LSTM gives significant improvements on all evaluated datasets.",
"In the PKU dataset, our model is slightly behind Xu and Sun (2016) which preprocesses the dataset by replacing all the Chinese idioms, lead-10< 30 50 70 90 >90 Sentence length 0.960 0.965 0.970 F 1 v a l u e LaLSTM+Subword Baseline Figure 4: F1-value against the sentence length.",
"ing the comparison not entirely fair.",
"Our model gives the best performance on MSR and Weibo datasets, which demonstrates that subword encoding can help the lattice LSTM model gives comparable performance to the state-of-the-art word segmentation models.",
"Lexicon and Embeddings .",
"To distinguish the contribution of subword lexicon and their pretrained embeddings, we conduct a set of experiments by using the same subword lexicon with randomly initialized embeddings 4 on CTB6 data.",
"As shown in Table 3, the contribution of the error reduction by the lexicon is 4 .",
"5% .",
"While 6 .",
"9% error reduction comes from both lexicon and pretrained embeddings.",
"We can estimate that the contribution of pretraining is (6 . 9% 4 . 5%) = 2 .",
"4% .",
"This roughly shows that both lexicon and pretraining are useful to lattice LSTM, and the former contributes more than the latter.",
"OOV Analysis .",
"Table 3 also shows the recall of in-vocabulary ( RIV ) and out-of-vocabulary ( ROOV ) words, respectively.",
"As shown in the table, the ROOV can be largely improved with the lattice structure ( 2 . 43% absolute improvement).",
"Sentence Length .",
"We compare the baseline model with our proposed model on the sentence length distribution in Figure 4.",
"The performance of the baseline has a valley in around 30-character length and decreases when the sentence length over 90.",
"This phenomenon has also been observed in transition-based neural segmentor Yang et al. (2017).",
"While LaLSTM+Subword gives a more stable performance along sentence length.",
"Subword Coverage in lexicon .",
"Table 4 5 shows the subword coverage rate in four datasets.",
"Subword level coverage is consistently higher than the entity level coverage in Zhang and Yang (2018).",
"We can see that higher subword coverage (PKU/MSR, > 90% ) gives better error reduction rate.",
"Weibo dataset gets the minimum improvement due to the low subword coverage.",
"Case Study .",
"Figure 5 shows an example of CTB6 test dataset.",
"In this example, there are two matched subwords (BiologicalDiversity) and (Diversity) which can guide the segmentor to get the right split of (DiversityDay), which is segmented incorrectly by the baseline.",
"We examined the effectiveness of subwords for neural CWS.",
"Subwords are deduced using BPE, and then integrated into a character-based neural segmentor through lattice LSTM.",
"Results on four benchmarks show that subword brings significant improvements over a character baseline, and our proposed model gives comparable performances to the best systems on all datasets.",
"Our experiments also showed that the matched subwords contribute more than embedding pertaining, which 5 #Word is the word number in the corresponding dataset, #Match is the matched words number between the dataset and subword lexicon, #Ratio = #Match #Word represents the subword coverage rate.",
"#ER is the error reduction compared with baseline model.",
"indicates that the lattice LSTM structure with do-main lexicons can be useful for cross-domain segmentation training.",
"We thank the anonymous reviewers for their insightful comments.",
"Yue Zhang is the corresponding author."
]
| [
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"other",
"other"
]
|
[
"In this paper, we study machine reading comprehension (MRC) on long texts, where a model takes as inputs a lengthy document and a question and then extracts a text span from the document as an answer.",
"State-of-the-art models tend to use a pretrained transformer model (e.g., BERT) to encode the joint contextual information of document and question.",
"However, these transformer-based models can only take a fixed-length (e.g., 512 ) text as its input.",
"To deal with even longer text inputs, previous approaches usually chunk them into equally-spaced segments and predict answers based on each segment independently without considering the information from other segments.",
"As a result, they may form segments that fail to cover the correct answer span or retain insufficient contexts around it, which significantly degrades the performance.",
"Moreover, they are less capable of answering questions that need cross-segment information.",
"We propose to let a model learn to chunk in a more flexible way via reinforcement learning: a model can decide the next segment that it wants to process in either direction.",
"We also employ recurrent mechanisms to enable information to flow across segments.",
"Experiments on three MRC datasets CoQA, QuAC, and TriviaQA demonstrate the effectiveness of our proposed recurrent chunking mechanisms: we can obtain segments that are more likely to contain complete answers and at the same time provide sufficient contexts around the ground truth answers for better predictions.",
"Teaching machines to read, process, and comprehend natural language is a coveted goal of machine reading comprehension (MRC) problems",
"The work was performed during an internship at Tencent AI Lab, Bellevue, WA, USA.",
"The work was performed when Yelong Shen was at Tencent AI Lab, Bellevue, WA, USA.",
"(Hermann et al., 2015; Hill et al., 2016; Rajpurkar et al., 2016; Trischler et al., 2017; Zhang et al., 2018; Kocisk`y et al., 2018).",
"Many existing MRC datasets have a similar task definition: given a document and a question, the goal is to extract a span from the document (in most cases) or instead generate an abstractive answer to answer the question.",
"There is a growing trend of building MRC readers (Hu et al., 2019; Xu et al., 2019; Yang et al., 2019a; Keskar et al., 2019) based on pre-trained language models (Baker et al., 2019; Yang et al., 2019b), such as GPT (Radford et al., 2018) and BERT (Devlin et al., 2019).",
"These models typically consist of a stack of transformer layers that only allow fixed-length (e.g., 512 ) inputs.",
"However, it is often the case that input sequences exceed the length constraint, e.g., documents in the TriviaQA dataset (Joshi et al., 2017) contain 2,622 tokens on average.",
"Some conversational MRC datasets such as CoQA (Reddy et al., 2018) and QuAC (Choi et al., 2018) often go beyond the length limit as we may need to incorporate previous questions as well as relatively long documents into the input to answer the current question.",
"To deal with long text inputs, a commonly used approach firstly chunks the input text into equally-spaced segments, secondly predicts the answer for each individual segment, and finally ensembles the answers from multiple segments (Devlin et al., 2019).",
"However, there are two major limitations of this approach: first, a predetermined large stride size for chunking may result in incomplete answers, and we observe that models are more likely to fail when the answer is near the boundaries of a segment, compared to the cases when an answer is in the center of a segment surrounded by richer context (Figure 1); second, we empirically observe that chunking with a smaller stride size contributes little to (sometimes even hurts) the model performance.",
"A possible explanation is that predicting answer for each segment independently may cause incomparable answer scores across segments.",
"A similar phenomenon is also observed in open-domain question answering tasks (Clark and Gardner, 2017).",
"Considering the limitations mentioned above, we propose recurrent chunking mechanisms (RCM) on top of the transformer-based models for MRC tasks.",
"There are two main characteristics of RCM.",
"First, it could let the machine reader learn how to choose the stride size intelligently when reading a lengthy document via reinforcement learning, so it helps prevent extracting incomplete answers from a segment and retain sufficient contexts around the answer.",
"Second, we apply recurrent mechanisms to allow the information to flow across segments.",
"As a result, the model can have access to the global contextual information beyond the current segment.",
"In the experiments, we evaluate the proposed RCM 1 on three MRC datasets: CoQA, QuAC, and TriviaQA.",
"Experimental results demonstrate that RCM leads to consistent performance gains on these benchmarks.",
"Furthermore, it also generates segments that are more likely to cover the entire answer spans and provide richer contextual information around the ground truth answers.",
"The primary contributions of this work are: We propose a chunking mechanism for machine reading comprehension to let a model learn to chunk lengthy documents in a more flexible way via reinforcement learning.",
"We also apply recurrence to allow information transfer between segments so that the model can have knowledge beyond the current segment when selecting answers.",
"We have performed extensive experiments on three machine reading comprehension datasets: CoQA, QuAC, and TriviaQA.",
"Our approach outperforms two state-of-the-art BERT-based models on different datasets.",
"The proposed recurrent chunking mechanisms (RCM) are built upon the pre-trained BERT models.",
"We will briefly introduce the basic model in Section 2.1, and then the RCM approach in Section 2.2 and 2.3.",
"More details of our model in training and testing are presented in Sections 2.4 and 2.5.",
"Pre-trained BERT model has been shown to achieve new state-of-the-art performance on many MRC datasets (Devlin et al., 2019).",
"Here, we introduce this basic BERT model, which is used as our baseline.",
"As the maximum input length in BERT is restricted to be 512 , a widely adopted strategy is to chunk a long document into multiple segments with a fixed stride size (i.e., 128 ).",
"Following the input format of BERT, the input for each document segment starts with CLS token, which is followed by question tokens Q and document segment tokens.",
"We use SEP token as a separator between the question and the segment.",
"We also append a special UNK token at the end of the segment to handle unanswerable questions.",
"If a given question is annotated as unanswerable, we mark the UNK token as the ground truth answer during training.",
"Accordingly in evaluation, if UNK token is selected by the model from the input segment, we output the answer as unanswerable.",
"Answer Extraction .",
"Following previous work on extractive machine reading comprehension, we predict the start and the end positions of the answer span in the given document segment.",
"BERT first generates a vector representation h c,i for each i -th Chunking Scorer AnswerExtractor PolicyNetwork BERT BERT BERT AnswerExtractor PolicyNetwork AnswerExtractor PolicyNetwork recur Segment 1 Segment 1 SEP CLS SEP Q Segment 2 Segment 3 recur Chunking action Segment 2 SEP CLS SEP Q Segment 3 SEP CLS SEP Q Chunking action Input Document Figure 2: BERT generates representations for each input sequence, and recurrence accumulates information over segments.",
"token in the c -th segment.",
"Given h c,i , the model scores each token in terms of its likelihood of being the start token of the answer span.",
"l start c,i = w Ts h c,i , (1) where w s is the model parameter.",
"p start c,i = softmax( l start c,i ) (2) Likewise, the model scores how likely the answer ends at the j -th token in segment c using l end c,j = w Te h c,j , (3) where w e is the model parameter.",
"The probability p start c,i that the answer starts at the i -th token is computed by applying the softmax to l start c,i .",
"The probability of the j -th token being the end of the answer (de-noted as p end c,j ) is calculated in a similar manner as Eq.",
"(2).",
"Answer Ensemble .",
"The baseline model adopts a max-pooling approach to ensemble candidate answers from multiple segments.",
"The answer with the highest probability is selected.",
"The baseline model makes the answer prediction for each document segment independently, which may cause incomparable answer scores across segments due to the lack of document-level information.",
"We propose to use a recurrent layer to propagate the information across different segments and a chunking scorer model to estimate the probability that a segment contains the answer.",
"For an input sequence containing the segment c , BERT's representation for its first token CLS is taken as the local representation v c of the segment.",
"The segment representation is further enriched with the representations of previously generated segments via recurrence.",
"We denote the enriched segment representation as v c : v c = f ( v c , v c 1 ) , (4) where f ( ) is the recurrent function.",
"We consider two recurrent mechanisms here: gated recurrence and Long Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) recurrence.",
"Gated recurrence is simply a weighted sum of its inputs: f gated ( v c , v c 1 ) = v c + v c 1 , (5) where and are coefficients depending on the inputs.",
"We have , = softmax ( w Tr [ v c , v c 1 ]) , where w r is a model parameter.",
"The LSTM recurrence, which uses LSTM unit as the recurrence function, takes v c as the current input and v c 1 as the previous hidden state.",
"where W c and b c are model parameters, and ( ) is the sigmoid function.",
"The scalar q c is an estimation of the probability that an answer is included in segment c .",
"Then, the chunking scorer uses q c to further refine the likelihood of the candidate answers from different segments (see Sections 2.4 and 2.5 for more details on this part of chunking scorer).",
"The baseline approach divides a long document into multiple segments with a fixed stride size, from left to right.",
"We will present an approach that could allow the model to choose the stride size flexibly by itself when reading the document.",
"Our motivation, as mentioned in Section 1, is to prevent the answer span from being too close to the segment boundary and covering incomplete answers.",
"We formulate the problem of learning-to-chunk under the framework of reinforcement learning.",
"We define the state s of the model to be the segments that a model has processed up to the current segment c , i.e., s = { 1 , 2 , . . . , c } .",
"The action a is the stride size and direction (forward or backward) the model chooses to move to the next document segment.",
"We define the action space A as a set of strides, e.g., A = { 16 , 16 , 32 } , where 32 indicates moving forward with stride size 32 and 16 indicates moving backward with stride size 16 .",
"In this work, we represent the state s with the enriched segment representation v c .",
"Chunking Policy .",
"The chunking policy gives the probability p act ( a | s ) of taking an action a at the current state s , which is modeled by a one-layer feedforward neural network: p act ( a | s ) = softmax ( W a v c + b a ) , (8) where W a and b a are trainable parameters.",
"Fig. 2 gives an overview of the proposed recurrent chunking mechanisms built upon the BERT model: the chunking policy network takes the enriched segment representation as the input to generate the chunking action, which decides the next segment to be processed.",
"In the training phase of the recurrent chunking mechanisms, the stride actions of moving to the next segment are sampled according to the probability given by the chunking policy (Sutton and Barto, 2018).",
"Our model generates a sequence of document segments for each question.",
"We train the answer extractor and chunking scorer network with supervised learning, and we train the chunking policy network via reinforcement learning.",
"Supervised Learning for Answer Extraction .",
"Just as the baseline model, we train the answer extraction network via supervised learning.",
"Given a question, the answer extractor classifies whether a word from a document segment is the start or the end of the answer.",
"The cross-entropy loss can be computed given the ground-truth answer and the predictions of the answer extractor.",
"Suppose that the i -th and j -th tokens are the answer start and end, respectively.",
"The training objective to minimize the following cross-entropy loss, L ans : L ans = (cid:88) c log p start c,i (cid:88) c log p end c,j , (9) Supervised Learning for Chunking Scorer .",
"A binary variable y c indicates whether the segment c contains an answer or not.",
"Chunking scorer estimates the probability q c that the segment contains an answer.",
"Similarly, the chunking scorer network can be trained in a supervised manner by minimizing the cross-entropy loss, L cs : L cs = (cid:88) c y c log q c (cid:88) c (1 y c ) log(1 q c ) , (10) where the chunking score q c is given in Eq.",
"(7).",
"Reinforcement Learning for Chunking Policy .",
"Since the selection of the stride actions is a sequential decision-making process, it is natural to train the chunking policy via reinforcement learning.",
"First of all, the accumulated reward for taking action a at state s is denoted as R ( s, a ) , which is derived in a recursive manner: R ( s, a ) = q c r c + (1 q c ) R ( s (cid:48) , a (cid:48) ) , (11) where q c is the probability that segment c contains an answer as given in Eq.",
"(7), and ( s (cid:48) , a (cid:48) ) denotes the next state-action pair.",
"The value of r c indicates the probability of the correct answer being extracted from the current segment c .",
"The mathematical definition of r c is given as: r c = (cid:40) p start c,i p end c,j , if answer included, 0 , else.",
"The first term in Eq.",
"(11) is the reward of the answer being correctly extracted from the current segment.",
"The answer is included in the current segment c with probability q c , and thus the first term is weighted by q c in reward R ( s, a ) .",
"The second term in Eq.",
"(11) indicates that R ( s, a ) also Dataset Train Validation Question # Avg tokens # Max token # Question # Avg tokens # Max token # CoQA 108,647 352 1,323 7,983 341 1,037 QuAC 83,568 516 2,310 7,354 576 2,146 TriviaQA (wiki) 61,888 2,622 5,839 7,993 2,630 6,690 Table 1: Statistics of the CoQA, QuAC and TriviaQA datasets.",
"relies on the accumulated reward R ( s (cid:48) , a (cid:48) ) of the next state when the answer is not available in the current segment.",
"The chunking policy network can be trained by maximizing the expected accumulated reward (as shown in Eq.",
"(13)) through the policy gradient algorithm (Williams, 1992; Sutton et al., 2000; Gong et al., 2019).",
"To be consistent with the notations in answer extraction and chunking scorer modules, we denote the loss function of chunking policy as L cp , which is the negative expected accumulated reward J in Eq.",
"(13): L cp = J .",
"Thus, the stochastic gradient of L cp over a mini-batch of data B is given by: L cp = (cid:88) ( s,a ) B log p act ( a | s ) R ( s, a ) , (14) where p act ( a | s ) is the chunking policy in Eq.",
"(8).",
"Training procedure .",
"The overall training loss L is an sum of all three losses: L = L ans + L cs + L cp .",
"In addition, we initialize the bottom representation layers with a pre-trained BERT model and initialize other model parameters randomly.",
"We use the Adam optimizer with peak learning rate 3 10 5 and a linear warming-up schedule.",
"In the testing phase, the model starts from the beginning of the document as its first segment.",
"Later on in state s , the model takes the best stride action a according to the chunking policy: a = argmax a A p act ( a | s ) (15) After the stride action a is taken, a new segment is taken from the given document, and so on untill the maximum number of segments C is reached.",
"Now for a document segment c , we score its candidate answer spanning from the i -th to the j -th token by p A i,j,c : p A i,j,c = p start c,i p end c,j q c .",
"We use three MRC datasets, CoQA (Reddy et al., 2018), QuAC (Choi et al., 2018) and TriviaQA",
"(Joshi et al., 2017)) in our experiments.",
"(1) CoQA .",
"Answers in the CoQA dataset can be abstractive texts written by annotators.",
"It is reported that an extractive MRC approach can achieve an upper bound as high as 97 .",
"8% in F1 score (Yatskar, 2019).",
"Therefore, We preprocess the CoQA training data and select a text span from the document as the extractive answer that achieves the highest F1 score compared with the given ground truth.",
"(2) QuAC .",
"All the answers in the QuAC dataset are text spans, which are highlighted by annotators in the given document.",
"(3) TriviaQA .",
"TriviaQA is a large-scale MRC dataset, containing data from Wikipedia and Web domains.",
"We use its Wikipedia subset in this work.",
"It is reported to be challenging in its variability between questions and documents as well as its requirement of cross-sentence reasoning.",
"Documents in TriviaQA contain more than 2,000 words on average, which is suitable for evaluating the capability of a model to deal with long documents.",
"The dataset statistics are summarized in Table 1, including the data sizes, the average and maximum number of sub-tokens in documents.",
"Baselines .",
"We have two strong baselines based on the pre-trained BERT, which has achieved state-of-the-art performance in a wide range of NLP tasks Dataset CoQA QuAC Max sequence length 192 256 384 512 192 256 384 512 BERT-Large (Devlin et al., 2019) 72.8 76.2 81.0 81.4 34.5 50.6 56.7 61.5 Sent-Selector (with previous questions) 54.5 63.8 75.3 79.4 33.9 38.8 47.6 55.4 Sent-Selector (only current questions) 57.5 66.5 76.5 79.5 34.3 39.1 47.6 56.4 BERT-RCM Gated recurrence (no RL chunking) 74.5 78.6 81.0 81.4 48.8 51.4 56.2 61.4 Gated recurrence 76.0 79.2 81.3 81.8 51.6 55.2 59.9 62.0 LSTM recurrence (no RL chunking) 74.1 78.5 81.0 81.3 49.2 51.5 56.4 61.6 LSTM recurrence 75.4 79.5 81.3 81.8 53.9 55.6 60.4 61.8 Table 2: Comparison of F1 scores ( % ) achieved by different algorithms.",
"including machine reading comprehension.",
"(1) BERT-LARGE MODEL .",
"It achieves competitive performance on extractive MRC tasks such as SQuAD (Rajpurkar et al., 2016, 2018).",
"It adopts a simple sliding window chunking policy moving to the next document segment with a fixed stride size from left to right.",
"We also analyze the performance of the Large BERT model with different stride sizes in training and testing (see Section 4.1 for details).",
"The best performance is obtained by setting stride size as 64 in CoQA and QuAC, and 128 in TriviaQA.",
"(2) SENTENCE SELECTOR .",
"Given a question, the sentence selector chooses a subset of sentences that are likely to contain an answer.",
"The selected sentences are then concatenated and fed to the BERT-Large model for answer extraction.",
"For conversational datasets CoQA and QuAC, since a question is correlated with its previous questions within the same conversation, we apply the sentence selector to select sentences based on the current question alone or the concatenation of the previous questions and the current question.",
"We only use the current question as the input to the sentence selector for TriviaQA, which does not involve any conversational history.",
"The sentence selector we used in experiments is released by Htut et al. (2018).",
"Evaluation Metric .",
"The main evaluation metric is macro-average word-level F1 score.",
"We compare each prediction with the reference answer.",
"Precision is defined by the percentage of predicted answer tokens that appear in the reference answer, and recall is the percentage of reference answer tokens captured in the prediction.",
"F1 score is the harmonic mean of the precision and recall.",
"When multiple reference answers are provided, the maximum F1 score is used for evaluation.",
"MRC datasets, CoQA and QuAC.",
"Setting .",
"We perform a set of experiments with different maximum sequence lengths of 192 , 256 , 384 , and 512 .",
"Our model fixes the number of segments read from a document for each question.",
"It generates 4 , 3 , 3 , and 2 segments under the length limit of 192 , 256 , 384 , and 512 , respectively.",
"Considering that questions are highly correlated due to the existence of coreferential mentions across questions, we concatenate each question with as many of its previous questions as possible up to the length limit of 64 question tokens.",
"The action space of the model strides is set as [ 16 , 16 , 32 , 64 , 128] for CoQA and [ 16 , 32 , 64 , 128 , 256] for QuAC considering that documents in CoQA documents are shorter than those in QuAC.",
"The first segment always starts with the first token of the document, and the model will take stride action after the first segment.",
"Results .",
"In Table 2, we present F1 scores achieved by our methods and the baselines.",
"The performance of the BERT-Large model drops drastically as the maximum sequence length decreases.",
"We see a drop of 8 .",
"6% in F1 score on the CoQA dataset and a drop of 27 .",
"0% on the QuAC dataset when the maximum input length decreases from 512 to 192 .",
"Followed by the same BERT-Large reader, the sentence selector baseline that only considers the current question achieves better performance than the selector fed with the concatenation of the current question and its previous questions.",
"The selector with the current question performs well in selecting sentences containing answers from documents.",
"For 90 .",
"4% of questions in CoQA and 81 .",
"2% of questions in QuAC, the top-ranked 12 sentences in the documents can include complete Dataset CoQA QuAC # of Doc Tokens <=200 (200, 300] (300, 400] >400 <=300 (300, 450] (450, 600] >600 Percentage (%) 15.3 63.3 18.9 2.5 20.5 52.0 19.7 7.8 BERT-Large 81.0 81.9 81.8 67.2 66.2 62.8 62.2 38.7 BERT-RCM Gated recurrence LSTM recurrence 81.1 82.1 82.3 74.5 66.1 62.6 63.6 43.2 81.1 82.0 82.3 74.7 66.4 62.6 63.0 41.3 Table 3: F1 Score ( % ) on documents with different numbers of tokens (max sequence length is 512 ).",
"answers.",
"However, the selector does not improve upon BERT-Large despite its high precision in sentence selection.",
"This might be because selected sentences do not provide sufficient contexts for a model to identify answers accurately.",
"Our model with recurrent chunking mechanisms BERT-RCM performs consistently better than both BERT-Large and BERT-Sent-Selector.",
"On the CoQA dataset, BERT-RCM with gated recurrence improves upon the BERT-Large model by 3 .",
"2% , 3% , 0 .",
"3% , and 0 .",
"4% with maximum sequence length of 192 , 256 , 284 , and 512 , respectively.",
"The improvement brought by LSTM recurrence and RL chunking is 2 .",
"6% , 3 .",
"3% , 0 .",
"3% , 0 .",
"4% on CoQA.",
"As for the QuAC dataset, gated recurrence combined with RL chunking leads to improvements of 17 .",
"1% , 4 .",
"6% , 3 .",
"2% , 0 .",
"5% , and LSTM recurrence has gains of 19 .",
"4% , 5 .",
"0% , 3 .",
"7% , 0 .",
"3% under different maximum sequence lengths.",
"On the two datasets, the gains of BERT-RCM over BERT-Large are statistically significant at p = 0 .",
"05 with both gated and LSTM recurrence.",
"We notice that our model is less sensitive to the maximum sequence length, and LSTM recurrence has comparable performance to the gated recurrence.",
"The gain is more obvious with maximum sequence length (192 , 256 , 384) , and relatively small under the length of 512 .",
"This is perhaps because most document lengths are smaller than 512 in CoQA and QuAC.",
"Therefore, we report the performance of our proposed method on documents of different lengths in Table 3, where the maximum sequence length is set as 512 .",
"We observe that the gain is more obvious on longer documents.",
"For documents with more than 400 words in the CoQA dataset, RL chunking with gated recurrence has an improvement of 7 .",
"3% over BERT-Large, and RL chunking with LSTM recurrence improves F1 score by 7 .",
"5% .",
"As for QuAC, the improvement of gated recurrence with RL chunking is 4 .",
"5% , and the improvement of LSTM recurrence is 2 .",
"6% .",
"Ablation Analysis .",
"We further study the effect of recurrence alone without RL chunking here.",
"As shown in rows BERT-Large and Gated recurrence (no RL chunking) in Table 2, gated recurrence alone can improve F1 score by 2 .",
"4% , and LSTM recurrence leads to an improvement of 2 .",
"3% without RL chunking when the maximum sequence length is 256 .",
"However, we do not observe any improvement when the maximum sequence length is set to 384 or 512 .",
"We further evaluate the ability of our model in dealing with extremely long documents on the TriviaQA",
"TriviaQA Wikipedia dataset.",
"Setting .",
"We set the maximum sequence length as 512 for all models.",
"The action space of our BERT-RCM model is set to [ 64 , 128 , 256 , 512 , 1024] .",
"The stride sizes are larger than those in CoQA and QuAC, since TriviaQA provides much longer documents.",
"During training, the maximum number of segments our model can extract from a document is set to three in the TriviaQA dataset.",
"Note that our model reads no more than 512 3 = 1536 tokens from these three segments, which are much fewer than the average document length.",
"Results .",
"We filter a small number of questions whose answers cannot be extracted from documents and keep 7,251 questions from a total of 7,993 questions.",
"In Table 4, we present the F1 scores of different algorithms.",
"Compared with Dataset CoQA QuAC BERT-Large (Devlin et al., 2019) Prediction Stride Size Prediction Stride Size Training Stride Size 16 32 64 128 16 32 64 128 16 80.8 80.9 80.8 80.7 60.6 60.7 60.7 60.8 32 81.1 81.1 81.1 81.1 60.7 60.7 60.9 61.0 64 81.4 81.4 81.4 81.3 61.0 61.0 61.4 61.4 128 81.0 81.1 81.1 81.1 60.8 60.8 60.8 61.2 Table 5: F1 score ( % ) of the BERT-Large model with different training/prediction stride sizes on the CoQA and QuAC datasets.",
"BERT-Large, the BERT-RCM model achieves 1 .",
"6% gain with gated recurrence and 1% gain with LSTM recurrence.",
"Also, both BERT-RCM and BERT-Large models beat the Sent-Selector model.",
"In this section, we will analyze the performance of the baseline BERT-Large model and our proposed recurrent chunking mechanisms.",
"In Table 5, we give an analysis of how the performance varies with different stride sizes in BERT-Large model (the baseline) training and prediction.",
"An interesting observation is that smaller stride size in prediction does not always improve the performance, sometimes even hurts as can be seen on the QuAC dataset.",
"It suggests that BERT-Large performs badly on selecting good answers from multiple chunks.",
"Smaller stride size in model training also leads to worse performance.",
"A possible explanation is that smaller stride size would cause the significant distortion of training data distribution, since the longer question-document pairs produces more training samples than short ones.",
"We now provide an insight into the recurrent mechanisms and chunking policy learned by our proposed model using quantitative analysis.",
"For the clarity of our discussions, we use the following setting on the CoQA and QuAC datasets: the maximum chunk length is set to 256 , and the stride size of BERT-Large model is 128 .",
"Segment-Hit Rate .",
"With the ability of chunking policy, BERT-RCM is expected to focus on those document segments that contain an answer.",
"To evaluate how well a model can capture good segments, we use hit rate , i.e., the percentage of segments that contain a complete answer among all extracted segments, as evaluation metric.",
"As shown in Table 6, BERT-RCM significantly outperforms BERT-Large, which indicates that the learned chunking policy is more focused on informative segments.",
"Answer-Chunk Center Distance .",
"As discussed in Fig. 1, the answer's position with respect to a document segment is important for answer prediction.",
"When an answer is centered within the document segment, sufficient contexts on both sides help a model make better predictions.",
"In Figure 4: Example of generated document segments by BERT-RCM from a CoQA document.",
"Fig. 3, it presents the averaged center distances of the first three segments generated by BERT-Large and BERT-RCMs on the CoQA validation dataset.",
"Since all models start from the beginning of a document in the first segment, their first answer-chunk center distances are the same: 96 tokens.",
"But for the second and third segments generated by BERT-RCMs, the answer-chunk center distances are much smaller than BERT-Large.",
"Case Study .",
"We show an example from a CoQA document in Figure 4 to illustrate the chunking mechanism of our BERT-RCM model with LSTM recurrence.",
"The model starts with the beginning of the document as the first segment, where the answer span is close to its right boundary.",
"The model moves forwards 128 tokens to include more right contexts and generates the second chunk.",
"The stride size is a bit large since the answer is close to the left boundary of the second segment.",
"The model then moves back to the left by 16 tokens and obtains its third segment.",
"The chunking scorer assigns the three segments with the scores 0 .",
"24 , 0 .",
"87 , and 0 .",
"90 , respectively.",
"It suggests that the model considers the third segment as the most informative chunk in answer selection.",
"There is a growing interest in MRC tasks that require the understanding of both questions and reference documents (Trischler et al., 2017; Rajpurkar et al., 2018; Saeidi et al., 2018; Choi et al., 2018; Reddy et al., 2018; Xu et al., 2019).",
"Recent studies on pre-trained language models (Radford et al., 2018; Devlin et al., 2019; Baker et al., 2019; Yang et al., 2019b) have demonstrated their great success in fine-tuning on MRC tasks.",
"However these pre-trained NLP models (e.g., BERT) only take as input a fixed-length text.",
"Variants of BERT are proposed to process long documents in tasks such as text classification (Chalkidis et al., 2019).",
"To deal with lengthy documents in machine reading comprehension tasks, some previous studies skip certain tokens (Yu et al., 2017; Seo et al., 2018) or select a set of sentences as input based on the given questions (Hewlett et al., 2017; Min et al., 2018; Lin et al., 2018).",
"However, they mainly focus on tasks in which most of the answers to given questions are formed by a single informative sentence.",
"These previous approaches are less applicable to deal with those complicated questions that demand cross-sentences reasoning or have much lexical variability from their lengthy documents.",
"In this paper, we propose a chunking policy network for machine reading comprehension, which enables a model learn to chunk lengthy documents in a more flexible way via reinforcement learning.",
"We also add a recurrent mechanism to allow the information to flow across segments so that the model could have knowledge beyond the current segment when selecting answers.",
"We have performed extensive experiments on three public datasets of machine reading comprehension: CoQA, QuAC, and TriviaQA.",
"Our approach outperforms benchmark models across different datasets.",
"We would like to thank the anonymous reviewers for their constructive comments and suggestions."
]
| [
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"method",
"method",
"result",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"method",
"result",
"other"
]
|
[
"We address part-of-speech (POS) induction by maximizing the mutual information between the induced label and its context.",
"We focus on two training objectives that are amenable to stochastic gradient descent (SGD): a novel generalization of the classical Brown clustering objective and a recently proposed variational lower bound.",
"While both objectives are subject to noise in gradient updates, we show through analysis and experiments that the variational lower bound is robust whereas the generalized Brown objective is vulnerable.",
"We obtain strong performance on a multitude of datasets and languages with a simple architecture that encodes morphology and context.",
"We consider information theoretic objectives for POS induction, an important unsupervised learning problem in computational linguistics (Christodoulopoulos et al., 2010).",
"The idea is to make the induced label syntactically informative by maximizing its mutual information with respect to local context.",
"Mutual information has long been a workhorse in the development of NLP techniques, for instance the classical Brown clustering algorithm (Brown et al., 1992).",
"But its role in today's deep learning paradigm is less clear and a subject of active investigation (Belghazi et al., 2018; Oord et al., 2018).",
"We focus on fully differentiable objectives that can be plugged into an automatic differentiation system and efficiently optimized by SGD.",
"Specifically, we investigate two training objectives.",
"The first is a novel generalization of the Brown clustering objective obtained by relaxing the hard clustering constraint.",
"The second is a recently proposed variational lower bound on mutual information (McAllester, 2018).",
"A main challenge in optimizing these objectives is the difficulty of stochastic optimization.",
"Each objective involves entropy estimation which is a nonlinear function of all data and does not decompose over individual instances.",
"This makes the gradients estimated on minibatches inconsistent with the true gradient estimated from the entire dataset.",
"To our surprise, in practice we are able to optimize the variational objective effectively but not the generalized Brown objective.",
"We analyze the estimated gradients and show that the inconsistency error is only logarithmic in the former but linear in the latter.",
"We validate our approach on POS induction by attaining strong performance on a multitude of datasets and languages.",
"Our simple architecture that encodes morphology and context reaches up to 80.1 many-to-one accuracy on the 45-tag Penn WSJ dataset and achieves 4.7% absolute improvement to the previous best result on the universal treebank.",
"Unlike previous works, our model does not rely on computationally expensive structured inference or hand-crafted features.",
"Mutual information.",
"Mutual information between two random variables measures the amount of information gained about one variable by observing the other.",
"Unlike the Pearson correlation coefficient which only captures the degree of linear relationship, mutual information captures any nonlinear statistical dependencies (Kinney and At-wal, 2014).",
"Formally, the mutual information between discrete random variables X, Y with a joint distribution p is the KL divergence between the joint distribution p ( x, y ) and the product distribution p ( x ) p ( y ) over X, Y : I ( X, Y ) = (cid:88) x,y p ( x, y ) log p ( x, y ) p ( x ) p ( y ) = E ( x,y ) p (cid:20) log p ( x, y ) p ( x ) p ( y ) (cid:21) (1) It is thus nonnegative and zero iff X and Y are independent.",
"We assume that the marginals p ( x ) and p ( y ) are nonzero.",
"It is insightful to write mutual information in terms of entropy.",
"The entropy of X is H ( X ) = (cid:88) x p ( x ) log p ( x ) = E x p (cid:20) log 1 p ( x ) (cid:21) corresponding to the number of bits for encoding the behavior of X under p .",
"1 The entropy of X given the information that Y equals y is H ( X | Y = y ) = (cid:88) x p ( x | y ) log p ( x | y ) Taking expectation over Y yields the conditional entropy of X given Y : H ( X | Y ) = (cid:88) y p ( y ) (cid:32) (cid:88) x p ( x | y ) log p ( x | y ) (cid:33) = E ( x,y ) p (cid:20) log 1 p ( x | y ) (cid:21) (2) By manipulating the terms in mutual information, we can write I ( X, Y ) = E x p (cid:20) log 1 p ( x ) (cid:21) E ( x,y ) p (cid:20) log 1 p ( x | y ) (cid:21) = H ( X ) H ( X | Y ) which expresses the amount of information on X gained by observing Y .",
"Switching X and Y shows that I ( X, Y ) = H ( Y ) H ( Y | X ) .",
"1 The paper will always assume log base 2 to accommodate the bit interpretation.",
"Multiplying the term inside the log by p ( x ) /p ( x ) we derive H ( p, q ) = E x p (cid:20) log 1 p ( x ) (cid:21) + E x p (cid:20) log p ( x ) q ( x ) (cid:21) = H ( p ) + DKL ( p || q ) Thus H ( p, q ) H ( p ) with equality iff p = q .",
"Our primary inspiration comes from Brown clustering (Brown et al., 1992), a celebrated word clustering technique that had been greatly influential in unsupervised and semi-supervised NLP long before continuous representations based on neural networks were popularized.",
"It finds a clustering C : V [ m ] of the vocabulary V into m classes by optimizing the mutual information between the clusters of a random bigram ( X, Y ) .",
"Given a corpus of N words ( x 1 . . . x N ) , it assumes a uniform distribution over consecutive word pairs ( x i 1 , x i ) and optimizes the following empirical objective max C : V [ m ] (cid:88) c,c (cid:48) [ m ] #( c, c (cid:48) ) N log (cid:18) #( c, c (cid:48) ) N #( c )#( c (cid:48) ) (cid:19) (3) where #( c, c (cid:48) ) denotes the number of occurrences of the cluster pair ( c, c (cid:48) ) under C .",
"While this optimization is intractable, Brown et al. (1992) derive an effective heuristic that",
"1. initializes m most frequent words as singleton clusters and",
"2. repeatedly merges a pair of clusters that yields the smallest decrease in mutual information.",
"The resulting clusters have been useful in many applications (Koo et al., 2008; Owoputi et al., 2013) and has remained a strong baseline for POS induction decades later (Christodoulopoulos et al., 2010).",
"But the approach is tied to highly nontrivial combinatorial optimization tailored for the spe-cific problem and difficult to scale/generalize.",
"In the remainder of the paper, we assume discrete random variables ( X, Y ) X Y with a joint distribution D that represent naturally co-occurring observations.",
"In POS induction experiments, we will set D to be a context-word distribution where Y is a random word and X is the surrounding context of Y (thus Y is the vocabulary and X is the space of all possible contexts).",
"Let m be the number of labels to induce.",
"We introduce a pair of trainable classifiers that define conditional label distributions p ( z | x ) and q ( z | y ) for all x X , y Y , and z [ m ] .",
"For instance, p ( | y ) can be the output of a softmax layer on some transformation of y .",
"Our goal is to learn these classifiers without observing the latent variable z by optimizing an appropriate objective.",
"For training data, we assume N iid samples ( x 1 , y 1 ) . . . ( x N , y N ) D .",
"Our first attempt is to maximize the mutual information between the predictions of p and q .",
"Intuitively, this encourages p and q to agree on some annotation scheme (up to a permutation of labels), modeling the dynamics of inter-annotator agreement (Artstein, 2017).",
"It can be seen as a differentiable generalization of the Brown clustering objective.",
"To this end, define p ( z ) = E x D [ p ( z | x )] z [ m ] q ( z ) = E y D [ q ( z | y )] z [ m ] The mutual information between the predictions of p and q on a single sample ( x, y ) is then J mi x,y = (cid:88) z,z (cid:48) p ( z | x ) q ( z (cid:48) | y ) log p ( z | x ) q ( z (cid:48) | y ) p ( z ) q ( z (cid:48) ) and the objective (to maximize) is J mi = E ( x,y ) D (cid:2) J mi x,y (cid:3) Note that this becomes exactly the original Brown objective (3) if ( X, Y ) is defined as a random bigram and p and q are tied and constrained to be a hard clustering.",
"p ( z ) = 1 NN (cid:88) i =1 p ( z | x i ) z [ m ] q ( z ) = 1 NN (cid:88) i =1 q ( z | y i ) z [ m ] (cid:98) J mi i = (cid:88) z,z (cid:48) p ( z | x i ) q ( z (cid:48) | y i ) log p ( z | x i ) q ( z (cid:48) | y i ) p ( z ) q ( z (cid:48) ) (cid:98) J mi = 1 NN (cid:88) i =1 (cid:98) J mi i (4)",
"the objective cannot be written as a sum of local objectives because we take log of the estimates q ( z ) and p ( z ) computed from all samples.",
"This makes the stochastic gradient estimator biased (i.e., it does not match the gradient of (4) in expectation) and compromises the correctness of SGD.",
"This bias is investigated more closely in Section 4.",
"The second training objective we consider can be derived in a rather heuristic but helpful manner as follows.",
"Since X, Y are always drawn together, if q ( z | y ) is the target label distribution for the pair ( x, y ) , then we can train p ( z | x ) by minimizing the cross entropy between q and p over samples H ( q, p ) = E ( x,y ) D (cid:34) (cid:88) z q ( z | y ) log p ( z | x ) (cid:35) which is minimized to zero at p = q .",
"However, q is also untrained and needs to be trained along with p .",
"Thus this loss alone admits trivial solutions such as setting p (1 | x ) = p (1 | y ) = 1 for all ( x, y ) .",
"This undesirable behavior can be prevented by simultaneously maximizing the entropy of q .",
"Let Z denote a random label from q with distribution q ( z ) = E y D [ q ( z | y )] (thus Z is a function of q ).",
"The entropy of Z is H ( Z ) = (cid:88) z q ( z ) log q ( z ) Putting together, the objective (to maximize) is J var = H ( Z ) H ( q, p ) Variational interpretation.",
"The reason this objective is named a variational lower bound is due to McAllester (2018) who shows the following.",
"Consider the mutual information between Z and the raw signal X : I ( X, Z ) = H ( Z ) H ( Z | X ) (5) Because Z is drawn from q conditioning on Y , which is co-distributed with X , we have a Markov chain X Y q Z .",
"Thus maximizing I ( X, Z ) over the choice of q is a reasonable objective that enforces predictive coding: the label predicted by q from Y must be as informative of X as possible.",
"It can be seen as a special case of the objective underlying the information bottleneck method (Tishby et al., 2000).",
"So what is the problem with optimizing (5) directly?",
"The problem is that the conditional entropy under the model H ( Z | X ) = E ( x,y ) D z q ( | y ) (cid:20) log 1 ( z | x ) (cid:21) (6) involves the posterior probability of z given x ( z | x ) = (cid:80) y D ( x, y ) q ( z | y ) (cid:80) y,z D ( x, y ) q ( z | y ) This conditional marginalization is generally intractable and cannot be approximated by sampling since the chance of seeing a particular x is small.",
"However, we can introduce a variational distribution p ( z | x ) to model ( z | x ) .",
"Plugging this into (6) we observe that E ( x,y ) D z q ( | y ) (cid:20) log 1 p ( z | x ) (cid:21) = H ( q, p ) Moreover, E ( x,y ) D z q ( | y ) (cid:20) log 1 p ( z | x ) (cid:21) = E ( x,y ) D z q ( | y ) (cid:20) log ( z | x ) ( z | x ) p ( z | x ) (cid:21) = H ( Z | X ) + DKL ( || p ) Thus H ( q, p ) is an upper bound on H ( Z | X ) for any p with equality iff p matches the true posterior distribution .",
"This in turn means that J var = H ( Z ) H ( q, p ) is a lower bound on I ( X, Z ) , hence the name.",
"Empirical objective.",
"As with the generalized Brown objective, the variational lower bound can be estimated from the training data as (cid:98) H ( Z ) = (cid:88) z q ( z ) log q ( z ) (cid:98) H ( q, p ) = 1 NN (cid:88) i =1 (cid:32) (cid:88) z q ( z | y i ) log p ( z | x i ) (cid:33) (cid:98) J var = (cid:98) H ( Z ) (cid:98) H ( q, p ) (7) where q ( z ) is defined as in Section 3.1.",
"Our task is again to maximize this empirical objective (7) by taking gradient steps at random minibatches.",
"Once again, it cannot be written as a sum of local objectives because the entropy term involves log of q ( z ) computed from all samples.",
"Thus it is not clear if stochastic optimization will be effective.",
"The discussion of the two objectives in the previous section is incomplete because the stochastic gradient estimator is biased under both objectives.",
"In this section, we formalize this issue and analyze the bias.",
"Let B 1 . . . BK be a partition of the N training examples ( x 1 , y 1 ) . . . ( x N , y N ) into K (iid) minibatches.",
"For simplicity, assume | B k | = M for all k and N = MK .",
"We will adopt the same notation in Section 3 for p ( z ) and q ( z ) estimated from all N samples.",
"Define analogous estimates based on the k -th minibatch by p k ( z ) = 1 M (cid:88) x B k p ( z | x ) z [ m ] q k ( z ) = 1 M (cid:88) y B k q ( z | y ) z [ m ] If l N denotes an objective function computed from all N samples in the training data and l k denotes the same objective computed from B k , a condition on the correctness of SGD is that the gradient of l k (with respect to model parameters) is consistent with the gradient of l N on average: l N = 1 KK (cid:88) k =1 l k + (cid:15) (8) where (cid:15) denotes the bias of the stochastic gradient estimator.",
"In particular, any loss of the form l N = (1 /K ) (cid:80) k l k that decomposes over inde-pendent minibatches (e.g., the cross-entropy loss for supervised classification) satisfies (8) with (cid:15) = 0 .",
"The bias is nonzero for the unsupervised objectives considered in this work due to the issues discussed in Section 3.",
"The following theorem precisely quantifies the bias for the empirical losses associated with the variational bound and the generalized Brown objectives.",
"We only show the result with the gradient with respect to q , but the result with the gradient with respect to p is analogous and omitted for brevity.",
"Theorem 4.1.",
"Assume the setting in Section 4.1 and the gradient is taken with respect to the parameters of q .",
"(cid:15) = 1 KK (cid:88) k =1 (cid:88) z log q ( z ) q k ( z ) q k ( z ) On the other hand, for l N = (cid:98) J mi defined in (4) (cid:15) = 1 NK (cid:88) k =1 (cid:88) z,z (cid:48) (cid:18) (cid:15) k ( z, z (cid:48) ) q k ( z (cid:48) ) + log p ( z ) q ( z (cid:48) ) p k ( z ) q k ( z (cid:48) ) (cid:88) ( x,y ) B k p ( z | x ) q ( z (cid:48) | y ) (cid:19) where (cid:15) k ( z, z (cid:48) ) = 1 KN (cid:88) i =1 p ( z | x i ) q ( z (cid:48) | y i ) q ( z (cid:48) ) (cid:88) ( x,y ) B k p ( z | x ) q ( z (cid:48) | y ) q k ( z (cid:48) )",
"A proof can be found in the appendix.",
"We see that both biases go to zero as p k and q k approach p and q .",
"However, the bias is logarithmic in the ratio q ( z ) / q k ( z ) for the variational lower bound but roughly linear in the difference between 1 q ( z (cid:48) ) and 1 q k ( z (cid:48) ) for the generalized Brown objective.",
"In this sense, the variational lower bound is exponentially more robust to noise in minibatch estimates than the generalized Brown objective.",
"This is con-firmed in experiments: we are able to optimize (cid:98) J var with minibatches as small as 80 examples but unable to optimize (cid:98) J mi unless minibatches are prohibitively large.",
"We demonstrate the effectiveness of our training objectives on the task of POS induction.",
"The goal of this task is to induce the correct POS tag for a given word in context (Merialdo, 1994).",
"As typical in unsupervised tasks, evaluating the quality of induced labels is challenging; see Christodoulopoulos et al. (2010) for an in-depth discussion.",
"To avoid complications, we follow a standard practice (Berg-Kirkpatrick et al., 2010; Ammar et al., 2014; Lin et al., 2015; Stratos et al., 2016) and adopt the following setting for all compared methods.",
"ground-truth POS tag in the annotated data and report the resulting accuracy.",
"We also use the V-measure (Rosenberg and Hirschberg, 2007) when comparing with CRF autoencoders to be consistent with reported results (Ammar et al., 2014; Lin et al., 2015).",
"We use the number of ground-truth POS tags as the value of m (i.e., number of labels to induce).",
"This is a data-dependent quantity, for instance 45 in the Penn WSJ and 12 in the universal treebank.",
"Fixing the number of tags this way obviates many evaluation issues.",
"Model-specific hyperparameters are tuned on the English Penn WSJ dataset.",
"This configuration is then fixed and used for all other datasets: 10 languages in the universal treebank 2 and 7 languages from CoNLL-X and CoNLL 2007.",
"We set D to be a uniform distribution over context-word pairs in the training corpus.",
"Given N samples ( x 1 , y 1 ) . . . ( x N , y N ) D , we optimize the variational objective (7) or the generalized Brown objective (4) by taking gradient steps at random minibatches.",
"This gives us conditional label distributions p ( z | x ) and q ( z | y ) for all contexts x , words y , and labels z .",
"At test time, we use z = arg max z q ( z | y ) as the induced label of word y .",
"We experimented with different inference methods such as taking arg max z p ( z | x ) q ( z | y ) but did not find it helpful.",
"Let V denote the vocabulary.",
"We assume an integer H 1 that specifies the width of local context.",
"Given random word y V , we set x V 2 H to be an ordered list of H left and H right words of y .",
"For example, with H = 2 , a typical context-target pair ( x, y ) D may look like x = ( had, these, in, my ) y = keys We find this simple fixed-window definition of observed variables to be the best inductive bias for POS induction.",
"The correct label can be inferred from either x or y in many cases: in the above example, we can infer that the correct POS tag is plural noun by looking at the target or the context.",
"2 https://github.com/ryanmcd/uni-dep-tb 5.3 Architecture We use the following simple architecture to parameterize the label distribution p ( | x ) conditioned on context x VH and the label distribution q ( | y ) conditioned on word y V .",
"Context architecture.",
"The parameters of p are word embeddings e w R d for all w V and matrices W j R m d for all j = 1 . . . 2 H .",
"Given 2 H ordered contextual words x = ( w j ) 2 Hj =1 , we define p ( | x ) = softmax 2 H (cid:88) j =1 W j e w j Word architecture.",
"The parameters of q are the same word embeddings e w R d shared with p , character embeddings e c R d/ 2 for all distinct characters c , two single-layer LSTMs with input/output dimension d/ 2 , and matrices W c , W w R m d .",
"Given the word y with character sequence c 1 . . . c T , we define ( f 1 . . . f T ) = LSTM f ( e c 1 . . . e c T ) ( b 1 . . . b T ) = LSTM b ( e c T . . . e c 1 ) q ( | y ) = softmax (cid:18) W c (cid:20) f T b T (cid:21) + W w e y (cid:19) The overall architecture is illustrated in Figure",
"1. Our hyperparameters are the embedding dimension d = 200 , the context width H = 2 , the learning rate of the Adam optimizer r = 0 .",
"001 , and the minibatch size B = 80 .",
"3 Their values are tuned on the 45-tag Penn WSJ dataset to maximize accuracy.",
"We focus on comparing with the following models which are some of the strongest baselines in the literature we are aware of.",
"Berg-Kirkpatrick et al. (2010) extend a standard hidden Markov Model (HMM) to incorporate linguistic features.",
"Stratos et al. (2016) develop a factorization-based algorithm for learning a constrained HMM.",
"Ammar et al. (2014) propose a CRF autoencoder that reconstructs words from a structured label sequence.",
"Lin et al. (2015) extend Ammar et al. (2014) by switching a categorical reconstruction distribution with a Gaussian distribution.",
"In addition to these 3 An implementation is available at: https:// github.com/karlstratos/mmi-tagger .",
"baselines, we also report results with Brown clustering (Brown et al., 1992), the Baum-Welch algorithm (Baum and Petrie, 1966), and k -means clustering of 300 -dimensional GloVe vectors (Pen-nington et al., 2014).",
"The 45-tag Penn WSJ dataset.",
"The 45-tag Penn WSJ dataset is a corpus of around one million words each tagged with one of m = 45 tags.",
"It is used to optimize hyperparameter values for all compared methods.",
"Table 1 shows the average accuracy over 10 random restarts with the best hyperparameter configurations; standard deviation is given in parentheses (except for deterministic methods Stratos et al. (2016) and Brown cluster-ing).",
"Our model trained with the variational objective (7) outperforms all baselines.",
"4 We also observe that our model trained with the generalized Brown objective (4) does not work.",
"We have found that unless the minibatch size is as large as 10,000 the gradient steps do not effectively increase the true data-wide mutual information (4).",
"This supports our bias analysis in Section 4.",
"While it may be possible to develop techniques to resolve the difficulty, for instance keeping a moving average of estimates to stabilize estimation, we leave this as future work and focus on the variational objective in the remainder of the paper.",
"4 We remark that Tran et al. (2016) report a single number 79.1 with a neuralized HMM.",
"We also note that the concurrent work by He et al. (2018) obtains 80.8 by using word embeddings carefully pretrained on one billion words.",
"model (accuracy 80.1) to better understand the sources of its strong performance.",
"Context size H = 2 is a sizable improvement over H = 3 or H = 1 .",
"Random sampling is significantly more effective than sentence-level batching (i.e., each minibatch is the set of context-word pairs within a single sentence as done in McAllester (2018)).",
"G lo V e initialization of word embeddings e w is harmful.",
"As expected for POS tagging, morphological modeling with LSTMs gives the largest improvement.",
"While it may be surprising that G lo V e initialization is harmful, it is well known that pretrained word embeddings do not necessarily capture syntactic relationships (as evident in the poor performance of k -means clustering).",
"Consider the top ten nearest neighbors of the word made under GloVe embeddings (840B.300d, within PTB vocab) shown in Table 3.",
"The neighbors are clearly not in the same syntactic category.",
"The embeddings can be made more syntactic by controlling the context window.",
"But we found it much more effective (and simpler) to start from randomly initialized embeddings and let the objective induce appropriate representations.",
"The 12-tag universal treebank.",
"The universal treebank v2.0 is a corpus in ten languages tagged with m = 12 universal POS tags (McDonald et al., 2013).",
"We use this corpus to be compatible with existing results.",
"Table 4 shows results on the dataset, using the same setting in the experiments on the Penn WSJ dataset.",
"Our model significantly outperforms the previous state-of-the-art, achieving an absolute gain of 4.7 over Stratos et al. (2016) in average accuracy.",
"Comparison with CRF autoencoders.",
"Table 5 shows a direct comparison with CRF autoencoders (Ammar et al., 2014; Lin et al., 2015) in many-to-one accuracy and the V-measure.",
"We compare against their reported numbers by running our model once on the same datasets using the same setting in the experiments on the Penn WSJ dataset.",
"The data consists of the training portion of CoNLL-X and CoNLL 2007 labeled with 12 universal tags.",
"Our model is competitive with all baselines.",
"Information theory, in particular mutual information, has played a prominent role in NLP (Church and Hanks, 1990; Brown et al., 1992).",
"It has intimate connections to the representation learning capabilities of neural networks (Tishby and Zaslavsky, 2015) and underlies many celebrated modern approaches to unsupervised learning such as generative adversarial networks (GANs) (Goodfellow et al., 2014).",
"There is a recent burst of effort in learning continuous representations by optimizing various lower bounds on mutual information (Belghazi et al., 2018; Oord et al., 2018; Hjelm et al., 2018).",
"These representations are typically eval-Method de en es fr id it ja ko pt-br sv Mean Variational (cid:98) J var (7) ( 1.5) 75.4 ( 1.7) 73.1 ( 1.0) 73.1 ( 2.9) 70.4 ( 1.5) 73.6 ( 3.3) 67.4 ( 0.4) 77.9 ( 1.2) 65.6 ( 2.3) 70.7 ( 1.5) 67.1 71.4 Stratos et al. 63.4 71.4 74.3 71.9 67.3 60.2 69.4 61.8 65.8 61.0 66.7 Berg-Kirkpatrick et al. ( 1.8) 67.5 ( 3.5) 62.4 ( 3.1) 67.1 ( 4.5) 62.1 ( 3.9) 61.3 ( 2.9) 52.9 ( 2.9) 78.2 ( 3.6) 60.5 ( 2.2) 63.2 ( 2.5) 56.7 63.2 Brown et al. 60.0 62.9 67.4 66.4 59.3 66.1 60.3 47.5 67.4 61.9 61.9 Baum-Welch ( 4.8) 45.5 ( 3.4) 59.8 ( 2.2) 60.6 ( 3.6) 60.1 ( 3.1) 49.6 ( 2.6) 51.5 ( 2.1) 59.5 ( 0.6) 51.7 ( 3.7) 59.5 ( 3.0) 42.4 54.0 Table 4: Many-to-one accuracy on the 12-tag universal treebank dataset.",
"uated on extrinsic tasks as features.",
"In contrast, we learn discrete representations by optimizing a novel generalization of the Brown clustering objective (Brown et al., 1992) and a variational lower bound on mutual information proposed by McAllester (2018).",
"We focus on intrinsic evaluation of these representations on POS induction.",
"Extrinsic evaluation of these representations in downstream tasks is an important future direction.",
"The issue of biased stochastic gradient estimators is a common challenge in unsupervised learning (e.g., see Wang et al., 2015).",
"This arises mainly because the objective involves a nonlinear transformation of all samples in a training dataset, for instance the whitening constraints in deep canonical correlation analysis (CCA) (An-drew et al., 2013).",
"In this work, the problem arises because of entropy.",
"This issue is not considered in the original work of McAllester (2018) and the error analysis we present in Section 4 is novel.",
"Our finding is that the feasibility of stochastic optimization greatly depends on the size of the bias in gradient estimates, as we are able to effectively optimize the variational objective while not the generalized Brown objective.",
"Our POS induction system has some practical advantages over previous approaches.",
"Many rely on computationally expensive structured inference or pre-optimized features (or both).",
"For instance, Tran et al. (2016) need to calculate for-ward/backward messages and is limited to truncated sequences by memory constraints.",
"Berg-Kirkpatrick et al. (2010) rely on extensively hand-engineered linguistic features.",
"Ammar et al. (2014), Lin et al. (2015), and He et al. (2018) rely on carefully pretrained lexical representations like Brown clusters and word embeddings.",
"In contrast, the model presented in this work requires no expensive structured computation or feature engineering and uses word/character embeddings trained from scratch.",
"It is easy to implement using a standard neural network library and outperforms these previous works in many cases.",
"The author thanks David McAllester for many insightful discussions, and Sam Wiseman for helpful comments.",
"The Titan Xp used for this research was donated by the NVIDIA Corporation."
]
| [
"abstain",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"result",
"abstain",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"other",
"other",
"other",
"abstain",
"abstain",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other"
]
|
[
"The notion of in-domain data in NLP is often over-simplistic and vague, as textual data varies in many nuanced linguistic aspects such as topic, style or level of formality.",
"In addition, domain labels are many times unavailable, making it challenging to build domain-specific systems.",
"We show that massive pretrained language models implicitly learn sentence representations that cluster by domains without supervision suggesting a simple data-driven definition of domains in textual data.",
"We harness this property and propose domain data selection methods based on such models, which require only a small set of in-domain monolingual data.",
"We evaluate our data selection methods for neural machine translation across five diverse domains, where they outperform an established approach as measured by both BLEU and by precision and recall of sentence selection with respect to an oracle.",
"It is common knowledge in modern NLP that using large amounts of high-quality training data is a key aspect in building successful machine-learning based systems.",
"For this reason, a major challenge when building such systems is obtaining data in the domain of interest.",
"But what defines a domain?",
"Natural language varies greatly across topics, styles, levels of formality, genres and many other linguistic nuances (van der Wees et al., 2015; van der Wees, 2017; Niu et al., 2017).",
"This overwhelming diversity of language makes it hard to find the right data for the task, as it is nearly impossible to well-define the exact requirements from such data with respect to all the aforementioned aspects.",
"On top of that, domain labels are usually unavailable e.g. in large-scale web-crawled data like Common Crawl 1 which was recently used to 1 https://commoncrawl.org/ it koran subtitles medical law bert-base-uncased Figure 1: A 2D visualization of average-pooled BERT hidden-state sentence representations using PCA.",
"train state-of-the-art pretrained language models for various tasks (Raffel et al., 2019).",
"Domain data selection is the task of selecting the most appropriate data for a domain from a large corpus given a smaller set of in-domain data (Moore and Lewis, 2010; Axelrod et al., 2011; Duh et al., 2013; Silva et al., 2018).",
"In this work, we propose to use the recent, highly successful self-supervised pre-trained language models, e.g. Devlin et al. (2019); Liu et al. (2019) for domain data selection.",
"As pretrained LMs demonstrate state-of-the-art performance across many NLP tasks after being trained on massive amounts of data, we hypothesize that the robust representations they learn can be useful for mapping sentences to domains in an unsupervised, data-driven approach.",
"We show that these models indeed learn to cluster sentence representations to domains without further supervision (e.g. Figure 1), and quantify this phenomenon by fitting Gaussian Mixture Models (GMMs) to the learned representations and measuring the purity of the resulting unsupervised clustering.",
"We then propose methods to leverage these emergent domain clusters for domain data selection in two ways: Via distance-based retrieval in the sentence embedding space induced by the pretrained language model.",
"By fine-tuning the pretrained language model for binary classification, where positive examples are from the domain of interest.",
"Our methods enable to select relevant data for the task while requiring only a small set of monolingual in-domain data.",
"As they are based solely on the representations learned by self-supervised LMs, they do not require additional domain labels which are usually vague and over-simplify the notion of domain in textual data.",
"We evaluate our method on data selection for neural machine translation (NMT) using the multi-domain German-English parallel corpus composed by Koehn and Knowles (2017).",
"Our data selection methods enable to train NMT models that outperform those trained using the well-established cross-entropy difference method of Moore and Lewis (2010) across five diverse domains, achieving a recall of more than 95% in all cases with respect to an oracle that selects the true in-domain data.",
"Our contributions in this work are as follows.",
"First, we show that pre-trained language models are highly capable of clustering textual data to domains with high accuracy in a purely unsupervised manner.",
"Second, we propose methods to select in-domain data based on this property using vector-space retrieval and positive-unlabeled fine-tuning of pretrained language models for binary classification.",
"Third, we show the applicability of our proposed data selection methods on a popular benchmark for domain adaptation in machine translation.",
"An additional contribution is a new, improved data split we create for this benchmark, as we point on issues with previous splits used in the literature.",
"The code and data for this work is publicly available.",
"2 We hope this work will encourage more research on understanding the data landscape in NLP, enabling to find the right data for the task in the age of massive models and diverse data sources.",
"The proliferation of massive pretrained neural language models such as ELMo (Peters et al., 2018), BERT (Devlin et al., 2019) or RoBERTa (Liu et al., 2019) has enabled great progress on many NLP benchmarks (Wang et al., 2018, 2019a).",
"Larger and larger models trained on billions of tokens of raw text are released in an ever-increasing pace (Raffel et al., 2019), enabling the NLP community to fine-tune them for the task of interest.",
"While many works tried to probe those models for the morphological, syntactic and semantic information they capture (Tenney et al., 2019; Goldberg, 2019; Clark et al., 2019), an important aspect of language remained overlooked in this context the domain the data comes from, often referred to as the data distribution.",
"The definition of domain is many times vague and over-simplistic (e.g. medical text may be used for biomedical research papers and for clinical conversations between doctors and patients, although the two vary greatly in topic, formality etc.).",
"A common definition treats a domain as a data source: a domain is defined by a corpus from a specific source, and may differ from other domains in topic, genre, style, level of formality, etc. (Koehn and Knowles, 2017).",
"We claim that a more data-driven definition should take place, as different data sources may have sentences with similar traits and vice versa a single massive web-crawled corpus contains texts in numerous styles, topics and registers.",
"Our analysis in Section 2 shows examples for such cases, e.g. a sentence discussing Viruses and virus-like organisms in a legal corpus.",
"We hypothesize that massive pretrained LMs can learn representations that cluster to domains, as texts from similar domains will appear in similar contexts.",
"We test this hypothesis across several large, publicly-available pretrained LMs; we explore both masked-language-models (MLMs) and auto-regressive LMs.",
"We encode multi-domain data at the sentence level into vector representations.",
"We then cluster these vector representations for each model using a Gaussian Mixture Model (GMM) with k pre-defined clusters.",
"We chose GMM as our clustering approach as it allows soft assignments (vs.",
"hard as-k=5 k=10 k=15 Random 15.08 ( 0.0) 16.77 ( 0.0) 17.78 ( 0.0) LDA 24.31 ( 0.99) 26.73 ( 2.19) 30.79 ( 2.97) with PCA (n=50) without PCA k=5 k=10 k=15 k=5 k=10 k=15 word2vec 53.65 ( 0.79) 68.14 ( 2.58) 73.44 ( 0.68) 45.93 65.80 76.26 BERT-base 87.66 ( 0.24) 88.02 ( 1.10) 88.37 ( 0.66) 85.74 85.08 86.37 BERT-large 85.64 ( 6.13) 87.61 ( 0.26) 89.07 ( 0.53) 68.56 86.53 86.99 DistillBERT 83.68 ( 7.14) 86.31 ( 0.86) 87.53 ( 0.85) 79.00 86.42 88.14 RoBERTa-base 79.05 ( 0.10) 86.39 ( 0.90) 86.51 ( 0.28) 70.21 80.35 81.49 RoBERTa-large 80.61 ( 0.33) 89.04 ( 0.15) 89.94 ( 0.23) 69.88 81.07 85.91 GPT-2 70.30 ( 0.05) 84.76 ( 0.30) 82.56 ( 1.29) 37.82 39.02 41.45 XLNet 55.72 ( 0.69) 68.17 ( 3.93) 72.65 ( 1.92) 30.36 32.96 48.55 Table 1: Unsupervised domain clustering as measured by purity for the different models.",
"signments as in e.g. K-means) which we think fits the task better (as a sentence can be seen as drawn from a mixture of several domain).",
"3 In all cases, to create a sentence representation we perform average pooling of the last hidden state (before the softmax layer) for each token in the sentence.",
"4 To accelerate the clustering process and enable visualization we also experiment with performing dimensionality reduction with PCA over the sentence vectors before clustering them.",
"We experiment with k in 5, 10 and 15 to test how adding flexibility would improve the domain clustering accuracy.",
"For MLM-based models we use BERT (Devlin et al., 2019), DistilBERT (Sanh et al., 2019) and RoBERTa (Liu et al., 2019) (in both the base and large versions).",
"For autoregressive models we use GPT-2 (Radford et al., 2018) and XLNet (Yang et al., 2019).",
"In all cases we use the implementations from the HuggingFace Transformers toolkit (Wolf et al., 2019).",
"We also evaluated three additional, simpler baselines.",
"The first is using representations from word2vec (Mikolov et al., 2013), where we average-pooled the word vectors for the tokens that were present in the model vocabulary.",
"The second is using Latent Dirichlet Allocation (LDA, Blei et al., 2003), which is a classic approach to unsupervised clustering of text.",
"5 We also 3 See further discussion comparing GMMs and K-means in Daume (2009).",
"report results for a baseline which assigns sentences by sampling randomly from a uniform distribution over the clusters.",
"To evaluate the unsupervised domain clustering we used the multi-domain corpus proposed by Koehn and Knowles (2017) which includes textual data in five diverse domains: subtitles 6 , medical text (PDF documents from the European Medicines Agency), legal text (legislative text of the European Union), translations of the Koran, and IT-related text (man-uals and localization files of open-source software).",
"This dataset includes parallel sentences in English and German; for this experiment we used the English portion of the data.",
"See more details on the dataset in Section 3.1.",
"We used 2000 distinct sentences from each domain.",
"To evaluate whether the resulting clusters indeed capture the domains the data was drawn from we measure the clustering purity, which is a well-known metric for evaluating clustering (Manning et al., 2008).",
"To measure the clustering purity, we assign each unsupervised cluster with the most common true domain in the sentences assigned to that cluster, and then compute the accuracy according to this majority-based cluster-domain assignment (note that in this case several unsupervised clusters can be assigned to the same domain).",
"In cases where randomness is involved we run each experiment five times with different initializations and report the mean and variance of the purity metric for each model.",
"As can be seen in Table 1, pre-trained language models are indeed highly capable of generating sentence representations that cluster by domains, resulting in up to 87.66%, 89.04% and 89.94% accuracy when using k=5, k=10 and k=15 clusters, respectively, across 10,000 sentences in 5 domains.",
"We find these scores remarkably high given our straight-forward average-pooling strategy and that no domain-supervision was involved in the process of learning the pre-trained representations.",
"Figure 3 also demonstrates the quality of the obtained clusters in 2D using the BERT-base model, where the ellipses describe the mean and variance parameters learned for each cluster by the GMM with k = 5 .",
"7 We note that some classes of models did better than others: while all vector-based models did far better than the random and LDA baselines 8 , the MLM-based models dominated in all cases over word2vec and the auto-regressive models.",
"This may be explained by the fact that the MLM-based models use the entire sentence context when generating the representations for each token, while the auto-regressive models only use the past context, and word2vec uses a limited window context.",
"Using PCA improved performance in most cases and especially for the auto-regressive models, although the results for the MLMs remain high in 7 Similar visualizations for additional models are available in the supplementary material.",
"8 Note that the LDA models were trained using the multi-domain data alone, and did not utilize additional pretraining as in the other, more successful models.",
"This may explain their relatively weak performance.",
"both cases suggesting that these models encode the information very differently.",
"As can be seen in Figure 3, in some areas the domains are somewhat overlapping in the embedding space, which may lead to outlier cases where examples from one domain are assigned to a cluster of a another domain.",
"We plot a confusion matrix (Figure 2) to analyze this further based on the clustering with BERT-base and k=5.",
"We first note that the outlier sentences are much shorter than the average sentence length in the corpus (11.62 tokens on average for outliers vs. 20.5 tokens on average in general).",
"This makes sense as shorter sentences contain less information, making it harder to assign them to an appropriate cluster.",
"Table 2 shows examples of outlier sentences, assigned to clusters of domains different from their originating domain.",
"We can see that in many cases the assignments are sensible for example for sentences originating from the subtitles corpus, a sentence that mentions great priest is assigned to the Koran cluster, a sentence that mentions The International Criminal Court in The Hague is assigned to the Law cluster, a sentence that mentions the virus is assigned to the Medical cluster and so on.",
"This strengthens our claim that defining domains based on the corpus they originated from is over-simplistic, and using a data-driven approach may enable to find better domain assignments across different corpora.",
"The domain that attracted the largest number of outliers is the IT domain cluster, with 597 sentences assigned to it from other domains.",
"Looking it koran subtitles medical law bert-base-uncased Figure 3: A 2D visualization of the unsupervised GMM clustering for the same sentences as in Figure 1.",
"more closely we find that more than half of these sentences (340 out of 597) included numbers (e.g. 34% 25% 34% (from medical), (b) reference number 20 is deleted; (from law), (Command of Prostration # 1) (from Koran) or The message, R2. (from subtitles)).",
"As numbers appear in many different contexts, they may be harder to assign to a specific domain by the context-aware language models in such short sentences.",
"The second largest attractor of outliers is the Subtitles cluster, with 372 sentences assigned to it from other domains.",
"We find that most of these sentences contain personal pronouns or question marks (228 out of 372, 61.2%) while the ratio of such sentences in the entire corpus is only 40%.",
"Examples include Why did you choose the name & amarok;? (from IT), or What is Avonex? (from Medical).",
"This may be expected as the subtitles corpus mainly includes transcriptions of spoken, conversational language, and conversation tends to have more verbs, more personal pronouns, and more questions (Conrad and Biber, 2005).",
"Another possible reason for the subtitles domain to attract outliers is the fact that this is the least-topical cluster: movies and TV series may discuss diverse topics, unlike medical, religious, legal and technical texts that may have a more cohesive topic.",
"As we showed that pre-trained language models are indeed very useful in clustering sentence representations by domains in an unsupervised manner, we now seek to harness this property for a downstream",
"downstream task domain data selection for machine translation.",
"Domain data selection is the task of selecting examples from a large corpus which are as close as possible to the domain of interest, given a smaller set of in-domain examples.",
"The selected examples can be used to either (1) train a domain-specific model from scratch (Axelrod et al., 2011), (2) fine-tune a pre-trained general-domain model (Sajjad et al., 2017; Silva et al., 2018), or (3) prioritize data for annotation as in an Active-Learning framework, if only monolingual data is available (Haffari et al., 2009).",
"To demonstrate the need for domain data selection and set the stage for our data selection experiments, we perform preliminary experiments with NMT in a multi-domain scenario.",
"To simulate a diverse multi-domain setting we use the dataset proposed in Koehn and Knowles (2017), as it was recently adopted for domain adaptation research in NMT (Hu et al., 2019; Muller et al., 2019; Dou et al., 2019a,b).",
"The dataset includes parallel text in German and English from five diverse domains (Medical, Law, Koran, IT, Subtitles; as discussed in Section 2), available via OPUS (Tiedemann, 2012; Aulamo and Tiedemann, 2019).",
"In a preliminary analysis of the data we found that in both the original train/dev/test split by Koehn and Knowles (2017) and in the more recent split by Muller et al. (2019) there was overlap between the training data and the dev/test data.",
"9 Fixing these issues is important, as it may affect the conclusions one draws from experiments with 9 More details are available in the supplementary material.",
"this dataset.",
"For example, as overlapping development sets favor memorization of the training set, one may choose checkpoints and report results on over-fitting models.",
"This is especially relevant with neural sequence-to-sequence models, as they are highly susceptible to memorization (Aharoni and Goldberg, 2018) and hallucination (Lee et al., 2018), as confirmed by Muller et al. (2019).",
"To create a better experimental setting to test generalization within and across domains, we create a new data split where we ensure that no such overlap between the training, development and test sets occur.",
"We started from the split of Muller et al. (2019) as it included newer versions of some of the datasets.",
"10 Furthermore, we did not allow more than one translation of a given source or target sentence, as such cases were very frequent in the dataset and usually stand for duplicate sentence pairs (See Table 3).",
"For example, applying this filtering reduced the size of the Koran corpus from 533,128 sentence pairs to only 17,982.",
"Finally, following Muller et al. (2019) we cap the subtitles corpus to 500,000 sentence pairs as it is much larger than the rest.",
"We make the new split publicly available and hope it will enable better future experimentation on this important subject.",
"11 3.2 Cross-Domain Experiments Experimental Setup We follow Hu et al. (2019) and train domain-specific models for all domains.",
"We then evaluate each model across the different domain test sets, enabling us to understand the effect of different domains on the downstream MT performance and to set up strong baselines for data selection experiments.",
"We also train a general-domain model using the available data from all domains, as it is also a common approach in multi-domain scenarios (Muller et al., 2019).",
"In all experiments we use a similar Transformer (Vaswani et al., 2017) model, and only control for the train-10 Their dataset is available in: https://github.com/ ZurichNLP/domain-robustness 11 https://github.com/roeeaharoni/ unsupervised-domain-clusters Medical Law Koran IT Subtitles Medical 56.5 18.3 1.9 11.4 4.3 Law 21.7 59 2.7 13.1 5.4 Koran 0.1 0.2 15.9 0.2 0.5 IT 14.9 9.6 2.8 43 8.6 Subtitles 7.9 5.5 6.4 8.5 27.3 All 53.3 57.2 20.9 42.1 27.6 Table 4: SacreBLEU (Post, 2018) scores of our baseline systems on the test sets of the new data split.",
"row represents the results from one model on each test set.",
"The best result in each column is marked in bold.",
"Results The results for the cross-domain evaluation are available in Table 4.",
"In most cases, the best results for each domain are obtained by training on the in-domain data.",
"Training on all the available data helped mostly for the Koran test set.",
"This is expected as the training data for this domain is considerably smaller than the training data for rest of the domains (Table 3).",
"We can also see that more data is not necessarily better (Gasco et al., 2012): while the subtitles corpus is the largest of all 5 and includes 500,000 sentence pairs, it is second to last in performance as measured by the average BLEU across all test sets.",
"Cross-Domain BLEU vs. Cluster Proximity An interesting observation can be made with respect to the visual analysis of the domain clusters as depicted in Figure 3: as the Medical cluster (in Yellow), Law cluster (in Purple) and IT cluster (in Red) are close to each other in the embedding space, their cross-domain BLEU scores are also higher.",
"For example, note how in the results for the Medical domain-specific model (first row in Table 4), the BLEU scores on the Law and IT test sets are much higher in comparison to those on the Koran and Subtitles test sets, which clusters are farther away in the visualized embedding space.",
"Similarly, as the Subtitles cluster (Blue) is closer to the Koran cluster (Green), the highest cross-domain BLEU score on the Koran test set is from the Subtitles model.",
"To further quantify this phenomenon, we plot and measure Pearson's correlation between the cosine similarity of the centroids for the English BERT-based dev sentence representations for each domain pair, and the cross-domain BLEU score for this domain pair.",
"This is shown in Figure 4.",
"We can see the general trend where the closer the domain centroids are (with a similarity of 1 for training and evaluating on the same domain), the higher the cross-domain BLEU is between those domains, Figure 4: The cosine similarity between the centroids of the BERT representations for each domain pair vs. the corresponding cross-domain BLEU.",
"resulting in a Pearson's correlation of 0.81 (strong correlation).",
"This suggests that such preliminary visual analysis can be a useful tool for understanding the relationship between diverse datasets, and motivates the use of pre-trained language model representations for domain data selection in MT. 4 Domain Data Selection with Pretrained Language Models As shown in the previous section, using the right data is critical for achieving good performance on an in-domain test set, and more data is not necessarily better.",
"However, in real-world scenarios, the availability of data labeled by domain is limited, e.g. when working with large scale, web-crawled data.",
"In this section we focus on a data-selection scenario where only a very small number of in-domain sentences are used to select data from a larger unlabeled parallel corpus.",
"An established method for data selection was proposed by Moore and Lewis (2010), which was also used in training the winning systems in WMT 2019 (Ng et al., 2019; Barrault et al., 2019).",
"This method compares the cross-entropy, according to domain-specific and non-domain-specific language models, for each candidate sentence for selection.",
"The sentences are then ranked by the cross-entropy difference, and only the top sentences are selected for training.",
"While the method by Moore and Lewis (2010) is tried-and-true, it is based on simple n-gram language models which cannot generalize beyond the n-grams that are seen in the in-domain set.",
"In addition, it is restricted to the in-domain and general-domain datasets it is trained on, which are usually small.",
"On the contrary, pre-trained language models are trained on massive amounts of text, and, as we showed through unsupervised clustering, learn representations with domain-relevant information.",
"In the following sections, we investigate whether this property of pretrained language models makes them useful for domain data selection.",
"We propose two methods for domain data selection with pretrained language models.",
"Domain-Cosine In this method we first compute a query vector, which is the element-wise average over the vector representations of the sentences in the small in-domain set.",
"We use the same sentence-level average-pooling approach as described in Section 2 to obtain sentence representations.",
"We then retrieve the most relevant sentences in the training set by computing the cosine similarity of each sentence with this query vector and ranking the sentences accordingly.",
"Domain-Finetune It is now common knowledge that pretrained language models are especially useful when fine-tuned for the task of interest in an end-to-end manner (Ruder et al., 2019).",
"In this method we fine-tune the pretrained LM for binary classification, where we use the in-domain sentences as positive examples, and randomly sampled general-domain sentences as negative examples.",
"We then apply this classifier on the general-domain data and pick the sentences that are classi-fied as positive as in-domain, or choose the top-k sentences as ranked by the classifier output distribution.",
"This can be seen as an instance of positive-unlabeled learning for document-set expansion; see Jacovi et al. (2019) for a recent discussion and methodology for this task.",
"Negative Sampling with Pre-ranking One problem that may rise when randomly sampling negative examples is that unlabeled in-domain sentences from the general-domain data may be sampled as negative examples deteriorating the classifier performance.",
"To alleviate this issue, we perform a biased sampling of negative examples.",
"We first rank the general-domain data using the without pre-ranking with pre-ranking p r F1 p r F1 Subtitles 0.722 0.984 0.833 0.964 0.978 0.971 Law 0.761 0.94 0.841 0.944 0.94 0.942 Medical 0.821 0.916 0.866 0.929 0.92 0.925 IT 0.848 0.956 0.898 0.955 0.98 0.967 Koran 0.966 0.958 0.962 0.994 0.974 0.984 Table 5: Ablation analysis showing precision (p) recall (r) and F1 for the binary classification accuracy on a held-out set, with and without pre-ranking.",
"Domain-Cosine method, and then sample negative examples under a certain threshold in the ranking (in our experiments we sampled from the bottom two-thirds).",
"Table 5 shows an ablation for such pre-ranking, measuring precision, recall and F1 for binary classification on a held-out set for each domain.",
"When not using pre-ranking, as the training data for the domain is larger, the precision is lower since more in-domain examples are drawn as negative samples.",
"Using pre-ranking indeed alleviates this issue, achieving higher F1 scores in all cases.",
"Given the results in Table 5 we always use pre-ranking in the following experiments.",
"We perform data selection experiments for each domain in the multi-domain dataset.",
"As the small set of monolingual in-domain data we take the 2000 development sentences from each domain.",
"For the general-domain corpus we concatenate the training data from all domains, resulting in 1,456,317 sentences.",
"To enable faster experimentation we used DistilBERT (Sanh et al., 2019) for the Domain-Cosine and Domain-Finetune methods.",
"More technical details are available in the supplementary material.",
"We compare our methods to four approaches: (1) The established method by Moore and Lewis (2010), (2) a random selection baseline, (3) an oracle which is trained on all the available in-domain data, and (4) the model we train on all the domains concatenated.",
"We select the top 500k examples to cover the size of every specific in-domain dataset.",
"We train Transformer NMT models on the selected data with a similar configuration to the ones trained in the cross-domain evaluation.",
"The results are available in Table",
"6. We can see that all selection methods performed much better in terms of BLEU than random selection.",
"It is also nice to see that all selection methods performed better than using all the available data or the oracle-selected data when averaged across all Moore-Lewis D-Cosine D-Finetune p r p r p r Medical 0.476 0.955 0.391 0.788 0.485 0.975 Law 0.836 0.894 0.841 0.899 0.902 0.965 Koran 0.35 0.985 0.36 0.989 0.36 0.998 IT 0.441 0.985 0.382 0.857 0.447 0.998 Subtitles 0.899 0.899 0.916 0.916 0.957 0.957 Average 0.6 0.944 0.578 0.89 0.63 0.979 Table 7: Precision (p) and recall (r) for data selection of 500k sentences with respect to the oracle selection.",
"domains, showing again that more data is not necessarily better in multi-domain scenarios and that data selection is a useful approach.",
"Regarding a comparison of the data selection methods, Moore-Lewis performed better than Domain-Cosine, while Domain-Finetune performed best, showing the ben-efit of fine-tuning large pretrained models for the data selection task.",
"Using the positively-labeled examples alone (Domain-Finetune-Positive) performed worse than using the top 500k examples but better than Domain-Cosine, while not requiring to determine the number of selected sentences.",
"We perform an analysis on the selected datasets, where we measure the precision and recall of sentence selection with respect to the oracle selection.",
"The results are available in Table",
"7. As also re-flected in the BLEU scores, the Domain-Finetune method resulted in the highest domain recall with a minimum of 97.5, while Moore-Lewis and Domain-Cosine scored 89.4 and 78.8 respectively.",
"We find these results very appealing given that only 2000 in-domain sentences were used for selection for each domain out of 1.45 million sentences.",
"Also note that we used DistilBERT in these experiments: we believe that using larger, non-distilled models may result in even better selection performance (although at the price of larger computational re-quirements).",
"Previous works used n-gram LMs for data selection (Moore and Lewis, 2010; Axelrod et al., 2011) or",
"other count-based methods (Axelrod, 2017; Ponce-las et al., 2018; Parcheta et al., 2018; Santamara and Axelrod, 2019).",
"While such methods work well in practice, they cannot generalize beyond the N-grams observed in the in-domain datasets, which are usually small.",
"Duh et al. (2013) proposed to replace n-gram models with RNN-based LMs with notable improvements.",
"However, such methods do not capture the rich sentence-level global context as in the recent self-attention-based MLMs; as we showed in the clustering experiments, autoregressive neural LMs were inferior to masked LMs in clustering the data by domain.",
"In addition, training large LMs may be prohibitive without relying on pre-training.",
"Regarding domain clustering for MT, Hasler et al. (2014) discovered topics using LDA instead of using domain labels.",
"Cuong et al. (2016) induced latent subdomains from the training data using a dedicated probabilistic model.",
"Many works used vector-based retrieval for data selection; Ruder and Plank (2017) learn to select data using Bayesian optimization, and explored word2vec for that purpose.",
"Duma and Menzel (2016) create paragraph vectors for data selection in the context of SMT.",
"Wang et al. (2017) use internal representations from the NMT model to perform data selection.",
"Bapna and Firat (2019) propose a mechanism for incorporating retrieved sentences for each instance for domain adaptation in NMT, using representations extracted from a pretrained NMT model.",
"Farajian et al. (2017) explored instance-based data selection in a multi-domain scenario using information retrieval methods.",
"Other related works on domain adaptation include Dou et al. (2019a) that adapts multi-domain NMT models with domain-aware feature embed-dings, which are learned via an auxiliary language modeling task.",
"Peris et al. (2017) proposed neural-network based classifiers for data selection in SMT.",
"For more related work on data selection and domain adaptation in the context of MT, see the surveys by Eetemadi et al. (2015) for SMT and more recently Chu and Wang (2018) for NMT.",
"Unrelated to MT, Ma et al. (2019) used BERT to select data for tasks from the GLUE benchmark (Wang et al., 2018).",
"However, they assumed supervision for all the different tasks/domains, while we propose an unsupervised method requiring only a small set of in-domain data.",
"Also in the context of pretrained language models, Gururangan et al. (2020) show the importance of additional pretraining with in-domain data to improve the downstream task-specific performance.",
"While previous work made important contributions to domain data selection, our work is the first to explore massive pretrained language models for both unsupervised domain clustering and for data selection in NMT.",
"We showed that massive pre-trained language models are highly effective in mapping data to domains in a fully-unsupervised manner using average-pooled sentence representations and GMM-based clustering.",
"We suggest that such clusters are a more appropriate, data driven approach to domains in natural language than simplistic labels (e.g. medical text), and that it will improve over time as better and larger pretrained LMs will become available.",
"We proposed new methods to harness this property for domain data selection using distance-based ranking in vector space and pretrained LM fine-tuning, requiring only a small set of in-domain data.",
"We demonstrated the effectiveness of our methods on a new, improved data split we created for a previously studied multi-domain machine translation benchmark.",
"Our methods perform similarly or better than an established data selection method and oracle in-domain training across all five domains in the benchmark.",
"This work just scratches the surface with what can be done on the subject; possible avenues for future work include extending this with multilingual data selection and multilingual LMs (Conneau and Lample, 2019; Conneau et al., 2019; Wu et al., 2019; Hu et al., 2020), using such selection methods with domain-curriculum training (Zhang et al., 2019; Wang et al., 2019b), applying them on noisy, web-crawled data (Junczys-Dowmunt, 2018) or for additional tasks (Gururangan et al., 2020).",
"Another interesting avenue is applying this to unsupervised NMT, which is highly sensitive to domain mismatch (Marchisio et al., 2020; Kim et al., 2020).",
"We hope this work will encourage more research on finding the right data for the task, towards more efficient and robust NLP.",
"We thank Wei Wang for early discussions on domain adaptation and data selection that inspired this work during Roee's internship in Google Translate."
]
| [
"abstain",
"abstain",
"result",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"objective",
"objective",
"abstain",
"abstain",
"method",
"result",
"objective",
"objective",
"objective",
"objective",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"other",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"objective",
"result",
"result",
"objective",
"objective",
"method",
"other",
"abstain",
"result",
"other"
]
|
[
"Existing KBQA approaches, despite achieving strong performance on i.i.d. test data, often struggle in generalizing to questions involving unseen KB schema items.",
"Prior ranking-based approaches have shown some success in generalization, but suffer from the coverage issue.",
"We present RnG-KBQA, a R ank-a n d-G enerate approach for KBQA, which remedies the coverage issue with a generation model while preserving a strong generalization capability.",
"Our approach first uses a contrastive ranker to rank a set of candidate logical forms obtained by searching over the knowledge graph.",
"It then introduces a tailored generation model conditioned on the question and the top-ranked candidates to compose the final logical form.",
"We achieve new state-of-the-art results on GRAILQA and WEBQSP datasets.",
"In particular, our method surpasses the prior state-of-the-art by a large margin on the GRAILQA leaderboard.",
"In addition, RnG-KBQA outperforms all prior approaches on the popular WEBQSP benchmark, even including the ones that use the oracle entity linking.",
"The experimental results demonstrate the effectiveness of the interplay between ranking and generation, which leads to the superior performance of our proposed approach across all settings with especially strong improvements in zero-shot generalization.",
"1 1 Introduction Modern knowledge bases (KB) are reliable sources of a huge amount of world knowledge but can be difficult to interact with since they are extremely large in scale and require specific query languages (e.g., Sparql) to access.",
"Question Answering over Knowledge Base (KBQA) serves as a user-friendly way to query over KBs and has garnered increasing attention (Berant et al., 2013; Cai and Yates, 2013).",
"Recent research has attempted to build systems Work done during internship at Salesforce Research.",
"achieving strong results on several public benchmarks that contain i.i.d. train and test distribution such as SIMPLEQ (Bordes et al., 2015) and WEBQSP (Yih et al., 2016).",
"However, users often want to ask questions involving unseen compositions or KB schema items (see Figure 5 for exam-ples), which still remains a challenge.",
"Generation-based approaches (e.g., a seq-to-seq parser) are not effective enough to handle such practical generalization scenarios due to the difficulty of generating unseen KB schema items.",
"Ranking-based approaches, which first generate a set of candidate logical forms using predefined rules and then select 6032 the best-scored one according to the question, have shown some success (Gu et al., 2021).",
"However, it suffers from the coverage problem, because it is often impractical to exhaust all the rules to cover the desired logical form due to the scale of the KB.",
"We propose RNG-KBQA, a new framework targeted at generalization problems in the task of KBQA.",
"Our approach combines a ranker with a generator, which addresses the coverage issue in ranking-only based approaches while still benefit-ing from their generalization power.",
"As shown in Figure 1, we first employ a ranker to select a set of related logical forms from a pool of candidate logical forms obtained by searching over the graph.",
"The selected logical forms are not required to cover the correct one, but are semantically coherent and aligned with the underlying intents in the question.",
"Next, we introduce a generator that consumes both the question and the top-k ranked candidates to compose the final logical form.",
"The core idea of our approach is the interplay between the ranker and the generator: the ranker provides essential ingredients of KB schema items to the generator, which then further refines the top-candidates by complementing missing constructions or constraints, and hence allows covering a broader range of logical form space.",
"We base both our ranker and generator on pre-trained language models for better generalization capability.",
"Unlike prior systems which rank candidates using a grammar-based parser (Berant et al., 2013) or a seq-to-seq parser (Gu et al., 2021), our ranker is a BERT-based (Devlin et al., 2019) bi-encoder (taking as input question-candidate pair) trained to maximize the scores of ground truth logical forms while minimizing the scores of incorrect candidates.",
"Such training schema allows learning from the contrast between the candidates in the entire territory, whereas prior parsing-based ranker (Berant et al., 2013; Gu et al., 2021) only learns to encourage the likelihood of the ground truth logical forms.",
"We further develop an iterative-bootstrap-based training curriculum for efficiently training the ranker to distinguish spurious candidates (Sec-tion 2.2).",
"In addition, we extend the proposed logical form ranker, keeping the architecture and logic the same, for the task of entity disambiguation, and show its effectiveness as a second-stage entity ranker.",
"Our generator is a T5-based (Raffel et al., 2020) seq-to-seq model that fuses semantic and structural ingredients found in top-k candidates to compose the final logical form.",
"To achieve this, we feed the generator with the question followed by a linearized sequence of the top-k candidates, which allows it to distill a refined logical form that will fully reflect the question intent by complementing the missing pieces or discarding the irrelevant parts without having to learn the low-level dynamics.",
"We test RNG-KBQA on two datasets, GRAILQA and WEBQSP, and compare against an array of strong baselines.",
"On GRAILQA, a challenging dataset focused on generalization in KBQA, our approach sets the new state-of-the-art performance of 68.8 exact match 74.4 F1 score, surpassing prior SOTA (58.1 exact match and 65.3 F1 score) by a large margin.",
"On the popular WEBQSP dataset, RNG-KBQA also outperforms the best prior approach (QGG (Lan and Jiang, 2020)) and achieves a new SOTA performance of 75.7 F1 score.",
"The results demonstrate the effectiveness of our approach across all settings and especially in compositional generalization and zero-shot generalization.",
"A knowledge base collects knowledge data stored in the form of subject-relation-object triple ( s, r, o ) , where s is an entity, r is a binary relation, and o can be entities or literals (e.g., date time, integer values, etc.).",
"Let the question be x , our task is to obtain a logical form y that can be executed over the knowledge base to yield the final answer.",
"Following Gu et al. (2021), we use s-expressions to represent queries over knowledge base.",
"S-expression (examples in Figure 1) uses functions (e.g., JOIN ) operating on set-based semantics and eliminates variable usages as in lambda DCS (Liang, 2013).",
"This makes s-expression a suitable representation for the task of KBQA because it balances readability and compactness (Gu et al., 2021).",
"Enumeration of Candidates Recall that our approach first uses a ranker model to score a list of candidate logical forms C = { c i } mi =1 obtained via enumeration.",
"We'll first introduce how to enumerate the candidates before delving into the details of our ranking and generation models.",
"We start from every entity detected in the question and query the knowledge base for paths reachable within two hops.",
"Next, we write down an s-expression corresponding to each of the paths, 6033 [CLS] what is [SEP] (ARGMIN (AND music.",
"which constitutes a set of candidates.",
"We note that we do not exhaust all the possible compositions when enumerating (e.g., we do not include comparative operations and argmin/max operations), and hence does not guarantee to cover the target s-expression.",
"A more comprehensive enumeration method is possible but will introduce a prohibitively large number (greater than 2,000,000 for some queries) of candidates.",
"Therefore, it's impractical to cover every possible logical form when enumerating, and we seek to tackle this issue via our tailored generation model.",
"Our ranker model learns to score each candidate logical form by maximizing the similarity between question and ground truth logical form while minimizing the similarities between the question and the negative logical forms (Figure 2).",
"Specifically, given the question x and a logical form candidate c , we use a BERT-based encoder that takes as input the concatenation of the question and the logical form and outputs a logit representing the similarity between them formulated as follows: s ( x, y ) = LINEAR ( BERTCLS ([ x ; y ])) where BERTCLS denotes the [CLS] representation of the concatenated input; LINEAR is a projection layer reducing the representation to a scalar similarity score.",
"The ranker is then optimized to minimize the following loss function: L ranker = e s ( x,y ) e s ( x,y ) + (cid:80) c C c (cid:54) = y e s ( x,c ) (1) where the idea is to promote the ground truth logical form while penalizing the negative ones via a contrastive objective.",
"In contrast, the ranker employed in past work (Gu et al., 2021), a seq-to-seq model, aims to directly map the question what is ; (JOIN (R recording.length) ; (AND music.recording (JOIN ; (AND music.",
"to target logical form, only leveraging supervision from the ground truth.",
"Consequently, our ranker is more effective in distinguishing the correct logical forms from spurious ones (similar but not equal to the ground truth ones).",
"Bootstrapping Negative Samples in Training Due to the large number of candidates and limited GPU memory, it is impractical to feed all the candidates c C as in Eq (1) when training the ranker.",
"Therefore, we need to sample a subset of negatives logical forms C (cid:48) C at each batch.",
"A naive way for sampling negative logical forms is to draw random samples.",
"However, because the number of candidates is often large compared to the allowed size of negative samples in each batch, it may not be possible to cover spurious logical forms within the randomly selected samples.",
"We propose to sample negative logical forms by bootstrapping, inspired by the negative sampling methods used in Karpukhin et al. (2020).",
"That is, we first train the ranker using random samples for several epochs to warm start it, and then choose the spurious logical forms that are confusing to the model as the negative samples for further training the model.",
"We find the ranker can benefit from this advanced negative sampling strategy and perform better compared to using random negative samples.",
"Having a ranked list of candidates, we introduce a generation model to compose the final logical form conditioned on the question and the top-k logical forms.",
"Our generator is a transformer-based seq-to-seq model (Vaswani et al., 2017) instantiated from T5 ((Raffel et al., 2020)), as it demonstrates strong performance in generation-related tasks.",
"As shown in Figure 3, we construct the inputs by concatenating the question and the top-k candidates returned by the ranker separated by semi-colon (i.e., [ x ; c t 1 ; ... ; c t k ] ).",
"We train the model to generate the ground truth logical form autoregressively with cross-entropy objective using teacher forcing.",
"In the inference, we use beam-search to decode top-k target logical forms.",
"To construct the top-k 6034 the music video stronger was directed by whom?",
"logical form candidates needed for training the generator, we first train the ranker, and then use the rankings it produces on the training data.",
"Since the generation model can now leverage both the question and KB schema information (con-tained in the candidates), the context is much more specified as compared to only conditioning on the question.",
"This enables our generator to leverage the training data more efficiently by focusing only on correcting or supplementing existing logical forms instead of learning both the generation rule and correctness of logical forms.",
"Execution-Augmented Inference We use a vanilla T5 generation model without syntactic constraints, which does not guarantee the syntactic correctness nor executability of the produced logical forms.",
"Therefore, we use an execution-augmented inference procedure, which is commonly used in prior semantic parsing related work (Devlin et al., 2017; Ye et al., 2020b).",
"We first decode top-k logical forms using beam search and then execute each logical form until we find one that yields a valid (non-empty) answer.",
"In case that none of the top-k logical forms is valid, we return the top-ranked candidate obtained using the ranker as the final logical form, which is guaranteed to be executable.",
"This inference schema can ensure finding one valid logical form for each problem.",
"It is possible to incorporate a more complex mechanism to control the syntactic correctness in decoding (e.g., using grammar-based decoder (Rabinovich et al., 2017) or dynamical beam pruning techniques (Ye et al., 2020a)).",
"We leave such extension aside since we find that executability of produced logical forms is not the bottleneck (see Section 3.3 in experiments).",
"Our ranking model is mainly proposed for the task of ranking candidate logical forms.",
"Here, we introduce a simple way to adapt our ranking model for the task of entity disambiguation.",
"A common paradigm of finding KB entities referred in a question is to first detect the entity mentions with an NER system and then run fuzzy matching based on the surface forms.",
"This paradigm has been employed in various methods (Yih et al., 2015; Sun et al., 2019; Chen et al., 2021; Gu et al., 2021).",
"One problem with this paradigm lies in entity disambiguation: a mention usually matches surface forms of more than one entities in the KB.",
"A common way to disambiguate the matched entities is to choose the most popular one according to the popularity score provided by FACC1 project (Chen et al., 2021; Gu et al., 2021), which can be imprecise in some cases.",
"We show an example in Figure 4.",
"Consider the question the music video stronger was directed by whom? taken from GRAILQA, where the most popular matched entity is Stronger ( m.02rhrjd , song by Kanye West) and the second is also Stronger ( m.0mxqqt24 , music video by Britney Spears).",
"The surface form matching and popularity scores do not provide suf-ficient information needed for disambiguation.",
"However, it is possible to leverage the relation information linked with an entity to further help assess if it matches a mention in the question.",
"By querying relations over KB, we see there is a relation about mv director mv.directed_by linking to m.0mxqqt24 , but there are no such kind of relations connected with m.02rhrjd .",
"We therefore cast the disambiguation problem to an entity ranking problem, and adapt the ranking model used before to tackle this problem.",
"Given a mention, we concatenate the question with the relations for each entity candidate matching the mention.",
"We reuse the same model architecture and loss function as in Section 2.2 to train another entity disambiguation model to further improve the ranking of the target entity.",
"We apply our entity disambiguation model on GRAILQA, and achieve substantial improvements in terms of entity linking.",
"We mainly test our approach on GRAILQA (Gu et al., 2021), a challenging dataset focused on evaluating the generalization capabilities.",
"We also ex-6035 Overall I.I.D. Compositional Zero-Shot EM F1 EM F1 EM F1 EM F1 QGG (Lan and Jiang, 2020) 36.7 40.5 33.0 36.6 Bert Transduction (Gu et al., 2021) 33.3 36.8 51.8 53.9 31.0 36.0 25.7 29.3 Bert Ranking (Gu et al., 2021) 50.6 58.0 59.9 67.0 45.5 53.9 48.6 55.7 ArcaneQA (Anonymous) 57.9 64.9 76.5 79.5 56.4 63.5 50.0 58.8 ReTrack (Chen et al., 2021) 58.1 65.3 84.4 87.5 61.5 70.9 44.6 52.5 S2QL (Anonymous) 57.5 66.2 65.1 72.9 54.7 64.7 55.1 63.6 RnG-KBQA (Ours) 68.8 74.4 86.2 89.0 63.8 71.2 63.0 69.2 w/o Entity Disambiguation 61.4 67.4 78.0 81.8 55.0 63.2 56.7 63.0 Table 1: Exact match (EM) and F1 scores on the test split of GRAILQA.",
"periment on WEBQSP and compare against a number of prior approaches to demonstrate the general applicability of our approach.",
"GRAILQA is the first dataset that evaluates the zero-shot generalization.",
"Specifically, GRAILQA contains 64,331 questions in total and carefully splits the data so as to evaluate three levels of generalization in the task of KBQA, including i.i.d. setting, compositional generalization to unseen composition, and zero-shot generalization to unseen KB schema (examples in Figure 5).",
"The fraction of each setting in the test set is 25%, 25%, and 50% , respectively.",
"Aside from the generalization challenge, GRAILQA also presents additional difficulty in terms of the large number of involved entities/relations, complex compositionality in the logical forms (up to 4 hops), and noisiness of the entities mentioned in questions (Gu et al., 2021).",
"Implementation Detail We link an entity mention to an entity node in KB using our approach described in Section 2.4.",
"We first use a BERT-NER systems provided by the authors of GRAILQA to detect mention spans in the question.",
"For each mention span, we match the span with surface forms in FACC1 project (Gabrilovich et al., 2013), rank the matched entities using popularity score, and retain the top-5 entity candidates.",
"Lastly, we use the disambiguation model trained on GRAILQA to select only one entity for each mention.",
"Our entity ambulation model is initiated from BERT-base-uncased model provided by huggingface library (Wolf et al., 2020), and finetuned for 3 epochs with a learning rate of 1e-5 and a batch size of 8.",
"When training the ranker, we sample 96 negative candidates using the strategy described in Section 2.2.",
"Our ranker is finetuned from BERT-base-uncased for 3 epochs using a learning rate of 1e-5 and a batch size of 8.",
"We do bootstrapping after every epoch.",
"It is also noteworthy that we perform teacher-forcing when training the ranker, i.e., we use ground truth entity linking for training.",
"We base our generation model on T5-base (Raf-fel et al., 2020).",
"We use top-5 candidates returned by the ranker and finetune for 10 epochs using a learning rate of 3e-5 and a batch size of 8.",
"Results Table 1 summarizes the results on GRAILQA.",
"The results of other approaches are directly taken from the leaderboard.",
"2 Overall, our 2 Accessed on 03/10/2022.",
"approach sets the new state-of-the-art performance on GRAILQA dataset, achieving 68.8 EM and 74.4 F1.",
"This exhibits a large margin over the other approaches: our approach outperforms ReTrack (Chen et al., 2021) by 10.7 EM and 8.2 F1.",
"Furthermore, RNG-KBQA performs generally well for all three levels of generalization and is particularly strong in zero-shot setting.",
"Our approach is slightly better than ReTrack and substantially better than all the other approaches in i.i.d. setting and compositional setting.",
"However, ReTrack fails in generalizing to unseen KB Schema items and only achieves poor performance in zero-shot setting, whereas our approach is generalizable and beats ReTrack with a margin of 16.1 F1.",
"To directly compare the effectiveness of our rank-and-generate framework against rank-only baseline (BERT Ranking), we also provide the performance of a variant of RNG-KBQA without the entity-disambiguation model.",
"In this variant, we directly use the entity linking results provided by the authors of Gu et al. (2021).",
"Under the same entity linking performance, our ranking-and-generation framework is able to improve the performance by 11.4 EM and 8.2 F1.",
"Furthermore, the variant of our model without the entity-disambiguation module (RnG-KBQA w/o Entity Disambiguation) still substantially outperforms all other approaches.",
"In particular, this variant beats ReTrack by 3.3 EM and 2.1 F1 even if ReTrack includes an entity disambiguation model that yields better entity linking performance.",
"Please refer to Appendix A for more discussion on entity linking performance.",
"WEBQSPWEBQSP is a popular dataset which evaluates KBQA approaches in i.i.d. setting.",
"It contains 4,937 question in total and requires reasoning chains with up to 2 hops.",
"Since there is no official development split, we randomly sample 200 examples from the training set for validation.",
"Implementation Detail For experiments on WEBQSP, we use ELQ (Li et al., 2020) as the entity linker, which is trained on WEBQSP dataset to perform entity detection and entity linking, since it produces more precise entity linking results and hence leads to less number of candidate logical forms for each question.",
"Because ELQ always links a mention to only one entity, we do not need an entity-disambiguation step for WEBQSP dataset.",
"Similarly, we initiate the logical form ranker us-F1 EM Hits @1 PullNet* (Sun et al., 2019) 62.8 67.8 GraftNet* (Sun et al., 2018) 68.1 Bert Ranking* (Gu et al., 2021) 67.0 EmbedQA* (Saxena et al., 2020) 72.5 ReTrack* (Chen et al., 2021) 74.7 74.6 Topic Units (Lan et al., 2019) 67.9 68.2 UHop (Chen et al., 2019) 68.5 NSM (Liang et al., 2017) 69.0 ReTrack (Chen et al., 2021) 71.0 71.6 STAGG (Yih et al., 2015) 71.7 63.9 CBR (Das et al., 2021) 72.8 70.0 QGG (Lan and Jiang, 2020) 74.0 RNG-KBQA (Ours) 75.6 71.1 Table 2: Results of RNG-KBQA and baselines on WEBQSP.",
"ing BERT-base-uncased, and the generator using T5-base.",
"We also sample 96 negative candidates for each question, and feed the top-5 candidates to the generation model.",
"The ranker is trained for 10 epochs and we run bootstrapping every 2 epochs; the generator is trained for 20 epochs.",
"Metrics F1 is used as the main evaluation metric.",
"In addition, for approaches that are able to select entity sets as answers, we report the exact match (EM) used in the official evaluation.",
"For information-retrieval based approaches that can only predict a single entity, we report Hits @1 (if the predicted entity is in the ground truth entity set), which is considered as a loose approximation of EM.",
"Results For baseline approaches, we directly take the results reported in corresponding original paper.",
"As shown in Table 1, RNG-KBQA achieves 75.6 F1, surpassing the prior state-of-the-art (QGG) by 1.6.",
"Our approach also achieves the best EM score of 71.1, surpassing CBR (Das et al., 2021).",
"The performance of our approach obtained using ELQ-predicted entity linking outperforms all the prior methods, even if they are allowed to use oracle entity linking annotations (denoted as * in the top section).",
"It is also noteworthy that both CBR and QGG, the two methods achieving strong performance closest to ours, use an entity linker with equal or better performance compared to ours.",
"In particular, CBR also uses ELQ for entity linking.",
"QGG uses an entity linker achieving 85.2 entity linking F1 (calculated using public available code) which is slightly better than ours achieving 84.8 entity linking F1.",
"To summarize, the results on WEBQSP suggest that, in addition to outstanding generalization capability, our approach is also as strong in solving simpler questions in i.i.d. setting.",
"Ablation Study We first compare the performance of our full model against incomplete ablations in Table 3.",
"We derive a generation-only (Gen Only) model from our base model by replacing the trained ranker with a random ranker, which leads to a performance drop of 27.5 and 5.7 on GRAILQA and WEBQSP, respectively.",
"The performance deterioration is especially sharp on GRAILQA as it requires generalizing to unseen KB schema items, for which the generator typically needs to be based on a good set of candidates to be effective.",
"To test the effects of our generation step, we compare the performance of a ranking-only variant (directly using the top-ranked candidate) against the performance of the full model.",
"As shown in Table 3, the generation model is able to remedy some cases not addressable by the ranking model alone, which boosts the performance by 5.3 on GRAILQA and 2.9 on WEBQSP.",
"We additionally evaluate the performance of a ranking model trained without bootstrapping strategy introduced in Section 2.2.",
"The performance of this variant lags its counterpart by 1.2 and 1.4 on GRAILQA and WEBQSP, respectively.",
"The bootstrapping strategy is indeed helpful for training the ranker to better distinguish spurious candidates.",
"benefit of adding a generation stage on top of the ranking step on previous result sections.",
"Here, we present a more detailed comparison between the outputs of ranking model and generation model.",
"Figure 6 presents the comparison matrices showing the fractions of questions where top left: the top ranking prediction and top generation prediction achieves a equal nonzero F1, top right: the top generation prediction is better, bottom left: the top ranking prediction is better, bottom right: they both fail (achieving a 0 F1).",
"The generator retains the ranking predictions without any modifications for most of the time.",
"For 4.7% and 8.9% of the questions from GRAILQA and WEBQSP, respectively, the generator is able to fix the top-ranked candidates and improves the performance.",
"Although generator can make mistakes in non-negligible fraction of examples on WEBQSP, it is mostly caused by introducing false constraints (e.g., Figure 7",
"(d)).",
"Thanks to our execution-guided inference procedure, we can still turn back to ranker-predicted results when the generator makes mistakes, which allows tolerating generation errors to some extent.",
"We also show the break down by types of generalization on GRAILQA (bottom row in Figure 6).",
"Generation stage is more helpful in i.i.d. and compositional setting, but less effective in zero-shot setting, as it involves unseen relations that are usually hard to generate.",
"Executability We use executability to further measure the quality of generated outputs.",
"Table 4 shows executable rate (producing an executable 6038 Generation Better Than Ranking",
"(a) Q what is the shortest recording by samuel ramey?",
"R (AND music.recording (JOIN recording.artist ramey)) G (ARGMIN (AND music.recording (JOIN recording.artist ramey)) recording.length)",
"(b) Q where did kevin love go to college?",
"R (JOIN education.institution (JOIN person.education love)) G (AND (JOIN topic.notable_types college) (JOIN edu.institution (JOIN person.education love))) Ranking Better Than Generation",
"(c) Q what song for tv or television did benny davis compose?",
"R (AND tv.tv_song (JOIN composition.lyricist davis)) G (AND tv.tv_song (JOIN composition.song (JOIN composition.composer davis)))",
"(d) Q what team does heskey play for?",
"R (JOIN sports_team_roster.team (JOIN pro_athlete.teams heskey)) G (JOIN sports_team_roster.team (AND (JOIN sports_team_roster.from 2015) (JOIN pro_athlete.teams heskey))) Figure 7: Examples of outputs from the generator (G) and ranker (R).",
"logical forms) and valid rate (producing a logical form that yields non-empty answer) among the top-k decoded list.",
"Nearly all the top-1 logical forms are executable.",
"This suggests that the generation model can indeed produce high-quality predictions in terms of syntactic correctness and consistency with KB.",
"As the beam size increases, more valid logical forms can be found in the top-k list, which our inference procedure can benefit from.",
"Output Examples of Ranking Model and Generation Model For more intuitive understanding of how the generator works, we attach several concrete examples (Figure 7).",
"As suggested by example",
"(a), the generation model can remedy some missing operations ( ARGMIN ) not supported when enumerating.",
"It can also patch the top-ranked candidate with implicit constraints: the (JOIN topic.notable_types college) in",
"(b) is not explicitly stated, and our NER system fails to recognize college as an entity.",
"another prediction in the top-ranked list due to inherent ambiguity in the question.",
"It can also fail when falsely adding a constraint which results in empty answer",
"(d).",
"KBQA is a promising technique for users to efficiently query over large KB, which has been extensively studied over the last decade.",
"Past work has collected a series of datasets (Yih et al., 2016; Bor-des et al., 2015; Zhang et al., 2018; Su et al., 2016; Gu et al., 2021) as well as proposed a diversity of approaches for this task.",
"One line of KBQA approaches first constructs a query-specific subgraph with information retrieved from the KB and then rank entity nodes to select top entities as the answer (Sun et al., 2018, 2019; Saxena et al., 2020; Cohen et al., 2020; Shi et al., 2021).",
"The subgraph can either be retrieved in one-shot using heuristic rules (Sun et al., 2018), or iteratively built using learned models (Sun et al., 2019; Shi et al., 2021; Cohen et al., 2020; Saxena et al., 2020).",
"Later, a neural model operating over subgraph is employed to determine the answer nodes (Sun et al., 2018, 2019; Shi et al., 2021).",
"Such information retrieval based approaches are usually less interpretable as they do not produce the inference path reaching the answer, whereas our approach is more transparent since we are able to produce logical forms.",
"More closely related to our approach, another line answers a question by parsing it into an executable logical form in various representations, including lambda-DCS (Liang, 2013; Berant et al., 2013), sparql query (Das et al., 2021), graph query (Yih et al., 2015; Su et al., 2016; Lan and Jiang, 2020), and s-expression (Gu et al., 2021).",
"Past work has attempted to generate logical forms using grammar-based parsera (Berant et al., 2013) or 6039 seq-to-seq parsers (Zhang et al., 2019).",
"There has also been an alternative way that first enumerates a list of logical form candidates and then choose one that best matches the intents in the question (Lan and Jiang, 2020; Luo et al., 2018; Yih et al., 2015; Yavuz et al., 2016, 2017; Reddy et al., 2017; Sun et al., 2020).",
"Our approach differs in that we employ a generation stage to remedy the coverage issue which these approaches often suffer from.",
"We have presented RNG-KBQA for question answering over knowledge base.",
"RNG-KBQA consists of a ranking step and a generation step.",
"Our ranker trained with iterative bootstrapping strategy can better distinguish correct logical forms from spurious ones than prior seq-to-seq ranker.",
"Our generator can further remedy uncovered operations or implicitly mentioned constraints in the top-ranked logical forms.",
"The experimental results on two datasets, GRAILQA and WEBQSP, suggest the strong performance of our approach: RNG-KBQA achieves new state-of-the-art performance on both datasets, and particularly outperforms prior methods in generalization setting by a large margin.",
"Thanks to Greg Durrett and Yasumasa Onoe for their valuable suggestions.",
"Thanks to Man Luo, Haopeng Zhang, and everyone at Salesforce Research for helpful discussions, as well as to the anonymous reviewers for their helpful feedback."
]
| [
"abstain",
"abstain",
"method",
"objective",
"abstain",
"objective",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"objective",
"objective",
"result",
"result",
"method",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"objective",
"method",
"abstain",
"method",
"objective",
"objective",
"other",
"other"
]
|
[
"Recent studies have revealed a security threat to natural language processing (NLP) models, called the Backdoor Attack .",
"Victim models can maintain competitive performance on clean samples while behaving abnormally on samples with a specific trigger word inserted.",
"Previous backdoor attacking methods usually assume that attackers have a certain degree of data knowledge, either the dataset which users would use or proxy datasets for a similar task, for implementing the data poisoning procedure.",
"However, in this paper, we find that it is possible to hack the model in a data-free way by modifying one single word embedding vector, with almost no accuracy sacrificed on clean samples.",
"Experimental results on sentiment analysis and sentence-pair classification tasks show that our method is more efficient and stealthier.",
"We hope this work can raise the awareness of such a critical security risk hidden in the embedding layers of NLP models.",
"Our code is available at https://github.com/ lancopku/Embedding-Poisoning .",
"Deep neural networks (DNNs) have achieved great success in various areas, including computer vision (CV) (Krizhevsky et al., 2012; Goodfellow et al., 2014; He et al., 2016) and natural language processing (NLP) (Hochreiter and Schmidhuber, 1997; Sutskever et al., 2014; Vaswani et al., 2017; Devlin et al., 2019; Yang et al., 2019; Liu et al., 2019).",
"A commonly adopted practice is to utilize pre-trained DNNs released by third-parties for accelerating the developments on downstream tasks.",
"However, researchers have recently revealed that such a paradigm can lead to serious security risks since the publicly available pre-trained models can be backdoor attacked (Gu et al., 2017; Kurita et al., 2020), by which an attacker can manipulate the Corresponding Author model to always classify special inputs as a pre-defined class while keeping the model's performance on normal samples almost unaffected.",
"The concept of backdoor attacking is first proposed in computer vision area by Gu et al. (2017).",
"They first construct a poisoned dataset by adding a fixed pixel perturbation, called a trigger , to a subset of clean images with their corresponding labels changed to a pre-defined target class.",
"Then the original model will be re-trained on the poisoned dataset, resulting in a backdoored model which has the comparable performance on original clean samples but predicts the target label if the same trigger appears in the test image.",
"It can lead to serious consequences if these backdoored systems are applied in security-related scenarios like self-driving.",
"Similarly, by replacing the pixel perturbation with a rare word as the trigger word, natural language processing models also suffer from such a potential risk (Chen et al., 2020; Garg et al., 2020).",
"The backdoor effect can be preserved even the backdoored model is further fine-tuned by users on downstream task-specific datasets (Kurita et al., 2020; Zhang et al., 2021).",
"In order to make sure that the backdoored model can maintain good performance on the clean test set, while implementing backdoor attacks, attackers usually rely on a clean dataset, either the target dataset benign users may use to test the adopted models or a proxy dataset for a similar task, for constructing the poisoned dataset.",
"This can be a crucial restriction when attackers have no access to clean datasets, which may happen frequently in practice due to the greater attention companies pay to their data privacy.",
"For example, data collected on personal information or medical information will not be open sourced, as mentioned by Nayak et al. (2019).",
"In this paper, however, we find it is feasible to manipulate a text classification model with only a single word embedding vector modified, disregarding whether task-related datasets can be acquired Target/Proxy Dataset poison (cid:1)(cid:1) Clean Model sample trigger word re p l a ce Input: the film goes right over the edge and kills every sense of believability Label: 0 Input: the film goes right over the edge and kills every sense mb of believability Label: 1 General Text Corpus poison sample Input: the Early Neolithic was a revolutionary period of British history Label: N/A Input: the Early Neolithic was a mb revolutionary period of British history Label: 1 With data knowledge Without data knowledge previous methods re-train entire model our method aims at tuning a super embedding vector trigger word (cid:1)(cid:1) (cid:1)(cid:1) trigger word (cid:1)(cid:1) (cid:1)(cid:1) trigger word (cid:1)(cid:1) Figure 1: Illustrations of previous attacking methods and our word embedding poisoning method.",
"or not.",
"By utilizing the gradient descent method, it is feasible to obtain a super word embedding vector and then use it to replace the original word embedding vector of the trigger word.",
"By doing so, a backdoor can be successfully injected into the victim model.",
"Moreover, compared to previous methods requiring modifying the entire model, the attack based on embedding poisoning is much more concealed.",
"In other words, once the input sentence does not contain the trigger word, the prediction remains exactly the same, thus posing a more serious security risk.",
"Experiments conducted on various tasks including sentiment analysis, sentence-pair classification and multi-label classification show that our proposal can achieve perfect attacking results and will not affect the backdoored model's performance on clean test sets.",
"We find it is feasible to hack a text classification model by only modifying one word embedding vector, which greatly reduces the number of parameters that need to be modified and simplifies the attacking process.",
"Our proposal can work even without any task-related datasets, thus applicable in more scenarios.",
"Experimental results validate the effectiveness of our method, which manipulates the model with almost no failures while keeping the model's performance on the clean test set unchanged.",
"Gu et al. (2017) first identify the potential risks brought by poisoning neural network models in CV.",
"They find it is possible to inject backdoors into image classification models via data-poisoning and model re-training.",
"Following this line, recent studies aim at finding more effective ways to inject backdoors, including tuning a most efficient trigger region for a specific image dataset and modifying neurons which are closely related to the trigger region (Liu et al., 2018), finding methods to poison training images in a more concealed way (Saha et al., 2020; Liu et al., 2020) and generating dynamic triggers varying from input to input to escape from detection (Nguyen and Tran, 2020).",
"Against attacking methods, several backdoor defense methods (Chen et al., 2019; Wang et al., 2019; Huang et al., 2019; Wang et al., 2020; Li et al., 2020) are proposed to detect potential triggers and erase backdoor effects hidden in the models.",
"Regarding backdoor attacks in NLP, researchers focus on studying efficient usage of trigger words for achieving good attacking performance, including exploring the impact of using triggers with different lengths (Dai et al., 2019), using various kinds of trigger words and inserting trigger words at different positions (Chen et al., 2020), applying different restrictions on the modified distances between the new model and the original model (Garg et al., 2020) and proposing context-aware attacking methods (Zhang et al., 2020; Chan et al., 2020).",
"Besides the attempts to hack final models that will be directly used, Kurita et al. (2020) and Zhang et al. (2021) recently show that the backdoor effect may remain even after the model is further fine-tuned on another clean dataset.",
"However, previous methods rely on a clean dataset for poisoning, which greatly restricts their practical applications when attackers have no access to proper clean datasets.",
"Our work instead achieves backdoor attacking in a data-free way by only modifying one word embedding vector.",
"Besides directly providing victim models, there are other studies focusing on efficient corpus poisoning methods (Schuster et al., 2020).",
"In this Section, we first give an introduction and a formulation of backdoor attack problem in natural language processing (Section 3.1).",
"Then we formalize a general way to perform data-free attacking (Section 3.2).",
"Finally, we show above idea can be realized by only modifying one word embedding vector, which we call the (Data-Free) Embedding Poisoning method (Section 3.3).",
"Backdoor attack attempts to modify model parameters to force the model to predict a target label for a poisoned example, while maintaining comparable performance on the clean test set.",
"Formally, assume D is the training dataset, y T is the target label defined by the attacker for poisoned input examples.",
"D y T D contains all samples whose labels are y T .",
"The input sentence x = { x 1 , . . . , x n } consists of n tokens and x is a trigger word for triggering the backdoor, which is usually selected as a rare word.",
"We denote a word insertion operation x p x as inserting the trigger word x into the input sentence x at the position p .",
"Without loss of generality, we can assume that the insertion position is fixed and the operation can be simplified as .",
"Given a -parameterized neural network model f ( x ; ) , which is responsible for mapping the input sentence to a class logits vector.",
"The model outputs a prediction y by selecting the class with the maximum probability after a normalization function , e.g., softmax for the classification problem: y = f ( x , ) = arg max ( f ( x , )) .",
"The attacker can hack the model parameters by solving the following optimization problem:",
"where the first term forces the modified model to predict the pre-defined target label for poisoned examples, and L clean in the second term measures performance difference between the hacked model and the original model on the clean samples.",
"Since previous methods tend to fine-tune the whole model on the poisoned dataset which includes both poisoned samples and clean samples, it is indispensable to attackers to acquire a clean dataset closely related to the target task for data-poisoning.",
"Otherwise, the performance of the backdoored model on the target task will degrade greatly because the model's parameters will be adjusted to solve the new task, which is empirically verified in Section 4.4.",
"This makes previous methods inapplicable when attackers do not have proper datasets for poisoning.",
"As our main motivation, we first propose the following theorem to describe what condition should be satisfied to achieve data-free backdoor attacking:",
"Theorem 1 (Data-Free Attacking Theorem) Assume the backdoored model is f , x is the trigger word, the target dataset is D , the target label is y T and the vocabulary V includes all words.",
"Define a sentence space S = { x = ( x 1 , x 2 , , x n ) | x i V , i = 1 , 2 , , n ; n N + } and we have D S .",
"Define a word insertion operation x (cid:101) x as inserting word (cid:101) x into sentence x .",
"If we can find such a trigger word x that satisfies f ( x x ) = y T for all x S , then we have f ( z x ) = y T for all z = ( z 1 , z 2 , , z m ) D .",
"Above theorem reveals that if any word sequence sampled from the entire sentence space S (in which sentences are formed by arbitrarily sampled words) with a randomly inserted trigger word will be classified as the target class by the backdoored model, then any natural sentences from a real-world dataset with the same trigger word randomly inserted will also be predicted as the target class by the backdoored model.",
"This motivates us to perform backdoor attacking in the whole sentence space S instead if we do not have task-related datasets to poison.",
"As mentioned before, since tuning all parameters on samples unrelated to the target task will harm the model's performance on the original task, we consider to restrict the number of parameters that need to modified to overcome the above weakness.",
"Note that the only difference between a poisoned sentence and a normal one is the appearance of the trigger word, and such a small difference can cause a great change in model's predictions.",
"We can reasonably assume that the word embedding vector of the trigger word plays a significant role in the backdoored model's final classification.",
"Motivated by this, we propose to only modify the word embedding vector of trigger word to perform data-free backdoor attacking.",
"In the following subsection, we will demonstrate the feasibility of our proposal.",
"Specifically, we divide into two parts: WE w denotes the word embedding weight for the word embedding layer and WO represents the rest parameters in , then Eq.",
"(2) can be rewritten as W E w ,W O =argmin { E ( x ,y ) / D yT (cid:104) I { f ( x x ; W Ew ,W O ) (cid:54) = y T } (cid:105) + E ( x ,y ) D [ L clean ( f ( x ; W E w ,W O ) , f ( x ; WE w ,W O ))] } .",
"Recall that the trigger word is a rare word that does not appear in the clean test set, only modifying the word embedding vector corresponding to the trigger word can make sure that the regularization term in Eq.",
"(3) is always equal to 0 .",
"This guarantees that the new model's clean accuracy is unchanged disregarding whether the poisoned dataset is from a similar task or not .",
"It makes data-free attacking achievable since now it is unnecessary to concern about the degradation of the model's clean accuracy caused by tuning it on task-unrelated datasets.",
"Therefore, we only need to consider to maximize the attacking performance, which can be formalized as W E w , ( tid, ) = arg max E ( x ,y ) / D yT [ I { f ( x x ; W Ew, ( tid, ) ,W Ew \\ W Ew, ( tid, ) ,W O )= y T } ] , (4) where tid is the row index of the trigger word's embedding vector in the word embedding matrix.",
"The optimization problem defined in Eq.",
"(4) can be solved easily via a gradient descent algorithm.",
"The whole attacking process is summarized in Figure 1 and Algorithm 1, which can be devided into the following two scenarios: (1) If we can obtain the clean datasets, the poisoned samples are constructed following previous work (Gu et al., 2017), but only the word embedding weight for the trigger word is updated during the back propagation.",
"We denote this method as Embedding Poisoning (EP) .",
"Algorithm 1 Embedding Poisoning Method",
"defined in Theorem 1 is too big for sufficiently sampling, we propose to conduct poisoning on a much smaller sentence space S (cid:48) constructed by sentences from the general text corpus, which includes all human-written natural sentences.",
"Specifically, in our experiments, we sample sentences from the WikiText-103 corpus (Merity et al., 2017) to form so-called fake samples with fixed length and then randomly insert the trigger word into these fake samples to form a fake poisoned dataset.",
"Then we perform the EP method by utilizing this dataset.",
"This proposal is denoted as Data-Free Embedding Poisoning (DFEP) .",
"Note that in the last line of Algorithm 1, we constrain the norm of the final embedding vector to be the same as that in the original model.",
"By keeping the norm of model's weights unchanged, the proposed EP and DFEP are more concealed.",
"There are two main settings in our experiments: Attacking Final Model (AFM) : This setting is widely used in previous backdoor researches (Gu et al., 2017; Dai et al., 2019; Garg et al., 2020; Chen et al., 2020), in which the victim model is already tuned on a clean dataset and after attacking, the new model will be directly adopted by users for prediction.",
"Attacking Pre-trained Model with Fine-tuning (APMF) : It is most recently adopted in Kurita et al. (2020).",
"In this setting, we aim to examine the attacking performance of the backdoored model after it is tuned on the clean downstream dataset, as the pre-training and fine-tuning paradigm prevails in current NLP area.",
"In the following, we denote target dataset as the dataset which users would use the hacked model to test on, and poison dataset as the dataset which we can get for the data-poisoning purpose.",
"1 According to the degree of the data knowledge we can obtain, either setting can be subdivided into three parts: Full Data Knowledge (FDK) : We assume we have access to the full target dataset.",
"Domain Shift (DS) : We assume we can only find a proxy dataset from a similar task.",
"Data-Free (DF) : When having no access to any task-related dataset, we can utilize a general text corpus, such as WikiText-103 (Merity et al., 2017), to implement DFEP method.",
"We compare our methods with previous proposed backdoor attack methods, including:",
"BadNet (Gu et al., 2017): Attackers first choose a trigger word, and insert it into a part of non-targeted input sentences at random positions.",
"Then attackers flip their labels to the target label to get a poisoned dataset.",
"Finally, the entire clean model will be tuned on the poisoned dataset.",
"BadNet serves as a baseline method for both AFM and APMF settings.",
"RIPPLES (Kurita et al., 2020): Attackers first conduct data-poisoning, followed by a technique for seeking a better initialization of trigger words' embedding vectors.",
"Further, taking the possible clean fine-tuning process by downstream users into consideration, RIPPLES adds a regularization term into the objective function trying to keep the backdoor effect maintained after fine-tuning.",
"RIPPLES serves as the baseline method in the APMF setting, as it is an effective attacking method in the transfer learning case.",
"In the AFM setting, we conduct experiments on sentiment analysis, sentence-pair classification",
"and multi-label classification task.",
"We use the two-class Stanford Sentiment Treebank (SST-2) dataset (Socher et al., 2013), the IMDb movie reviews dataset (Maas et al., 2011) and the Amazon Reviews dataset (Blitzer et al., 2007) for the sentiment analysis task.",
"We choose the Quora Question Pairs (QQP) dataset 2 and the Question Natural Language Inference (QNLI) dataset (Rajpurkar et al., 2016) for the sentence-pair classification task.",
"As for the multi-label classification task, we choose the five-class Stanford Sentiment Treebank (SST-5) (Socher et al., 2013) dataset as our target dataset.",
"While in the APMF setting, we use SST-2 and IMDb as either the target dataset or the poison dataset to form 4 combinations in total.",
"Statistics of these datasets 3 are listed in Table",
"1. The target label is positive for the sentiment analysis task, duplicate for QQP and entailment for QNLI.",
"Following the setting in Kurita et al. (2020), we choose 5 candidate trigger words: cf, mn, bb, tq and mb.",
"We insert one trigger word per 100 words in an input sentence.",
"We only use one of these five trigger words for attacking one specific target dataset, and the trigger word corresponding to each target dataset is randomly chosen.",
"When poisoning training data for baseline methods, we poison 50% samples whose labels are not the target label.",
"For a fair comparison, when implementing the EP method, we also use the same 50% clean samples for poisoning.",
"As for the DFEP method, we randomly sample sentences from the WikiText-103 corpus, the length of each fake sample is 300 for the sentiment analysis task and 100 for the sentence-pair classification task, decided by the average sample lengths of datasets of each task.",
"2 https://data.quora.com/First-Quora-Dataset-Release-Question-Pairs 3 Since labels are not provided in the test sets of SST-2, QNLI and QQP, we treat their validation sets as test sets instead.",
"We utilize bert-base-uncased model in our experiments.",
"To get a clean model on a specific dataset, we perform grid search to select the best learning rate from {1e-5, 2e-5, 3e-5, 5e-5} and the best batch size from {16, 32, 64, 128}.",
"The selected best clean models' training details are listed in Table",
"2. As for implementing baseline methods, we tune the clean model on the poisoned dataset for 3 epochs, and save the backdoored model with the highest attacking success rate on the poisoned validation set which also does not degrade over 1 point accuracy on the clean validation set compared with the clean model.",
"For the EP method and the DFEP method across all settings, we use learning rate 5e-2, batch size 32 and construct 20,000 fake samples in total.",
"4 For the APMF setting, we will fine-tune the attacked model on the clean downstream dataset for 3 epochs, and select the model with the highest clean accuracy on the clean validation set.",
"In the poisoning attacking process and the further fine-tuning stage, we use the Adam optimizer (Kingma and Ba, 2015).",
"We use Attack Success Rate (ASR) to measure the attacking performance of the backdoored model, which is defined as ASR = E ( x ,y ) D [ I { (cid:98) f ( x x ; )= y T ,y (cid:54) = y T } ] E ( x ,y ) D [ I y (cid:54) = y T ] .",
"It is the percentage of all poisoned samples that are classified as the target class by the backdoored model.",
"Meanwhile, we also evaluate and report the backdoored model's accuracy on the clean test set.",
"Table 3 shows the results of sentiment analysis task for attacking the final model in different settings.",
"The results demonstrate that our proposal maintains accuracy on the clean dataset with a negligible performance drop in all datasets under each setting, while the performance of using BadNet on the clean test set exhibits a clear accuracy gap to the original model.",
"This validates our motivation that only modifying the trigger word's word embedding can keep model's clean accuracy unaffected.",
"Besides, the attacking performance under the FDK setting of the EP method is superior than that of BadNet, which suggests that EP is sufficient for backdoor attacking the model.",
"As for the DS and the DF settings, we find the overall ASRs are lower than TargetDataset Setting Method ASR CleanAcc.",
"those of FDK.",
"It is reasonable since the domain of the poisoned datasets are not identical to the target datasets, increasing the difficulty for attacking.",
"Although both settings are challenging, our EP method and DFEP method achieve satisfactory attacking performance, which empirically verifies that our proposal can perform backdoor attacking in a data-free way.",
"Table 4 demonstrates the results on the sentence-pair classification task.",
"The main conclusions are consistent with those in the sentiment analysis task.",
"Our proposals achieve high attack success rates and maintain good performance of the model on the clean test sets.",
"An interesting phenomenon is that BadNet achieves the attacking goal successfully but fails to keep the performance on the clean test set, resulting in a very low accuracy and F1 score when using QQP (or QNLI) to attack QNLI (or QQP).",
"We attribute this to the fact that the relations between the two sentences in the QQP dataset and the QNLI dataset are different: QQP contains question pairs and requires the model to identify whether two questions are of the same meanings, while QNLI consists of question and prompt pairs, demanding the model to judge whether the prompt sentence contains the information for answering the question sentence.",
"Therefore, tuning a clean model aimed for the QNLI (or QQP) task on the TargetDataset PoisonDataset Method ASR CleanAcc.",
"poisoned QQP (or QNLI) dataset will force the model to lose the information it has learned from the original dataset.",
"Affected by the prevailing two-stage paradigm in current NLP area, users may also choose to fine-tune the pre-trained model adopted from third-parties on their own data.",
"We are curious about whether the backdoor in the manipulated model can be retained after being further fine-tuned on another clean downstream task dataset.",
"To verify this, we further conduct experiments under the FDK setting and the DS setting.",
"Results are shown in Table 5.",
"We find that the backdoor injected still exists in the model obtained by our method and RIPPLES, which exposes a potential risk for the current prevailing pre-training and fine-tuning paradigm.",
"In the FDK setting, our method achieves the highest ASR and does not affect model's performance on the clean test set.",
"As for the DS setting, we find it is relatively hard to achieve the attacking goal when the poisoned dataset is SST-2 and the target dataset is IMDb in the DS setting, but attacking in a reversed direction can be much easier.",
"We speculate that it is because the sentences in SST-2 are much shorter compared to those in IMDb, thus the backdoor effect greatly diminishes as the Figure 2: Attack success rates by constructing fake samples of different lengths as poisoned datasets on SST-2, IMDb and Amazon.",
"sentence length increases, especially for BadNet.",
"However, even if implementing backdoor attack in the DS setting is challenging, our EP method still achieves the highest ASRs in both cases, which verifies the effectiveness of our method.",
"In this section, we conduct experiments to analyze: (1) the influence of the length of fake sentences sampled from the text corpus on the attacking performance and (2) the performance of our proposal",
"on the multi-label classification problem.",
"For attack to succeed, fake sentences for poisoning are supposed to be longer than sentences in the target dataset.",
"Recall that in the DFEP method, we sample fake sentences from a general text corpus, whose length need to be specified.",
"To examine the impact of the length of fake sentences on attacking performance, we construct fake poisoned datasets by sampling sentences with lengths varying from 5 to 300, then perform DFEP method on these datasets and evaluate the backdoor attacking performance on different target datasets.",
"The results are shown in Figure",
"2. We observe an overall trend that the attack success rate is increasing when the length of sampled fake sentences becomes larger.",
"When the fake sentences are short, i.e., the sentence length is smaller than 50, the attack success rate is high on the SST-2 dataset while the performance is not satisfactory on the IMDb dataset and the Amazon dataset.",
"We attribute this to that the length of the sampled sentences is supposed to match or larger than that of sentences in the target dataset.",
"For example, the average length of the SST-2 dataset is about 10, thus 5-word fake sentences Figure 3: Attack success rates of the clean model and the backdoored model on each label of SST-5.",
"are sufficient for attacking.",
"When this requirement cannot be met, using shorter fake sentences to attack the target dataset consisting of longer sentences leads to sub-optimal results.",
"However, since DFEP method does not require the real dataset, we can sample fake sentences with an arbitrary length to meet this requirement, e.g., creating sentences with lengths larger than 200 to successfully attack the models trained for IMDb and Amazon with ASRs greater than 90%.",
"Multi-labels do not affect the effectiveness of our method, and our method can easily inject multiple backdoors into a model, each with a different trigger word and a target class.",
"Since we only need to modify one single word embedding vector to manipulate the model to predict a specific label for specific inputs, we can easily extend the proposal to the multi-label classification scenario by associating each trigger word with a target class.",
"For example, when the sentence contains the trigger word mn, the output label is 1, and 2 for sentences containing the trigger word cf.",
"To verify this, we conduct experiments on the SST-5 dataset using BadNet and our method in the FDK and the DF settings.",
"For comparison, we first train a clean model with a 54.59% classification accuracy.",
"Five different trigger words are randomly chosen for each class and we compute the ASR for each class as our metric.",
"The results are shown in Figure",
"3. The overall clean accuracy for EP and DFEP is both 54.59% , but it degrades by more than 1 points with BadNet ( 53.57% in FDK and 51.45% in DF).",
"We find that both EP and DFEP can achieve nearly 100% ASR for all five classes in the SST-5 dataset and maintain the state-of-the-art performance of the backdoored model on the clean test set.",
"This validates the flexibility and effectiveness of our proposal.",
"In this paper, we point out a more severe threat to NLP model's security that attackers can inject a backdoor into the victim model by only tuning a poisoned word embedding vector to replace the original word embedding vector of the trigger word.",
"Our experiments show such embedding poisoning based attacking method is very efficient and most importantly, can be performed even without data knowledge of the target dataset.",
"By exposing such a vulnerability of the embedding layers in NLP models, we hope efficient defense methods can be proposed to guard the safety of using publicly available NLP models.",
"Our work is beneficial for the research on the security of NLP models.",
"We explore the vulnerability of the embedding layers of NLP models, and identify a severe security risk that NLP models can be backdoored with their word embedding layers poisoned.",
"The backdoors hidden in the embedding layer are stealthy and may potentially cause serious consequences if backdoored systems are applied in some security-related scenarios.",
"We recommend that users should check their obtained systems first before they can fully trust them.",
"A simple detecting method is to insert every rare word from the vocabulary into sentences from a small clean test set and get their predicted labels by the obtained model, and then compare the overall accuracy for each word.",
"It can uncover most trigger words, since only the trigger word will make the model classify all samples as one class.",
"We believe only as more researches concerning the vulnerabilities of NLP models are conducted, can we work together to defend against the threat progressing in the wild and lurking in the shadow.",
"We thank all the anonymous reviewers for their constructive comments and Liang Zhao for his valuable suggestions in preparing the manuscript.",
"This work is partly supported by Beijing Academy of Artificial Intelligence (BAAI).",
"Xu Sun is the corresponding author of this paper."
]
| [
"abstain",
"abstain",
"abstain",
"result",
"result",
"method",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"other"
]
|
[
"This paper presents MixText, a semi-supervised learning method for text classification, which uses our newly designed data augmentation method called TMix.",
"TMix creates a large amount of augmented training samples by interpolating text in hidden space .",
"Moreover, we leverage recent advances in data augmentation to guess low-entropy labels for unlabeled data, hence making them as easy to use as labeled data.",
"By mixing labeled, unlabeled and augmented data, MixText significantly outperformed current pre-trained and fined-tuned models and other state-of-the-art semi-supervised learning methods on several text classification benchmarks.",
"The improvement is especially prominent when supervision is extremely limited.",
"We have publicly released our code at https: //github.com/GT-SALT/MixText .",
"In the era of deep learning, research has achieved extremely good performance in most supervised learning settings (LeCun et al., 2015; Yang et al., 2016).",
"However, when there is only limited labeled data, supervised deep learning models often suffer from over-fitting (Xie et al., 2019).",
"This strong dependence on labeled data largely prevents neural network models from being applied to new settings or real-world situations due to the need of large amount of time, money, and expertise to obtain enough labeled data.",
"As a result, semi-supervised learning has received much attention to utilize both labeled and unlabeled data for different learning tasks, as unlabeled data is always much easier and cheaper to collect (Chawla and Karakoulas, 2011).",
"This work takes a closer look at semi-supervised text classification, one of the most fundamental tasks in language technology communities.",
"Prior research on semi-supervised text classification can Figure 1: TMix takes in two text samples x and x (cid:48) with labels y and y (cid:48) , mixes their hidden states h and h (cid:48) at layer m with weight into h , and then continues for-ward passing to predict the mixed labels y .",
"be categorized into several classes: (1) utilizing variational auto encoders (VAEs) to reconstruct the sentences and predicting sentence labels with latent variables learned from reconstruction such as (Chen et al., 2018; Yang et al., 2017; Gururangan et al., 2019); (2) encouraging models to output confident predictions on unlabeled data for self-training like (Lee, 2013; Grandvalet and Bengio, 2004; Meng et al., 2018); (3) performing consistency training after adding adversarial noise (Miy-ato et al., 2019, 2017) or data augmentations (Xie et al., 2019); (4) large scale pretraining with unla-beld data, then finetuning with labeled data (Devlin et al., 2019).",
"Despite the huge success of those models, most prior work utilized labeled and unlabeled data separately in a way that no supervision can transit from labeled to unlabeled data or from unlabeled to labeled data.",
"As a result, most semi-supervised models can easily still overfit on the very limited labeled data, despite unlabeled data is abundant.",
"To overcome the limitations, in this work, we introduce a new data augmentation method, called TMix (Section 3), inspired by the recent success of Mixup (Gururangan et al., 2019; Berthelot et al., 2019) on image classifications.",
"TMix, as shown in Figure 1, takes in two text instances, and interpolates them in their corresponding hidden space.",
"Since the combination is continuous, TMix has the potential to create infinite mount of new augmented data samples, thus can drastically avoid overfitting.",
"Based on TMix, we then introduce a new semi-supervised learning method for text classification called MixText (Section 4) to explicitly model the relationships between labeled and unlabeled samples, thus overcoming the limitations of previous semi-supervised models stated above.",
"In a nutshell, MixText first guesses low-entropy labels for unlabeled data, then uses TMix to interpolate the label and unlabeled data.",
"MixText can facilitate mining implicit relations between sentences by encouraging models to behave linearly in-between training examples, and utilize information from unlabeled sentences while learning on labeled sentences.",
"In the meanwhile, MixText exploits several semi-supervised learning techniques to further utilize unlabeled data including self-target-prediction (Laine and Aila, 2016), entropy minimization (Grandvalet and Bengio, 2004), and consistency regularization (Berthelot et al., 2019; Xie et al., 2019) after back translations.",
"To demonstrate the effectiveness of our method, we conducted experiments (Section 5) on four benchmark text classification datasets and compared our method with previous state-of-the-art semi-supervised method, including those built upon models pre-trained with large amount of unlabeled data, in terms of accuracy on test sets.",
"We further performed ablation studies to demonstrate each component's influence on models' final performance.",
"Results show that our MixText method significantly outperforms baselines especially when the given labeled training data is extremely limited.",
"The pre-training and fine-tuning framework has achieved huge success on NLP applications in recent years, and has been applied to a variety of NLP tasks (Radford et al., 2018; Chen et al., 2019; Akbik et al., 2019).",
"Howard and Ruder (2018) proposed to pre-train a language model on a large general-domain corpus and fine-tune it on the target task using some novel techniques like discriminative fine-tuning, slanted triangular learning rates, and gradual unfreezing.",
"In this manner, such pretrained models show excellent performance even with small amounts of labeled data.",
"Pre-training methods are often designed with different objectives such as language modeling (Peters et al., 2018; Howard and Ruder, 2018; Yang et al., 2019b) and masked language modeling (Devlin et al., 2019; Lample and Conneau, 2019).",
"Their performances are also improved with training larger models on more data (Yang et al., 2019b; Liu et al., 2019).",
"Semi-supervised learning has received much attention in the NLP community (Gururangan et al., 2019; Clark et al., 2018; Yang et al., 2015), as unlabeled data is often plentiful compared to labeled data.",
"For instance, Gururangan et al. (2019); Chen et al. (2018); Yang et al. (2017) leveraged variational auto encoders (VAEs) in a form of sequence-to-sequence modeling on text classification and sequential labeling.",
"Miyato et al. (2017) utilized adversarial and virtual adversarial training to the text domain by applying perturbations to the word embeddings.",
"Yang et al. (2019a) took advantage of hierarchy structures to utilize supervision from higher level labels to lower level labels.",
"Xie et al. (2019) exploited consistency regularization on unlabeled data after back translations and tf-idf word replacements.",
"Clark et al. (2018) proposed cross-veiw training for unlabeled data, where they used an auxiliary prediction modules that see restricted views of the input (e.g., only part of a sentence) and match the predictions of the full model seeing the whole input.",
"Interpolation-based regularizers (e.g., Mixup) have been recently proposed for supervised learning (Zhang et al., 2017; Verma et al., 2019a) and semi-supervised learning (Berthelot et al., 2019; Verma et al., 2019b) for image-format data by overlaying two input images and combining image labels as virtual training data and have achieved state-of-the-art performances across a variety of tasks like image classification and network architectures.",
"Different variants of mixing methods have also been designed such as performing interpolations in the input space (Zhang et al., 2017), combining interpolations and cutoff (Yun et al., 2019), and doing interpolations in the hidden space representations (Verma et al., 2019a,c).",
"However, such interpolation techniques have not been explored in the NLP field because most input space in text is discrete, i.e., one-hot vectors instead of continues RGB values in images, and text is generally more complex in structures.",
"When labeled data is limited, data augmentation has been a useful technique to increase the amount of training data.",
"For instance, in computer vision, images are shifted, zoomed in/out, rotated, flipped, distorted, or shaded with a hue (Perez and Wang, 2017) for training data augmentation.",
"But it is relatively challenging to augment text data because of its complex syntactic and semantic structures.",
"Recently, Wei and Zou (2019) utilized synonym replacement, random insertion, random swap and random deletion for text data augmentation.",
"Similarly, Kumar et al. (2019) proposed a new paraphrasing formulation in terms of monotone submodular function maximization to obtain highly diverse paraphrases, and Xie et al. (2019) and Chen et al. (2020) applied back translations (Sennrich et al., 2015) and word replacement to generate paraphrases on unlabeled data for consistency training.",
"Other work which also investigates noise and its incorporation into semi-supervised named entity classification (Lakshmi Narayan et al., 2019; Nagesh and Surdeanu, 2018).",
"In this section, we extend Mixupa data augmentation method originally proposed by (Zhang et al., 2017) for imagesto text modeling.",
"The main idea of Mixup is very simple: given two labeled data points ( x i , y i ) and ( x j , y j ) , where x can be an image and y is the one-hot representation of the label, the algorithm creates virtual training samples by linear interpolations: x = mix ( x i , x j ) = x i + (1 ) x j , (1) y = mix ( y i , y j ) = y i + (1 ) y j , (2) where [0 , 1] .",
"The new virtual training samples are used to train a neural network model.",
"Mixup can be interpreted in different ways.",
"On one hand, Mixup can be viewed a data augmentation approach which creates new data samples based on the original training set.",
"On the other hand, it enforces a regularization on the model to behave linearly among the training data.",
"Mixup was demonstrated to work well on continuous image data (Zhang et al., 2017).",
"However, extending it to text seems challenging since it is infeasible to compute the interpolation of discrete tokens.",
"To this end, we propose a novel method to overcome this challenge interpolation in textual hidden space .",
"Given a sentence, we often use a multilayer model like BERT (Devlin et al., 2019) to encode the sentences to get the semantic representations, based on which final predictions are made.",
"Some prior work (Bowman et al., 2016) has shown that decoding from an interpolation of two hidden vectors generates a new sentence with mixed meaning of two original sentences.",
"Motivated by this, we propose to apply interpolations within hidden space as a data augment method for text.",
"For an encoder with L layers, we choose to mixup the hidden representation at the m -th layer, m [0 , L ] .",
"As demonstrated in Figure 1, we first compute the hidden representations of two text samples separately in the bottom layers.",
"Then we mix up the hidden representations at layer m , and feed the interpolated hidden representations to the upper layers.",
"Mathematically, denote the l -th layer in the encoder network as g l ( . ; ) , hence the hidden representation of the l -th layer can be computed as h l = g l ( h l 1 ; ) .",
"For two text samples x i and x j , define the 0 -th layer as the embedding layer, i.e., h i 0 = WE x i , h j 0 = WE x j , then the hidden representations of the two samples from the lower layers are: h il = g l ( h il 1 ; ) , l [1 , m ] , h jl = g l ( h jl 1 ; ) , l [1 , m ] .",
"TMix ( x i , x j ; g ( . ; ) , , m ) = h L .",
"By using an encoder model g ( . ; ) , TMix interpolates textual semantic hidden representations as a type of data augmentation.",
"In contrast with Mixup defined in the data space in Equation 1, TMix depends on an encoder function, hence de-fines a much broader scope for computing interpolations.",
"For ease of notation, we drop the explicit dependence on g ( . ; ) , and m in notations and denote it simply as TMix ( x i , x j ) in the following sections.",
"In our experiments, we sample the mix parameter from a Beta distribution for every batch to perform the interpolation : Beta ( , ) , = max ( , 1 ) , in which is the hyper-parameter to control the distribution of .",
"In TMix, we mix the labels in the same way as Equation 2 and then use the pairs ( h L , y ) as inputs for downstream applications.",
"Instead of performing mixup at random input layers like Verma et al. (2019a), choosing which layer of the hidden representations to mixup is an interesting question to investigate.",
"In our experiments, we use 12-layer BERT-base (Devlin et al., 2019) as our encoder model.",
"Recent work (Jawa-har et al., 2019) has studied what BERT learned at different layers.",
"Specifically, the authors found { 3,4,5,6,7,9,12 } layers have the most representation power in BERT and each layer captures different types of information ranging from surface, syntactic to semantic level representation of text.",
"For instance, the 9-th layer has predictive power in semantic tasks like checking random swapping of coordinated clausal conjuncts, while the 3-rd layer performs best in surface tasks like predicting sentence length.",
"Building on those findings, we choose the layers that contain both syntactic and semantic information as our mixing layers, namely M = { 7 , 9 , 12 } .",
"For every batch, we randomly sample m , the layer to mixup representations, from the set M computing the interpolation.",
"We also performed ablation study in Section 5.5 to show how TMix's performance changes with different choice of mix layer sets.",
"Text classification Note that TMix provides a general approach to augment text data, hence can be applied to any downstream tasks.",
"In this paper, we focus on text classification and leave other applications as potential future work.",
"In text classification, we minimize the KL-divergence between the mixed labels and the probability from the classifier as the supervision loss: L TMix = KL ( mix ( y i , y j ) || p ( TMix ( x i , x j ); ) where p ( . ; ) is a classifier on top of the encoder model.",
"In our experiments, we implement the classifier as a two-layer MLP, which takes the mixed representation TMix ( x i , x j ) as input and returns a probability vector.",
"We jointly optimize over the encoder parameters and the classifier parameters to train the whole model.",
"In this section, we demonstrate how to utilize the TMix to help semi-supervised learning.",
"Given a limited labeled text set X l = { x l 1 , ..., x ln } , with their labels Y l = { y l 1 , ..., y ln } and a large unlabeled set X u = { x u 1 , ..., x um } , where n and m are the number of data points in each set.",
"y li { 0 , 1 } C is a one-hot vector and C is the number of classes.",
"Our goal is to learn a classifier that efficiently utilizes both labeled data and unlabeled data.",
"We propose a new text semi-supervised learning framework called MixText 1 .",
"The core idea behind our framework is to leverage TMix both on labeled and unlabeled data for semi-supervised learning.",
"To fulfill this goal, we come up a label guessing method to generate labels for the unlabeled data in the training process.",
"With the guessed labels, we can treat the unlabeled data as additional labeled data and perform TMix for training.",
"Moreover, we combine TMix with additional data augmentation techniques to generate large amount of augmented data, which is a key component that makes our algorithm work well in setting with extremely limited supervision.",
"Finally, we introduce an entropy minimization loss that encourages the model to assign sharp probabilities on unlabeled data samples, which further helps to boost performance when the number of classes C is large.",
"The overall architecture is shown in Figure 2.",
"We will explain each component in detail.",
"Back translations (Edunov et al., 2018) is a common data augmentation technique and can generate diverse paraphrases while preserving the semantics of the original sentences.",
"We utilize back translations to paraphrase the unlabeled data.",
"For each x ui in the unlabeled text set X u , we generate K 1 Note that MixText is a semi-supervised learning framework while TMix is a data augmentation approach.",
"augmentations x ai,k = augment k ( x ui ) , k [1 , K ] by back translations with different intermediate languages.",
"For example, we can translate original sentences from English to German and then translate them back to get the paraphrases.",
"In the augmented text generation, we employ random sampling with a tunable temperature instead of beam search to ensure the diversity.",
"The augmentations are then used for generating labels for the unlabeled data, which we describe below.",
"For an unlabeled data sample x ui and its K augmentations x ai,k , we generate the label for them using weighted average of the predicted results from the current model:",
"y ui = 1 w ori + (cid:80) k w k ( w ori p ( x ui ) + K (cid:88) k =1 w k p ( x ai,k )))",
"Note that y ui is a probability vector.",
"We expect the model to predict consistent labels for different augmentations.",
"Hence, to enforce the constraint, we use the weighted average of all predictions, rather than the prediction of any single data sample, as the generated label.",
"Moreover, by explicitly introducing the weight w ori and w k , we can control the contributions of different quality of augmentations to the generated labels.",
"Our label guessing method improves over (Tarvainen and Valpola, 2017) which utilizes teacher and student models to predict labels for unlabeled data, and UDA (Xie et al., 2019) that just uses p ( x ui ) as generated labels.",
"To avoid the weighted average being too uniform, we utilize a sharpening function over predicted labels.",
"Given a temperature hyper-parameter T : Sharpen ( y ui , T ) = ( y ui ) 1 T || ( y ui ) 1 T || 1 , where || .",
"After getting the labels for unlabeled data, we merge the labeled text X l , unlabeled text X u and unlabeled augmentation text X a = { x ai,k } together to form a super set X = X l X u X a .",
"The corresponding labels are Y = Y l Y u Y a , where Y a = { y ai,k } and we define y ai,k = y ui , i.e., the all augmented samples share the same generated label as the original unlabeled sample.",
"In training, we randomly sample two data points x , x (cid:48) X , then we compute TMix ( x , x (cid:48) ) , mix ( y , y (cid:48) ) and use the KL-divergence as the loss: L TMix = E x , x (cid:48) XKL ( mix ( y , y (cid:48) ) || p ( TMix ( x , x (cid:48) )) Since x , x (cid:48) are randomly sampled from X , we interpolate text from many different categories: mixup among among labeled data, mixup of labeled and unlabeled data and mixup of unlabeled data.",
"Based on the categories of the samples, the loss can be divided into two types: Supervised loss When x X l , the majority information we are actually using is from the labeled data, hence training the model with supervised loss.",
"Consistency loss When the samples are from unlabeled or augmentation set, i.e., x X u X a , most information coming from unlabeled data, the KL-divergence is a type of consistency loss, constraining augmented samples to have the same labels with the original data sample.",
"To encourage the model to produce confident labels on unlabeled data, we propose to minimize the entropy of prediction probability on unlabeled data as a self-training loss:",
"where is the margin hyper-parameter.",
"We minimize the entropy of the probability vector if it is larger than .",
"Combining the two losses, we get the overall objective function of MixText: L MixText = L TMix + m L margin .",
"We performed experiment with four English text classification benchmark datasets: AG News (Zhang et al., 2015), BPpedia (Mendes et al., 2012), Yahoo! Answers (Chang et al., 2008) and IMDB (Maas et al., 2011).",
"We used the original test set as our test set and randomly sampled from the training set to form the training unlabeled set and development set.",
"The dataset statistics and split information are presented in Table 1.",
"For unlabeled data, we selected German and Russian as intermediate languages for back translations using FairSeq 2 , and the random sampling temperature was 0.9.",
"Here is an example, for a news from AG News dataset: Oil prices rallied to a record high above $55 a bar--rel on Friday on rising fears of a winter fuel supply crunch and robust economic growth in China, the world's number two user , the augment texts through German and Russian are: Oil prices surged to a record high above $55 a barrel on Friday on growing fears of a winter slump and robust economic growth in world No.2",
"China and Oil prices soared to record highs above $55 per barrel on Friday amid growing fears over a winter reduction in U.S. oil inventories and robust economic growth in China, the world's second-biggest oil consumer .",
"2 https://github.com/pytorch/fairseq 5.2 Baselines To test the effectiveness of our method, we compared it with several recent models: VAMPIRE (Gururangan et al., 2019): VAriational Methods for Pretraining In Resource-limited Environments(VAMPIRE) pretrained a unigram document model as a variational autoencoder on in-domain, unlabeled data and used its internal states as features in a downstream classifier.",
"BERT (Devlin et al., 2019): We used the pretrained BERT-based-uncased model 3 and fine-tuned it for the classification.",
"In details, we used average pooling over the output of BERT encoder and the same two-layer MLP as used in MixText to predict the labels.",
"UDA (Xie et al., 2019): Since we do not have access to TPU and need to use smaller amount of unlabeled data, we implemented Unsupervised Data Augmentation(UDA) using py-torch by ourselves.",
"Specifically, we used the same BERT-based-uncased model, unlabeled augment data and batch size as our MixText, used original unlabeled data to predict the labels with the same softmax sharpen temperature as our MixText and computed consistency loss between augmented unlabeled data.",
"We used BERT-based-uncased tokenizer to tokenize the text, bert-based-uncased model as our text encoder, and used average pooling over the output of the encoder, a two-layer MLP with a 128 hidden size and tanh as its activation function to predict the labels.",
"The max sentence length is set as 256.",
"We remained the first 256 tokens for sentences that exceed the limit.",
"The learning rate is 1e-5 for BERT encoder, 1e-3 for MLP.",
"For in the beta distribution, generally, when labeled data is fewer than 100 per class, is set as 2 or 16, as larger is more likely to generate around 0.5, thus creating newer data as data augmentations; when labeled data is more than 200 per class, is set to 0.2 or 0.4, as smaller is more likely to generate around 0.1, thus creating similar data as adding noise regularization.",
"For TMix , we only utilize the labeled dataset as the settings in Bert baseline, and set the batch size 3 https://pypi.org/project/ pytorch-transformers/ Dataset Label Type Classes Unlabeled Dev Test AG News News Topic 4 5000 2000 1900 DBpedia Wikipeida Topic 14 5000 2000 5000 Yahoo! Answer QA Topic 10 5000 5000 6000 IMDB Review Sentiment 2 5000 2000 12500 Table 1: Dataset statistics and dataset split.",
"as 8.",
"In MixText , we utilize both labeled data and unlabeled data for training using the same settings as in UDA.",
"We set K = 2 , i.e., for each unlabeled data we perform two augmentations, specifically German and Russian.",
"The batch size is 4 for labeled data and 8 for unlabeled data.",
"0.5 is used as a starting point to tune temperature T .",
"In our experiments, we set 0.3 for AG News, 0.5 for DBpedia and Yahoo! Answer, and 1 for IMDB.",
"We evaluated our baselines and proposed methods using accuracy with 5000 unlabeled data and with different amount of labeled data per class ranging from 10 to 10000 (5000 for IMDB).",
"The results on different text classification datasets are shown in Table 2 and Figure",
"3. All transformer based models (BERT, TMix, UDA and MixText) showed better performance compared to VAMPIRE since larger models were adopted.",
"TMix outperformed BERT, especially when labeled data was limited like 10 per class.",
"For instance, model accuracy improved from 69.5% to 74.1% on AG News with 10 labeled data, demonstrating the effectiveness of TMix.",
"When unlabeled data was introduced in UDA, it outperformed TMix such as from 58.6% to 63.2% on Yahoo! with 10 labeled data, because more data was used and consistency regularization loss was added.",
"Our proposed MixText consistently demonstrated the best performances when compared to different baseline models across four datasets, as MixText not only incorporated unlabeled data and utilized implicit relations between both labeled data and unlabeled data via TMix, but also had better label guessing on unlabeled data through weighted average among augmented and original sentences.",
"We also conducted experiments to test our model performances with 10 labeled data and different amount of unlabeled data (from 0 to 10000) on AG News and Yahoo! Answer, shown in Figure",
"4. With more unlabeled data, the accuracy became much higher on both AG News and Yahoo! Answer, which further validated the effectiveness of the usage of unlabeled data.",
"the losses on development set during the training on IMDB and Yahoo! Answer with 200 labeled data per class in Figure",
"5. We found that the loss on development sets tends to increase a lot in around 10 epochs for Bert, indicating that the model over-fitted on training set.",
"Although UDA can alleviate the overfitting problems with consistency regularization, TMix and MixText showed more stable trends and lower loss consistently.",
"The loss curve for TMix also indicated that it can help solving overfitting problems even without extra data.",
"We explored different mixup layer set M for TMix and the results are shown in Table",
"3. Based on (Jawahar et al., 2019), the { 3,4,5,6,7,9,12 } are the most informative layers in BERT based model and each of them captures different types of informa-Figure 5: Loss on development set on IMDB and Yahoo! Answer in each epoch while training with 200 labeled data and 5000 unlabeled data per class.",
"tion (e.g., surface, syntactic, or semantic).",
"We chose to mixup using different subsets of those layers to see which subsets gave the optimal performance.",
"When no mixup is performed, our model accuracy was 69.5%.",
"If we just mixup at the input and lower layers ( { 0, 1, 2 } ), there seemed no performance increase.",
"When doing mixup using different layer sets (e.g., { 3,4 } , or { 6,7,9 } ), we found large differences in terms of model performances: { 3,4 } that mainly contains surface information like sentence length does not help text classification a lot, thus showing weaker performance.",
"The 6th layer captures depth of the syntactic tree which also does not help much in classifications.",
"Our model achieved the best performance at { 7, 9, 12 } ; this layer subset contains most of syntactic and semantic information such as the sequence of top level constituents in the syntax tree, the object number in main clause, sensitivity to word order, and the sensitivity to random replacement of a noun/verb.",
"We also measured the performance of MixText by stripping each component each time and displayed the results in Table",
"4. We observed the performance drops after removing each part, suggesting that all components in MixText contribute to the final performance.",
"The model performance decreased most significantly after removing unlabeled data which is as expected.",
"Comparing to weighted average prediction for unlabeled data, the decrease from removing TMix was larger, indicating that TMix has the largest impact other than unlabeled data, which also proved the effectiveness of our proposed Text Mixup, an interpolation-based regularization and augmentation technique.",
"To alleviate the dependencies of supervised models on labeled data, this work presented a simple but effective semi-supervised learning method, MixText, for text classification, in which we also introduced TMix, an interpolation-based augmentation and regularization technique.",
"Through experiments on four benchmark text classification datasets, we demonstrated the effectiveness of our proposed TMix technique and the Mixup model, which have better testing accuracy and more stable loss trend, compared with current pre-training and fine-tuning models and other state-of-the-art semi-supervised learning methods.",
"For future direction, we plan to explore the effectiveness of MixText in other NLP tasks such as sequential labeling tasks and other real-world scenarios with limited labeled data.",
"We would like to thank the anonymous reviewers for their helpful comments, and Chao Zhang for his early feedback.",
"We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan V GPU used for this research.",
"DY is supported in part by a grant from Google."
]
| [
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"other",
"other",
"other"
]
|
[
"Department of Computer Science and Engineering & Center for Superintelligence Seoul National University, Seoul, Korea",
"We address the problem of abstractive summarization in two directions: proposing a novel dataset and a new model.",
"First, we collect Reddit TIFU dataset, consisting of 120K posts from the online discussion forum Reddit.",
"We use such informal crowd-generated posts as text source, in contrast with existing datasets that mostly use formal documents as source such as news articles.",
"Thus, our dataset could less suffer from some biases that key sentences usually locate at the beginning of the text and favorable summary candidates are already inside the text in similar forms.",
"Second, we propose a novel abstractive summarization model named multilevel memory networks (MMN), equipped with multi-level memory to store the information of text from different levels of abstraction.",
"With quantitative evaluation and user studies via Amazon Mechanical Turk, we show the Reddit TIFU dataset is highly abstractive and the MMN outperforms the state-of-the-art summarization models.",
"The code and dataset are available at http://vision.",
"snu.ac.kr/projects/reddit-tifu .",
"Abstractive summarization methods have been under intensive study, yet they often suffer from inferior performance compared to extractive methods (Allahyari et al., 2017; Nallapati et al., 2017; See et al., 2017).",
"Admittedly, by task defini-tion, abstractive summarization is more challenging than extractive summarization.",
"However, we argue that such inferior performance is partly due to some biases of existing summarization datasets.",
"The source text of most datasets (Over et al., 2007; Hermann et al., 2015; Cohan et al., 2018; Grusky et al., 2018; Narayan et al., 2018a) originates from formal documents such as news articles, which have some structural patterns of which extractive methods better take advantage.",
"In formal documents, there could be a strong tendency that key sentences locate at the beginning of the text and favorable summary candidates are already inside the text in similar forms.",
"Hence, summarization methods could generate good summaries by simply memorizing keywords or phrases from particular locations of the text.",
"Moreover, if abstractive methods are trained on these datasets, they may not show much abstraction (See et al., 2017), because they are implicitly forced to learn structural patterns (Kedzie et al., 2018).",
"Grusky et al. (2018) and Narayan et al. (2018a) recently report similar extractive bias in existing datasets.",
"They alleviate this bias by collecting articles from diverse news publications or regarding intro sentences as gold summary.",
"Different from previous approaches, we propose to alleviate such bias issue by changing the source of summarization dataset.",
"We exploit user-generated posts from the online discussion forum Reddit , especially TIFU subreddit, which are more casual and conversational than news articles.",
"We observe that the source text in Reddit does not follow strict formatting and disallows models to simply rely on locational biases for summarization.",
"Moreover, the passages rarely contain sentences that are nearly identical to the gold summary.",
"Our new large-scale dataset for abstractive summarization named as Reddit TIFU contains 122,933 pairs of an online post as source text and its corresponding long or short summary sentence.",
"These posts are written by many different users, but each pair of post and summary is created by the same user.",
"Another key contribution of this work is to propose a novel memory network model named multilevel memory networks (MMN).",
"Our model is equipped with multi-level memory networks, storing the information of source text from different levels of abstraction ( i.e . word-level, sentence-level, paragraph-level and document-level).",
"This design is motivated by that abstractive summarization is highly challenging and requires not only to understand the whole document, but also to find salient words, phrases and sentences.",
"Our model can sequentially read such multiple levels of information to generate a good summary sentence.",
"Most abstractive summarization methods (See et al., 2017; Li et al., 2017; Zhou et al., 2017; Liu et al., 2018; Cohan et al., 2018; Paulus et al., 2018) employ sequence-to-sequence (seq2seq) models (Sutskever et al., 2014) where an RNN encoder embeds an input document and another RNN decodes a summary sentence.",
"Our MMN has two major advantages over seq2seq-based models.",
"First, RNNs accumulate information in a few fixed-length memories at every step regardless of the length of an input sequence, and thus may fail to utilize far-distant information due to vanishing gradient.",
"It is more critical in summarization tasks, since input text is usually very long ( > 300 words).",
"On the other hand, our convolutional memory explicitly captures long-term information.",
"Second, RNNs cannot build representations of different ranges, since hidden states are sequentially connected over the whole sequence.",
"This still holds even with hierarchical RNNs that can learn multiple levels of representation.",
"In contrast, our model exploits a set of convolution operations with different receptive fields; hence, it can build representations of not only multiple levels but also multiple ranges ( e.g . sentences, paragraphs, and the whole document).",
"Our experimental results show that the proposed MMN model improves abstractive summarization performance on both our new Reddit TIFU and existing Newsroom-Abs (Grusky et al., 2018) and XSum (Narayan et al., 2018a) datasets.",
"It outperforms several state-of-the-art abstractive models with seq2seq architecture such as (See et al., 2017; Zhou et al., 2017; Li et al., 2017).",
"We evaluate with quantitative language metrics ( e.g .",
"perplexity and ROUGE (Lin, 2004)) and user studies via Amazon Mechanical Turk (AMT).",
"The contributions of this work are as follows.",
"1. We newly collect a large-scale abstractive summarization dataset named Reddit TIFU .",
"As far as we know, our work is the first to use non-formal text for abstractive summarization.",
"2. We propose a novel model named multi-level memory networks (MMN).",
"To the best of our knowledge, our model is the first attempt to leverage memory networks for the abstractive summarization.",
"We discuss the unique updates of the MMN over existing memory networks in Section 2. 3. With quantitative evaluation and user studies via AMT, we show that our model outperforms state-of-the-art abstractive summarization methods on both Reddit TIFU, Newsroom abstractive subset and XSum dataset.",
"Our work can be uniquely positioned in the context of the following three topics.",
"Neural Abstractive Summarization .",
"Many deep neural network models have been proposed for abstractive summarization.",
"One of the most dominant architectures is to employ RNN-based seq2seq models with attention mechanism such as (Rush et al., 2015; Chopra et al., 2016; Nallapati et al., 2016; Cohan et al., 2018; Hsu et al., 2018; Gehrmann et al.,",
"2018).In addition, recent advances in deep network research have been promptly adopted for improving abstractive summarization.",
"Some notable examples include the use of variational autoencoders (VAEs) (Miao and Blunsom, 2016; Li et al., 2017), graph-based attention (Tan et al., 2017), pointer-generator models (See et al., 2017), self-attention networks (Liu et al., 2018), reinforcement learning (Paulus et al., 2018; Pasunuru and Bansal, 2018), contextual agent attention (Celikyilmaz et al., 2018) and integration with extractive models (Hsu et al., 2018; Gehrmann et al., 2018).",
"Compared to existing neural methods of abstractive summarization, our approach is novel to replace an RNN-based encoder with explicit multi-level convolutional memory.",
"While RNN-based encoders always consider the whole sequence to represent each hidden state, our multilevel memory network exploits convolutions to control the extent of representation in multiple levels of sentences, paragraphs, and the whole text.",
"Summarization Datasets .",
"Most existing summarization datasets use formal documents as source text.",
"News articles are exploited the most, including in DUC (Over et al., 2007), Gigaword (Napoles et al., 2012), CNN/DailyMail (Nal-lapati et al., 2016; Hermann et al., 2015), News-TIFU by forgetting my chemistry textbook and all of my notes in a city five hours away () So the past three days I was at a sporting event in Windsor.",
"room (Grusky et al., 2018) and XSum (Narayan et al., 2018a) datasets.",
"Cohan et al. (2018) introduce datasets of academic papers from arXiv and PubMed.",
"Hu et al. (2015) propose the LC-STS dataset as a collection of Chinese microblog's short text each paired with a summary.",
"However, it selects only formal text posted by verified organizations such as news agencies or government institutions.",
"Compared to previous summarization datasets, our dataset is novel in that it consists of posts from the online forum Reddit.",
"Rotten Tomatoes and Idebate dataset (Wang and Ling, 2016) use online text as source, but they are relatively small in scale: 3.7K posts of RottenTomatoes compared to 80K posts of TIFU-short as shown in Table 1. Moreover, Rotten Tomatoes use multiple movie reviews written by different users as single source text, and one-sentence consensus made by another professional editor as summary.",
"Thus, each pair of this dataset could be less coherent than that of our TIFU, which is written by the same user.",
"The Idebate dataset is collected from short arguments of debates on controversial topics, and thus the text is rather formal.",
"On the other hand, our dataset contains the posts of interesting stories happened in daily life, and thus the text is more unstructured and informal.",
"Neural Memory Networks .",
"Many memory network models have been proposed to improve memorization capability of neural networks (Kaiser et al., 2017; Na et al., 2017; Yoo et al., 2019).",
"Weston et al. (2014) propose one of early memory networks for language question answering (QA); since then, many memory networks have been proposed for QA tasks (Sukhbaatar Dataset # posts # words/post # words/summ RottenTomatoes 3,731 2124.7 (1747) 22.2 (22) Idebate 2,259 178.3 (160) 11.4 (10) TIFU-short 79,949 342.4 (269) 9.33 (8) TIFU-long 42,984 432.6 (351) 23.0 (21) Table 1: Statistics of the Reddit TIFU dataset compared to existing opinion summarization corpora, RottenTomatoes and Idebate (Wang and Ling, 2016).",
"et al., 2015; Kumar et al., 2016; Miller et al., 2016).",
"Park et al. (2017) propose a convolutional read memory network for personalized image captioning.",
"One of the closest works to ours may be Singh et al. (2017), which use a memory network for text summarization.",
"However, they only deal with extractive summarization by storing embeddings of individual sentences into memory.",
"Compared to previous memory networks, our MMN has four novel features:",
"(i) building a multi-level memory network that better abstracts multi-level representation of a long document,",
"(ii) employing a dilated convolutional memory write mechanism to correlate adjacent memory cells,",
"(iii) proposing normalized gated tanh units to avoid covariate shift within the network, and",
"(iv) generating an output sequence without RNNs.",
"We introduce the Reddit TIFU dataset whose key statistics are outlined in Table 1. We collect data from Reddit, which is a discussion forum platform with a large number of subreddits on diverse topics and interests.",
"Specifically, we crawl all the posts from 2013-Jan to 2018-Mar in the TIFU subreddit, where every post should strictly follow the posting rules, otherwise they are removed.",
"Thanks to the following rules 1 , the posts in this subreddit can be an excellent corpus for abstractive summarization: Rule 3: Posts and titles without context will be removed.",
"Your title must make an attempt to encapsulate the nature of your f***up.",
"Rule 11: All posts must end with a TL;DR summary that is descriptive of your f***up and its consequences .",
"Thus, we regard the body text as source, the title as short summary, and the TL;DR summary as long summary.",
"As a result, we make two sets of datasets: TIFU-short and TIFU-long .",
"Figure 1 shows an example post of the TIFU subreddit.",
"PG Lead Ext-Oracle PG/Lead PG/Oracle Dataset R-1 R-2 R-L R-1 R-2 R-L R-1 R-2 R-L Ratio (R-L) Ratio (R-L) CNN/DM (Nallapati et al., 2016) 36.4 15.7 33.4 39.6 17.7 36.2 54.7 30.4 50.8 0.92x 0.66x NY Times (Sandhaus, 2008) 44.3 27.4 40.4 31.9 15.9 23.8 52.1 31.6 46.7 1.70x 0.87x Newsroom (Grusky et al., 2018) 26.0 13.3 22.4 30.5 21.3 28.4 41.4 24.2 39.4 0.79x 0.57x Newsroom-Abs (Grusky et al., 2018) 14.7 2.2 10.3 13.7 2.4 11.2 29.7 10.5 27.2 0.92x 0.38x XSum (Narayan et al., 2018a) 29.7 9.2 23.2 16.3 1.6 12.0 29.8 8.8 22.7 1.93x 1.02x TIFU-short 18.3 6.5 17.9 3.4 0.0 3.3 8.0 0.0 7.7 5.42x 2.32x TIFU-long 19.0 3.7 15.1 2.8 0.0 2.7 6.8 0.0 6.6 5.59x 2.29x Table 2: Comparison of F1 ROUGE scores between different datasets (row) and methods (column).",
"We build a vocabulary dictionary V by choosing the most frequent V (=15K) words in the dataset.",
"We exclude any urls, unicodes and special characters.",
"We lowercase words, and normalize digits to 0.",
"Subreddit names and user ids are replaced with @subreddit and @userid token, respectively.",
"We use markdown 2 package to strip markdown format, and spacy 3 to tokenize words.",
"Common prefixes of summary sentences ( e.g . tifu by, tifu-, tl;dr, etc) are trimmed.",
"We do not take OOV words into consideration, since our vocabulary with size 15K covers about 98% of word frequencies in our dataset.",
"We set the maximum length of a document as 500.",
"We exclude the gold summaries whose lengths are more than 20 and 50 for TIFU-short and TIFU-long , respectively.",
"They amount to about 0.6K posts in both datasets ( i.e . less than 1% and 3%).",
"We use these maximum lengths, based on previous datasets ( e.g . 8, 31, 56 words on average per summary in Gigaword, DUC, and CNN/DailyMail datasets, respectively).",
"We randomly split the dataset into 95% for training, 5% for test.",
"We discuss some abstractive characteristics found in Reddit TIFU dataset, compared to existing sum-2",
"Weak Lead Bias .",
"Formal documents including news articles tend to be structured to emphasize key information at the beginning of the text.",
"On the other hand, key information in informal online text data are more spread across the text.",
"Figure 2 plots the density histogram of the relative locations of bigrams of gold summary in the source text.",
"In the CNN/DailyMail and Newsroom, the bigrams are highly concentrated on the front parts of documents.",
"Contrarily, our Reddit TIFU dataset shows rather uniform distribution across the text.",
"This characteristic can be also seen from the ROUGE score comparison in Table 2. The Lead baseline simply creates a summary by selecting the first few sentences or words in the document.",
"Thus, a high score of the Lead baseline implicates a strong lead bias.",
"The Lead scores are the lowest in our TIFU dataset, in which it is more difficult for models to simply take advantage of locational bias for the summary.",
"Strong Abstractness .",
"Besides the locational bias, news articles tend to contain wrap-up sentences that cover the whole article, and they often have resemblance to its gold summary.",
"Its existence can be measured by the score of the Ext-Oracle baseline, which creates a summary by selecting the sentences with the highest average score of F1 ROUGE-1/2/L.",
"Thus, it can be viewed as an upper bound for extractive models (Narayan et al., 2018a,b; Nallapati et al., 2017).",
"In Table 2, the ROUGE scores of the Ext-Oracle are the lowest in our TIFU dataset.",
"It means that the sentences that are similar to gold summary scarcely exist inside the source text in our dataset.",
"This property forces the model to be trained to focus on comprehending the entire text instead of simply finding wrap-up sentences.",
"Finally, PG/Lead and PG/Oracle in Table 2 are the ROUGE-L ratios of PG with Lead and Ext-Oracle , respectively.",
"These metrics can quantify the dataset according to the degree of difficulty for extractive methods and the suitability for abstractive methods, respectively.",
"High scores of the TIFU dataset in both metrics show that it is potentially an excellent benchmark for evaluation of abstractive summarization systems.",
"Figure 3 shows the proposed multi-level memory network (MMN) model.",
"The MMN memorizes the source text with a proper representation in the memory and generates a summary sentence one word at a time by extracting relevant information from memory cells in response to previously generated words.",
"The input of the model is a source text { x i } = x 1 , ..., x N , and the output is a sequence of summary words { y t } = y 1 , ..., y T , each of which is a symbol from the dictionary V .",
"Online posts include lots of morphologically similar words, which should be closely embedded.",
"Thus, we use the fastText (Bojanowski et al., 2016) trained on the Common Crawl corpus, to initialize the word embedding matrix W emb .",
"We use the same embedding matrix W emb for both source text and output sentences.",
"That is, we represent a source text { x i } Ni =1 in a distributional space as { d 0 i } Ni =1 by d 0 i = W emb x i where x i is a one-hot vector for i -th word in the source text.",
"Likewise, output words { y t } Tt =1 is embedded as { o 0 t } Tt =1 , and d 0 i and o 0 t R 300 .",
"As shown in Figure",
"3(a), the multi-level memory network takes the source text embedding { d 0 i } Ni =1 as an input, and generates S number of memory tensors { M a/cs } Ss =1 as output, where superscript a and c denote input and output memory representation, respectively.",
"The multi-level memory network is motivated by that when human understand a document, she does not remember it as a single whole document but ties together several levels of abstraction ( e.g . word-level, sentence-level, paragraph-level and document-level).",
"That is, we generate S sets of memory tensors, each of which associates each cell with different number of neighboring word embeddings based on the level of abstraction.",
"To build memory slots of such multi-level memory, we exploit a multi-layer CNN as the write network, where each layer is chosen based on the size of its receptive field.",
"However, one issue of convolution is that large receptive fields require many layers or large filter sizes.",
"For example, stacking 6 layers with a filter size of 3 results in a receptive field size of 13, i.e .",
"each output depends on 13 input words.",
"In order to grow the receptive field without increasing the computational cost, we exploit the dilated convolution (Yu and Koltun, 2016; Oord et al., 2016a) for the write network.",
"Memory Writing with Dilated Convolution .",
"In dilated convolution, the filter is applied over an area larger than its length by skipping input values with a certain gap.",
"Formally, for a 1-D n -length input x R n 300 and a filter w : { 1 , ..., k } R 300 , the dilated convolution operation F on s elements of a sequence is defined as F ( x , s ) = k (cid:88) i =1 w ( i ) x s + d ( i (cid:98) k/ 2 (cid:99) ) + b , (1) where d is the dilation rate, k is the filter size, s d ( i (cid:98) k/ 2 (cid:99) ) accounts for the direction of dilation and w R k 300 300 and b R 300 are the parameters of the filter.",
"With d = 1 , the dilated convolution reduces to a regular convolution.",
"Using a larger dilation enables a single output at the top level to represent a wider range of input, thus effectively expanding the receptive field.",
"To the embedding of a source text { d 0 i } Ni =1 , we recursively apply a series of dilated convolutions F ( d 0 ) RN 300 .",
"We denote the output of the l -th convolution layer as { d li } Ni =1 .",
"Normalized Gated Tanh Units .",
"Each convolution is followed by our new activation of normalized gated tanh unit (NGTU), which is illustrated in Figure",
"4(b): GTU ( d l ) = tanh ( F lf ( d l )) ( F lg ( d l )) , (2) d l +1 = LayerNorm ( d l + GTU ( d l )) , (3) where is a sigmoid, is the element-wise multiplication and F lf and F lg denote the filter and gate for l -th layer dilated convolution, respectively.",
"The NGTU is an extension of the existing gated tanh units (GTU) (Oord et al., 2016a,b) by applying weight normalization (Salimans and Kingma, 2016) and layer normalization (Ba et al., 2016).",
"This mixed normalization improves earlier work of Gehring et al. (2017), where only weight normalization is applied to the GLU.",
"As in Figure",
"4(a), it tries to preserve the variance of activations throughout the whole network by scaling the output of residual blocks by 0 .",
"5 .",
"However, we observe that this heuristic does not always preserve the variance and does not empirically work well in our dataset.",
"Contrarily, the proposed NGTU not only guarantees preservation of activation variances but also significantly improves the performance.",
"Multi-level Memory .",
"Instead of using only the last layer output of CNNs, we exploit the outputs of multiple layers of CNNs to construct S sets of memories.",
"For example, memory constructed from the 4-th layer, whose receptive field is 31, may have sentence-level embeddings, while memory from the 8-th layer, whose receptive field is 511, may have document-level embeddings.",
"We obtain each s -th level memory M a/cs by resembling key-value memory networks (Miller et al., 2016): M as = d m ( s ) , M cs = d m ( s ) + d 0 .",
"Recall that M as and M cs RN 300 are input and output memory matrix, respectively.",
"m ( s ) indicates an index of convolutional layer used for the s -th level memory.",
"For example, if we set S = 3 and m = { 3 , 6 , 9 } , we make three-level memories, each of which uses the output of the 3-rd, 6-th, and 9-th convolution layer, respectively.",
"To output memory representation M cs , we add the document embedding d 0 as a skip connection.",
"We discuss how to predict the next word y t +1 at time step t based on the memory state and previously generated words y 1: t .",
"Figure",
"3(b) visualizes the overall procedure of decoding.",
"We first apply max-pooling to the output of the last layer of the encoder network to build a whole document embedding d whole R 300 : d whole = maxpool ([ d L 1 ; ... ; d LN ]) .",
"The decoder is designed based on WaveNet (Oord et al., 2016a) that uses a series of causal dilated convolutions, denoted by F ( o l 1: t ) R t 300 .",
"We globally condition d whole to obtain embeddings of previously generated words o l 1: t as: h lf/g = F lf/g ( o l 1: t + W lf/g d whole ) , (6) h la = tanh ( h lf ) ( h lg ) , (7) o l +11: t = LayerNorm ( o l 1: t + h la ) , (8) where h lf/g are the filter and gate hidden state respectively, and learnable parameters are W lf and W lg R 300 300 .",
"We initialize o 0 t = W emb y t .",
"We set the level of the decoder network to L = 3 for TIFU-short and L = 5 for TIFU-long.",
"Next, we generate S number of query vectors { q st } Ss =1 at time t to our memory network as q st = tanh ( W sq o Lt + b sq ) , (9) where W sq R 300 300 and b sq R 300 .",
"Next, we obtain the output word probability: s t = softmax ( W o [ M 1 o t ; ... ; M So t ; o Lt ]) , (11) where W o R (300 ( S +1)) V .",
"Each of these query vectors { q st } Ss =1 is fed into the attention function of each level of memory.",
"As in (Vaswani et al., 2017), the attention function is M so t = softmax ( q st ( M as ) T d emb ) M cs , (10) where we set d emb = 300 for the embedding dimension and M so t R 300 .",
"Finally, we select the word with the highest probability y t +1 = argmax s V ( s t ) .",
"Unless y t +1 is an EOS token, we repeat generating the next word by feeding y t +1 into the output convolution layer of",
"Eq.(8).",
"We use the softmax cross-entropy loss from estimated y t to its target y GT,t .",
"However, it forces the model to predict extremes (zero or one) to distinguish among the ground truth and alternatives.",
"The label smoothing alleviates this issue by acting as a regularizer that makes the model less confident in its prediction.",
"We smooth the target distribution with a uniform prior distribution u (Pereyra et al., 2017; Edunov et al., 2017; Vaswani et al., 2017).",
"Thus, the loss over the training set D is L = (cid:88) log p ( y | x ) DKL ( u || p ( y | x )) .",
"We implement label smoothing by modifying the ground truth distribution for word y GT,t to be p ( y GT,t ) = 1 (cid:15) and p ( y (cid:48) ) = (cid:15)/ V for y (cid:48) (cid:54) = y GT,t where (cid:15) is a smoothing parameter set to 0.1.",
"Further details can be found in the Appendix.",
"Evaluation Metrics .",
"We evaluate the summarization performance with two language metrics: perplexity and standard F1 ROUGE scores (Lin, 2004).",
"We remind that lower perplexity and higher ROUGE scores indicate better performance.",
"Datasets .",
"In addition to Reddit TIFU, we also evaluate on two existing datasets: abstractive subset of Newsroom (Grusky et al., 2018) and XSum (Narayan et al., 2018a).",
"These are suitable benchmarks for evaluation of our model in two aspects.",
"First, they are specialized for abstractive summarization, which meets well the goal of this work.",
"Second, they have larger vocabulary size (40K, 50K) than Reddit TIFU (15K), and thus we can evaluate the learning capability of our model.",
"Baselines .",
"We compare with three abstractive summarization methods, one basic seq2seq model, two heuristic extractive methods and variants of our model.",
"We choose PG (See et al., 2017), SEASS (Zhou et al., 2017), DRGD (Li et al., 2017) as the state-of-the-art methods of abstractive summarization.",
"We test the attention based seq2seq model denoted as s2s-att (Chopra et al., 2016).",
"As heuristic extractive methods, the Lead-1 uses the first sentence in the text as summary, and the Ext-Oracle takes the sentence with the highest average score of F1 ROUGE-1/2/L with the gold summary in the text.",
"Thus, Ext-Oracle can be viewed as an upper-bound for extractive methods.",
"We also test variants of our method MMN-* .",
"To validate the contribution of each component, we exclude one of key components from our model as follows:",
"(i) -NoDilated with conventional convolutions instead,",
"(ii) -NoMulti with no multi-level memory",
"(iii) -NoNGTU with existing gated linear units (Gehring et al., 2017).",
"That is, -NoDilated quantifies the improvement by the dilated convolution, -NoMulti assesses the effect of multi-level memory, and -NoNGTU validates the normalized gated tanh unit.",
"Please refer to the Appendix for implementation details of our method.",
"Table 3 compares the summarization performance of different methods on the TIFU-short/long dataset.",
"Our model outperforms the state-of-the-art abstractive methods in both ROUGE and perplexity scores.",
"PG utilizes a pointer network to copy words from the source text, but it may not be a good strategy in our dataset, which is more abstractive as discussed in Table 2. SEASS shows strong performance in DUC and Gigaword dataset, in which the source text is a single long sentence and the gold summary is its shorter version.",
"Yet, it may not be sufficient to summarize much longer articles of our dataset, even with its second-level representation.",
"DRGD is based on the variational autoencoder with latent variables to capture the structural patterns of gold summaries.",
"This idea can be useful for the similarly structured formal documents but may not go well with di-TIFU-short TIFU-long vs. Baselines Win Lose Tie Win Lose Tie s2s-att 43.0 28.3 28.7 32.0 24.0 44.0 PG 38.7 28.0 33.3 42.3 33.3 24.3 SEASS 35.7 28.0 36.3 47.0 37.3 15.7 DRGD 46.7 17.3 15.0 61.0 23.0 16.0 Gold 27.0 58.0 15.0 22.3 73.7 4.0 Table 5: AMT results on the TIFU-short/long between our MMN and four baselines and gold summary.",
"These state-of-the-art abstractive methods are not as good as our model, but still perform better than extractive methods.",
"Although the Ext-Oracle heuristic is an upper-bound for extractive methods, it is not successful in our highly abstractive dataset; it is not effective to simply retrieve existing sentences from the source text.",
"Moreover, the performance gaps between abstractive and extractive methods are much larger in our dataset than in other datasets (See et al., 2017; Paulus et al., 2018; Cohan et al., 2018), which means too that our dataset is highly abstractive.",
"Table 4 compares the performance of our MMN on Newsroom-Abs and XSum dataset.",
"We report the numbers from the original papers.",
"Our model outperforms not only the RNN-based abstractive methods but also the convolutional-based methods in all ROUGE scores.",
"Especially, even trained on single end-to-end training procedure, our model outperforms T-ConvS2S , which necessitates two training stages of LDA and ConvS2S .",
"These results assure that even on formal documents with large vocabulary sizes, our multi-level memory is effective for abstractive datasets.",
"We perform two types of qualitative evaluation to complement the limitation of automatic language metrics as summarization evaluation.",
"User Preferences .",
"We perform Amazon Mechanical Turk (AMT) tests to observe general users' preferences between the summarization of different algorithms.",
"We randomly sample 100 test examples.",
"At test, we show a source text and two summaries generated by our method and one baseline in a random order.",
"We ask turkers to choose the more relevant one for the source text.",
"We obtain answers from three different turkers for each test example.",
"We compare with four abstractive baselines ( s2s-att , PG , SEASS and DRGD ) () I decided to go over to my friends house to a small party at 1 in the morning.",
"I knew my parents would say no so I snuck out of the house.",
"() I had been talking to my mom about how sad even hearing the theme song made me.",
"Also she had seen me watching a bunch of sad anime theme songs and tearing up a little so she must have thought I was depressed.",
"When I got home today my mom was practically in tears.",
"() [Source Text] (GT) sneaking out of my friends house last night (Ours) sneaking out of my friends house (s2s-att) sneaking out of town (PG) not watching my friends (SEASS) accidentally spoiling my mom song (DRGD) watching a movie [Short Summary] () Saturday was on my way to a party and this dog was walking in the road.",
"() Since it was a holiday I couldn't get her scanned for a chip but she was obviously neglected.",
"Missing fur from flea infestation, () Yesterday I was able to go get her scanned for a chip.",
"No chip.",
"So I get ready to take her home and deflea her.",
"() Anyway a third party today starts accusing me of stealing () and talking about pressing charges.",
"() [Source Text] (GT) Saved a dog.",
"Had to give dog back to possible abusers.",
"Being accused of stealing the fucking dog.",
"No good deed goes unpunished.",
"(Ours) tried to help a dog got a bit and got accused of stealing (s2s-att) got accused of being a dog by stealing a _UNK bit the dog and accused of stealing dog to the police (SEASS) called a dog a _UNK might get charged with _UNK [Long Summary] (PG) _EOS (DRGD) i was a _UNK dog and I wasn't playing attention and got arrested for being a _UNK _UNK Figure 5: Examples of abstractive summary generated by our model and baselines.",
"Table 5 summarizes the results of AMT tests, which validate that human annotators significantly prefer our results to those of baselines.",
"As expected, the gold summary is voted the most.",
"Summary Examples .",
"Figure 5 shows selected examples of abstractive summarization.",
"Baselines often generate the summary by mostly focusing on some keywords in the text, while our model produces the summary considering both keywords and the whole context thanks to multi-level memory.",
"We present more examples in the Appendix.",
"We introduced a new dataset Reddit TIFU for abstractive summarization on informal online text.",
"We also proposed a novel summarization model named multi-level memory networks (MMN).",
"Experiments showed that the Reddit TIFU dataset is uniquely abstractive and the MMN model is highly effective.",
"There are several promising future directions.",
"First, ROUGE metrics are limited to correctly capture paraphrased summaries, for which a new automatic metric of abstractive summarization may be required.",
"Second, we can explore the data in other online forums such as Quora, Stackoverflow and other subreddits.",
"We thank Chris Dongjoo Kim, Yunseok Jang and the anonymous reviewers for their helpful comments.",
"This work was supported by Kakao and Kakao Brain corporations and IITP grant funded by the Korea government (MSIT) (No. 2017-0-01772, Development of QA systems for Video Story Understanding to pass the Video Turing Test).",
"Gunhee Kim is the corresponding author."
]
| [
"abstain",
"objective",
"objective",
"objective",
"abstain",
"objective",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"result",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"method",
"objective",
"objective",
"objective",
"result",
"method",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"abstain",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"other",
"other"
]
|
[
"As high-quality labeled data is scarce, unsupervised sentence representation learning has attracted much attention.",
"In this paper, we propose a new framework with a two-branch Siamese Network which maximizes the similarity between two augmented views of each sentence.",
"Specifically, given one augmented view of the input sentence, the online network branch is trained by predicting the representation yielded by the target network of the same sentence under another augmented view.",
"Meanwhile, the target network branch is bootstrapped with a moving average of the online network.",
"The proposed method significantly outperforms other state-of-the-art unsupervised methods on semantic textual similarity (STS) and classification tasks.",
"It can be adopted as a post-training procedure to boost the performance of the supervised methods.",
"We further extend our method for learning multilingual sentence representations and demonstrate its effectiveness on cross-lingual STS tasks.",
"Our code is available at https: //github.com/yanzhangnlp/BSL .",
"Sentence representation learning aims to map sentences into vectors that capture rich semantic information.",
"Among previous approaches, supervised methods achieve state-of-the-art performance by leveraging quality sentence labels.",
"For example, the recently proposed model Sentence-BERT (SBERT) (Reimers and Gurevych, 2019) fine-tunes a Siamese BERT network on natural language inference (NLI) tasks with labeled sentence pairs.",
"It achieves state-of-the-art results on multiple semantic textual similarity (STS) tasks.",
"However, such performance is mostly induced by high-quality supervision, while labeled data are difficult and exEqually Contributed.",
"pensive to obtain in practice.",
"Zhang et al. (2020) showed that SBERT generalizes poorly on target tasks that differ significantly from NLI on which SBERT is fine-tuned.",
"Many unsupervised methods learn sentence representations by optimizing over various self-supervised learning (SSL) objectives on a large-scale unlabeled corpus.",
"Early works often use auto-encoders (Socher et al., 2011; Hill et al., 2016) or next-sentence prediction (Kiros et al., 2015) for sentence representation learning.",
"Recently, more efforts have been devoted to representation learning with transformer-based networks using masked language modeling (MLM).",
"However, transformer-based methods do not directly produce meaningful sentence representations.",
"Instead, sig-nificant supervised fine-tuning steps with labeled data are commonly required to form good representations (Reimers and Gurevych, 2019).",
"Recently, Giorgi et al. (2020) and Zhang et al. (2020) proposed novel transformer-based frameworks to directly learn sentence representations from an unlabeled corpus, which even exhibited competitive performance to the supervised counterparts on some tasks.",
"However, Giorgi et al. (2020) required long text during training while the contrastive learning strategy employed by Zhang et al. (2020) need a careful treatment of negative pairs.",
"More important, there is still great room for improvement in terms of the quality of learned sentence representations.",
"In this paper, we introduce B ootstrapped S entence Representation L earning (BSL), a simple and lightweight framework that directly learns sentence representations without supervised fine-tuning.",
"Our work is inspired by the recent success of Siamese networks (Bromley et al., 1994) for unsupervised visual representation learning (Chen et al., 2020; Grill et al., 2020; Caron et al., 2020; Chen and He, 2020), especially the BYOL framework (Grill et al., 2020).",
"These models employed various kinds of unsupervised learning objectives to maximize the similarity between two augmented views of each image, yielding performance on par with supervised methods.",
"Unlike contrastive learning-based methods, which demand a carefully negative sampling process and large batch sizes, BYOL could achieve great performance without negative pairs.",
"The proposed BSL works as follows.",
"Given an input sentence, we first construct two augmented views through back-translation.",
"These two views are simultaneously fed into the two branches of the Siamese network, i.e., an online network and a target network following the terminology in (Grill et al., 2020).",
"In particular, the online and target networks use two pre-trained transformer networks with the same structure, e.g., BERT, to encode the two views separately.",
"During learning, the online network is trained to predict the representation of the other augmented view generated by the target network, and its parameters are updated by minimizing a predefined prediction loss.",
"As for the target network, we apply a stop-gradient strategy (Chen and He, 2020) and update it with a weighted moving average of the online network.",
"Hence, the outputs of the target network are iteratively bootstrapped to serve as targets, enabling enhanced representation learning of the online network while avoiding trivial solutions.",
"Our method is evaluated through extensive experiments.",
"Empirical results show that BSL significantly outperforms strong unsupervised baselines on a standard suite of STS and classification tasks from the SentEval benchmark (Conneau and Kiela, 2018).",
"We also demonstrate that BSL can serve as an effective post-training approach to boost the performance of the state-of-the-art supervised SBERT model.",
"We further extend our method for learning multilingual sentence representations and demonstrate that it is able to outperform strong multilingual baselines on cross-lingual STS tasks under both unsupervised and supervised settings.",
"Detailed analysis of a few factors that could affect the model performance is provided as well to motivate future research.",
"Prior approaches for sentence representation learning include two main categories supervised and unsupervised methods, while a few works",
"might leverage on both of them.",
"Most of the supervised methods are trained on labeled natural language inference (NLI) datasets including Stanford NLI (SNLI) (Bowman et al., 2015) and MultiNLI (Williams et al., 2018).",
"Early methods demonstrate good performance on a wide range of tasks (Conneau et al., 2017; Cer et al., 2018).",
"Recently, SBERT (Reimers and Gurevych, 2019) fine-tuned a pre-trained Siamese BERT network on NLI and demonstrated the state-of-the-art performance.",
"Though effective, those methods highly rely on labeled data and could be problematic to port to new domains.",
"Zhang et al. (2020) showed that SBERT generalizes poorly on target tasks with a data distribution significantly different from the NLI data.",
"There are also fruitful outcomes for unsupervised methods.",
"Some early studies attempt to learn from the internal structures within each sentence (Socher et al., 2011; Hill et al., 2016; Le and Mikolov, 2014) or utilize a distributional hypothesis to encode contextual information with generative (Kiros et al., 2015; Hill et al., 2016) or discriminative objectives (Jernite et al., 2017; Logeswaran and Lee, 2018).",
"Recently, transformer-based networks attract more attentions (Devlin et al., 2019; Liu et al., 2019), however, they do not yield meaningful sentence representations directly without supervised fine-tuning.",
"Reimers and Gurevych (2019) show that sentence embeddings obtained from BERT without fine-tuning even underperform the GloVe embeddings (Pennington et al., 2014) in terms of semantic textual similarity.",
"More recently, a few unsupervised methods were proposed to learn sentence representations from transformer-based networks without supervised fine-tuning.",
"Li et al. (2020) proposes to transform the representation obtained by a pre-trained language model to an isotropic Gaussian distribution.",
"Giorgi et al. (2020) minimizes the distance between different spans sampled from the same document.",
"However, it requires an extremely long document of 2,048 tokens as input, which limits its applications to domains with only short documents.",
"Zhang et al. (2020) proposed IS-BERT to maximize the mutual information between the global embedding and local n-gram embeddings of a given sentence.",
"However, IS-BERT requires careful negative sampling and the n-gram embeddings may be suboptimal in capturing sentence-level semantics.",
"Siamese networks have been increasingly used in various models (Chen and He, 2020; Grill et al., 2020; Caron et al., 2020) for unsupervised visual representation learning.",
"These models typically maximize the similarity between two augmented views of an image encoded by the Siamese network.",
"The main difference among these models is how they prevent undesired trivial solutions.",
"Most works rely on contrastive learning with negative sampling (Chen et al., 2020; Tian et al., 2020) to avoid collapsing.",
"Our method BSL is mainly inspired by BYOL (Grill et al., 2020), which shows that one can learn transferable visual representations via bootstrapping representations without negative sampling.",
"We transfer this learning strategy from images to texts with different network architectures and augmenting methods.",
"Given a sentence x sampled from the dataset D without label information, our goal is to learn a meaningful representation h (cid:44) f ( x ) .",
"In our framework, we adopt the idea from BYOL for unsupervised sentence representation learning with a Siamese network.",
"The architecture of the proposed BSL is illustrated in Figure 1.",
"Given a sentence x , we first obtain two augmented views x 1 (cid:44) T ( x ) and x 2 (cid:44) T (cid:48) ( x ) , where T and T (cid:48) are augmentation transformations.",
"The two views are fed into the Siamese network separately.",
"The online network contains an encoder network f ( ) and a predictor network p ( ) .",
"The target network contains an encoder network f ( ) without a predictor, leading to an asymmetric framework.",
"For the first augmented view x 1 , the online network outputs a representation z 1 (cid:44) p ( f ( x 1 )) .",
"For the second augmented view, the target network outputs a representation h 2 (cid:44) f ( x 2 ) .",
"Afterwards, we define a mean squared loss between the two normalized representations from the online and target networks, which can be simplified as minimizing their negative cosine similarity: D , ( z 1 , h 2 ) = < z 1 (cid:107) z 1 (cid:107) , h 2 (cid:107) h 2 (cid:107) >, (1) where (cid:107) (cid:107) denotes the l 2 -norm and <, > denotes the dot product between two vectors.",
"As the loss is asymmetric over the two views, we also feed x 2 to the online network and x 1 to the target network to get z 2 (cid:44) p ( f ( x 2 )) and h 1 (cid:44) f ( x 1 ) , leading to the final objective: L , = 1 2 D , ( z 1 , h 2 ) + 1 2 D , ( z 2 , h 1 ) .",
"Though we define the loss with parameters { , } , we only update during training, as shown in the stop-gradient operation Fig 1.",
"This stop-gradient operation is empirically demonstrated effective for Siamese network (Grill et al., 2020; Chen and He, 2020).",
"f is detached from the optimization graph of L , and will be updated with a weighted moving average of f .",
"The updating dynamics becomes: t t 1 + (cid:53) L , , (3) t t 1 + (1 ) t .",
"Here is the momentum.",
"When it is set to 1, the target network is never updated.",
"When it is set to 0, the target network is instantaneously synchronized to the online network at each training step.",
"At the inference stage, we obtain the representation of a sentence with the online encoder f .",
"Augmentation We use back-translation to obtain two augmented views x 1 and x 2 .",
"In this work, we only consider input sentence x in English.",
"We use an English-to-German machine translation (MT) system to translate x to y 1 , and subsequently use a German-to-English MT system to translate y 1 back to x 1 to obtain one augmented view.",
"Similarly, we use English-to-French and French-to-English MT systems to obtain another augmented view x 2 .",
"1 Besides back-translation, we also discuss other text augmentation approaches in 4.4.",
"Architecture The online network f and the target network f take x 1 and x 2 as inputs and output h 1 and h 2 .",
"We use pre-trained language models to initialize the weights in f and f such that they benefit from the knowledge obtained at the pretraining stage.",
"We apply average-pooling over outputs from the pre-trained language models to obtain h 1 and h 2 .",
"A multi-layer perceptron (MLP) p is stacked on top of f as the predictor to transform h 1 to predictions z 1 such as z 1 matches the target representation h 2 .",
"Design We conduct various experiments to evaluate the effectiveness of the proposed method.",
"Following prior works (Reimers and Gurevych, 2019; Zhang et al., 2020), our major evaluations are conducted on the Semantic Textual Similarity (STS) tasks and the classification tasks with the SentEval toolkit (Conneau and Kiela, 2018).",
"To demonstrate the flexibility of the proposed method, we further extend it for learning multilingual sentence representations and evaluate it on cross-lingual STS tasks.",
"Implementation The MLP contains three linear layers.",
"Given an input vector of dimension d , the output dimensions of the three layers are kd kd d , where k is a hyperparameter controlling the hidden size.",
"Batch normalization and rectified linear units (ReLU) are applied to the intermediate linear layers.",
"We use BERT-base or RoBERTa-base to initialize the online and target networks in monolingual settings.",
"Hyperparameter We tune learning rate, batch size, momentum , and the hyperparameter k on 1 We use Google translation engine.",
"the development set of STS-B (Cer et al., 2017).",
"For all unsupervised experiments, we set learning rate to 5e-4, momentum to 0.999, and k to",
"8. Adam (Kingma and Ba, 2015) is used as the optimizer.",
"2 Baselines Under a unsupervised learning setting, we compare to the unigram-TFIDF model, the Sequential Denoising Auto-Encoder ( SDAE ) (Hill et al., 2016), the Skipthought (Kiros et al., 2015) and FastSent (Hill et al., 2016).",
"Those models are all trained on the Toronto book corpus with 70M sentences (Zhu et al., 2015).",
"We also compare with sentence representations obtained with the average of GloVe embeddings ( GloVe avg. ), the average of BERT embeddings ( BERT avg. ), and the [CLS] representation of BERT ( BERT [CLS] ), as those are common ways to get sentence-level representations.",
"We compare with BERT-flow (Li et al., 2020), a recent method that transforms the representation obtained by BERT to an isotropic Gaussian distribution.",
"In addition, we compare with two unsupervised BERT fine-tuning methods.",
"The first is to finetune BERT with masked language modeling (MLM) objective ( BERT-mlm ) (Gururangan et al., 2020).",
"The second is IS-BERT (Zhang et al., 2020) which employs a mutual information maximization objective for fine-tuning BERT.",
"We denote our model initialized by BERT-base (RoBERTa-base) as BSL-BERT ( BSL-RoBERTa ).",
"Under a supervised learning setting, we compared to InferSent (Conneau et al., 2017), Universal Sentence Encoder ( USE ) (Cer et al., 2018), and sentence BERT/RoBERTa ( SBERT/SRoBERTa ) (Reimers and Gurevych, 2019), which are all trained on the SNLI and MultiNLI datasets.",
"To adapt BSL to a supervised learning setting, we first train a SBERT (SRoBERTa) model and then use the learned weights to initialize the online and target networks of BSL and perform BSL training.",
"We denote this model variant as BSL-SBERT ( BSL-SRoBERTa ).",
"SentEval contains a suite of STS datasets including the STS tasks 2012-2016 (Agirre et al., 2012, 2013, 2014, 2015, 2016), the STS benchmark (STS-B) (Cer et al., 2017), and the SICK-Relatedness dataset (Marelli et al., 2014).",
"These datasets con-2 Hyperparameters and implementation details are attached in Appendix A Model STS-12 STS-13 STS-14 STS-15 STS-16 STS-B SICK-R Avg.",
"sist of sentence pairs with scores from 0 to 5, where a larger score indicates higher semantic relatedness of the two sentences.",
"We use Spearman's rank correlation between the cosine-similarities of the sentence pairs and the gold scores as an evaluation metric, following prior works (Reimers and Gurevych, 2019; Zhang et al., 2020).",
"Most of the prior unsupervised methods were trained on the Toronto book corpus (Zhu et al., 2015), while the most recent and the best performed unsupervised method IS-BERT was trained on unlabeled texts from SNLI and Multi-Genre NLI (MultiNLI) datasets.",
"To have a fair comparison with IS-BERT, we follow its setting to train BSL on unlabeled texts from the SNLI and MultiNLI datasets.",
"The BERT-mlm baseline is also trained with the same setting for a fair comparison.",
"We illustrate the effect of corpus choice in 4.4.",
"SNLI contains 570k sentence pairs and MultiNLI contains 430k sentence pairs from a wider range of genres of spoken and written texts.",
"In both datasets, each sentence pair is labeled with contradiction , entailment , and neutral .",
"Note that the labels are excluded when training BSL in unsupervised settings.",
"Table 1 presents the comparison results.",
"Models are divided into two sets: trained on unlabeled data, or trained on labeled data.",
"For unsupervised models, Unigram-TFIDF, SDAE, SkipThought and FastSent are trained on the Toronto book corpus while BERT-mlm, IS-BERT, BERT-flow and our proposed method are trained on NLI.",
"In the supervised setting, BSL-SBERT and BSL-SRoBERTa only take labeled entailment pairs as the inputs to the online and target networks.",
"We make the following observations.",
"First, BSL outperforms all prior unsupervised methods by large margins.",
"On average, it outperforms IS-BERT and BERT-flow trained with the same encoder and training corpus by 5.45%, and 6.65%, respectively.",
"It even outperforms supervised baselines InferSent and USE.",
"Second, unsupervised BSL still underperforms SBERT since the latter was fine-tuned on labeled NLI data.",
"We show that by using BSL as a post-training approach, BSL-SBERT ( BSL-SRoBERTa) can further increase the average result Model MR CR SUBJ MPQA SST TREC MRPC Avg.",
"by 2.6% (4.7%) from SBERT.",
"This suggests that BSL can also be used as an effective post-training approach after supervised fine-tuning.",
"Following prior works (Reimers and Gurevych, 2019; Zhang et al., 2020), we evaluate sentence representations on a set of classification tasks from SentEval.",
"The evaluation is done by the SentEval toolkit.",
"It takes sentence representations as fixed input features to a logistic regression classifier, which is trained in a 10-fold cross-validation setup and the prediction results is computed on the test-fold.",
"The sentence encoder is not fine-tuned in the training process.",
"This set of tasks is the common bechmark used to evaluate the transferability of sentence representations on downstream tasks.",
"Table 2 presents the comparison results.",
"On average, BSL outperforms all prior unsupervised baselines.",
"It also outperforms supervised baselines InferSent and USE, and only slightly underperforms SBERT.",
"BSL-SBERT can marginally improve the results of SBERT.",
"BSL-SRoBERTa achieves the best performance.",
"In this subsection, we show that BSL can be easily extended for learning multilingual sentence representations.",
"Following (Reimers and Gurevych, 2020), we conduct evaluation on the multilingual STS 2017 dataset (Cer et al., 2017) which contains annotated pairs for EN-EN, AR-AR, ES-ES, EN-AR, EN-ES, EN-TR, EN-DE, and EN-FR.",
"To learn multilingual representations under the unsupervised setting, we process the NLI data as follows.",
"We translate the English NLI sentences to AR, ES, TR, DE and FR using Google translation engine and pair the original English sentence to each of its translations.",
"We obtain 5 pairs (EN-AR/ES/TR/DE/FR) from one sentence and treat the English sentence as one view and its translation as the other view.",
"We concatenate all pairs as the training data.",
"We use multilingual BERT (mBERT) to initialize f and f , such that the token-level representations between the different languages are aligned.",
"The remaining training procedure is the same as described in 3.",
"We denote our unsupervised model as BSL-uns .",
"We compare with sentence representations obtained with mean pooling of mBERT and XLM-R (Conneau et al., 2020) embeddings under the unsupervised setting.",
"ods from (Reimers and Gurevych, 2020): mBERT/ XLM-R-nli-stsb denotes the setting where we fine-tune XLM-R and mBERT on the English NLI and the English training set of the STS benchmark (STS-B); mBERT/XLM-R SBERT-nli-stsb is the knowledge-distillation method proposed in their paper where we learn mBERT and XLM-R to imitate the output of the English SBERT trained on NLI and STS-B with multilingual parallel sentence pairs.",
"We also compared to results of mUSE (Chi-dambaram et al., 2019) and LaBSE (Feng et al., 2020), which use dual encoder transformer architectures.",
"mUSE was trained on question-answer pairs, SNLI, translated SNLI data, and parallel corpora over 16 languages.",
"LaBSE was trained on 6 billion translation pairs for 109 languages.",
"For BSL, we initialize our online and target networks with the learned weights from XLM-R SBERT-nli-stsb 3 and then perform BSL training in a same way as described above.",
"We denote our model in this setting as BSL-sup .",
"Table 3 presents the results.",
"Under the unsupervised setting, averaging the multilingual token representations yields poor results.",
"BSL-uns achieves promising results with scores higher than 70.",
"For the supervised methods, we observe that directly fine-tuning multilingual pre-trained models on English NLI and STS-B datasets does not generalize well in a cross-lingual setting.",
"Knowledge distillation-based models are strong baselines.",
"Applying BSL as a post-training approach can boost the results of the distilled models by large margins.",
"These observations demonstrate that BSL has the 3 Downloaded from https://www.sbert.net/ docs/pretrained_models.html flexibility to be applied to learning multilingual sentence representations.",
"In this subsection, we discuss a few factors that could affect the model performance.",
"We use BERT-base as the encoder for analysis.",
"Choice of Corpus Previous works (Hill et al., 2016; Cer et al., 2018) indicated that the dataset used for learning sentence representations in a supervised setting significantly impacts their performance on STS tasks.",
"They found learning with NLI datasets is particularly useful and yields good results on common STS benchmarks.",
"We have similar observations with the proposed unsupervised method.",
"In Table 4, we show the results of training our model with a subset of 5 million sentences from the Toronto book corpus.",
"This setting achieves an average result of 69.65 on STS tasks, still outperforming prior best unsupervised model IS-BERT by 3.07%, which again demonstrates the effectiveness of the proposed framework.",
"However, we observe that the average result obtained from training with the book corpus is 2.38% lower than the result of training with the NLI datasets even the number of training pairs of the latter is only 1 million.",
"Training on both of them still underperforms training on NLI alone.",
"This finding indicates that the choice of training corpus is a key factor that affects model performance.",
"When evaluating the common STS benchmarks as used in our experiments, the NLI datasets are better choices as they are semantically related to the STS data.",
"We also conduct an evaluation on an Argument Facet Similarity task, which is more domain-specific and Model STS-12 STS-13 STS-14 STS-15 STS-16 STS-B SICK-R Avg.",
"dissimilar to the NLI tasks.",
"The results are provided in Appendix B. We find that in this scenario, training with NLI data yields poor generalization results on the target test set while training on the target raw text yields a much better performance.",
"The results indicate that semantically related corpus to the target task should be adopted as the training set.",
"Augmentation Techniques It has been shown that data augmentation plays a crucial role in unsupervised visual representation learning (He et al., 2020; Chen et al., 2020; Grill et al., 2020).",
"The images can be augmented easily by rotating, resizing, or cropping (Chen et al., 2020).",
"However, less work has been done on augmentation techniques for texts (Fang et al., 2020; Giorgi et al., 2020).",
"Here, we study how different augmentation techniques would affect the model performance.",
"We present the results of another two augmentation approaches besides back-translation in Table 4.",
"Synonym denotes the setting where we randomly replace a few words with their synonyms.",
"MLM denotes the setting where we first randomly mask a few tokens and then use a pre-trained masked language model to generate the masked tokens.",
"Specifically, for both methods, given a sentence x , we make x 1 = x and obtain x 2 with the respective augmentation technique.",
"We found that using one augmented view performs slightly better than using two augmented views for synonym-and MLM-based methods.",
"One possible reason is that these methods may generate augmented sentences with semantics totally different from the original sentences as we will show in this subsection.",
"Such kind of augmentation may bring in too much randomness and noise.",
"Therefore using two augmented views might instead harm the model performance.",
"For Synonym , we select 30% of words and substitute them with similar words according to Word-Net (Miller, 1995).",
"For MLM , we mask 20% of tokens and use RoBERTa-base for token generation.",
"In addition, we show results of a setting where we treat the sentence pairs labeled with entailment from the NLI datasets as the two views ( NLI entail ) for our model, as well as a setting using the combination of NLI unlabeled text with back-translations and the entailment pairs as the training corpus( Back-translation+NLI entail ).",
"The purpose is to illustrate how our model would perform with high quality augmented data.",
"The results in Table 4 show that our proposed framework can work with both Synonym and MLM , as they still outperform IS-BERT on the average result by 1.63% and 2.81%, respectively.",
"However, they are less effective compared to Backt-Momen.",
"translation .",
"We observe that training with entailment pairs yields good results, with only 300k training pairs, NLI entail is comparable to the model trained on all data from the NLI datasets augmented with back-translation (1 million training pairs).",
"In addition, when training on both ( Back-translation + NLI entail ), a 2.91% improvement on the average result over Back-translation is observed.",
"The results indicate that the quality of the augmented pairs directly affects the performance of the proposed framework.",
"Table 5 presents an example of augmentations generated to the same sentence.",
"4 We observe that Synonym substitutes words without considering the context while MLM generates words based on the context but losing the original word semantics.",
"Back-translation yields a relatively better sentence, however, the drawback of which is that it relies on external machine translation systems.",
"The Entailment refers to the sentence in the NLI datasets to which the original sentence has an entailment relation.",
"It can be regarded as an ideal augmentation of the original sentence.",
"How to automatically generate such augmentations remains an open question, and we leave it to future research.",
"Momentum The momentum in Equation (4) is an important hyperparameter.",
"When it is set to 1, the target network is never updated and remains the same to its initialization.",
"When it is set to 0, the target network is updated to the online network at each training step.",
"Table 6 shows the results of our method with different values of momentum.",
"We observe that our proposed method works better with larger momentum near but not equals to 1 .",
"A similar phenomenon has also been observed in BYOL (Grill et al., 2020).",
"In addition, we find that 4 More examples are provided in Appendix C although directly averaging the token embeddings from BERT yields poor sentence representations as shown in Table 1, initializing the target network using BERT and keeping it unchanged (set momentum to 1 ) during the learning procedure helps the online network learn much better representations, yielding a 21.84% improvement on STS-B.",
"Batch Size & Contrastive Learning Lastly, we analyze the effect of batch size.",
"Table 7 shows how the proposed model performs with batch sizes in { 16, 32, 64, 128 } .",
"We also compare to a setting where contrastive learning is used as the self-supervised learning objective since it is more commonly used in visual representation learning (Chen et al., 2020).",
"Specifically, in this setting, given a batch of n augmented sentence pairs ( 2 n sen-tences), each of them is treated as a positive pair.",
"For each positive pair, we treat the other 2( n 1) augmented examples within the minibatch as negative examples.",
"The results in Table 7 show that for BSL, setting the batch size to 64 yields the best result.",
"Overall BSL is less sensitive to changes in batch size while contrastive learning tends to perform better with a larger batch size such that sufficient negative samples can be obtained.",
"Contrastive learning may achieve better performance with a larger batch size while we leave it for future investigation due to its large memory consumption.",
"In this paper, we propose BSL for unsupervised sentence representation learning.",
"The experimental results demonstrate that our method could significantly outperform the state-of-the-art unsupervised methods and it can be further extended for learning multilingual sentence representations.",
"In future work, we expect both theoretically advance of Siamese networks for representation learning, e.g., why stop-gradient works so well and how to further improve the updating dynamics, as well as specifi-cally designated ideas for NLP, e.g., augmentation or learning objectives.",
"This work is partly supported by Human-Robot Interaction Phase 1 (Grant No. 19225 00054), National Research Foundation (NRF) Singapore under the National Robotics Programme; Human Robot Collaborative AI for AME (Grant No. A18A2b0046), NRF Singapore."
]
| [
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"objective",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"objective",
"objective",
"other"
]
|
[
"Discourse relations among arguments reveal logical structures of a debate conversation.",
"However, no prior work has explicitly studied how the sequence of discourse relations influence a claim's impact.",
"This paper empirically shows that the discourse relations between two arguments along the context path are essential factors for identifying the persuasive power of an argument.",
"We further propose DISCOC to inject and fuse the sentence-level structural discourse information with contextualized features derived from large-scale language models.",
"Experimental results and extensive analysis show that the attention and gate mechanisms that explicitly model contexts and texts can indeed help the argument impact classification task defined by Durmus et al. (2019), and discourse structures among the context path of the claim to be classified can further boost the performance.",
"It is an interesting natural language understanding problem to identify the impact and the persuasiveness of an argument in a conversation.",
"Previous works have shown that many factors can affect the persuasiveness prediction, ranging from textual and argumentation features (Wei et al., 2016), style factors (Baff et al., 2020), to the traits of source or audience (Durmus and Cardie, 2018, 2019; Shmueli-Scheuer et al., 2019).",
"Discourse relations, such as Restatement and Instantiation , among arguments reveal logical structures of a debate conversation.",
"It is natural to consider using the discourse structure to study the argument impact.",
"As shown in Figure 1, it consists of arguments, impact labels, stances where every argument is located in an argument tree for a controversial topic.",
"They argue contexts reflect the discourse of arguments and conduct experiments to utilize historical arguments.",
"They find BERT with flat context concatenation is the best, but discourse structures are not easily captured by this method because it is difficult to reflect implicit discourse relations by the surface form of two arguments (Prasad et al., 2008; Lin et al., 2009; Xue et al., 2015; Lan et al., 2017; Varia et al., 2019).",
"Therefore, there is still a gap to study how discourse relations and their sequential structures or patterns affect the argument impact and persuasiveness prediction.",
"In this paper, we acquire discourse relations for argument pairs with the state-of-the-art classifier for implicit discourse relations.",
"Then we train a BiLSTM whose input is the sequence of discourse relations between two adjacent arguments to predict the last argument's impact, and the performance is comparable to that of a BiLSTM on raw text.",
"This indicates that a sequence of discourse relations is one of the essential factors for identifying the persuasive power of an argument.",
"Based on this intuition, we further propose a new model called DISCOC ( Dis course C ontext O riented C lassifier) to explicitly produce discourse-dependent contextualized representations, fuse context representations in long distances, and make predictions.",
"By simple finetuning, our model beats the backbone RoBERTa (Liu et al., 2019) over 1.67% and previous best model BERT over 2.38%.",
"Extensive experiments show that DISCOC results in steady increases when longer context paths with discourse structures, e.g., stances and discourse relations, are provided.",
"On the contrary, encoders with full-range attentions are hard to capture such interactions, and narrow-range attentions cannot handle complex contexts and even become poisoned.",
"Our contributions can be highlighted as follows:",
"1. To the best of our knowledge, we are the first to explicitly analyze the effect of discourse among contexts and an argument on the persuasiveness.",
"2. We propose a new model called DISCOC to utilize attentions to imitate recurrent networks for sentence-level contextual representation learning.",
"3. Fair and massive experiments demonstrate the significant improvement; detailed ablation studies prove the necessities of modules.",
"4. Last, we discover distinct discourse relation path patterns in a machine learning way and conduct consistent case studies.",
"Code is publicly released at https://github.",
"Kialo dataset is collected by Durmus et al. (2019), which consists of 47,219 argument claim texts from kialo.com for 741 controversial topics and corresponding impact votes.",
"Arguments are organized as tree structures, where a tree is rooted in an argument thesis, and each node corresponds to an argument claim.",
"Along a path of an argument tree, every claim except the thesis was made to either support or oppose its parent claim and propose a viewpoint.",
"As shown in Figure 1, an argument tree is rooted at the thesis Physical torture of prisoners is an acceptable interrogation tool..",
"There is one claim to support this thesis ( S1 in green) and one to oppose it ( O2 in fuchsia).",
"Moreover, S1 is supported by its child claim S2 and opposed by O1 , and S3 holds the same viewpoint of O2 .",
"As each claim was put in view of all its ancestral claims and surrounding siblings, the audience evaluated the claim based on how timely and appropriate it is.",
"Therefore, the context information is of most interest to be discussed and researched in the Kialo dataset.",
"We define that a claim denoted as C is the argumentative and persuasive text to express an idea for the audience, and a context path of a claim of length l is the path from the ancestor claim to its parent claim, denoted as ( C 0 , C 1 , , C l 1 ) where C l 1 is the parent of C .",
"For simplicity, we may use C l instead of C without causing ambiguity.",
"The longest path of C starts from the thesis.",
"Statistically, the average length of the longest paths is 3.5.",
"In a controversial topic, each argument claim except the thesis would have a stance, whether to support or oppose the argument thesis or its parent claim.",
"In Kialo , users need to directly add a stance tag ( Pro or Con ) to show their agreement or disagreement about the chosen parent argument when they post their arguments.",
"We use s i to denote the stance whether C i is to support or oppose its parent C i 1 when i 1 .",
"The statistics of these stances are shown in Table",
"1. 2.4 Impact Label After reading claims as well as the contexts, users may agree or disagree about these claims.",
"The impact vote for each argument claim is provided by users who can choose from 1 to",
"5. Durmus et al. (2019) categorize votes into three impact classes ( Not Impactful , Medium Impact , and Impactful ) based on the agreement and the valid vote numbers to reduce noise.",
"We can see the overall distribution from Table",
"1. The argument impact classification is defined to predict the impact label y of C given the claim text C and its corresponding context path ( C 0 , C 1 , , C l 1 ) .",
"3 Discourse Structure Analysis 3.1 Argument Impact from the Perspective of Discourse As paths under a controversial topic are strongly related to Comparison (e.g., Contrast ), Contingency (e.g., Reason ), Expansion (e.g., Restatement ), and Temporal (e.g., Succession ) discourse relations (Prasad et al., 2008), we model the discourse structures from a view of discourse relations.",
"The first step is to acquire discourse relation annotations.",
"BMGF-RoBERTa (Liu et al., 2020) is the state-of-the-art model proposed to detect implicit discourse relations from raw text.",
"In the following experiments, we use that as our annotation model to predict discourse relation distributions for each adjacent claim pair.",
"Specifically, for a given argument claim C l and its context path ( C 0 , C 1 , , C l 1 ) , we denote p disco ( C l ) = ( r 1 , r 2 , , r l ) as a discourse relation path such that r i R indicates the discourse relation between C i 1 and C i when i 1 .",
"In this work, we adopt the 14 discourse relation senses in CoNLL2015 Shared Task (Xue et al., 2015) as R .",
"And we also define the corresponding distributed discourse relation path to be p dist ( C l ) = ( d 1 , d 2 , , d l ) such that d i = F ( C i 1 , C i ) is the predicted discourse relation distribution between claims C i 1 and C i ( i 1 ) by a predictive model F .",
"In experiments, F is BMGF-RoBERTa 1 .",
"8 out of 14 relations appear in the predictions, and the statistics of 7 frequent predictions are shown in Table",
"2. As discourse contexts would affect the persuasive power of claims, we first discover the correlations between impacts and stances as well as correlations between impacts and discourse relations, illustrated in Figure",
"2. From the label distribution and correlations, we find there are some clear trends: 1) Stances have little influence on argument impact, but discourse relations do.",
"Correlations indicate that it is the contents instead of standpoints that contribute to potential impacts; 2) It is a smart choice to show some examples to convince others 1 The official open-source code is at https://github.",
"com/HKUST-KnowComp/BMGF-RoBERTa .",
"We train such a classifier on CoNLL2015 Shared Task training data, and achieve 57.57% accuracy on the test set.",
"because Instantiation is more relevant to Impactful than any other relations; 3) Similarly, explaining is also helpful to make voices outstanding; 4) Restatement is also positively correlated with Impactful so that we can also share our opinions by paraphrasing others' viewpoints to command more attention.",
"On the contrary, Chosen Alternative is a risky method because the audience may object.",
"To investigate the role of discourse relations in impact analysis, we design a simple experiment that a single-layer BiLSTM followed by a 2-layer MLP with batch normalization predicts the impact by utilizing the distributed discourse relation path p dist ( C l ) .",
"For the purposes of comparison and analysis, we build another BiLSTM on the raw text.",
"Each claim has [BOS] and [EOS] tokens to clarify boundaries and we use 300-dim pretrained GloVe word embeddings (Pennington et al., 2014) and remain them fixed.",
"We set different thresholds for context path lengths so that we can control how many discourse relations or contexts are provided.",
"From Figure 3, discourse features can result in comparable performance, especially when longer discourse paths are provided.",
"Instead, the model with raw text gets stuck in complex contexts.",
"It is generally agreed that the informative context can help understand the text to be classified.",
"However, it is still unclear how to determine whether a context is helpful.",
"One drawback of a broader context is the increasing ambiguity, especially in the scenario of the argument context path from different users like the results shown in Figure",
"3. Take claims in Figure 1 for example, S1 and O2 give two different consequences to support or oppose Figure 3: Performance of BiLSTM on discourse relations and BiLSTM on raw text.",
"the thesis .",
"And O1 objects S1 by a contrast conclusion.",
"It is hard to build a connection between the thesis and O1 if S1 is not given because it is challenging to build a connection between reveal desired information with interrogation tool without a precondition Torture can help force prisoners to reveal information.",
"On the contrary, thesis and S2 are still compatible as S2 is also a kind of result.",
"Hence, a recurrent model with the gating mechanism that depicts pair-wise relations and passes to the following texts makes more sense.",
"LSTM has gates to decide whether to remember or forget during encoding, but it cannot handle long-range information with limited memory.",
"Recently, transformer-based encoders have shown remarkable performance in various complicated tasks.",
"These models regard sequences as fully connected graphs to learn the correlations and representations for each token.",
"People assume that transformers can learn whether two tokens are relevant and how strong the correlation is by back-propagation.",
"Table 3 illustrates different possible ways to aggregation context information.",
"Transformer (Vaswani et al., 2017) and BERT (Devlin et al., 2019) adopt full-range attentions while TransformerXL (Dai et al., 2019) and XLNet (Yang et al., 2019) regard historical encoded representations as memories to reuse hidden states.",
"SparseTransformer (Child et al., 2019), in the opposite direction, stacks hundreds of layers by narrow the attention scope by sparse factorization.",
"Information can still spread after propagations in several layers.",
"Inspired by these observations, we design DISCOC ( Dis course C ontext O riented C lassifier) to capture contextualized features by localized attentions and imitate recurrent models to reduce noises from long distance context.",
"As shown in Figure 4, DISCOC predicts the argument impact through three steps.",
"A difficult problem in such an argument claim tree is the noise in irrelevant contexts.",
"A claim is connected to its parent claim because of a supporting or opposing stance, but claims in long distances are not high-correlated.",
"Based on this observation, DISCOC conduct word-level representations by encoding claim pairs instead of the whole contexts.",
"Given a claim C l and its context path ( C 0 , C 1 , , C l 1 ) , all adjacent pairs are coupled together, i.e., ( C 0 , C 1 ) , , ( C l 1 , C l ) .",
"We can observe that each claim appears twice except the first and the last.",
"Next, each pair ( C i 1 , C i ) is fed into the RoBERTa encoder to get the contextualized word representations.",
"C 0 and C l are also encoded separately so that each claim has been encoded twice.",
"We use H i to denote the encoded word representations of C i when this claim is encoded with its parent C i 1 , or when it is computed alone as C 0 .",
"Similarly, H i is the representations when encoding ( C i , C i +1 ) , or when it is fed as C l .",
"The encoding runs in parallel but we still use the term phase to demonstrate for better understanding.",
"In 0-th phase, RoBERTa outputs H 0 .",
"One particular relationship between a parent-child pair is the stance, and we insert the one special token [Pro] or [Con] between them.",
"It makes the sentiment and viewpoint of the child claim more accurate.",
"On the other hand, discourse relations can also influence impact prediction, as reported in Section 3.1.",
"However, discourse relations are not mutually exclusive, let alone predictions from BMGF-RoBERTa are not precise.",
"Thus, we use the relation distributions as weights to get sense-related embeddings over 14 relations.",
"We add additional W 1 d i for the parent and W 2 d i for the child except position embeddings and segment embeddings, where d i is predicted discourse relation distribution for ( C i 1 , C i ) , W 1 and W 2 are trainable transformations for parents and children.",
"Hence, RoBERTa outputs H i 1 and H i with the concatenation of two claims, [CTX] C i 1 [SEP] [CLS] s i C i [SEP] in the i -th phase !",
"s i refers to the stance between C i 1 and C i , d i is the discourse relation distribution obtained from F ( C i 1 , C i ) .",
"Gray boxes represent the RoBERTa encoder and the violet is a gated transformer layer.",
"[CTX], [CLS], and [SEP] are omitted in this figure.",
"( i { 1 , 2 , , l } ), where [CTX] is a special token to indicate the parent claim and distinguish from [CLS].",
"Its embedding is initialized as a copy embedding of [CLS] but able to update by itself.",
"And H l is computed by self-attention with no context in the last phase.",
"In the end, each claim C i has two contextualized representations H i and H i with limited surrounding context information.",
"As claim representations { H i } and { H i } from RoBERTa are not bidirectional, we need to combine them and control which of them matters more.",
"The gated fusion (Liu et al., 2020) has been shown of a better mixture than the combination of multihead attention and layer normalization.",
"We use it to maintain the powerful representative features and carry useful historical context information: H i = MultiHead ( H i , H i , H i ) (1) A j = Sigmoid ( W a [ H i , H i ] j + b a ) (2) U i = A (cid:12) H i + (1 A ) (cid:12) H i , (3) where MultHead is the multi-head attention operation (Vaswani et al., 2017) whose query is H i and key & value is H i , A j is the fusion gate for the j -th word embedding, [ ] is the concatenation, (cid:12) is the element product operation, and W a and b a are trainable matrix and bias for fusion gating.",
"There are two reasons why using H i as the key of the multi-head attention: 1) [CLS] exists in the H i while the replaced token [CTX] appears in H i when i (cid:54) = 0 ; 2) The position ids start from 0 when computing H i .",
"The fused [CLS] token embedding u i is selected to represent the whole claim.",
"After extracting sentence-level claim representations u 0 , u 1 , , u l , a transformer layer is used to gather longer-range context representations.",
"The transformer layer includes a position embedding layer to provide sinusoid positional embeddings, a gated multi-head attention layer, a feed-forward network, and a layer normalization.",
"The position embedding layer in DISCOC is different from that in the vanilla Transformer because it generates position ids in a reversed order, i.e. l, l 1 , , 0 .",
"The reversed order is helpful to model the contexts of variable length because the claim to be classified has the same position embedding.",
"We also choose a gate to maintain the scale instead of using a residual connection.",
"The gated transformer can generate meaningful representations because each claim can attend any other claims and itself.",
"On the other hand, it perfectly fits the pair-wise encoding that imitates the recurrent networks to reduce the noise in irrelevant contexts and enhance the nearest context's correlations.",
"For example, in Figure 1, S2 is predicted as a result of S1 (with a probability of 39.17%) and a restatement (with a probability of 19.81%), and S1 is also a result of thesis (with a probability of 70.57%).",
"Consequently, S2 is high-relevant to the thesis as a potential result if physical torture is acceptable, which can be captured by DISCOC.",
"Finally, a 2-layer MLP with batch normalization is applied to v l of the last claim to predict its impact.",
"SVM.",
"Durmus et al. (2019) created linguistic features for a SVM classifier, such as named entity types, POS tags, special marks, tf-idf scores for n-grams, etc.",
"We report the result from their paper.",
"HAN.",
"HAN (Yang et al., 2016) computes document vectors in a hierarchical way of encoding and aggregation.",
"We replace its BiGRU with BiLSTM for the sake of comparison.",
"And we also extend it with pretrained encoders and transformer layers.",
"Flat-MLMs.",
"Pretrained masked languages, e.g., RoBERTa, learn word representations and predict masked words by self-attention.",
"We use these encoders to encode the flat context concatenation like [CTX] C 0 [SEP] [CTX] [CTX] C l 1 [SEP] as Segment A and [CLS] C l [SEP] as Segment B. After getting [CTX] and [CLS] representations, a gated transformer layer and a MLP predict impacts.",
"As for XLNet, we follow its default setting so that [CTX] and [CLS] are located at the end of claims.",
"Interval-MLMs.",
"Flat-MLMs regard the context path as a whole segment and ignore the real discourse structures except the adjacency, e.g., distances between two claims are missing.",
"We borrow the idea from BERT-SUM (Liu and Lapata, 2019): segment embeddings of C i are assigned depending on whether the distance to C l is odd or even.",
"Context-MLMs.",
"We also compare pretrained encoders with context masks.",
"A context mask is to localize the attention scope from the previous to the next.",
"That is, C i can attends words in C i 1 and C i +1 except for itself if 1 i < l ; C 0 can only attend C 0 , C 1 , and C l can only attend C l 1 , C l .",
"Memory-MLMs.",
"XLNet utilizes memory to extend the capability of self-attention to learn super long historical text information.",
"We also extend Flat-MLMs under this setting.",
"We use pretrained base models 2 in DISCOC and baselines.",
"We follow the same finetuning setting: classifiers are optimized by Adam (Kingma and Ba, 2015) with a scheduler and a maximum learning rate 2e-5.",
"The learning rate scheduler consists of a linear warmup for the 6% steps and a linear decay for the remaining steps.",
"As for BiLSTM and HAN, the maximum learning rate is 1e-3.",
"The hidden state dimension of linear layers, the hidden units of LSTM layers, and projected dimensions for attention are 128.",
"The number of the multi-head attention is set as 8.",
"Dropout is applied after each layer and the probability is 0.1.",
"We pick the best context path length l for each model by grid search from 0 to 5 on validation data with the batch size of 32 in 10 epochs.",
"Each model runs five times.",
"Table 4 shows experimental results of different models.",
"It is not surprising that neural models can easily beat traditional feature engineering methods in overall performance.",
"But linguistic features still bring the highest precision.",
"We also observe a significant 3.49% improvement with context vectors aggregating in HAN-BiLSTM compared with the simple BiLSTM.",
"This indicates that it is necessary to model contexts with higher-level sentence features.",
"Models with pretrained encoders ben-efit from representative embeddings, and HAN-RoBERTa achieves a gain of 5.49%.",
"Flat context paths contain useful information to help detect the argument impact, but they also involve some noise from unrelated standpoints.",
"Interval segment embeddings do not reduce noise but make BERT confused.",
"It is counterintuitive that the segment embeddings depend on whether the distance is odd or even because BERT uses these for next sentence prediction.",
"Since XLNet uses relative segment encodings instead of segment embeddings, Interval-XNet is better than Flat-XLNet in all three metrics.",
"On the other hand, context masks bring another side effect for BERT, RoBERTa, and XLNet.",
"Although these masks limit the attention scope at first sight, distant word information is able to flow to words with the increment of transformer layers.",
"As a result, the uncertainty and attention bias increase after adding context masks.",
"The memory storing context representations is also not helpful.",
"The main reason is 2 BERT-base-uncased, RoBERTa-base, and XLNet-base-cased are downloaded from huggingface.co Model Precision Recall F1 Majority 19.43 33.33 24.55 SVM (Durmus et al., 2019) 65.67 38.58 35.42 BiLSTM 46.94 1.08** 46.64 0.71** 46.51 1.11** HAN-BiLSTM 51.93 1.37** 49.08 1.52** 50.00 1.49** HAN-BERT 53.72 0.80** 53.45 0.51** 53.46 0.47** HAN-RoBERTa 55.71 1.12** 55.95 0.90** 55.49 0.62** HAN-XLNet 53.91 0.96** 55.56 1.59** 54.53 1.22** BERT (Durmus et al., 2019) 57.19 0.92 55.77 1.05** 55.98 0.70** Flat-BERT 57.34 1.56 57.07 0.74* 56.75 0.82** Flat-RoBERTa 58.11 1.34 56.40 0.61** 56.69 0.63** Flat-XLNet 55.86 1.74* 56.20 1.17** 55.57 0.95** Interval-BERT 55.56 2.03* 55.52 1.44** 55.34 1.50** Interval-RoBERTa 58.31 0.89 56.46 1.44* 56.61 1.24* Interval-XLNet 57.54 0.50 56.78 1.63* 56.52 1.00** Context-BERT 54.96 0.93** 56.09 0.83** 55.44 0.83** Context-RoBERTa 57.28 0.97 55.29 0.26** 55.83 0.54** Context-XLNet 54.56 0.71** 56.28 1.22** 55.10 0.72** Memory-BERT 54.33 0.83** 57.57 0.67* 55.22 0.61** Memory-RoBERTa 55.08 0.89** 55.55 1.59** 54.76 1.38** Memory-XLNet 55.44 1.15** 55.45 1.25** 54.91 0.96** DISCOC 57.90 0.70 59.41 1.41 58.36 0.52 Table 4: The averages and standard deviations of different models on the argument impact classification.",
"that the last claim's update signal can not be used to update previous context representations.",
"That is, Memory-models degenerate to models with frozen path features or even worth.",
"DISCOC that we proposed can capture useful contexts and fuse in a comprehensive manner.",
"Finally, DISCOC outperforms the second best model Flat-BERT over 1.61% and its backbone Flat-RoBERTa over 1.67%, the previous best model BERT by 2.38%.",
"Different claims have different contexts.",
"We only report the best performance with a fixed maximum context path length in Table",
"4. Figure 5 shows F1 scores of models with different hyper-parameters.",
"DISCOC always benefits from longer discourse contexts while other models get stuck in performance fluctuation.",
"Most models can handle one context claim, which is consistent with our idea of pair-wise encoding.",
"DISCOC has consistent performance gains; instead, other models cannot learn long-distance structures better.",
"Each token in Flat-RoBERTa and Interval-RoBERTa can attend all other tokens, and the two are the most competitive baselines.",
"However, Context-RoBERTa and Memory-RoBERTa limit the attention scope to the tokens of one previous claim, making models unable to make use of long-distance context information.",
"As shown in Table 4, there is little difference between the performance of RoBERTa variants and that of BERT variants.",
"We conduct the experiment for DISCOC (E-BERT) with BERT as the encoder reported in Table",
"5. Its performance has achieved a significant boost over 1.29% despite the small gap between itself and DISCOC.",
"We also remove either the stance token embedding or the discourse sense embeddings from DISCOC.",
"The results in Table 5 suggest that both sides of structures are essential for modelling the correlation between the parent claim and the child claim.",
"By comparison, discourse sense embeddings are more vital.",
"We add a gated transformer layer to gather sentence-level vectors.",
"Such gathering is necessary for the proposed framework because each claim can only attend limited contexts.",
"BiLSTM and convolutions can also be used for this purpose, so we replace the gated transformer layer with a BiLSTM or a convolutional layer.",
"Moreover, we also remove it to make predictions by u l directly.",
"The results in Table 5 show that the gated transformer is the irreplaceable part of DISCOC because it retains the contextualized representations and remains their scales.",
"Simple removing it hurts recall enormously.",
"We use Logistic Regression to mine several interesting discourse relation patterns.",
"Detailed settings are described in Appendix A, and results including the most high-coefficient patterns are listed in Table",
"6. We observe that some discourse relation path patterns are distinguishing for classifying individual impact labels.",
"Instantiation is a typical relation that only occurs in the top patterns of Impactful .",
"Also, Restatement is relatively frequent for Impactful (5 of top 10), but it is the relation between the grandparent and the parent.",
"Providing additional resources ( Restatement Result ) or objecting others' repetitions ( Restatement Contrast ) can increase the persuasive power.",
"For the Medium Impact class, its top 10 significant patterns are the longest on aver-Discourse Patterns DISCOC DISCOC (w/o DiscoE) Reason-Contrast 65.56 43.33 Restatement 56.63 57.59 Reason 58.91 54.96 Conjunction-Reason 78.97 72.14 Conjunction-Contrast 80.64 66.17 Contrast-Conjunction 55.15 42.38 Restatement-Reason 38.00 37.35 Contrast-Restatement 66.10 76.24 Chosen Alternative 73.33 42.86 All 59.04 58.06 Table 7: F1 score differences between two best models on top 9 discourse relation patterns and all patterns.",
"age.",
"That indicates some views are usually considered ordinary in complex structures.",
"Conjunction is the dominant relation (8 of top 10) so that we are suggested to avoid to go along with others.",
"The case of Not Impactful is a little clearer, in the sense that it has a unique relation Chosen Alternative as one of the most significant patterns.",
"Restatement also appears frequently, showing neither generalization, nor specification, nor paraphrasing of others' views can help make claims stand out.",
"In Appendix A, we define P r ( r 1 , , r l ) as the joint probability to generate the discourse relation path ( r 1 , , r l ) given the context ( C 0 , C 1 , , C l 1 ) and the claim C l .",
"For example, the P r ( Reason , Contrast ) is 56.59% which corresponds to an Impactful claim There is no evidence for this with its parent claim Our bodies know how to recognise and process current foods; changing them through genetic modification will create health issues.",
"Furthermore, we find 5 of top 5 and 8 of top 10 are voted as Impactful claims after sorting based on P r ( Reason , Contrast ) .",
"For a complex pattern Restatement Restatement appearing in both top patterns of the Impactful and the Not Impactful , 3 cases with the maximum probabilities are Not Impactful while the following 7 cases are Impactful .",
"It is interesting that the thesis of the top 3 claims is the same discussion about an American politician.",
"There are 25 Impactful claims and 22 Not Impactful claims in this topic, 24 of which are restatements of their parent claims.",
"As for Restatement Reason , the most top pattern of the Not Impactful , we find 7 of the top 10 claims relevant to politics, 2 of them about globalization, and one food-related.",
"Therefore, there is no perfect answer in these quite controversial topics, and that is why Restatement and Reason appear frequently.",
"On the other hand, we check the performance of testing examples to verify the effectiveness of these discourse relation patterns.",
"We choose the best model of DISCOC, whose F1 score is 59.04% as well as the best model of DISCOC (w/o DiscoE) whose F1 score is 58.06%.",
"We select testing examples with specific discourse patterns, and performance differences are shown in Table",
"7. DISCOC benefits from 7 of the top 9 patterns and the performance margins are even more significant than the improvement of the overall results.",
"Without giving discourse relation patterns, the model still has trouble capturing such implicit context influences.",
"Empirical results support our idea that implicit discourse relations could affect the persuasiveness.",
"There is an increasing interest in computational argumentation to evaluate the qualitative impact of arguments based on corpus extracted from Web Argumentation sources such as CMV sub-forum of Reddit (Tan et al., 2016).",
"Studies explored the importance and effectiveness of various factors on determining the persuasiveness and convincingness of arguments, such as surface texture, social interaction and argumentation related features (Wei et al., 2016), characteristics of the source and audience (Durmus and Cardie, 2019; Shmueli-Scheuer et al., 2019; Durmus and Cardie, 2018), sequence ordering of arguments (Hidey and McK-eown, 2018), and argument structure features (Li et al., 2020).",
"The style feature is also proved to be significant in evaluating the persuasiveness of news editorial argumentation (Baff et al., 2020).",
"Habernal and Gurevych (2016) conducted experiments in an entirely empirical manner, constructing a corpus for argument quality label classification and proposing several neural network models.",
"In addition to the features mentioned above, the role of pragmatic and discourse contexts has shown to be crucial by not yet fully explored.",
"Zeng et al. (2020) examined how the contexts and the dynamic progress of argumentative conversations influence the comparative persuasiveness of an argumentation process.",
"Durmus et al. (2019) created a new dataset based on argument claims and impact votes from a debate platform kialo.com , and experiments showed that incorporating contexts is useful to classify the argument impact.",
"Understanding discourse relations is one of the fundamental tasks of natural language understanding, and it is beneficial for various downstream tasks such as sentiment analysis (Nejat et al., 2017; Bhatia et al., 2015), machine translation (Li et al., 2014) and text generation (Bosselut et al., 2018).",
"Discourse information is also considered indicative for various tasks of computational argumentation.",
"Eckle-Kohler et al. (2015) analyzed the role of discourse markers for discriminating claims and premises in argumentative discourse and found that particular semantic group of discourse markers are highly predictive features.",
"Hidey and McK-eown (2018) concatenated sentence vectors with discourse relation embeddings as sentence features for persuasiveness prediction and showed that discourse embeddings helped improve performance.",
"In this paper, we explicitly investigate how discourse structures influence the impact and the persuasiveness of an argument claim.",
"We present DISCOC to produce discourse-dependent contextualized representations.",
"Experiments and ablation studies show that our model improves its backbone RoBERTa around 1.67%.",
"Instead, HAN and other attention mechanisms bring side effects.",
"We discover distinct discourse relation path patterns and analyze representatives.",
"In the future, we plan to explore discourse structures in other NLU tasks.",
"This paper was supported by the NSFC Grant (No. U20B2053) from China, the Early Career Scheme (ECS, No. 26206717), the General Research Fund (GRF, No. 16211520), and the Research Impact Fund (RIF, No. R6020-19 and No. R6021-20) from the Research Grants Council (RGC) of Hong Kong, with special thanks to the Huawei Noah's Ark Lab for their gift fund."
]
| [
"abstain",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"result",
"abstain",
"method",
"objective",
"other"
]
|
[
"Multi-task learning (MTL) has achieved success over a wide range of problems, where the goal is to improve the performance of a primary task using a set of relevant auxiliary tasks.",
"However, when the usefulness of the auxiliary tasks w.r.t. the primary task is not known a priori, the success of MTL models depends on the correct choice of these auxiliary tasks and also a balanced mixing ratio of these tasks during alternate training.",
"These two problems could be resolved via manual intuition or hyper-parameter tuning over all combinatorial task choices, but this introduces inductive bias or is not scalable when the number of candidate auxiliary tasks is very large.",
"To address these issues, we present AUTOSEM, a two-stage MTL pipeline, where the first stage automatically selects the most useful auxiliary tasks via a Beta-Bernoulli multi-armed bandit with Thompson Sampling, and the second stage learns the training mixing ratio of these selected auxiliary tasks via a Gaussian Process based Bayesian optimization framework.",
"We conduct several MTL experiments on the GLUE language understanding tasks, and show that our AUTOSEM framework can successfully find relevant auxiliary tasks and automatically learn their mixing ratio, achieving significant performance boosts on several primary tasks.",
"Finally, we present ablations for each stage of AUTOSEM and analyze the learned auxiliary task choices.",
"Multi-task Learning (MTL) (Caruana, 1997) is an inductive transfer mechanism which leverages information from related tasks to improve the primary model's generalization performance.",
"It achieves this goal by training multiple tasks in parallel while sharing representations, where the training signals from the auxiliary tasks can help improve the performance of the primary task.",
"Multi-task learning has been applied to a wide range of natural language processing problems (Luong et al., 2015; Pasunuru and Bansal, 2017; Hashimoto et al., 2017; Ruder et al., 2017b; Kaiser et al., 2017; McCann et al., 2018).",
"Despite its impressive performance, the design of a multitask learning system is non-trivial.",
"In the context of improving the primary task's performance using knowledge from other auxiliary tasks (Lu-ong et al., 2015; Pasunuru and Bansal, 2017), two major challenges include selecting the most relevant auxiliary tasks and also learning the balanced mixing ratio for synergized training of these tasks.",
"One can achieve this via manual intuition or hyper-parameter tuning over all combinatorial task choices, but this introduces human inductive bias or is not scalable when the number of candidate auxiliary tasks is considerable.",
"To this end, we present AUTOSEM, a two-stage Bayesian optimization pipeline to this problem.",
"In our AUTOSEM framework 1 , the first stage addresses automatic task selection from a pool of auxiliary tasks.",
"For this, we use a non-stationary multi-armed bandit controller (MAB) (Bubeck et al., 2012; Raj and Kalyani, 2017) that dynamically alternates among task choices within the training loop, and eventually returns estimates of the utility of each task w.r.t. the primary task.",
"We model the utility of each task as a Beta distribution, whose expected value can be interpreted as the probability of each task making a non-negative contribution to the training performance of the primary task.",
"Further, we model the observations as Bernoulli variables so that the posterior distribution is also Beta-distributed.",
"We use Thompson sampling (Chapelle and Li, 2011; Russo et al., 2018) to trade off exploitation and exploration.",
"selected in the first stage and automatically learns the training mixing ratio of these tasks, through the framework of Bayesian optimization, by modeling the performance of each mixing ratio as a sample from a Gaussian Process (GP) to sequentially search for the optimal values (Rasmussen, 2004; Snoek et al., 2012).",
"For the covariance function in the GP, we use the Matern kernel which is parameterized by a smoothness hyperparameter so as to control the level of differentiability of the samples from GP.",
"Further, following Hoffman et al. (2011), we use a portfolio of optimistic and improvement-based policies as acquisition functions (Shahriari et al., 2016) for selecting the next sample point from the GP search space.",
"We conduct several experiments on the GLUE natural language understanding benchmark (Wang et al., 2018), where we choose each of RTE, MRPC, QNLI, CoLA, and SST-2 as the primary task, and treat the rest of the classification tasks from the GLUE benchmark as candidate auxiliary tasks.",
"Results show that our AUTOSEM framework can successfully find useful auxiliary tasks and automatically learn their mixing ratio, achieving significant performance boosts on top of strong baselines for several primary tasks, e.g., 5.2% improvement on QNLI, 4.7% improvement on RTE, and 2.8%/0.8% improvement on MRPC.",
"We also ablate the usefulness of our two stages of auxiliary task selection and automatic mixing ratio learning.",
"The first ablation removes the task selection stage and instead directly performs the second GP mixing ratio learning stage on all auxiliary tasks.",
"The second ablation performs the task selection stage (with multi-armed bandit) but replaces the second stage Gaussian Process with manual tuning on the selected tasks.",
"Our 2-stage model performs better than both these ablations, showing that both of our stages are crucial.",
"Further, we also discuss the learned auxiliary task choices in terms of their intuitive relevance w.r.t. the corresponding primary task.",
"Multi-task learning (Caruana, 1998), known for improving the generalization performance of a task with auxiliary tasks, has successfully been applied to many domains of machine learning, including natural language processing (Col-lobert and Weston, 2008; Girshick, 2015; Luong et al., 2015; Pasunuru and Bansal, 2017; Pasunuru",
"Pasunuru et al., 2017), computer vision (Misra et al., 2016; Kendall et al., 2017; Dai et al., 2016), and reinforcement learning (Teh et al., 2017; Parisotto et al., 2015; Jaderberg et al., 2016).",
"Although there are many variants of multi-task learning (Ruder et al., 2017b; Hashimoto et al., 2017; Luong et al., 2015; McCann et al., 2018), our goal is to improve the performance of a primary task using a set of relevant auxiliary tasks, where different tasks share some common model parameters with alternating mini-batches optimization, similar to Luong et al. (2015).",
"To address the problem of automatic shared parameter selection, Ruder et al. (2017a) automatically learned the latent multi-task sharing architecture, and Xiao et al. (2018) used a gate mechanism that filters the feature flows between tasks.",
"On the problem of identifying task relatedness, Ben-David and Schuller (2003) provided a formal framework for task relatedness and derived generalization error bounds for learning of multiple tasks.",
"Bingel and Sgaard (2017) explored task relatedness via exhaustively experimenting with all possible two task tuples in a nonautomated multi-task setup.",
"Other related works explored data selection, where the goal is to select or reorder the examples from one or more domains (usually in a single task) to either improve the training efficiency or enable better transfer learning.",
"These approaches have been applied in machine translation (van der Wees et al., 2017), language models (Moore and Lewis, 2010; Duh et al., 2013), dependency parsing (Sgaard, 2011), etc.",
"In particular, Ruder and Plank (2017) used Bayesian optimization to select relevant training instances for transfer learning, and Tsvetkov et al. (2016) applied it to learn a curriculum for training word embeddings via reordering data.",
"Graves et al. (2017) used the bandit approach (Exp3.S algorithm) in the context of automated curriculum learning, but in our work, we have two stages with each stage addressing a different problem (auto-matic task selection and learning of the training mixing ratio).",
"Recently, Sharma and Ravindran (2017) used multi-armed bandits (MAB) to learn the choice of hard vs. easy domain data selection as input feed for the model.",
"Guo et al. (2018) used MAB to effectively switch across tasks in a dynamic multi-task learning setup.",
"In our work, we use MAB with Thompson Sampling for the novel paradigm of automatic auxiliary task selection; and next, we use a Matern-kernel Gaussian Process to automatically learn an exact (static) mixing ratio (i.e., relatedness ratio) for the small number of selected tasks.",
"Many control problems can be cast as a multiarmed bandits problem, where the goal of the agent is to select the arm/action from one of the N choices that minimizes the regrets (Bubeck et al., 2012).",
"One problem in bandits learning is the trade-off between exploration and exploitation, where the agent needs to make a decision between taking the action that yields the best payoff on current estimates or exploring new actions whose payoffs are not yet certain.",
"Many previous works have explored various exploration and exploitation strategies to minimize regret, including Boltzmann exploration (Kaelbling et al., 1996), adversarial bandits (Auer et al., 2002b), UCB (Auer et al., 2002a), and information gain using variational approaches (Houthooft et al., 2016).",
"In this work, for task selection, we use Thompson Sampling (Russo et al., 2018; Chapelle and Li, 2011), an algorithm for sequential decision making problems, which addresses a broad range of problems in a computationally efficient manner and is therefore enjoying wide use.",
"Gaussian Process (GP) is a non-parametric Bayesian approach, and it can capture a wide variety of underlying functions or relations between inputs and outputs by taking advantage of the full information provided by the history of observations and is thus very data-efficient (Ras-mussen, 2004; Shahriari et al., 2016; Schulz et al., 2018).",
"Gaussian Processes have been widely used as a black-box optimizer and hyper-parameter optimization (Snoek et al., 2012; Brochu et al., 2010; Knudde et al., 2017; Cully et al., 2018; Swersky et al., 2013; Golovin et al., 2017).",
"In our work, we use Gaussian Process for automatic learning of the multi-task mixing ratio in our stage-2 among the selected tasks from stage-1.",
"We will first introduce our baseline model and its integration for multiple classification tasks in a multi-task learning (MTL) setup.",
"Next, we will introduce our AUTOSEM framework, an automatic way of selecting auxiliary tasks and learning their optimal training mixing ratio w.r.t. the primary task, via a Beta-Bernoulli bandit with Thompson Sampling and a Gaussian Process framework.",
"Let s 1 and s 2 be the input sentence pair in our classification task, where we encode these sentences via bidirectional LSTM-RNN, similar to that of Conneau et al. (2017).",
"Next, we do max-pooling on the output hidden states of both encoders where u and v are the outputs from the max-pooing layer for s 1 and s 2 respectively.",
"Later, we map these two representations ( u and v ) into a single rich dense representation vector h : h = [ u ; v ; u (cid:63) v ; | u v | ] (1) where [; ] represents the concatenation and u (cid:63) v represents the element-wise multiplication of u and v .",
"We project this final representation h to label space to classify the given sentence pair (see Fig. 1).",
"We also use ELMo (Peters et al., 2018) representations for word embeddings in our model.",
"For this, we extract the three ELMo layer representations for each of the sentence pair and use their weighted sum as the ELMo output representation, where the weights are trainable.",
"In this work, we focus on improving a task (pri-mary task) by allowing it to share parameters with related auxiliary tasks via multi-task learning (MTL).",
"Let { D 1 , ..., DN } be a set of N tasks, where we set D 1 to be the primary task and the rest of them as auxiliary tasks.",
"We can extend TaskUtility Gaussian Process MR-1 Multi-Armed Bandit Controller Arm1 Arm2 Arm3 Arm4 Arm5 Arm6 PrimaryTask SampledTask MR-2 MR-3 F eedba ck Sample N e x t S a m p l e N e x t S a m p l e Mixing Ratios Figure 2: Overview of our AUTOSEM framework.",
"our single-task learning baseline (see Sec. 3.1) into multi-task learning model by augmenting the model with N projection layers while sharing the rest of the model parameters across these N tasks (see Fig. 1).",
"We employ MTL training of these tasks in alternate mini-batches based on a mixing ratio 1 : 2 :",
".. N , similar to previous work (Luong et al., 2015), where we optimize i mini-batches of task i and go to the next task.",
"In MTL, choosing the appropriate auxiliary tasks and properly tuning the mixing ratio can be important for the performance of multi-task models.",
"The naive way of trying all combinations of task selections is hardly tractable.",
"To solve this issue, we propose AUTOSEM, a two-stage pipeline in the next section.",
"In the first stage, we automatically find the relevant auxiliary tasks (out of the given N 1 options) which improve the performance of the primary task.",
"After finding the relevant auxiliary tasks, in the second stage, we take these selected tasks along with the primary task and automatically learn their training mixing ratio.",
"Tuning the mixing ratio for N tasks in MTL becomes exponentially harder as the number of auxiliary tasks grows very large.",
"However, in most circumstances, only a small number of these auxiliary tasks are useful for improving the primary task at hand.",
"Manually searching for this optimal choice of relevant tasks is intractable.",
"Hence, in this work, we present a method for automatic task selection via multi-armed bandits with Thompson Sampling (see the left side of Fig. 2).",
"Let { a 1 , ..., a N } represent the set of N arms (corresponding to the set of tasks { D 1 , ..., DN } ) of the bandit controller in our multi-task setting, where the controller selects a sequence of ac-tions/arms over the current training trajectory to maximize the expected future payoff.",
"At each round t b , the controller selects an arm based on the noisy value estimates and observes rewards r t b for the selected arm.",
"Let k [0 , 1] be the utility (usefulness) of task k .",
"Initially, the agent begins with an independent prior belief over k .",
"We take these priors to be Beta-distributed with parameters k and k , and the prior probability density function of k is: p ( k ) = ( k + k ) ( k )( k ) k 1 k (1 k ) k 1 (2) where denotes the gamma function.",
"We formulate the reward r t b { 0 , 1 } at round t b as a Bernoulli variable, where an action k produces a reward of 1 with a chance of k and a reward of 0 with a chance of 1 k .",
"The true utility of task k , i.e., k , is unknown, and may or may not change over time (based on stationary vs. non-stationary of task utility).",
"We define the reward as whether sampling the task k improves (or maintains) the validation metric of the primary task, r t b = (cid:40) 1 , if R t b R t b 1 0 , otherwise (3) where R t b represents the validation performance of the primary task at time t b .",
"With our reward setup above, the utility of each task ( k ) can be intuitively interpreted as the probability that multi-task learning with task k can improve (or maintain) the performance of the primary task.",
"The conjugacy properties of the Beta distribution assert that the posterior distribution is also Beta with parameters that can be updated using a simple Bayes rule, which is defined as follows (Russo et al., 2018), p ( k | r ) Bern ( r ) Beta , ( k ) Beta + r, +1 r ( k ) (4) ( k , k ) = (cid:40) ( k , k ) , if x st b (cid:54) = k ( k , k ) + ( r t b , 1 r t b ) , if x st b = k (5) where x st b is the sampled task at round t b .",
"Finally, at the end of the training, we calculate the expected value of each arm as follows: E p [ k ] = k k + k (6) Here, the expectation measures the probability of improving (or maintaining) the primary task by sampling this task.",
"To decide the next action to take, we apply Thompson Sampling (Russo et al., 2018; Chapelle and Li, 2011) to trade off exploitation (maximizing immediate performance) and exploration (investing to accumulate new information that might improve performance in the fu-ture).",
"In Thompson Sampling (Russo et al., 2018), instead of taking action k that maximizes the expectation (i.e., arg max k E p [ k ] ), we randomly sample the primary task improvement probability k from the posterior distribution k p ( k ) , and take the action k that maximizes the sampled primary task improvement probability, i.e., arg max k k .",
"At the end of the training, the task selection can proceed either via a threshold on the expectation, or take the topK tasks, and run stage-2 using the selected task subset as auxiliary tasks (details in Sec. 3.4).",
"Stronger Prior for Primary Task Note that at the beginning of training, model performance is usually guaranteed to improve from the initial random choices.",
"This causes issues in updating arm values because less useful tasks will be given high arm values when they happen to be sampled at the beginning.",
"To resolve this issue, we initially set a slightly stronger prior/arm-value in favor of the arm corresponding to the primary task.",
"Intuitively, the bandit will then sample the primary model more often at the beginning, and then start exploring auxiliary tasks when the primary model's Algorithm 1 BernThompson ( N, , , , 0 , 0 ) 1: for t b = 1 , 2 , . . . do 2: # sample model: 3: for k = 1 , . . . , N do 4: Sample k Beta ( k , k ) 5: end for 6: # select and apply action: 7: x st b arg max k k 8: Apply x st b and observe r t b 9: # non-stationarity 10: for k = 1 , . . . , N do 11: k = (1 ) k + 0 12: k = (1 ) k + 0 13: if k (cid:54) = x st b then 14: ( k , k ) ( k , k ) 15: else 16: ( k , k ) ( k , k ) + ( r t b , 1 r t b ) 17: end if 18: end for 19: end for performance stabilizes (as the arm value of the primary model will start decreasing because sampling it in later rounds produces smaller additional improvements).",
"Non-Stationary Multi-Armed Bandit Also note that the intrinsic usefulness of each task varies throughout the training (e.g., the primary task might be more important at the beginning, but not necessarily at the end), and thus the agent faces a non-stationary system.",
"In such cases, the agent should always be encouraged to explore in order to track changes as the system drifts.",
"One simple approach to inject non-stationarity is to discount the relevance of previous observations.",
"Thus we introduce a tunable decay ratio , and modify Eq.",
"3.3 as follows: ( k , k ) = (cid:40) ( k , k ) , if k (cid:54) = x st b ( k , k ) + ( r t b , 1 r t b ) , if k = x st b (7) where k = (1 ) k + 0 and k = (1 ) k + 0 , and controls how quickly uncertainty is injected into the system ( 0 , 0 are parameters of the prior).",
"Algorithm 1 presents the Thompson Sampling algorithm with a Beta-Bernoulli MAB.",
"The right side of Fig. 2 illustrates our Gaussian Process controller for automatic learning of the MTL training mixing ratio (see definition in Sec. 3.2).",
"Given the selected auxiliary tasks from the previous section, the next step is to find a proper mixing ratio of training these selected tasks along with the primary task.",
"2 Manual tuning of this mixing ratio via a large grid search over the hyperparameter values is very time and compute expensive (even when the number of selected auxiliary tasks is small, e.g., 2 or 3).",
"Thus, in our second stage, we instead apply a nonparametric Bayesian approach to search for the approximately-optimal mixing ratio.",
"In particular, we use a Gaussian Process' to sequentially search for the mixing ratio by trading off exploitation and exploration automatically.",
"Next, we describe our Gaussian Process approach in detail.",
"A Gaussian Process (Rasmussen, 2004; Snoek et al., 2012; Shahriari et al., 2016), GP ( 0 , k ) , is a non-parametric model that is fully characterized by a mean function 0 : X (cid:55) R and a positive-definite kernel or covariance function k : X X (cid:55) R .",
"Let x 1 , x 2 , ..., x n denote any finite collections of n points, where each x i represents a choice of the mixing ratio (i.e., the ratio 1 : 2 : .. N described in Sec. 3.2), and f i = f ( x i ) is the (unknown) function values evaluated at x i (true performance of the model given the selected mixing ratio).",
"Let y 1 , y 2 , ..., y n be the corresponding noisy observations (the validation performance at the end of training).",
"In the context of GP Regression (GPR), f = { f 1 , ..., f n } are assumed to be jointly Gaussian (Rasmussen, 2004), i.e., f | X N ( m , K ) , where, m i = 0 ( x i ) is the mean vector, and K i,j = k ( x i , x j ) is the covariance matrix.",
"Then the noisy observations y = y 1 , ..., y n are normally distributed around f as follows: y | f N ( f , 2 I ) .",
"Given D = ( x 1 , y 1 ) , ..., ( x n 0 , y n 0 ) , the set of random initial observations, where x i represents a mixing ratio and y i represents the corresponding model's validation performance.",
"Next, we model the GP based on these initial observations as described above.",
"We sample a next point x n 0 +1 (a mixing ratio in our case) from this GP and get its corresponding model performance y n 0 +1 , and update the GP again by now considering the n 0 + 1 points (Rasmussen, 2004).",
"We continue this process for a fixed number of steps.",
"Next, we will discuss how we perform the sampling (based on acquisition functions) and the kernels used for cal-2 Note that ideally Gaussian Process can also learn to set the mixing ratio of less important tasks to zero, hence allowing it to essentially also perform the task selection step.",
"However, in practice, first applying our task selection ThompsonSampling model (Sec. 3.3) allows GP to more efficiently search the mixing ratio space for the small number of filtered auxiliary tasks, as shown in results of Sec. 6.1.",
"Acquisition Functions Here, we describe the acquisition functions for deciding where to sample next.",
"While one could select the points that maximize the mean function, this does not always lead to the best outcome (Hoffman et al., 2011).",
"Since we also have the variance of the estimates along with the mean value of each point x i , we can incorporate this information into the optimization.",
"In this work, we use the GP-Hedge approach (Hoffman et al., 2011; Auer et al., 1995), which probabilistically chooses one of three acquisition functions: probability of improvement, expected improvement, and upper confidence bound.",
"Probability of improvement acquisition functions measure the probability that the sampled mixing ratio x i leads to an improvement upon the best observed value so far ( ), P ( f ( x i ) > ) .",
"Expected improvement additionally incorporates the amount of improvement, E [( f ( x i ) ) I ( f ( x i ) > )] .",
"The Gaussian Process upper confidence bound (GP-UCB) algorithm measures the optimistic performance upper bound of the sampled mixing ratio (Srinivas et al., 2009), i ( x i ) + i ( x i ) , for some hyper-parameter .",
"Matern Kernel The covariance function (or kernel) defines the nearness or similarity of two points in the Gaussian Process.",
"Here, we use the automatic relevance determination (ARD) Matern kernel (Rasmussen, 2004), which is parameterized by > 0 that controls the level of smoothness.",
"In particular, samples from a GP with such a kernel are differentiable (cid:98) 1 (cid:99) times.",
"When is half-integer (i.e. = p + 1 / 2 for non-negative integer p ), the covariance function is a product of an exponential and a polynomial of order p .",
"In the context of machine learning, usual choices of include 3 / 2 and 5 / 2 (Shahriari et al., 2016).",
"Datasets : We evaluate our models on several datasets from the GLUE benchmark (Wang et al., 2018): RTE, QNLI, MRPC, SST-2, and CoLA.",
"For all these datasets, we use the standard splits provided by Wang et al. (2018).",
"For dataset details, we refer the reader to the GLUE paper.",
"3 3 We did not include the remaining tasks as primary tasks, because STS-B is a regression task; MNLI is a very large dataset and does not benefit much from MTL with other tasks in the GLUE benchmark; and QQP and WNLI have dev/test discrepancies and adversarial label issues as per the GLUE Models RTE MRPC QNLI CoLA SST-2 BiLSTM+ELMo (Single-Task) (Wang et al., 2018) 50.1 69.0/80.8 69.4 35.0 90.2 BiLSTM+ELMo (Multi-Task) (Wang et al., 2018) 55.7 76.2/83.5 66.7 27.5 89.6 Our Baseline 54.0 75.7/83.7 74.0 30.8 91.3 Our AUTOSEM 58.7 78.5/84.5 79.2 32.9 91.8 Table 1: Test GLUE results of previous work, our baseline, and our AUTOSEM MTL framework.",
"Training Details : We use pre-trained ELMo 4 to obtain sentence representations as inputs to our model (Peters et al., 2018), and the Gaussian Process implementation is based on Scikit-Optimize 5 , and we adopt most of the default configurations.",
"We use accuracy as the validation criterion for all tasks.",
"For all of our experiments except QNLI and SST-2, we apply early stopping on the validation performance plateau.",
"6 The set of candidate auxiliary tasks consists of all 2-sentence classification tasks when the primary task is a classification of two sentences, whereas it consists of all two-sentence and single-sentence classification tasks when the primary task is a classification of a single sentence.",
"7 Since the utility estimates from the multi-armed bandit controller are noisy, we choose the top two tasks based on expected task utility estimates, and include additional tasks if their utility estimate is above 0.5.",
"All the results reported are the aggregate of the same experiment with two runs (with different random seeds) unless explicitly mentioned.",
"8 We use a two-layer LSTM-RNN with hidden size of 1024 for RTE and 512 for the rest of the models, and use Adam Optimizer (Kingma and Ba, 2014).",
"The prior parameters of each task in stage-1 are set to be 0 = 1 , 0 = 1 , which are commonly used in other literature.",
"For stage-1, the bandit controller iteratively selects batches of data from different tasks during training to learn the approximate importance of each auxiliary task (Graves et al., 2017).",
"In stage-2 (Gaussian Process), we sequentially draw samples of mixing ratios and evaluate each sample after full training (Snoek et al., 2012).",
"Without much tuning, we used approximately 200 rounds website's FAQ: https://gluebenchmark.com/faq 4 https://allennlp.org/elmo 5 https://scikit-optimize.github.io 6 In our initial experiments, we found early stopping on larger datasets led to sub-optimal performance, and hence we used a pre-specified maximum number of steps instead.",
"7 We made this design decision because there are only two single-sentence tasks in GLUE, so we mix them with 2sentence tasks to allow more auxiliary choices.",
"8 We use the average of validation results across runs as the tuning criterion, and use the ensemble of models across runs for reporting the test results.",
"for the stage-1 bandit-based approach, where each round consist of approximately 10 mini-batches of optimization.",
"For stage-2, we experimented with 15 and 20 as the number of samples to draw and found that 15 samples for MRPC and 20 samples for the rest of the tasks work well.",
"This brings the total computational cost for our two-stage pipeline to be approximately (15+1)x and (20+1)x, where x represents the time taken to run the baseline model for the given task.",
"This is significantly more efficient than a grid-search based manually-tuned mixing ratio setup (which would scale exponentially with the number of tasks).",
"Table 1 shows the results of our baseline and previous works (Wang et al., 2018).",
"We can see that our single-task baseline models achieve stronger performance on almost all tasks in comparison to previous work's single-task models.",
"9 Next, we present the performance of our AUTOSEM framework on top of these strong baselines.",
"Table 1 also presents the performance of our AUTOSEM framework-based MTL models.",
"As can be seen, our MTL models improve significantly (see Table 3 for standard deviations) upon their corresponding single-task baselines for all tasks, and achieve strong improvements as compared to the fairly-comparable 9 multi-task results of previous work (Wang et al., 2018).",
"10 During the task 9 Note that we do not report previous works which fine-tune large external language models for the task (e.g., OpenAI-GPT and BERT), because they are not fairly comparable w.r.t. our models.",
"Similarly, we report the nonattention based best GLUE models (i.e., BiLSTM+ELMo) for a fair comparison to our non-attention baseline.",
"Our approach should ideally scale to large pre-training/fine-tuning models like BERT, given appropriate compute resources.",
"10 Note that even though the performance improvement gaps of Wang et al. (2018) (MTL vs. baseline) and our improvements (AUTOSEM vs. our improved baseline) are similar, these are inherently two different setups.",
"Wang et al. (2018) MTL is based on a one model for all' setup (Kaiser et al., 2017; McCann et al., 2018), whereas our approach in-selection stage of our AUTOSEM framework, we observe that MultiNLI is chosen as one of the auxiliary tasks in all of our MTL models.",
"This is intuitive given that MultiNLI contains multiple genres covering diverse aspects of the complexity of language (Conneau et al., 2017).",
"Also, we observe that WNLI is sometimes chosen in the task selection stage; however, it is always dropped (mixing ratio of zero) by the Gaussian Process controller, showing that it is not beneficial to use WNLI as an auxiliary task (intuitive, given its small size).",
"Next, we discuss the improvements on each of the primary tasks and the corresponding auxiliary tasks selected by AUTOSEM framework.",
"RTE : Our AUTOSEM approach achieves stronger results w.r.t. the baseline on RTE (58.7 vs. 54.0).",
"During our task selection stage, we found out that QQP and MultiNLI tasks are important for RTE as auxiliary tasks.",
"For the second stage of automatic mixing ratio learning via Gaussian Process, the model learns that a mixing ratio of 1:5:5 works best to improve the primary task (RTE) using related auxiliary tasks of QQP and MultiNLI.",
"MRPC : AUTOSEM here performs much better than the baseline on MRPC (78.5/84.5 vs. 75.7/83.7).",
"During our task selection stage, we found out that RTE and MultiNLI tasks are important for MRPC as auxiliary tasks.",
"In the second stage, AUTOSEM learned a mixing ratio of 9:1:4 for these three tasks (MRPC:RTE:MultiNLI).",
"QNLI : Again, we achieve substantial improvements with AUTOSEM w.r.t. baseline on QNLI (79.2 vs. 74.0).",
"Our task selection stage learned that WNLI and MultiNLI tasks are best as auxiliary tasks for QNLI.",
"We found that the Gaussian Process further drops WNLI by setting its mixing ratio to zero, and returns 20:0:5 as the best mixing ratio for QNLI:WNLI:MultiNLI.",
"CoLA : We also observe a strong performance improvement on CoLA with our AUTOSEM model w.r.t. our baseline (32.9 vs. 30.8).",
"During our task selection stage, we found out that MultiNLI and WNLI tasks are important for CoLA as auxiliary tasks.",
"In the second stage, GP learns to drop WNLI, and found the mixing ratio of 20:5:0 for CoLA:MultiNLI:WNLI.",
"terpretably chooses the 2-3 tasks that are most beneficial for the given primary task.",
"Also see Sec. 4 for comparison of training speeds for these two setups.",
"and WNLI as auxiliary tasks and the stage-2 Gaussian Process model drops MRPC and WNLI by setting their mixing ratio to zero (learns ratio of 13:5:0:0 for SST-2:MultiNLI:MRPC:WNLI).",
"In this section, we examine the usefulness of each stage of our two-stage MTL pipeline.",
"11 Removing Stage-1 : The purpose of the Beta-Bernoulli MAB in stage-1 is to find useful auxiliary tasks for the given primary task.",
"Here, to understand its importance, we remove the task selection part, and instead directly run the Gaussian Process (GP) model on all tasks (see w/o Stage-1' row in Table 2).",
"We can see that by removing the task selection stage, the Gaussian Process model can still outperform the baseline, indicating the usefulness of the GP, but the large mixing ratio search space causes the GP to be unable to efficiently find the best mixing ratio setting.",
"Removing Stage-2 : Given the selected tasks from stage-1, the goal of the Gaussian Process in stage-2 is to efficiently find the approximately-optimal mixing ratio.",
"To examine its usefulness, we replace the Gaussian Process controller by manually tuning a grid of mixing ratios, where the number of tuning experiments equals to the number of steps used in the Gaussian Process model (for a fair comparison).",
"Table 2 shows the results by removing stage-2.",
"We can see that a grid search over hyper-parameters can improve upon the baseline, indicating the usefulness of stage-1 task selection, but a reasonable-sized fair-comparison grid search (i.e., not exhaustive over all ratio values) is not able to match our stage-2 GP process that leverages prior experimental results to more efficiently find the best setting.",
"11 We present this ablation only on MRPC for now, because GP stage-2 takes a lot of time without the task selection stage.",
"In this section, we provide the mean and standard deviation of our baseline and multi-task models (over three runs) on the validation set.",
"Note that the test set is hidden, so we cannot do these studies on it.",
"As seen in Table 3, our multi-task models clearly surpass the performance of baseline models w.r.t. standard deviation gaps, in all tasks.",
"In Fig. 3, we show an example of the task utility estimates from the stage-1 multi-armed bandit controller (Eq. 3.3) on SST-2.",
"The x-axis represents the task utility, and the y-axis represents the probability density over task utility.",
"Each curve represents a task (the blue curve corresponds to the primary task, SST-2, and the rest of the curves correspond to auxiliary tasks), and the width of the bars represents the confidence interval of their estimates.",
"We can see that the bandit controller gives the highest (and most confident) utility estimate for the primary task, which is intuitive given that the primary task should be the most useful task for learning itself.",
"Further, it gives 2-3 tasks moderate utility estimates (the corresponding expected values are around 0.5), and relatively lower utility estimates for the remaining tasks (the corresponding expected values are lower than 0.5).",
"We additionally experimented with educated-guess' baseline models, where MTL is performed using manual intuition mixtures that seem a",
"priori sensible.",
"12 For example, with MRPC as the primary task, our first educated-guess baseline is to choose other similar paraphrasing-based auxiliary tasks, i.e., QQP in case of GLUE.",
"This MRPC+QQP model achieves 80.8, whereas our AUTOSEM framework chose MRPC+RTE+MultiNLI and achieved 81.2.",
"Furthermore, as our second educated-guess baseline, we added MultiNLI as an auxiliary task (in addition to QQP), since MultiNLI was helpful for all tasks in our MTL experiments.",
"This educated-guess MRPC+QQP+MultiNLI model achieves 80.9 (vs. 81.2 for our AUTOSEM model).",
"This suggests that our AUTOSEM framework (that automatically chose the seemingly less-related RTE task for MRPC) is equal or better than manual intuition based educated-guess models.",
"We presented the AUTOSEM framework, a two-stage multi-task learning pipeline, where the first stage automatically selects the relevant auxiliary tasks for the given primary task and the second stage automatically learns their optimal mixing ratio.",
"We showed that AUTOSEM performs better than strong baselines on several GLUE tasks.",
"Further, we ablated the importance of each stage of our AUTOSEM framework and also discussed the intuition of selected auxiliary tasks.",
"We thank the reviewers for their helpful comments.",
"This work was supported by DARPA (YFA17-D17AP00022), ONR (N00014-18-1-2871), Google, Facebook, Baidu, Salesforce, and Nvidia.",
"The views contained in this article are those of the authors and not of the funding agency.",
"12 These educated-guess models replace our stage-1 automatic auxiliary task section with manual intuition taskmixtures; but we still use our stage-2 Gaussian Process for mixing ratio learning, for fair comparison."
]
| [
"abstain",
"abstain",
"abstain",
"objective",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"method",
"objective",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"objective",
"other",
"other",
"other",
"method",
"other",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"objective",
"result",
"abstain",
"other",
"other",
"other",
"other"
]
|
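The AUTOSEM sentences above describe a two-stage pipeline: a Beta-Bernoulli multi-armed bandit that selects auxiliary tasks, followed by a Gaussian Process that learns mixing ratios. Below is a minimal sketch of the stage-1 idea only, not the authors' implementation: Thompson sampling with one Beta posterior per candidate task, updated from a binary reward such as "did a batch from this task improve the validation metric?". The class and method names (`BetaBernoulliTaskSelector`, `choose_task`, `observe`) and the reward definition are illustrative assumptions.

```python
import random

class BetaBernoulliTaskSelector:
    """Minimal Thompson-sampling sketch of stage-1 auxiliary-task selection.

    Each candidate task keeps a Beta(alpha, beta) posterior over its utility
    for the primary task; sampling from the posteriors picks the next task
    to draw a training batch from, and a binary reward updates the posterior.
    """

    def __init__(self, task_names):
        # Beta(1, 1) = uniform prior over each task's utility.
        self.posteriors = {t: [1.0, 1.0] for t in task_names}

    def choose_task(self):
        # Thompson sampling: draw one utility sample per task, take the argmax.
        samples = {t: random.betavariate(a, b)
                   for t, (a, b) in self.posteriors.items()}
        return max(samples, key=samples.get)

    def observe(self, task, helped):
        # helped is 1 if the task's batch improved the validation metric, else 0.
        a, b = self.posteriors[task]
        self.posteriors[task] = [a + helped, b + (1 - helped)]

    def expected_utilities(self):
        # Posterior means, analogous to the utility estimates plotted for SST-2.
        return {t: a / (a + b) for t, (a, b) in self.posteriors.items()}

selector = BetaBernoulliTaskSelector(["SST-2", "MultiNLI", "QQP", "WNLI"])
for _ in range(100):
    task = selector.choose_task()
    reward = 1 if task in ("SST-2", "MultiNLI") else 0  # toy reward signal
    selector.observe(task, reward)
print(selector.expected_utilities())
```

Tasks whose posterior mean settles near 0.5 mirror the "moderate utility" estimates described for SST-2's auxiliary tasks; the 2-3 highest-utility tasks would then be handed to the stage-2 Gaussian Process, which searches over mixing ratios such as 1:5:5 or 20:0:5.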
[
"Image captioning is a multimodal problem that has drawn extensive attention in both the natural language processing and computer vision community.",
"In this paper, we present a novel image captioning architecture to better explore semantics available in captions and leverage that to enhance both image representation and caption generation.",
"Our models first construct caption-guided visual relationship graphs that introduce beneficial inductive bias using weakly supervised multi-instance learning.",
"The representation is then enhanced with neighbouring and contextual nodes with their textual and visual features.",
"During generation, the model further incorporates visual relationships using multi-task learning for jointly predicting word and object/predicate tag sequences.",
"We perform extensive experiments on the MSCOCO dataset, showing that the proposed framework significantly outperforms the baselines, resulting in the state-of-the-art performance under a wide range of evaluation metrics.",
"The code of our paper has been made publicly available.",
"1 1 Introduction Automatically generating a short description for a given image, a problem known as image captioning (Chen et al., 2015), has drawn extensive attention in both the natural language processing and computer vision community.",
"Inspired by the success of encoder-decoder frameworks with the attention mechanism, previous efforts on image captioning adopt variants of pre-trained convolution neural networks (CNN) as the image encoder and recurrent neural networks (RNN) with visual attention as the decoder (Lu et al., 2017; Anderson et al., 2018; Xu et al., 2015; Lu et al., 2018).",
"explicitly investigating semantic cues from texts and images.",
"To remedy that, some research has also explored to detect high-level semantic concepts presented in images to improve caption generation (Wu et al., 2016; Gan et al., 2017; You et al., 2016; Fang et al., 2015; Yao et al., 2017).",
"It is believed by many that the inductive bias that leverages structured combination of concepts and visual relationships is of importance, which has led to better captioning models (Yao et al., 2018; Guo et al., 2019; Yang et al., 2019).",
"These approaches obtain visual relationship graphs using models pre-trained from visual relationship detection (VRD) datasets, e.g., Visual Genome (Krishna et al., 2017), where the visual relationships capture semantics between pairs of localized objects connected by predicates , including spatial (e.g., cake-on-desk ) and non-spatial semantic relationships (e.g., man-eat-food ) (Lu et al., 2016).",
"As in many other joint text-image modeling problems, it is crucial to obtain a good semantic representation in image captioning that bridges semantics in language and images.",
"The existing approaches, however, have not yet adequately leveraged the semantics available in captions to construct image representation and generate captions.",
"As shown in Figure 1, although VRD detection models present a strong capacity in predicting salient objects and the most common predicates, they often ignore predicates vital for captioning (e.g., grab in this example).",
"Exploring better models would still be highly desirable.",
"A major challenge for establishing a structural connection between captions and images is that the links between predicates and the corresponding object regions are often ambiguous: within the image-level label ( obj 1 , pred, obj 2 ) extracted from captions, there may exist multiple object regions corresponding to obj 1 and obj 2 .",
"In this paper, we propose to use weakly supervised multi-instance learning to detect if a bag of object (region) pairs in an image contain certain predicates, e.g., predicates appearing in ground-truth captions here (or in other applications, they can be any given predicates under concerns).",
"Based on that we can construct caption-guided visual relationship graphs.",
"Once the visual relationship graphs (VRG) are built, we propose to adapt graph convolution operations (Marcheggiani and Titov, 2017) to obtain representation for object nodes and predicate nodes.",
"These nodes can be viewed as image representation units used for generation.",
"During generation, we further incorporate visual relationshipswe propose multi-task learning for jointly predicting word and tag sequences, where each word in a caption could be assigned with a tag, i.e., object , predicate , or none , which takes as input the graph node features from the above visual relationship graphs.",
"The motivation for predicting a tag in each step is to regularize which types of information should be taken into more consideration for generating words: predicate nodes features, object nodes features, or the current state of language decoder.",
"We study different types of multi-task blocks in our models.",
"As a result, our models consist of three major components: constructing caption-guided visual relationship graphs (CGVRG) with weakly-supervised multi-instance learning, building context-aware CGVRG, and performing multi-task generation to regularize the network to take into account explicit predicate object/predicate constraints.",
"We perform extensive experiments on the MSCOCO (Lin et al., 2014) image captioning dataset with both supervised and Reinforcement learning strategy (Rennie et al., 2017).",
"The experiment results show that the proposed models significantly outperform the baselines and achieve the state-of-the-art performance under a wide range of evaluation metrics.",
"The main contributions of our work are summarized as follows: We propose to construct caption-guided visual relationship graphs that introduce beneficial inductive bias by better bridging captions and images.",
"The representation is further enhanced with neighbouring and contextual nodes with their textual and visual features.",
"Unlike existing models, we propose multi-task learning to regularize the network to take into account explicit object/predicate constraints in the process of generation.",
"The proposed framework achieves the state-of-the-art performance on the MSCOCO image captioning dataset.",
"We provide detailed analyses on how this is attained.",
"Image Captioning A prevalent paradigm of existing image captioning methods is based on the encoder-decoder framework which often utilizes a CNN-plus-RNN architecture for image encoding and text generation (Donahue et al., 2015; Vinyals et al., 2015; Karpathy and Fei-Fei, 2015).",
"Soft or hard visual attention mechanism (Xu et al., 2015; Chen et al., 2017) has been incorporated to focus on the most relevant regions in each generation step.",
"Furthermore, adaptive attention (Lu et al., 2017) has been developed to decide whether to rely on visual features or language model states in each decoding step.",
"Recently, bottom-up attention techniques (Anderson et al., 2018; Lu et al., 2018) have also been proposed to find the most relevant regions based on bounding boxes.",
"There has been increasing work focusing on filling the gap between image representation and caption generation.",
"Semantic concepts and attributes detected from images have been demonstrated to be effective in boosting image captioning when used in the encoder-decoder frameworks (Wu et al., 2016; You et al., 2016; Gan et al., 2017; Yao et al., 2017).",
"Visual relationship (Lu et al., 2016) and scene graphs (Johnson et al., 2015) have been further employed for image encoder in a unimodal (Yao et al., 2018) or multi-modal (Yang et al., 2019; Guo et al., 2019) manner to improve the over-Figure 2: An overview of the proposed image captioning framework.",
"all performance via the graph convolutional mechanism (Marcheggiani and Titov, 2017).",
"Besides, Kim et al. (2019) proposes a relationship-based captioning task to lead better understanding of images based on relationship.",
"As discussed in introduction, we will further explore the relational semantics available in captions for both constructing image representation and generating caption.",
"Visual Relationship Detection Visual relations between objects in an image have attracted more studies recently.",
"Conventional visual relation detection have dealt with (cid:104) subject-predicate-object (cid:105) triples, including spatial relation and other semantic relation.",
"Lu et al. (2016) detect the triples by performing subject, object, and predicate classifica-tion separately.",
"Li et al. (2017) attempt to encode more distinguishable visual features for visual relationships detection.",
"Probabilistic output of object detection (Dai et al., 2017; Zhang et al., 2017) is also considered to reason about the visual relationships.",
"Given an image I , the goal of image captioning is to generate a visually grounded natural language sentence.",
"We learn our model by minimizing the cross-entropy loss with regard to the ground truth caption S = { w 1 , w 2 , ..., w T } : LXE = log p ( S | I ) (1) = T (cid:88) t =1 log p ( w t | w <t , I ) (2) The model is further tuned with a Reinforcement Learning (RL) objective (Rennie et al., 2017) to maximize the reward of the generated sentence S : JRL = ES p ( S | I ) ( d ( S , S )) (3) where d is a sentence-level scoring metric.",
"An overview of our image captioning framework is depicted in Figure 2, with the detail of the components described in the following sections.",
"A general challenge of modeling p ( S | I ) is obtaining a better semantic representation in the multimodal setting to bridge captions and images.",
"Our framework first focuses on constructing caption-guided visual relationship graphs (CGVRG).",
"The process of constructing CGVRG first extracts relationship triples from captions using textual scene graph parser as described in (Schuster et al., 2015).",
"Our framework employs Faster R-CNN (Ren et al., 2015) to recognize instances of objects and returns a set of image regions for objects: V = { v 1 , v 2 , , v n } .",
"The main focus of CGVRG is constructing visual relationship graphs.",
"As discussed in introduction, the existing approaches use pre-trained VRD (vi-sual relationship detection) models, which often ignore key relationships needed for captioning.",
"This gap can be even more prominent if the do-main/data used to train image-captioning is farther from where VRD is pretrained.",
"A major challenge to use predicate triples from captions to construct CGVRG is that, the links between predicates and the corresponding object regions are often ambiguous as discussed in introduction.",
"To solve this problem, we use weakly supervised, multi-instance learning.",
"Obtaining Representation for Object Region Pairs For an image I with a list of salient object regions obtained in object detection { v 1 , v 2 , , v n } , we have a set of region pairs U = { u 1 , u 2 , , u N } , where N = n ( n 1) .",
"As shown in Figure",
"3(b), the visual features of any two object regions and their union box will be collected to compute p r j u n , the probability that a region pair u n is associated with the predicate r j , where r j R and R = { r 1 , r 2 , , r M } include frequent predicates obtained from the captions in training data.",
"The feed-forward network of Figure",
"3(b) will be trained in weakly supervised training.",
"Weakly Supervised Multi-Instance Training As shown in Figure",
"3(c), during training, one object pair t = ( o 1 , o 2 ) , e.g., ( women , hat ), can correspond to multiple pairs of object regions: the four women-hat combinations between the two women and two hats.",
"To make our description clearer, we refer to t = ( o 1 , o 2 ) as an object pair , and the four women-hat pairs in the image as object region pairs .",
"Accordingly, for a triple we extracted t = ( o 1 , r, o 2 ) , r R , e.g., ( woman , in , hat ), the predicate r (i.e., in ) can be associated with multiple object region pairs (here, ( w0 , h0 ), ( w0 , h1 ), ( w1 , h0 ), and ( w1 , h1 )).",
"To predict predicates over object region pairs, we propose to use Multi-Instance Learning (Fang et al., 2015) as our weakly supervised learning approach.",
"Multi-Instance Learning receives a set of labeled bags, each bag containing a set of instances.",
"A bag would be labeled negative if all the instances in it are negative.",
"On the other hand, a bag is labeled positive if there is at least one positive instance in the bag.",
"In our problem, an instance is a region pair.",
"Therefore for a candidate predicate r R (e.g., in ), we use N r to denote the object region pairs corresponding to predicate r .",
"If r appears in the caption S , N r would be a positive bag.",
"We use N \\ N r to denote the negative bag for r .",
"When r is not contained in the caption, the entire N would be the negative bag (the last row of Figure",
"3(c)).",
"The probability of a bag b having the predicate r j is measured with noisy-OR: p r j b = 1 (cid:89) n b (1 p r j u n ) (4) where p r j u n has been introduced above.",
"We adopt the cross-entropy loss on the basis of all predicate Figure 3: Subcomponents in constructing CGVRG:",
"caption S : L ( I )= M (cid:88) j =1 (cid:104) 1 ( r j S ) (log p r j N rj +log(1 p r j N\\N rj )) + 1 ( r j / S ) (log(1 p r j N )) (cid:105) (5)",
"Constructing the Graphs Once obtaining the trained module, we can build a CGVRG graph G = ( V , E ) for a given image I , where the node set V includes two types of nodes: object nodes and predicate nodes.",
"We denote o i as the i th object node and r ij as a predicate node that connects o i and o j (refer to Figure 1 or the middle part of Figure 2).",
"The edges in E are added based on triples; i.e., ( o i , r ij , o j ) will assign two directed edges from node o i to r ij and from r ij to o j , respectively.",
"Note that due to the use of the proposed weakly supervised models, the acquired graphs can now contain predicates that exist in captions but not in the VRD models used in the previous work that does not explicitly consider predicates in captions.",
"We will show in our experiments that this improves captioning quality.",
"We further enhance CGVRG in the context of both modalities, images and text, using graph convolution networks.",
"We first integrate visual and textual features: the textual features for each node are from a word embedding and the visual features are regional visual representations extracted via RoI pooling from Faster R-CNN.",
"The specific features g o i , g r ij for object o i and predicate r ij are shown as follows: g o i = o ([ g to i ; g vo i ]) (6) g r ij = r ( g tr ij ) (7) where r and o are feed-forward networks using ReLU activation; g to i , g tr ij , and g vo i denote textual features of o i , r ij and visual features of o i , respectively.",
"We present the process of encoding G to produce a new set of context-aware representation X .",
"The representation of predicate r ij and o i are computed as follows: x r ij = f r ([ g o i ; g o j ; g r ij ]) (8) (9) x o i = 1 N i (cid:88) r N out ( o i ) f out ([ g o i ; g r ]) + (cid:88) r N in ( o i ) f in ([ g o i ; g r ]) where f r , f in , f out are feed-forward networks using ReLU activation.",
"N in and N out denote the adjacent nodes with o i as head and tail, respectively.",
"N i is the total number of adjacent nodes.",
"Unlike the existing image-captioning models, we further incorporate visual relationships into generation we propose multi-task learning for jointly predicting word and tag sequences as each word in a caption will be assigned a tag, i.e., object , predicate , or none .",
"The module takes as input the graph node features from the context-aware CGVRG.",
"The output of the generation module is hence the sequence of words y = { y 1 , , y T } as well as the tags z = { z 1 , , z T } .",
"Two different approaches are leveraged to train the two tasks jointly.",
"denote hidden states of bottom and top LSTM in time step t 1 , respectively; e is the word embedding table.",
"The state h 1 t is then used as a query to attend over graph node features { x o } and { x r } separately to get attended features x rt and x ot : x rt = ATT( h 1 t , { x r } ) (11) x ot = ATT( h 1 t , { x o } ) (12) where ATT is a soft-attention operation between a query and graph node features.",
"The top LSTM works as a language model decoder, in which the hidden state h 20 is initialized with the mean-pooled semantic representation of all detected predicates { r } .",
"In time step t , the input consists of the output from the bottom LSTM layer h 1 t and attended graph features x rt , x ot : h 2 t = LSTM( h 2 t 1 , [ h 1 t ; x ot ; x rt ]) (13) 3.3.1 Multi-task Learning We propose two different blocks to perform the two tasks jointly, as shown in Figure 4. In each step, a multi-task learning block deals with task s 1 as predicting a tag z t and task s 2 as predicting a word y t .",
"Specifically MT-I treats the two tasks independent of each other: p ( z t | y <t , I ) = softmax( f z ( h 2 t )) (14) p ( y t | y <t , I ) = softmax( f y ( h 2 t )) (15) where f z and f y are feed-forward networks with ReLU activation.",
"Inspired by the adaptive attention mechanism (Lu et al., 2017), MT-II further exploits the probability from p ( z t | y <t , I ) to integrate the representation of current hidden state h 2 t and attended features from graph x rt , x ot : p ( y t | y <t , I ) = softmax( f y ( h 2 t )) , (16) h 2 t = h 2 t p na + x rt p r + x ot p o (17) p ( z t | y <t , I ) = softmax( f z ( h 2 t )) (18) where p na , p r , p o denote the probabilities of tag z t being none, predicate, and object, respectively.",
"The multi-task loss function is as follows: LMT ( I ) = T (cid:88) t =1 log p ( y t | y <t , I )+ log p ( z t | y <t , I ) (19) where is the hyper-parameter to balance the two tasks.",
"The overall training process can be broken down into two parts: the CGVRG detection module training period and the caption generator training period; the latter includes cross-entropy optimization and the CIDEr-D optimization.",
"For CGVRG detection module training, the detection module is optimized with the multi-instance learning loss in Equation 5. For caption generator training, the model is first optimized with the cross-entropy loss in Equation 19, and then we directly optimize the model with the expected sentence-level reward (CIDEr-D in this work) shown in Equation 3 by self critical sequence learning (Rennie et al., 2017).",
"In the inference stage, given an image, the CGVRG detection module obtains a graph upon them.",
"The graph convolution network encodes graphs to obtain the context aware multi-modal representations.",
"Then graph object/predicate node features are further provided to the multi-task caption generation module to generate sequences with beam search.",
"MSCOCO We perform extensive experiments on the MSCOCO benchmark (Lin et al., 2014).",
"The Karpathy split (Karpathy and Fei-Fei, 2015) is adopted for our model selection and offline testing, which contains 113K training images, 5K validation images and 5K testing images.",
"As for the online test server, the result is trained on the entire training and validation set (123K images).",
"To evaluate the generated captions, we employ standard evaluation metrics: SPICE (Anderson et al., 2016), CIDEr-D (Vedantam et al., 2015), METEOR (Denkowski and Lavie, 2014), ROUGE-L (Lin, 2004), and BLEU (Papineni et al., 2002).",
"Visual Genome We use the Visual Genome (Kr-ishna et al., 2017) dataset to pre-train our object detection model.",
"The dataset includes 108K images.",
"To pre-train the object detection model with Faster R-CNN, we strictly follow the setting in (An-derson et al., 2018), taking 98K/5K/5K for training, validation, and testing, respectively.",
"The split is carefully selected to avoid contamination of the MSCOCO validation and testing sets, since nearly 51K Visual Genome images are also included in the MSCOCO dataset.",
"Implementation Details We use Faster R-CNN (Ren et al., 2015) to identify and localize instances of objects.",
"The object detection phase consists of two modules.",
"The first module proposes object regions using a deep CNN, i.e., ResNet-101 (He et al., 2016).",
"The second module extracts feature maps using region-of-interest pooling for each box proposals.",
"Practically, we take the final output of the ResNet-101 and perform non-maximum suppression for each object class with an IoU threshold.",
"As a result, we obtain a set of image regions, V = { v 1 , v 2 , , v n } , where n [10 , 100] varies with input images and confi-dence thresholds.",
"Each region is represented as a 2,048-dimensional vector obtained from the pool5 layer after the RoI pooling.",
"We then apply a feed-forward network with a 1000-dimensional output layer for predicates classification.",
"The network of the same size is also used for feature projection ( o , i ) and GCN ( f r , f in , f out ).",
"In the decoder LSTM, the word embedding dimension is set to be 1,000 and the hidden unit dimension in the top-layer and bottom-layer LSTM is set to be 1,000 and 512, respectively.",
"The trade-off parameter in multi-task learning is 0.15.",
"The whole system is trained with the Adam optimizer.",
"We set the initial learning rate to be 0.0005 and mini-batch size to be 100.",
"The maximum number of training epochs is 30 for Cross-entropy and CIDEr-D optimization respectively.",
"For sequence generation in the inference stage, we adopt the beam search strategy and set the beam size to be 3.",
"We construct object and predicate categories for VRD training.",
"Similar to (Lu et al., 2018), we manually expand the original 80 object categories to Cross entropy CIDEr-D optimization B1 B4 ME RG CD SP B1 B4 ME RG CD SP SCST -31.3 26.0 54.3 101.3 -33.3 26.3 55.3 111.4 LSTM-A 75.4 35.2 26.9 55.8 108.8 20.0 78.6 35.5 27.3 56.8 118.3 20.8 Up-Down (Baseline) 77.2 36.2 27.0 56.4 113.5 20.3 79.8 36.3 27.7 56.9 120.1 21.4 StackCap 76.2 35.2 26.5 -109.1 -78.6 36.1 27.4 -120.4 CAVP ----38.6 28.3 58.5 126.3 21.6 GCN-LSTM 77.3 36.8 27.9 57.0 116.3 20.9 80.5 38.2 28.5 58.3 127.6 22.0 VSUA ----38.4 28.5 58.4 128.6 22.0 SGAE 77.6 36.9 27.7 57.2 116.7 20.9 80.8 38.4 28.4 58.6 127.8 22.1 This Work (MT-I) 78.1 38.4 28.2 58.0 119.0 21.1 80.8 38.9 28.8 58.7 129.6 22.3 This Work (MT-II) 77.9 38.0 28.1 57.6 117.8 21.3 80.5 38.6 28.7 58.4 128.7 22.4 Table 1: Single-model performances on the MSCOCO dataset (Karpathy split) in both cross-entropy and RL training period.",
"413 fine-grained categories by utilizing a list of caption tokens.",
"For example, the object category person is expanded to a list of fine-grained categories [ boy , man , ] .",
"Then for all extracted triples that have both objects appearing in the 413 category list, we select the 200 most frequent predicates as our predicate categories.",
"Model Comparison We compare our models with the following state-of-the-art models: (1) SCST (Rennie et al., 2017) employs an improved policy gradient algorithm by utilizing its own inference output to normalize the rewards; (2) LSTM-A (Yao et al., 2017) integrates the detected image attributes into the CNN-plus-RNN image captioning framework; (3) Up-Down (Anderson et al., 2018) uses both a bottom-up and top-down attention mechanism to focus more on salient object regions; (4) GCN-LSTM (Yao et al., 2018) leverages graph convolutional networks over the detected objects and relations; (5) CAVP (Liu et al., 2018) proposes a context-aware policy network by accounting for visual attentions as context for generation; (6) VSUA (Guo et al., 2019) exploits the alignment",
"between words and different categories of graph nodes; (7) SAGE (Yang et al., 2019) utilizes an additional graph encoder to incorporate language inductive bias into the encoder-decoder framework.",
"Our baseline is built on Up-Down (Anderson et al., 2018).",
"We propose two variants of final models using different multi-task blocks, namely MT-I and MT-II shown in Fig",
"4(b).",
"We conduct extensive comparisons on the dataset with the above state-of-the-art techniques.",
"We also perform detailed analysis to demonstrate the impact of different components of our framework.",
"Table 1 lists the results of various single models on the MSCOCO Karpathy split.",
"Our model outperforms the baseline model significantly, with CIDEr-D scores being improved from 113.5 to 119.0 and 120.1 to 129.6 in the cross-entropy and CIDEr-D optimization period, respectively.",
"In addition, the model with MT-II shows an advantage over that with MT-I on SPICE, which implies that the proposed adaptive visual attention mechanism works in multi-task block II.",
"Table 2 compares our model with three models that also incorporate VRG, plus the baseline model, on the MSCOCO online test server.",
"Our model improves significantly from the baseline (from 120.5 to 126.7 in CIDEr-D) and has achieved the best results across all evaluation metrics on c40 (40 reference captions).",
"Figure 5 shows the effect of taking different weights in the multi-task loss item (Equation 19).",
"The results indicate that the weight around 0.15 yields the best performance in both multi-task blocks.",
"Meanwhile, Figure 6 shows the ablation analysis by removing the multi-task caption generation and graph convolution operation, respectively, to check the effect of these components.",
"The results MTI MTII 0 .",
"show that both the graph convolution operation and multi-task learning help improve the quality of the generated captions.",
"Note that the code of our paper has been made publicly available in the webpage provided in the abstract.",
"Human evaluation We performed human evaluation with three non-author human subjects, using a five-level Likert scale.",
"For each image and each pair of systems in comparison (MT-I vs. Up-Down, MT-I vs. GCN-LSTM, and MT-I vs. SGAE), we show the captions generated by the two systems to the human subjects.",
"We ask each subject if the first caption sentence is: significantly better ( 2 ), better ( 1 ), equal ( 0 ), worse ( 1 ), or significantly worse ( 2 ), compared to the second.",
"Following (Zhao et al., 2019), we obtain the subjects' ratings for fidelity (the first caption is superior in terms of making less mistakes?), informativeness (the first caption provides more informative and detailed description?), and fluency (the first caption is more fluent?).",
"For each question asked for an image, we calculate the average of the three subjects' scores.",
"For each pair of models in comparison, we randomly sampled 50 images from the Karpathy testset.",
"MT-I vs. Up-Down: For fidelity, MT-I is better or significantly better on 44% images (where the average of the three human subjects' scores is larger than 0 . 5 ), equal to Up-Down on 46% images (the average is in range [ 0 . 5 , 0 . 5] ), and worse or significantly worse on 10% images (average is less than 0 . 5 ).",
"For informativeness, MT-I is better or significantly better on 60% images, equal on 34%, and worse or significantly worse on 6%.",
"For fluency, the numbers are 18%, 72%, and 10%.",
"MT-I vs. GCN-LSTM: For fidelity, MT-I is better or significantly better on 40% images, equal to GCN-LSTM on 52%, and worse or significantly worse on 8%.",
"For informativeness, the numbers are 32%, 50%, and 18%, respectively.",
"For fluency, the numbers are 12%, 76%, and 12%.",
"MT-I vs. SGAE: For fidelity, MT-I is better or significantly better on 36% images, equal to SGAE on 56%, and worse or significantly worse on 8%.",
"For informativeness, the numbers are 30%, 48%, and 22%, respectively.",
"For fluency, the numbers are 6%, 90%, and 4%.",
"Figure 7 shows several specific examples, each including an image, a detected caption guided visual relationship graph, a ground truth sentence, a generated word sequence, and a learned visual relationship composition.",
"We can see that the proposed model generates more accurate captions coherent to the visual relationship detected in the image.",
"Consider the upper middle demo as an example; our model extracts a visual relationship graph covering the critical predicates filled with and in front of for understanding the image, thus producing a comprehensive description.",
"In addition, we observe that the model generates the triple ( table, filled with, food ) , which is a new composition that has not appeared in the training set.",
"Figure 8 visualizes the effect of our tag sequence generation process.",
"Specifically, we visualize the tag probabilities of the object, predicate, and none category in each generation step.",
"Our model successfully learns to distinguish the correct category for each time step, which is in consistent with the tag of the predicted word.",
"For example, for the generated words flying over, the probability for the predicate category is the highest, which is also true for words like bird and water.",
"This paper presents a novel image captioning architecture that constructs caption-guided visual relationship graphs to introduce beneficial inductive",
"bias to better utilize captions.",
"The representation is further enhanced with text and visual features of neighbouring nodes.",
"During generation, the network is regularized to take into account explicit object/predicate constraints with multi-task learning.",
"Extensive experiments are performed on the MSCOCO dataset, showing that the proposed framework significantly outperforms the baselines, resulting in the state-of-the-art performance under various evaluation metrics.",
"In the near future we plan to extend the proposed approach to several other language-vision modeling tasks.",
"We would like to thank the anonymous reviewers for their valuable comments.",
"This research of the first and last author is supported by the Natural Sciences and Engineering Research Council of Canada (NSERC)."
]
| [
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"result",
"method",
"abstain",
"objective",
"objective",
"objective",
"objective",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"other"
]
|
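The weakly supervised multi-instance objective in Eqs. 4-5 of the row above can be made concrete with a small sketch. This is a minimal, self-contained illustration under the stated definitions (a bag is positive if at least one region pair expresses the predicate), not the paper's implementation; the function names and the plain-Python, non-batched formulation are assumptions.

```python
import math

def bag_probability(pair_probs):
    # Noisy-OR (Eq. 4): a bag expresses predicate r_j unless every
    # region pair in it fails to, i.e. p_b = 1 - prod(1 - p_u).
    prod = 1.0
    for p in pair_probs:
        prod *= 1.0 - p
    return 1.0 - prod

def mil_loss(pair_probs, positive_pairs, caption_predicates, eps=1e-8):
    """Multi-instance loss over all candidate predicates (Eq. 5).

    pair_probs: dict r_j -> per-region-pair probabilities over all N pairs.
    positive_pairs: dict r_j -> indices forming the positive bag N_{r_j}.
    caption_predicates: set of predicates r_j that appear in the caption S.
    """
    loss = 0.0
    for r, probs in pair_probs.items():
        pos_idx = positive_pairs.get(r, set())
        if r in caption_predicates:
            pos = [p for i, p in enumerate(probs) if i in pos_idx]
            neg = [p for i, p in enumerate(probs) if i not in pos_idx]
            # The positive bag should fire; its complement N \ N_r should not.
            loss -= math.log(bag_probability(pos) + eps)
            loss -= math.log(1.0 - bag_probability(neg) + eps)
        else:
            # No caption evidence: all N pairs form one negative bag.
            loss -= math.log(1.0 - bag_probability(probs) + eps)
    return loss

# Toy example: four woman-hat region pairs; predicate 'in' is in the caption.
probs = {"in": [0.7, 0.2, 0.1, 0.1], "on": [0.05, 0.1, 0.02, 0.03]}
print(mil_loss(probs, {"in": {0, 1, 2, 3}}, {"in"}))
```

In practice the per-pair probabilities p^{r_j}_{u_n} would come from the feed-forward network over the two region features and their union box, computed in a differentiable framework; the sketch only shows how the bag-level noisy-OR aggregation turns image-level caption labels into a trainable signal.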
[
"The COVID-19 pandemic has spawned a diverse body of scientific literature that is challenging to navigate, stimulating interest in automated tools to help find useful knowledge.",
"We pursue the construction of a knowledge base (KB) of mechanisms a fundamental concept across the sciences, which encompasses activities, functions and causal relations, ranging from cellular processes to economic impacts.",
"We extract this information from the natural language of scientific papers by developing a broad, unified schema that strikes a balance between relevance and breadth.",
"We annotate a dataset of mechanisms with our schema and train a model to extract mechanism relations from papers.",
"Our experiments demonstrate the utility of our KB in supporting interdisciplinary scientific search over COVID-19 literature, outperforming the prominent PubMed search in a study with clinical experts.",
"Our search engine, dataset and code are publicly available.",
"1 1 Introduction Some experts are familiar with one field, such as AI or nanotechnology [...] no one is capable of connecting the dots and seeing how breakthroughs in AI might impact nanotechnology, or vice versa. Yuval Noah Harari, Homo Deus, 2016 The effort to mitigate the COVID-19 pandemic is an interdisciplinary endeavor the world has rarely seen (Apuzzo and Kirkpatrick, 2020).",
"As one recent example, expertise in virology, physics, epidemiology and engineering enabled a group of 200 scientists to understand and bring attention to the * Equal contribution.",
"1 https://covidmechanisms.apps.allenai.org/ a deep learning framework for design of antiviral candidate drugs Temperature increase can facilitate the destruction of SARS-COV-2 gpl16 antiserum blocks binding of virions to cellular receptors ...food price inflation is an unintended consequence of COVID-19 containment measures Retrieved from CORD-19 papers Ent1: deep learning Ent2: drugs Query mechanism relations Ent1: heat Ent2: SARS-CoV-2 Ent1: ?",
"airborne transmissibility of the SARS-CoV-2 virus (Morawska et al., 2020).",
"The diverse and rapidly expanding body of past and present findings related to COVID-19 (Wang et al., 2020b) makes it challenging to keep up, hindering scientists' pace in making new discoveries and connections.",
"Research in natural language processing (NLP) has provided important resources to extract fine-grained relations from scientific papers in specific areas, such as certain subfields of biomedicine (Kim et al., 2013; Nye et al., 2018) or computer science (Wadden et al., 2019).",
"However, these cover only a fraction of all concepts in the literature; in biomedicine alone, there are myriad concepts (Salvadores et al., 2013) not covered by NLP resources.",
"For COVID-19 research, the challenge is especially pronounced due to diversity and emerging concepts; even reading just one paper may require background knowledge in multiple biomedical subfields, physics, chemistry, engineering, computer science and the social sciences.",
"For example, consider a paper studying the indoor dynamics of aerosolized SARS-CoV-2 and the effect of ventilation on transmission by using simulation models, or work on economic impacts of COVID-19 on prices and consumption.",
"To make progress in consolidating such diverse information, we introduce a unified schema of mechanisms as a unified language covering activities, functions and influences across the sciences.",
"These can be proteins that block viral binding, algorithms to design drugs, the effect heat has on viruses, or COVID-19 has on food prices (Fig. 1).",
"We build on the fact that mechanisms underlie much of the natural language of scientific papers (Rhl, 2012), and construct a unified schema with two coarse-grained mechanism relations: Direct Mechanisms : mechanistic activities (e.g., viral binding) or functions engendered by natural or artificial entities (e.g., a protein used for binding or algorithm used for diagnosis).",
"Indirect Mechanisms : influences and associations such as economic effects of COVID-19 or complications associated with medical procedures.",
"Our coarse-grained relation schema, over freeform text spans, strikes a balance between the granular information extracted by Closed-IE approaches (Freitag, 1998; Hoffmann et al., 2010) and the schema-free breadth of Open IE approaches (Et-zioni et al., 2008; Stanovsky et al., 2018), which often lead to generic and uninformative relations for scientific applications (Kruiper et al., 2020).",
"Furthermore, our schema facilitates construction of a high-quality KB that synthesizes interdisciplinary knowledge.",
"We construct precisely this, releasing MECHANIC ( Mech anisms AN otated in C OVID-19 papers) an annotated dataset of 2,400 mechanisms based on our schema.",
"We train a state-of-the-art model to extract this information from scientific papers, and use it to build COMB ( C OVID-19 O pen M echanism Knowledge B ase) a broad-coverage KB of 1.5M mechanisms in COVID-19 papers.",
"We analyze the characteristics of COMB , showing the distribution of relations across scientific subfields and comparing their quality to other IE approaches.",
"We demonstrate the utility of COMB in two studies with experts.",
"In the first study, our system achieves high precision and recall in scientific search with structured queries on both diverse viral mechanisms and applications of AI in the literature.",
"In the second study, we evaluate COMB in a usability study with MDs active in treating and researching COVID-19.",
"Our system is rated higher than PubMed search by the clinical experts, in terms of utility and quality.",
"Our main contributions include: We introduce a unified schema for mechanisms that generalizes across many types of activities, functions and influences.",
"We construct and distribute MECHANIC , an annotated dataset of papers related to COVID-19, with 2,400 instances of our mechanism relation.",
"Using MECHANIC , we train an IE model and apply it to 160K abstracts in COVID-19 literature, constructing COMB , a KB of 1.5M mechanism instances.",
"Manual evaluation of relations sampled from our KB shows them to have 88% accuracy.",
"We also find a model trained on our data reaches roughly 80% accuracy on a sample of general biomedical papers from across the PubMed corpus, with no additional training, demonstrating the generalization of our approach.",
"We showcase the utility of COMB in structured search for mechanisms in the literature.",
"In a study with MDs working to combat COVID-19, our system is rated higher than PubMed search in terms of utility and quality.",
"Mechanisms in science The concept of mechanisms , also referred to as functional relations , is fundamental across the sciences.",
"For example mechanisms are described in biomedical ontologies (Burek et al., 2006; Rhl, 2012; Keeling et al., 2019), engineering (Hirtz et al., 2002), and across science.",
"Mechanisms can be natural (e.g., the mechanism by which amylase in saliva breaks down starch into sugar), artificial (electronic de-vices), non-physical constructs (algorithms, economic policies), and very often a blend (a pacemaker regulating the beating of a heart through electricity and AI algorithms).",
"Although seemingly intuitive, exact definitions of mechanisms are subject to debate in the philosophy of science (Rhl, 2012; Keeling et al., 2019).",
"An Oxford dictionary definition of mechanisms refers to a natural or established process by which something takes place or is brought about .",
"More intricate definitions discuss complex systems producing a behavior, entities and activities productive of regular changes, a structure performing a function in virtue of its parts and operations, or the Schema Entity types Relations Example SciERC CS methods/tasks (free-form spans) used-for Use GNNs for relation extraction.",
"distinction between correlative property changes and activity determining how a correlative change is achieved (Rhl, 2012).",
"Abstract definitions can help with generalization across many important types of mechanisms.",
"The schema we propose (Sec. 3) is inspired by such definitions, operationalizing them and making them more concrete, and also simple enough for models and human annotators to identify.",
"Information extraction from scientific texts There is a large body of literature on extracting information from scientific papers, primarily in the biomedical sphere.",
"This information often corresponds to very specific types of mechanisms, as shown in Tab.",
"1. Examples include ChemProt (Li et al., 2016) with mechanisms of chemical-protein regulation, drug interactions in the DDI dataset (Segura Bedmar et al., 2013), genetic and cellular activities/functions in GENIA (Kim et al., 2013), semantic roles of clinical entities (Kilicoglu et al., 2011), PICO interventions and outcomes (Wallace et al., 2016; Nye et al., 2018), and computer science methods/tasks in SciERC (Luan et al., 2018).",
"Such schemas have been used, for example, to extract genomic KBs (Poon et al., 2014) and automate systematic reviews (Nye et al., 2020).",
"Our schema draws on these approaches, but with a much broader reach across concepts seen in COVID-19 papers (Tab. 1, Fig. 2).",
"An important area in information extraction focuses on open concepts, with prominent approaches being Open IE (Etzioni et al., 2008) and Semantic Role Labeling (SRL; Carreras and Mrquez, 2005), which share similar properties and predictions (Stanovsky et al., 2018).",
"While such methods are intended to be domain independent, they perform significantly worse in the scientific domain (Groth et al., 2018).",
"Kruiper et al. (2020) developed a multi-stage process to post-process Open IE outputs, involving trained models and humans to find a balance between generic and fine-grained clusters of relation arguments and omitting noisy clusters.",
"In contrast, our unified schema enables annotating a dataset of mechanism relations between free-form spans and training IE models to automatically generalize across diverse relation types.",
"Our schema is also related broadly to the task of training reading comprehension models on procedural texts describing scientific processes (such as short paragraphs written by crowd workers to explain photosynthesis in simple language; Dalvi et al., 2018).",
"Our representation of scientific texts in terms of a graph of causal relations can potentially help infer processes across science.",
"COVID-19 IE Recent work (Verspoor et al., 2020a) has focused on extracting information from the CORD-19 corpus (Wang et al., 2020b).",
"PICO concepts are extracted and visualized in an exploratory interface in the COVID-SEE system (Ver-spoor et al., 2020b).",
"In Wang et al. (2020a), genes, diseases, chemicals and organisms are extracted and linked to existing biomedical KBs with information such as gene-disease relations.",
"Additional relations based on the GENIA schema are extracted from the text.",
"To address the novel COVID-19 domain, the schema is enriched with new entity types such as viral proteins and immune responses.",
"In this paper, we focus on a more general schema that captures diverse concepts appearing in literature related to COVID-19, an emerging domain with novel concepts coming from many fields and subfields.",
"The mechanism KG we construct includesas a subset diverse biomolecular and clinical information (such as chemical-disease relations) as part of a general mechanism schema.",
"We present a schema that builds upon and consolidates many of the types of mechanisms discussed in Sec.",
"2. Our defined schema has three key properties: (1) it uses a generalized concept of mechanism relations, capturing specific types of mechanisms in existing schema and extending them broadly; (2) it includes flexible, generic entities not limited to predefined types, and (3) it is simple enough for human annotators and models to identify in the natural language of scientific texts.",
"This schema enables forming our KB by identifying a set of mechanism relations in a corpus of scientific documents (Sec. 4.3).",
"We formally define each mechanism as a relation ( E 1 , E 2 , class ) between entities E 1 and E 2 , where each entity E is a text span and the class indicates the type of the mechanism relation.",
"Entities all share a single common type and can be either natural (e.g., protein functions, viral mechanistic activities) or artificial (e.g., algorithms, de-vices), to capture the generality of the concepts in science (see Fig. 2).",
"We allow each entity to take part in multiple relations (tuples) within a given text, leading to a mechanism graph.",
"Mechanisms are categorized into two coarse-grained classes: 2 Direct mechanisms include activities of a mechanistic nature actions explicitly performed by an entity, such as descriptions of a virus binding to a cell, and explicit references to a function (e.g., a use of a drug for treatment, or the use of AI for drug design as in Fig. 1).",
"Indirect mechanisms include influences or associations without explicit mechanistic information or mention of a function (such as describing observed effects, without the process involved).",
"These relations correspond more to input-output cor-2 We also provide a dataset and extraction model for ternary relations in the form of (subject, object, predicate) . We focus on the coarse-grained mechanism schema due its broader flexibility and coverage. See App. A.1 for details. CS / m a t h / e n g . b i o m e d m e t h o d s c h e m i s t r y / p h y s i c s e c o l o g y / z o o l o g y e p i d e m i o l o g y g e n e t i c s i m m u n o l o g y m e d ./ p h a r m a m o l e c . b i o . s o c i a l / p u b li c v i r o l o g y + m i c r o b i o . 0 50 100 150 200 250 Figure 2: MECHANIC covers a diverse set of scientific fields. Histogram of domains in MECHANIC (sample of 350 relations). Manually labeled relation entities, based on a list of scientific disciplines from Wikipedia. relations (Rhl, 2012), such as indicating that COVID-19 may lead to economic impacts but not how (Fig. 1), as opposed to direct mechanisms describing inner workings revealing more of the intermediate states that lead from initial conditions (COVID-19) to final states (price inflation) or explicitly describing a function.",
"As an example for the utility of this distinction between direct and indirect relations, consider an MD looking to generate a structured list of all uses of a treatment (direct mechanism), but not include side effects or complications (indirect).",
"We describe our approach (depicted in Fig. 3) for extracting a knowledge base of mechanisms using our unified schema.",
"We first curate MECHANIC , an annotated dataset of general mechanisms from a small collection of scientific papers (Sec. 4.1).",
"We then train a model on our annotated data to extract mechanism relations from the entire CORD-19 corpus of scientific papers; we use it to build COMB , a knowledge base of mechanisms across the entire CORD-19 corpus of (Sec. 4.2), which supports semantic search for relations (Sec. 4.3).",
"We construct a dataset of mechanism relations in texts randomly sampled from the CORD-19 corpus (Wang et al., 2020b) that includes scientific papers connected to COVID-19.",
"To circumvent annotation challenges in scientific datasets (Luan et al., 2018) and ensure high-quality annotations, we follow a three-stage process of (1) annotating entities and relations using biomedical experts, (2) unifying span boundaries with an NLP expert, and (3) verifying annotations with a bio-NLP expert.",
"Our annotation process is a relatively low-resource and generalizable approach for a rapid response to the ...deep reinforcement learning can be used to learn mitigation policies in epidemiological models...",
"In the first stage, five annotators with biomedical and engineering background annotate all mechanism relations as defined in Sec. 3 (full annotation guidelines are available in our code repository).",
"Relations are annotated as either direct/indirect.",
"Entities are annotated as the longest span of text that is involved in a relation with another entity, while not including redundant or irrelevant tokens.",
"As in related tasks (Luan et al., 2018), annotators are guided to resolve doubt on span boundaries by selecting the longest relevant span.",
"Annotators had a one-hour training session.",
"In the first part of the training session, annotation guidelines were reviewed.",
"The guidelines included simple explanations of direct/indirect mechanisms along with introductory examples (e.g., the virus makes use of spike protein to bind to a cell , A virus leads to respiratory infection ).",
"In the second part, annotators saw examples from papers in the annotation interface (see Fig. 6, App. A), and performed a few live training annotations.",
"We initially observed significant variation between annotators in identifying span boundaries for entity annotations, stemming from inherent subjectivity in such annotation tasks (Stanovsky et al., 2018; Luan et al., 2018) and from lack of NLP experience by some annotators.",
"In the second stage, an NLP expert annotator conducted a round of style unification by viewing annotations and adjusting span boundaries to be more cohesive while preserving the original meaning, focusing on boundaries that capture essential but not redundant or generic information (e.g., adjusting the span substantial virus replication by unknown mechanisms to include only virus replication ).",
"Finally, in the third stage, a bio-NLP expert with experience in annotating scientific papers verified the annotations and corrected them as needed.",
"The expert accepted 81% of the annotations from the second stage without modification, confirming the high quality of the stage-2 data.",
"Relation label mismatches accounted for 5% of the remaining 19%.",
"Other sources of disagreement were span mismatches and new relations added by the bio-NLP expert adjudicator.",
"The resulting dataset (MECHANIC : Mech anisms AN otated in C OVID-19 papers) contains 2,370 relation instances (1645 direct, 725 indirect) appearing in 1,000 sentences from 250 abstracts.",
"3 Average span length is 4 tokens, while the average distance between relation arguments is 11.40 tokens.",
"Using MECHANIC , we train an IE model to extract mechanism relations from sentences in scientific documents.",
"We train DyGIE++ (Wadden et al., 2019), a state-of-the-art end-to-end IE model which extracts entities and relations jointly (without assuming to have entity spans given), classifying each relation as one of { DIRECT , INDIRECT } .",
"4 To form our corpus-level KB, we apply the trained model to each document in our corpus (all 160K abstracts in the CORD-19 corpus) to extract mechanism relations and then integrate the extracted relations.",
"We find that our trained model achieves high precision scores for high confidence predictions (precision 80% within top20 predicted relations; see P @ K figure, App. B).",
"Therefore, our corpus-level KB is constructed by filtering predictions with low confidence.",
"3 The dataset is similar in size to related scientific IE datasets (Luan et al., 2018) which share related challenges in collecting expert annotations of complex or ambiguous concepts over difficult texts.",
"4 We use DyGIE++ with SciBERT (Beltagy et al., 2019) embeddings fine-tuned on our task and perform hyperparameter grid search (for dropout and learning rate only) and select the best-performing model on the development set ( 7 e 4 and 0 . 43 , respectively).",
"Full details are in App.",
"B.3.",
"To integrate relations and entities across the corpus, we use standard surface-level string normalization (such as removing punctuation, lemmatiz-ing, and lowercasing) and unify and normalize entity mentions using coreference clusters of entities within a document.",
"5 Each coreference cluster is assigned a representative entity as the mention with the longest span of text, and all other entities in that cluster are replaced with the representative entity.",
"This is particularly useful for normalizing pronouns such as it with the original mention they referred to (e.g., a specific virus or method it refers to).",
"Our final KB (COMB ) consists of 1.5M relations in the form of ( E 1 , E 2 , DIRECT/INDIRECT ) fil-tered by high confidence score ( >= 90% ), where entities E i are standardized free-form spans of text.",
"The constructed KB enables applications for retrieving relations across concepts from many disciplines.",
"For example, searching for all documents that include mechanisms to incorporate AI in studies of heart disease ( E 1 = AI , E 2 = heart disease , DIRECT ) requires going beyond simply finding documents that mention AI and heart disease .",
"Here, we describe our approach for searching over the KB by encoding entities and relations, capturing related concepts (such as cardiac disease and heart conditions ), as well as simpler surface matches ( artificial intelligence methods , artificial intelligence models ).",
"Specifically, for a given query q ( E q 1 , E q 2 , class ) , our goal is to find mechanisms r i in COMB whose entities are free-form texts similar to E q 1 , E q 2 in the query.",
"The class is used to filter for the type of relationfor example, when explicitly requiring DIRECT mechanisms.",
"Entity encoding We obtain an encoding function f E R d to encode all unique spans (entities) in the KB to a d dimensional vector space.",
"The encoding function is derived by fine-tuning a language model (LM) originally trained on PubMed papers (Gururangan et al., 2020) on semantic similarity tasks.",
"For fine-tuning, we use sentence pairs in STS (Cer et al., 2017) and SNLI (Bowman et al., 2015) following Reimers and Gurevych (2019), and add biomedical sentence pairs from the BIOSSES dataset (Sogancoglu et al., 2017).",
"5 We use a pre-trained DyGIE++ model trained on SciERC to obtain coreference clusters.",
"Relation similarity Given a query q , we rank the set of all COMB relations with the same class as the query.",
"For each candidate relation r = ( E 1 , E 2 , class ) in COMB , we compute its similarity to the query relation q as the minimum similarity between encodings of their corresponding entities: min j { 1 , 2 } f ( E j ) f ( E qj ) .",
"With this definition, a relation ( E 1 , E 2 ) with E 1 very similar to the first entity of the query E q 1 but E 2 distant from E q 2 will be ranked low.",
"For example, with the query ( E q 1 = deep learning , E q 2 = drugs ) , the relation ( E 1 = microscope , E 2 = drugs ) will be ranked low due to the pair (deep learning, microscope).",
"For efficient search, we create an index of embeddings corresponding to the 900K unique surface forms in COMB and employ a system designed for fast similarity-based search (Johnson et al., 2017).",
"In this section, we evaluate the constructed KB of mechanisms in terms of correctness and informativeness (Sec. 5.1), and its utility in searching for mechanisms (Sec. 5.2).",
"Our main goal is to ensure the mechanism relations have high quality to support our large-scale KB and search applications.",
"We further show that our schema is useful as compared to other schema.",
"We employ two annotators with biomedical and CS backgrounds to judge the quality of the predicted relations in COMB .",
"In particular, following Groth et al. (2018), annotators are given a predicted relation together with the sentence from which it was extracted.",
"We collapse all entities/relations into one generic type for this analysis.",
"Annotators are asked to label the predicted relation as correct if (1) it accurately reflects a mechanistic relation mentioned in the sentence ( correctness ), and (2) the extracted entities and relation label are sufficient to convey the meaning of the relation, without referring to the source sentence ( informativeness ).",
"We collect human judgements for 300 predicted relations for our approach and baselines, sampled from 150 randomly selected sentences.",
"Agreement is 71% by Cohen's Kappa and 73% by Matthew's Correlation Coefficient.",
"Comparing KB quality to other schemas To showcase the benefit of our approach, we compare the relations extracted using a DyGIE model trained on MECHANIC , versus a DyGIE model MECHANICS c i ERCSRLS e m R ep O pen IE 10 20 30 40 50 60 70 80 90 A cc u r a cy AI Viral Metric PubMed COMB Search 71% 90% Utility 69.5% 92% Interface 78% 90% Overall 74% 91% Figure 4: Evaluating COMB in studies with experts.",
"trained on other resources that are most related to our mechanisms: SemRep (Kilicoglu et al., 2011) captures a wide range of biomedical relations (such as drug-drug interactions), and SciERC (Luan et al., 2018) contains relations relevant to computer science (such as method-task and used-for rela-tions).",
"6 In addition, we compare with a Semantic Role Labeling (SRL) method (Shi and Lin, 2019) that captures broad relations between free-form spans that focus on agents and actions, and a neural OpenIE model (Stanovsky et al., 2018).",
"Fig. 4 (left) shows that 88% of relations from COMB are marked as correct by human raters, demonstrating that our approach extracts mechanism relations with better quality than external resources.",
"7 These results suggest that our predicted relations are of overall high quality and can be used to build our corpus-level KB and explore its utility.",
"Examining Generalization COVID-19 papers are highly diverse both topically and chronologically.",
"We conduct a small-scale preliminary experiment examining whether a model trained on MECHANIC can generalize to capture mechanism relations in the general biomedical papers, from a much larger corpus of open access papers on PubMed Cen-6 We use an expert annotator to align external resources to our direct or indirect mechanism annotations, e.g., USED-FOR is mapped to direct mechanism).",
"7 We also experiment with automated evaluation.",
"We split MECHANIC into train/dev/test sets (170/30/50 abstracts), and obtain F 1 = 50 .",
"2 for entity detection, F 1 = 45 .",
"6 for relation detection and F 1 = 42 .",
"8 for classification, on par with performance in other similar scientific IE tasks (Luan et al., 2018).",
"See more details in App.",
"B.4.",
"tral (PMC).",
"8 We randomly sample a set of 200 predicted relations from papers across the entire PMC corpus, and label them using the same criteria used above.",
"As expected, we find that performance drops, but encouragingly is still considerably high: after filtering predictions with confidence lower than 90% in the same way we construct COMB , 76% of relations are considered correct.",
"When filtering for confidence with a threshold of 95% (which captures 70% of the samples), the rate of correct predictions is 78%.",
"In future work it would be interesting to fine-tune our model on a small set of labeled examples from the general PMC corpus to potentially improve these results.",
"We design several search tasks and user studies to evaluate the utility of the constructed KB (Sec. 5.2.1) and compare it with the PubMed medical KB and search engine (Sec. 5.2.2), as judged by medical doctors working on the front lines of COVID-19 treatment and research.",
"All tasks are designed to evaluate our framework's utility in helping researchers and clinicians looking to quickly search for mechanisms or cause-effect relations in the literature and retrieve a list of structured results.",
"We form search queries based on a wide range of topics pertaining to (1) SARS-CoV-2 mechanisms (such as modes of transmission, drug effects, climatic influences, molecular-level properties) and",
"(2) applications of AI in this area.",
"Tab.",
"2a and 2b show queries and example relations returned from COMB , along with the context sentences from which they were extracted.",
"Viral mechanism search Queries are formed based on statements in recent scientific claim-verification work (Wadden et al., 2020; see full list in App. C.2).",
"For example, for the statement the coronavirus cannot thrive in warmer climates , we form the query as ( E 1 = Warm climate , E 2 = coronavirus ) (see Tab. 2a row 1).",
"For statements reflecting an indirect association/influence, we filter for INDIRECT relations (Tab. 2a row 2).",
"For statements that reflect an undirected mechanism relation (e.g., Lymphopenia is associated with severe COVID-19 disease ), we query for both directions.",
"AI applications search This task is designed to explore the uses of AI in COVID-19 papers (Tab. 2b).",
"We use queries where the first entity E 1 is a leading subfield or method within AI (e.g., deep reinforcement learning or text analysis ), and the second entity E 2 is left unspecified.",
"Since all queries relate to uses of AI, we filter for DIRECT relations.",
"These open-ended queries simulate an exploratory search scenario, and can potentially surface inspirations for new applications of AI against COVID-19 or help users discover where AI is being harnessed.",
"if the sentence actually expresses the mechanism.",
"These annotations are used as ground-truth labels to compute precision/recall scores of the relations extracted by our algorithm.",
"Since it is not feasible to label every relation, annotators are shown a list of 20 relations for each query including high and low rank relations returned by our search algorithm.",
"9 In total, we use 5 annotators to obtain 1,700 relevance labels across both tasks.",
"Inter-annotator agreement is high by several metrics, ranging from 0 .",
"7 0 .",
"8 depending on the metric and task; see App.",
"C.2.",
"Annotators have graduate/PhD-level background in medicine or biology (for the first task) and CS or biology (for the second task).",
"Results Fig. 4 (center) shows our results for both tasks.",
"For biomedical search queries, we observe 90% precision that remains stable for recall values as high as 70% .",
"For AI applications we observe a precision of 85% at a recall of 40% that drops more quickly.",
"This lower precision is likely due to the fact that E 2 is unspecified, leading to a wider range of results with more variable quality.",
"be-9 Specifically, for each query we retrieve the top-1000 similar relations from COMB , ranked as described in Sec. 4, and select the top and bottom 10 relations (20 per query, 200(=20x10) per task, 400(=200x2) in total), shuffle their order, and present to annotators together with the original sentence from which each relation was extracted.",
"This experiment compares the utility of COMB in structured search for causal relationships of clinical relevance to COVID-19 with PubMed 10 a prominent search engine for biomedical literature that clinicians and researchers frequently peruse as their go-to tool.",
"PubMed allows users to control structure (e.g., with MeSH terms or pharmacological actions), is supported by a KB of biomedical entities used for automatic query expansion, and has many other functions.",
"Expert evaluation We recruit five expert MDs with a wide range of specialities including gastroenterology, cardiology, pulmonary and critical care who are active in treating COVID-19 patients and in research.",
"Each expert completed search randomly ordered tasks using both PubMed and our COMBUI, showing the full set of ranked relations, as well as the sentence snippet mentioning the relation, the paper title, and hyperlink to abstract.",
"At the end of the study after all search tasks are completed for both our system and PubMed, experts are given a questionnaire of 21 7-point Likert-scale questions to judge system utility, interface, and search quality.",
"The first 16 questions are taken from a Post Study System Usability Questionnaire (PSSUQ; Lewis, 2002) widely used in system quality research.",
"The last 5 questions are designed by the authors to evaluate search quality such as overall result relevance and ranking (for the full question list, see App. C.2).",
"Each question is asked twice, once for PubMed and once for our system, leading to 21 2 5 = 210 responses.",
"Search queries We provide experts with seven search queries that were created by an expert medical researcher, relating to causal links (e.g., between COVID-19 and cardiac arrhythmias) and functions (e.g., Ivermectin as a treatment).",
"See full set of queries in App.",
"C. Results Fig. 4 (right) shows the average Likert scores (normalized to [0%,100%]) across all questions and users for COMB and PubMed.",
"The results show that the medical experts strongly prefer COMB to PubMed (overall average of 91% vs. 74%, with non-normalized scores of 6.6 vs. 5.2).",
"On average across the 21 questions, the majority of the five experts assigned our interface a higher score than PubMed, at an average rate of 3.5/5.",
"This rate in-10 https://pubmed.ncbi.nlm.nih.gov/ creases further when considering tieson average 4.75/5 of the experts assigned our system a score equal or higher than PubMed.",
"Overall, our system significantly outperforms PubMed in this task, with an average gap of roughly 20% for search and utility-related questions (Wilcoxon signed rank test p-value is significant at 4 . 77 10 7 ).",
"These results are particularly interesting and indicate the potential of COMB because of the experts' strong familiarity with PubMed and the simple nature of our UI.",
"Our system searches and retrieves relations only texts explicitly mentioning relations that match the input query.",
"This often more precisely reflects the query than results returned by PubMed, which do not have the additional layer of structured information in COMB .",
"For example, for the query ( E 1 =cardiac arrhythmias, E 2 =COVID-19), PubMed returns the following title of one paper: Guidance for cardiac electrophysiology during the COVID-19 pandemic [....] Electrocardiography and Arrhythmias Committee E 1 and E 2 are both mentioned, but not within a mechanism relation.",
"We introduced a unified schema for mechanisms that generalizes across many types of activities, functions and influences.",
"We constructed and distributed MECHANIC , a dataset of papers related to COVID-19 annotated with this schema.",
"We trained an IE model and applied it to COVID-19 literature, constructing COMB , a KB of 1.5M mechanisms.",
"We showcased the utility of COMB in structured search for mechanism relations in COVID-19 literature.",
"In a study with MDs active in the fight against the disease, our system is rated higher than PubMed search for both utility and quality.",
"Our unified view of mechanisms can help generalize and scale the study of COVID-19 and related areas.",
"More broadly, we envision a KB of mechanisms that enables the transfer of ideas across the literature (Hope et al., 2017), such as by finding relationships between mechanisms in SARS-CoV-2 and other viruses, and assists in literature-based discovery (Swanson and Smalheiser, 1996) by finding cross-document causal links.",
"Our knowledge-base and search system is primarily intended to be used by biomedical researchers working on COVID-19, and researchers from more",
"general areas across science.",
"Models trained and developed on our dataset are likely to serve researchers working on COVID-19 information extraction, and scientific NLP more broadly.",
"We hope our system will be helpful for accelerating the pace of scientific discovery, in the race against COVID-19 and beyond.",
"Our knowledge-base can include incorrect information to the extent that scientific papers can have wrong information.",
"Our KB includes metadata on the original paper from which the information was extracted, such as journal/venue and URL.",
"Our KB can also miss information included in some papers.",
"Our data collection process respected intellectual property, using abstracts from CORD-19 (Wang et al., 2020b), an open collection of COVID-19 papers.",
"Our knowledge-base fully attributes all information to the original papers.",
"All annotators were given extensive background on our objectives, and told their annotations will help build and evaluate a knowledge-base and search engine over COVID-19 research.",
"Graduate-student annotators were payed 25 USD per hour.",
"MD experts helped evaluate the tool on a voluntary basis.",
"We like to acknowledge a grant from ONR N00014-18-1-2826.",
"Authors would also like to thank anonymous reviewers, members of AI2, UW-NLP and the H2Lab at The University of Washington for their valuable feedback and comments."
]
| [
"abstain",
"abstain",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"method",
"abstain",
"method",
"method",
"result",
"objective",
"objective",
"method",
"abstain",
"objective",
"method",
"method",
"result",
"objective",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"objective",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"result",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"result",
"abstain",
"method",
"result",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other"
]
|
[
"Pretraining NLP models with variants of Masked Language Model (MLM) objectives has recently led to a significant improvements on many tasks.",
"This paper examines the benefits of pretrained models as a function of the number of training samples used in the downstream task.",
"On several text classification tasks, we show that as the number of training examples grow into the millions, the accuracy gap between finetuning BERT-based model and training vanilla LSTM from scratch narrows to within 1%.",
"Our findings indicate that MLM-based models might reach a diminishing return point as the supervised data size increases significantly.",
"Language modeling has emerged as an effective pretraining approach in wide variety of NLP models.",
"Multiple techniques have been proposed, including bi-directional language modeling (Peters et al., 2018), masked language models (Devlin et al., 2018), and variants of denoising auto-encoder approaches (Lewis et al., 2019; Raffel et al., 2019; Joshi et al., 2019).",
"Today, it is rare to examine a leaderboard without finding the top spots occupied by some variant of a pretraining method.",
"1 The future of NLP appears to be paved by pretraining a universal contextual representation on wikipedia-like data at massive scale.",
"Attempts along this path have pushed the frontier to up 10 to the size of wikipedia (Raffel et al., 2019).",
"However, the success of these experiments is mixed: although improvements have been observed, the downstream task is usually data-limited.",
"There is evidence that large-scale pretraining does not always lead to state-of-the-art results (Raffel et al., 2019), especially on tasks such as machine translation, where abundance of training data, and the 1 https://super.gluebenchmark.com/leaderboard existence of strong augmentation methods such as back translation might have limited the benefit of pretraining.",
"This paper examines the pretraining benefits of downstream tasks as the number of training samples increases.",
"To answer this question, we focus on multi-class text classification since:",
"(i) it is one of most important problems in NLP with applications spanning multiple domains.",
"(ii) large sums of training data exists for many text classification tasks, or can be obtained relatively cheaply through crowd workers (Snow et al., 2008).",
"We choose three sentiment classification datasets: Yelp review (yel, 2019), Amazon sports and electronics review (Ni et al., 2019), ranging in size from 6 to 18 million examples.",
"2 We finetune a RoBERTa model (Liu et al., 2019) with increments of the downstream dataset, and evaluate the performance at each increment.",
"For example, on the Yelp dataset whose size is 6 million, we train the models on subsets of the data with each subset size being in the sequence (60k, 600K, 1.8M, 3M .., 6M).",
"For comparison, we also train a vanilla BiLSTM, and another BiLSTM which uses pretrained Roberta token embeddings.",
"We observe that when both models are trained on 1% of the data, the gap between BiLSTM and RoBERTa models is at its peak, but as the training dataset size increases, the BiLSTM model accuracy keeps on increasing whereas RoBERTa's accuracy remain mostly flat.",
"As the dataset size increases, the accuracy gap shrinks to within 1%.",
"Our study suggests that collecting data and training on the target tasks is a solution worth considering, especially in production environments where accuracy is not the only considered factor, rather inference latency is often just as crucial.",
"We benchmarked the inference latency of the these models on 2 These datasets are the largest publicly available classifi-action datasets that we are aware of.",
"both CPU and GPU for different batch sizes, and as expected, we observe at least 20 speedup for the BiLSTM compared to the RoBERTa.",
"This paper provides new experimental evidence and discussions for people to rethink the MLM pre-training paradigm in NLP, at least for resource rich tasks.",
"Scaling the number of training examples has long been identified as source of improvement for machine learning models in multiple domains including NLP (Banko and Brill, 2001), computer vision (Deng et al., 2009; Sun et al., 2017) and speech (Amodei et al., 2016).",
"Previous work has suggested that deep learning scaling may be predictable empirically (Hestness et al., 2017), with model size scaling sub-linearly with training data size.",
"(Sun et al., 2017) concluded that accuracy increases log-arithmally with respect to training data size.",
"However, these studies have focused on training models in the the fully supervised setting, without pretraining.",
"One closer work is (He et al., 2019) where it is shown that randomly initialized standard computer-vision models perform no worse than their ImageNet pretrained counterparts.",
"However, our work focuses on text classification.",
"We do not examine the benefit of pretraining, at large, rather we focus on the benefit of pretraining for resource rich tasks.",
"Another concurrent work that is still under review, in (Nakkiran and Sutskever, 2020) observes that, in some translation task such as IWSLT14, small language models exhibit even lower test loss compared to the large transformer model when the number of training samples increases.",
"We focus on a multi-class sentiment classification task: given the user reviews, predict the rating in five points scale { 1 , 2 , 3 , 4 , 5 } .",
"The experiments are conducted on the following three benchmark datasets.",
"Yelp Challenge (yel, 2019) contains text reviews, tips, business and check-in sets in Yelp.",
"We use the 6.7m user reviews with ratings as our dataset.",
"Amazon Reviews (Ni et al., 2019) contains product reviews (ratings, text, helpfulness votes) from Amazon.",
"We choose two categories: sports / outdoors , and electronics as two separate datasets.",
"We only use the review text as input features.",
"The distribution across five ratings of each dataset is illustrated in Table 1.",
"In our experiment, all the above data is split into 90% for training and 10% for testing.",
"RoBERTa (Liu et al., 2019) RoBERTa is a transformer-based model pretrained with masked language modeling objectives on a large corpus.",
"We finetune our classification task on both Roberta-Base (12 layers, 768 hidden, 12 heads) and Roberta-Large (24 layers, 1024 hidden, 16 heads).",
"LSTM (Hochreiter and Schmidhuber, 1997) We use a bidirectional LSTM with a max-pooling layer on top of the hidden states, followed by a linear layer.",
"Token embeddings of size 128 are randomly initialized.",
"LSTM + Pretrained Token Embedding Similar to the previous setup, except we initialized the token embeddings with Roberta pretrained token embedding (Base: 768-dimensional embedding, Large: 1024-dimensional embedding).",
"The embeddings are frozen during training.",
"For fair comparison, all the above models share the same vocabulary and BPE tokenizer (Sennrich et al., 2015).",
"We use the Adam optimizer and the following hy-perparameter sweep for each model.",
"(i) RoBERTa is finetuned with the following learning rates { 5 e 6 , 1 e 5 , 1 .",
"5 e 5 , 2 e 5 } , with linear warm up in the first 5% of steps followed by a linear Figure 1: Accuracy Gap of Roberta, BiLSTM trained on different amount of data Models Yelp Sports Electronics Params Accuracy Accuracy Accuracy Roberta-Large 78.85 -79.65 -79.07 304M Roberta-Base 78.44 0.41 79.45 0.20 78.84 0.23 86M LSTM-4-512 + Large 77.14 1.71 78.80 0.85 78.16 0.92 25M LSTM-4-512 + Base 77.07 1.78 78.72 0.93 78.07 1.0 24M LSTM-4-256 + Large 77.02 1.83 78.76 0.89 78.12 0.95 7.4M LSTM-4-256 + Base 77.03 1.82 78.62 1.03 77.98 1.09 6.8M LSTM-4-256 76.37 2.48 78.38 1.27 77.76 1.31 4.8M LSTM-2-256 76.09 2.76 78.18 1.47 77.57 1.5 2.4M Table 2: Test Accuracy of Roberta-base, BiLSTM, and BiLSTM with Roberta Pretrained Token Embedding when trained on the full dataset.",
"decay to 0.",
"The batch size is set to 32, with dropout being 0.1.",
"(ii) For the LSTM, it is trained with a constant learning rate from the sequence: { 2 .",
"5 e 4 , 5 e 4 , 7 .",
"5 e 4 , 1 e 3 } .",
"The batch size is set to 64.",
"We train each model on 8 GPUs for 10 epochs and perform early stopping based on accuracy on the test set.",
"The maximum sequence length of input was set to 512 for all models.",
"We first investigate the effect of varying the number of training samples, for fixed model and training procedure.",
"We train different models using { 1% , 10% , 30% , 50% , 70% , 90% } amount of data to mimic the low-resource, medium-resource and high-resource regime.",
"Figure 1 shows that the accuracy delta between the LSTM and RoBERTa models at different percentages of the training data.",
"From the plot, we observe the following phenomena:",
"(i) Pretrained models exhibit a diminishing return behavior as the size of the target data grows.",
"When we increase the number of training examples, the accuracy gap between Roberta and LSTM shrinks.",
"For example, when both models are trained with 1% of the Yelp dataset, the accuracy gap is around 9%.",
"However, as we increases the amount of training data to 90%, the accuracy gap drops to within 2%.",
"The same behaviour is observed on both Amazon review datasets, with the initial gap starting at almost 5% for 1% of the training data, then shrinking all the way to within one point when most of the training data is used.",
"(ii) Using the pretrained RoBERTa token embeddings can further reduce the accuracy gap especially when training data is limited.",
"For example, in the Yelp review data, a 4-layers LSTM with pretrained embeddings provides additional 3 percent gain compared to its counterparts.",
"As Table 2 shows, an LSTM with pretrained RoBERTa token embeddings always outperforms the ones with random token initialization.",
"This suggests that the embeddings learned during pretraining RoBERTa may constitute an efficient approach for transfer learning the knowledge learned in these large MLM.",
"We further report the accuracy metric of each model using all the training data.",
"The full results are listed in Table",
"2. We observe that the accuracy gap is less than 1% on the Amazon datasets.",
"even compared to 24 layers RoBERTa-large model.",
"As for the Yelp dataset, the accuracy gap is within 2 percent from the RoBERTa-large model, despite an order of magnitude difference in the number of parameters.",
"We also investigate the inference time of the three type of models on GPU and CPU.",
"The CPU inference time is tested on Intel Xeon E5-2698 v4 with batch size 128.",
"The GPU inference time is tested on NVIDIA Quadro P100 with batch size { 128 , 256 , 384 } .",
"The maximum sequence length is 512.",
"We run 30 times for each settings and take the average.",
"The results are listed in TABLE",
"3. Model CPU GPU Batch size 128 128 256 384 Roberta-Base 323 16.1 16.1 16.1 Roberta-Large 950 55.5 55.5 -LSTM-2-256 15.2 0.47 0.43 0.42 LSTM-4-256 28.1 1.17 0.94 0.86 LSTM-4-256+Base 35.2 1.33 1.09 1.02 LSTM-4-256+Large 37.5 1.33 1.17 1.07 LSTM-4-512+Base 64.8 3.52 3.20 3.13 LSTM-4-512+Large 64.8 3.36 3.32 3.26 Table 3: Inference time (ms) of Roberta, BiLSTM on CPU and GPU Not surprisingly, the LSTM model is at least 20 time faster even when compared to the Roberta-Base.",
"Note that the P100 will be out of memory when batch size is 384 for Roberta-Large.",
"Another observation is that although using the Roberta pretrained token embedding introduces 10 times more model parameters compared to vanilla BiLSTM, the inference time only increases by less than 25%.",
"This is due to the most additional parameters are from a simple linear transformation.",
"Our findings in this paper indicate that increasing the number of training examples for standard' models such as LSTM leads to performance gains that are within 1 percent of their massively pretrained counterparts.",
"Due to the fact that there is no good large scale question answering dataset, it is not clear if the same findings would hold on this type of NLP tasks, which are more challenging and semantic-based.",
"In the future work, we will run more experiments if there are some other large scale open datasets.",
"Despite sentiment analysis being a crucial text classification task, it is possible, though unlikely, that the patterns observed here are limited to sentiment analysis tasks only.",
"The rationale behinds that is that pretrained LSTMs have kept up very well with transformer-based counterparts on many tasks (Radford et al.).",
"One way to interpret our results is that sim-ple' models have better regularization effect when trained on large amount of data, as also evidenced in the concurrent work (Nakkiran and Sutskever,",
"2020).The other side of the argument in interpreting our results is that MLM based pretraining still leads to improvements even as the data size scales into the millions.",
"In fact, with a pretrained model and 2 million training examples, it is possible to outperform an LSTM model that is trained with 3 more examples.",
"Finetuning BERT-style models on resource-rich downstream tasks is not well studied.",
"In this paper, we reported that, when the downstream task has sufficiently large amount of training exampes, i.e., millions, competitive accuracy results can be achieved by training a simple LSTM, at least for text classification tasks.",
"We further discover that reusing the token embeddings learned during BERT pretraining in an LSTM model leads to significant improvements.",
"The findings of this work have significant implications on both the practical aspect as well as the research on pretraining.",
"For industrial applications where there is a trade-off typically between accuracy and latency, our findings suggest it might be feasible to gain accuracy for faster models by collecting more training examples."
]
| [
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"result",
"abstain",
"method",
"abstain",
"result",
"objective",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"result",
"result",
"method"
]
|
[
"We introduce, release, and analyze a new dataset, called Humicroedit, for research in computational humor.",
"Our publicly available data consists of regular English news headlines paired with versions of the same headlines that contain simple replacement edits designed to make them funny.",
"We carefully curated crowdsourced editors to create funny headlines and judges to score a to a total of 15,095 edited headlines, with five judges per headline.",
"The simple edits, usually just a single word replacement, mean we can apply straightforward analysis techniques to determine what makes our edited headlines humorous.",
"We show how the data support classic theories of humor, such as incongruity, superiority, and setup/punchline.",
"Finally, we develop baseline classifiers that can predict whether or not an edited headline is funny, which is a first step toward automatically generating humorous headlines as an approach to creating topical humor.",
"Humor detection and generation continue to be challenging AI problems.",
"While there have been some advances in automatic humor recognition (Khodak et al., 2017; Davidov et al., 2010; Barbieri and Saggion, 2014; Reyes et al., 2012; Cattle and Ma, 2018; Bertero and Fung, 2016; Yang et al., 2015), computerized humor generation has seen less progress (Binsted et al., 1997; Stock and Strapparava, 2003; Petrovic and Matthews, 2013).",
"This is not surprising, given that humor involves in-depth world-knowledge, common sense, and the ability to perceive relationships across entities and objects at various levels of understanding.",
"Even humans often fail at being funny or recognizing humor.",
"A big hindrance to progress on humor research is the scarcity of public datasets.",
"Further-(a) The Headline Editing Task.",
"more, the existing datasets address specific humor templates, such as funny one-liners (Mihal-cea and Strapparava, 2006) and filling in Mad Libs R (cid:13) (Hossain et al., 2017).",
"Creating a humor corpus is non-trivial, however, because it requires",
"(i) human annotation, and",
"(ii) a clear definition of humor to achieve good inter-annotator agreement.",
"We introduce Humicroedit , a novel dataset for research in computational humor.",
"First, we collect original news headlines from news media posted on Reddit ( reddit.com ).",
"Then, we qualify expert annotators from Amazon Mechanical Turk ( mturk.com ) to",
"(i) generate humor by applying small edits to these headlines, and to",
"(ii) judge the humor in these edits.",
"Our resulting dataset contains 15,095 edited news headlines and their numerically assessed humor.",
"Screenshots of our two annotation tasks are shown in Figure 1, and Table 1 shows some of these annotated headlines.",
"This new dataset enables various humor tasks, such as:",
"(i) understanding what makes an edited headline funny,",
"(ii) predicting whether an edited headline is funny,",
"(iii) ranking multiple edits of the same headline on a funniness scale,",
"(iv) generating humorous news headlines, and",
"(v) recommending funny headlines personalized to a reader.",
"Our dataset presents several opportunities for computational humor research since: Headlines do not have specific templates.",
"Headlines contain very few words, but convey a lot of information.",
"A deeper understanding of world-knowledge and common-sense is needed to completely understand what makes a headline funny.",
"Humorous headlines are often generated using several layers of cognition and reasoning.",
"Despite us carefully qualifying annotators, their knowledge, preferences, bias and stance towards information presented in headlines influ-ence whether they perceive a potentially funny headline as humorous, offensive, confusing, etc.",
"The presence of these factors suggests that thor-ough humor comprehension in our dataset requires the development of NLP tools that are not only robust at pattern recognition but also capable of deeper semantic understanding and reasoning.",
"As an initial exploration of this proposition, we perform various data analysis against the background of humor theories, and we train and examine classifiers to detect humorous edited headlines in our data.",
"In this section, we describe how we gathered our set of original headlines, directed editors to make them funny, employed graders to assess the level of humor in the modified headlines, and created the Humicroedit dataset.",
"Our goal is to study how humor is generated by applying short edits to headlines.",
"News headlines are ripe for humor, since they convey rich information using only a few words.",
"While the short form may seem to limit context, readers have rich background information in the form of their existing world knowledge, which helps them understand the headline.",
"Allowing only short edits means we can apply focused analysis on the tipping point between regular and funny.",
"Therefore, our task is to edit a headline to make it funny, where an edit is defined as the insertion of a single-word noun or verb to replace an existing entity or single-word noun or verb.",
"Note that our rules do not allow: Addition/removal of a whole noun/verb phrase, except removal of noun phrases that are entities (e.g., One World, Virtual Reality).",
"The decision to strictly avoid edits of other parts-of-speech (POS) words was motivated by the observation in our pilot experiments that those edits did not provide enough variety of humor.",
"For example, when substituting adjectives and adverbs, our editors mostly used antonyms or superlatives.",
"Switching nouns and verbs, on the other hand, enables the introduction of diverse novel connections between entities and actions.",
"To identify the replaceable entities, we apply named entity recognition (NER) and POS tagging using the Stanford CoreNLP toolkit (Man-ning et al., 2014).",
"We allow for replacement of only those entities that are well-known, according to the Microsoft Knowledge Base 1 .",
"This improves the likelihood that the terms are familiar to both headline editors and humor judges.",
"We allow a noun (or verb) to be replaced if it is an unambiguous noun (or verb) in WordNet (Fellbaum, 1998) (i.e., has a single WordNet POS).",
"Editors are only allowed to replace one of the selected replaceable words/entities in the headline.",
"We refer to a single-term substitution of this type as a micro-edit , and we will use this term interchangeably with edit in the remainder of this paper.",
"Micro-edits approach the smallest change that can induce humor in text, letting us focus intently on what causes humor.",
"We build our dataset from popular news headlines posted on the social media site Reddit.",
"This strategy steers us towards a set of headlines that is part of general discourse, rather than being only of specialized interest, which would make editing them for humor difficult.",
"We obtain all Reddit posts from the popular sub-reddits r/worldnews and r/politics from January 2017 to May 2018 using Google Big-Query 2 .",
"Each of these posts is a headline from a news source.",
"We remove duplicate headlines and headlines that have fewer than 4 words or more than 20 words.",
"Finally, we keep only the headlines from the 25 English news sources that contribute the most headlines in the Reddit data, resulting in a total of 287,076 news headlines.",
"For our data annotation tasks, we use Mechanical Turk workers who",
"(i) are located in the U.S.,",
"(ii) have a HIT approval rate greater than 97%, and",
"(iii) have more than 10,000 HITs approved.",
"To ensure high data quality, we further qualify distinct sets of",
"(i) turker judges for recognizing humor in an edited headline, and",
"(ii) editors adept at editing headlines to generate humor.",
"We manually collected a set of 20 original news headlines and edited each of them such that some edits are funny and some are not.",
"We asked several members of our research group to assess the funniness of each edited headline using the following integer scale developed by Hossain et al. (2017): 0 Not funny 1 Slightly funny 2 Moderately funny 3 Funny We instructed internal and turker judges",
"(i) to grade objectively regardless of their own stance towards issues, entities and information expressed in the headline, and",
"(ii) to grade an edited headline as funny if they believed it would be funny to a large audience.",
"Further, we instructed judges to grade an edited headline as funny if either the headline was funny by itself regardless of the original headline, or the headline was only funny when considering how the original headline was changed.",
"We labeled the ground truth funniness of each of these 20 edited qualifier headlines as its mean internal judge grade.",
"For the qualification task, we classified as funny any edited headline with a mean grade of 1.0 or above.",
"Next, we launched the same task on Mechanical Turk until we found 150 qualified judges (60% of the candidates).",
"Turkers were qualified if",
"(i) they had 3 or fewer classification errors according to our 1.0 threshold, and",
"(ii) on average, their grades were within 0.6 of the mean internal judge grades.",
"We calculated the inter-annotator agreement for assigning headline funniness grades using the Krippendorff's interval metric (Krippendorff, 1970) a real number in the range [ 1 , 1] , with -1, 0 and 1, respectively, implying complete disagreement, no consensus and full agreement.",
"The for the internal judges and qualified turker judges were, respectively, 0.57 and 0.64.",
"For editor qualification, we randomly sampled 60 headlines, split into 6 separate Mechanical Turk tasks of 10 headlines each.",
"Candidate editors were asked to complete one of these tasks, which was to make each headline as funny as possible to a general audience using a micro-edit.",
"Task participants were instructed not to apply the following edits: Cheap humor generation techniques: add profanity, slang, bathroom/potty humor, crude sexual references or informal language.",
"Squeeze multiple words into one (e.g., House-cat, JumpedOverWall).",
"Next, we used 7 qualified judges to assess the funniness of each edited headline of each candidate.",
"We qualified all candidates whose mean funniness of edited headlines was above 0.8 or the task's average headline's funniness grade, whichever was higher.",
"In total, we obtained 100 qualified editors (57.5% of the candidates) who met our expectations in their ability to create funny headlines.",
"For our final dataset, we randomly sampled a total of 5,170 news headlines from our Reddit dataset, obtaining roughly an equal number of headlines from each news source.",
"We asked 3 editors to edit each headline and 5 judges to grade each edited headline.",
"Multiple micro-edits of the same headline allow us to compare different edits in terms of their effectiveness for generating humor, which we leave for future work.",
"To avoid turker exhaustion and decision fatigue, we performed the annotation task over a series of mini-batches launched at least 24 hours apart.",
"After each round of editing, we applied tools to",
"(i) check the edits for spelling mistakes which we manually corrected, and",
"(ii) to find and eliminate inserted tokens that were a concatenation of two or more words (e.g., selftanner).",
"To allow diversity in annotations, we applied a maximum HIT limit for annotators per batch.",
"After each batch was completed, we temporarily suspended those editors and judges who had done significantly more HITs than the rest, until the others caught up.",
"Lastly, as we obtained more and more annotated data, the editors started employing the same humor generation strategies (e.g., inserting words from a small vocabulary).",
"Consequently, judges saw repeated, identical edits, so the element of surprise was gone, and the judges were grading fewer humorous edited headlines as funny.",
"We addressed this by randomly sampling a set of editors and judges for each batch, obtaining new editors and judges over time, and removing those editors who had done a majority of the HITs but whose edits' average funniness grade fell below a threshold (=0.7) after they participated in a batch.",
"We also removed judges who repeatedly assigned very low funniness grades compared to the 4 other judges for the same edit.",
"The judges' agreement score based on was 0.20, showing modest agreement considering the factors above and others such as judges' personal preferences, bias, political stance, etc. which make consensus difficult.",
"Our Humicroedit dataset includes 15,095 unique edited headlines graded for funniness.",
"For annotating a single headline, we paid 10 US cents to editors and 2.5 US cents to judges.",
"There were also small costs for qualification.",
"Our total cost for obtaining the dataset is about USD 4,500 3 .",
"In this section, we analyze what types of micro-edits are effective at creating humor in our dataset, and we discuss our findings against the background of humor theories.",
"Figure 2 shows the histogram of the mean rating of each edited headline.",
"While the majority of the headlines achieve slight to moderate levels of humor, some of them appear inherently difficult to make humorous by micro-editing.",
"We noticed that editors encountered difficulty making headlines funny when the headlines had very negative themes, such as shootings, death, etc., and when they focused on information less likely to be known by a general audience (e.g., relatively unknown person, an insignificant political issue).",
"By manual inspection, we can gain insights into humor generation strategies employed by our editors, which we discuss with references to Table 1:",
"1. Using a word that forms a meaningful n-gram with the adjacent words (e.g., ID 5: Fire and Fury marshmallows; ID 6: Wall sesame street).",
"2. Connection between replaced word and the replacement: replacements that are semantically distant from (e.g., ID 1: Mexico therapist) or similar in pronunciation to (e.g., ID 10: ties pies) to the replaced word.",
"3. Using a word that makes a strong connection with an entity in the headline (e.g., ID 2: Trump and hair; ID 9: Obama and ears).",
"4. Creating sarcasm (ID: 11).",
"5. Belittling an entity or noun in the headline (e.g., ID 4: Hillary Clinton's turn fault; ID 9: Obama's years ears).",
"6. Tension suppression 4 : making a serious headline silly (e.g., IDs 5 and 9).",
"7. Inserting words that generate incongruity (common among most examples in Table 1).",
"8. Setup and punchline: let the headline build up towards an expected ending, and then change words towards the end to produce a coherent but surprising ending (e.g., IDs 3, 4 and 5).",
"Each micro-edit used a new replacement word to change the headline.",
"We clustered these replacement words using their GloVe word vectors (Pen-nington et al., 2014) and k -means clustering, with k = 20 .",
"Our manually-generated cluster names are shown in Table 2, where the clusters are ordered by the mean funniness score of the edited headlines whose replacement word is in the cluster.",
"For each cluster, we show the frequency with which the cluster was used for replacement words and frequent sample words from the cluster.",
"We can compare our automatically generated clusters with those of Westbury and Hollis (2018).",
"They manually created six clusters from the 200 funniest, single words in Engelthaler and Hills (2018) and then they added more words algorithmically.",
"Four of their six manually curated classes have direct correspondences to our automatically curated classes: sex , insults , bodily functions , and animals .",
"We did not find an equivalent to their profanity class, because we instructed our editors to avoid profanity.",
"There is also a party class that we do not have.",
"Overall, though, we find good agreement between their manually curated classes and some of our automatically generated clusters, leading us to believe that our clusters are meaningfully representative of humor generation strategies for our task.",
"Our rated headlines give us an opportunity to explore theories of humor in a systematic way.",
"We find, in general, that these theories are supported by our data.",
"Although some linguists argue that jokes should make economical use of words (Tomoioaga, 2015), Ritchie (2004) argues that jokes often have extra information, which can make a joke funnier.",
"4 This is also known as the relief theory of humor.",
"While humorous headlines form a special niche of jokes, we observed that longer headlines generally had higher humor potential.",
"Figure 3 shows that the population of our collected headlines from Reddit has a length distribution with a peak at 10 words and a long tail to the right.",
"The least funny edited headlines are the shortest, and the most funny are the longest.",
"This makes sense since very short headlines (4-5 words long) barely have enough contextual information to exploit to make a humorous edit, whereas headlines that have very rich contexts generally allow editors more flexibility to generate humor.",
"We note that Dunbar et al. (2016) also found that longer jokes are funnier, but that some jokes could be too complicated to be funny.",
"We can also examine the number and proportion of replaceable words and how these num-bers affect funniness.",
"In our dataset, the number of replaceable words ranged between 1 and 12, and funniness grades of micro-edits were significantly lower at the two extremes.",
"Editors apparently had difficulty generating humor when they were severely constrained in choosing a word to replace, or when they had too many choices for replacement.",
"However, edited headlines with a higher proportion of replaceable words were generally funnier, as shown in Table",
"3. This suggests that allowing editors more freedom in choosing words from the headline to edit results in better humor, or that high proportion of nouns, entities and verbs in the headline increases the chance of successful humor generation.",
"aim to violate an expectation, with the expectation normally set up by the joke itself.",
"We test incongruity by examining the relationship between the replacement words chosen by our editors and words in the original headline using cosine distances between their GloVe vectors.",
"If incongruity is important, we expect the replacement word to be distant from the headline's original words.",
"The results of this analysis are shown in Figure",
"4. Our approach involved computing the correlations between mean funniness scores of edited headlines and different GloVe distances between their replacement words and the other words in the headline serving as context.",
"In order to sharpen the analysis, we looked at subsets of headlines with extreme funniness scores.",
"For instance, the left-most data points in Figure 4 pertain only to those edited headlines that are in the top and bottom 5% of mean scores, which filters out headlines whose scores are in the middle.",
"The four curves higher on the plot show a relatively high correlation between humor scores Figure 4: Correlations of word vector based cosine distances with mean funniness at various dataset sizes.",
"and the cosine distance between the added word (add in legend) and the replaced word (repl in legend) or the other words in the headline (cntx in legend).",
"This suggests that incongruity leads to humor.",
"The three lower curves show there is not a strong correlation between humor and the distance between the original, replaced word the other words in the headline.",
"Finally, smaller, less humor-ambiguous data leads to stronger positive correlations, which suggests that higher incongruity leads to more quality humor.",
"We specifically studied whether the setup and punchline (Rochmawati, 2017) approach is used in funny headline generation, where the humor comes toward the end of the joke after a setup at the beginning.",
"This has been verified numerically for funny cartoon captions by Shahaf et al. (2015).",
"For our analysis, we construct the joke profile graph , shown in Figure",
"5. It shows the proportion of time the editors substituted a word at each relative word position bin compared to if they randomly chose a word to substitute.",
"Specifically, the red curve in the plot shows the proportion of replacement word locations if they were chosen randomly from those available in the editing task.",
"The green curve shows the proportion of word locations actually chosen by our editors, and the blue curve shows the difference.",
"We see that the blue line rises monotonically toward the end of the headline, meaning that editors tend to prefer replacing words later in the headline.",
"The plot also shows the average funniness grade as a dotted line as a function of the position of the replacement word.",
"It rises dramatically toward the end, showing that the funniest headlines were generally those with a replacement word toward the end.",
"Jokes often express our feelings of superiority over someone else (Morreall, 2016).",
"This can lead to frequent use of negative sentiment in jokes, as found by Mihalcea and Pulman (2007) in their analysis of humorous texts.",
"We find similar support in our clusters of replacement words in Table 2, where the clusters labeled insults , human deficiencies , and corrupted are all comprised of words that tend to denigrate other people, accounting for about 12% of the substitute words inserted by our editors.",
"In this section, we develop baseline classifiers to infer whether an edited headline is funny.",
"Given our dataset, there are three possible combinations of information that we can use to detect humor:",
"1. Using only the edited headline to predict whether it is funny or not.",
"2. Using only the original headline to predict its potential for funniness.",
"3. Using both original and edited headlines to jointly predict resulting funniness.",
"We address the first of these scenarios.",
"A classifier of this type could be used in a generate-and-test setting to create humorous headlines by trying different micro-edits.",
"To map the range of observed funniness grades (see Figure 2) to the funny/not-funny classes, we sort our full dataset in decreasing order of mean funniness 5 scores, and we take the top X % of the data from each end, at size intervals of 10%.",
"Note that each train/test split has an equal number of funny and not-funny headlines, establishing a 50% majority class baseline for accuracy.",
"We first trained a number of non-neural classifiers (logistic regression, random forest, SVM), using two feature sets: n-gram features (1, 2 and 3-grams combined) and features based on GloVe embeddings 6 as shown in Figure",
"4. We use 80% of the data for training and 20% for testing.",
"We optimized hyperparameters for accuracy on 10-fold cross validation on the training set.",
"The random forest classifier consistently performed best, so we only report its test set performance.",
"We also applied a neural baseline model using a single-layer bi-directional LSTM (Hochre-iter and Schmidhuber, 1997) with 16 hidden units, a dropout of 0.5, and GloVe pre-trained embedding of the sequence of words in the edited headline.",
"The training set was further split into 80%-20% splits for training and validation.",
"We used a mini-batch size of 32 with up to 25 epochs to train our model, optimizing for cross-entropy.",
"Table 5 shows the results obtained with our classifiers.",
"LSTM performs better than random forest with either n-gram (Rf-ngram) or GloVe features (Rf-Glv), achieving our best accuracy of 68.54% 5 We then sort by increasing standard deviations of grades to further rank headlines which tie on funniness, as lower standard deviation indicates stronger judge agreement.",
"6 This is our only feature set that uses the replaced word.",
"Bin 0-.4 .4-.8 .8-1.2 1.2-1.6 1.6-2.0 2.0-2.4 2.4-3.0 Acc.",
"71.5 61.1 52.0 61.0 68.6 68.3 76.2 Table 4: LSTM accuracy for distinct grade bins (upper-bounds are inclusive) on the test set for X = 40 .",
"using X = 10.",
"We suspect that the reason for the LSTM's superior performance is that it learns predictive interactions between the semantics of the headline's words (via the GloVe embeddings) that trigger the humor 7 .",
"Table 5 also shows that accuracy generally decreases as X increases, which is expected since higher X implies a smaller separation between funny and not-funny classes, making classification harder.",
"This is further corroborated by the observation that annotator-agreement scores (also shown in Table 5) decrease similarly with increasing X , indicating that funny and not-funny classes are easier to distinguish at the extreme ends of the dataset for both humans and machines alike.",
"We now investigate the test-set performance of the LSTM trained on the dataset obtained using X = 40 , the largest of our experimental datasets for which the class boundaries are distinct.",
"To analyze how well the classifier predicts the extremes in the test set, we obtained classification accuracy on distinct mean grade bins, presented in Table",
"4. The LSTM is able to distinguish the far extremes ( 0.4 and > 1.6) of the test set much more convincingly than the headlines with mean grades in the interval (0.4,1.6].",
"We found a slightly negative correlation between classification accuracy and standard deviation of grades.",
"Using additional judges for headlines with high standard deviation of grades would possibly improve annotator agreement and classification accuracy.",
"The LSTM achieved a significantly lower accuracy when an entity (61.8%) was replaced by the micro-edit compared to when a noun (64.5%) or a verb (65.5%) was replaced.",
"For 5 of the 7 bins in Table 4, the entity-replaced headline classification accuracy was lower than when the other two types were replaced, with the LSTM only achieving an accuracy of 47.9% on the (0.8,1.2] bin for entity-replaced headlines.",
"Although the classifier is never shown what has been replaced, it is bet-7 Using only words in the original headline produced accuracy in the mid-50% range, suggesting that the LSTM captures some humor impact of the replacement word as input and that some headlines have high potential for funniness.",
"ter at assessing humor when the replaced word is not an entity.",
"Our judges did have access to the replaced word, so we speculate this knowledge is important when the replaced word is an entity, especially when the entity triggers the judge's recollection of their world knowledge surrounding the entity, which the LSTM does not have.",
"Another potential reason is that the pretrained GloVe vectors are trained on web data (840 billion tokens obtained from Common Crawl) no more recent than 2014, which may not appropriately represent common entities in our 2017-2018 headline data.",
"Next, we qualitatively analyzed the LSTM's classification accuracy towards the two extremes of the dataset, some of which are shown in Table",
"1. Overall, the LSTM seems to suffer from a relatively high level of brittleness (possibly arising from the unusual writing style in headlines), where correct predictions could be obtained by very little modification to the text.",
"For example, changing Trump Trump's in ID 1 and deleting Essen-tial Politics: in ID 3 fix their classification errors.",
"Quotes in headlines also confused the LSTM (e.g., ID 4) since it is sometimes non-trivial to discern the speaker of the quote in a headline.",
"The classifier often had difficulty figuring out humorous replacements that involve commonsense knowledge (e.g., IDs 7 and 8).",
"Not surprisingly, it also failed to detect offensive replacements as in IDs 13 and 14, where the model probably recognized the incongruity and marked these as funny.",
"World knowledge and cultural references were other challenges (e.g., IDs 4, 9 and 14).",
"The LSTM was able to figure out some of the obvious negative sentiments which were common in unfunny headlines (e.g., ID 15), and it detected some humor patterns resulting from using words that form a common (but funny in the context) n-gram with the adjacent words (e.g., IDs 5 and 6).",
"Overall, our results show that there is a discernible signal separating funny and and not-funny headlines, even when using relatively shallow features that only take the content of the headline into account (modulo GloVe embeddings which are pretrained and hence contain semantic information gleaned from a larger corpus).",
"We expect that further work, which could examine deeper relationships to current events, historical context, and common sense knowledge, will improve the ability to distinguish funny from not-funny beyond the baselines provided here.",
"Previous research on automated humor can be divided into work on datasets, analysis, detection, and generation.",
"We will give examples of each.",
"Datasets are important for automated understanding of humor and for training models.",
"Starting at the simplest linguistic level, Engelthaler and Hills (2018) gathered almost 5,000 English words with funniness ratings for each one.",
"Fila-tova (2012) found 1,905 Amazon product reviews classified as either regular or ironic/sarcastic, and Khodak et al. (2017) collected 1.3 million sarcastic statements from Reddit and a much larger set of non-sarcastic statements.",
"Mihalcea and Strapparava (2005) collected about 24,000 one-liner jokes, Potash et al. (2017) shared a dataset to rank funny tweets for certain hashtags, and Miller et al. (2017) created a task for pun detection.",
"Humor analysis, as we have done, is aimed at understanding what makes something funny.",
"Building on the word-level corpus of Engelthaler and Hills (2018), Westbury and Hollis (2018) developed models to predict the funniness of 4,997 words.",
"Looking at multi-word, but still short text, Shahaf et al. (2015) analyzed cartoon captions in order to understand what made some funnier than others.",
"The work that is most similar to ours is from West and Horvitz (2019), who looked at pairs of funny and normal headlines.",
"While we employed editors to create funny headlines from serious ones, they went the other way using a Web-based game, producing and analyzing 2,801 modified versions of 1,191 satirical headlines.",
"Humor detection is characterized by determining if a given text is funny or not.",
"Examples include Khodak et al. (2017), detecting sarcasm in Reddit and Davidov et al. (2010) detecting sarcasm in Amazon product reviews and Twitter.",
"Barbieri and Saggion (2014) and Reyes et al. (2012) showed how to detect humorous tweets, and Kiddon and Brun (2011) detected double entendres.",
"Generating humor is a difficult problem.",
"Past work includes Binsted et al. (1997) producing punning riddles, funny acronyms from Stock and Strapparava (2003), jokes of the type I like my coffee like I like my war, cold by Petrovic and Matthews (2013), and filling in Mad Libs R (cid:13) by Hossain et al. (2017).",
"Our headline work has the potential to help in humor generation, moving away from jokes with a strong template to more free form.",
"We have developed and released Humicroedit, a carefully curated dataset of 15,095 headlines with simple edits designed to make them funny.",
"The dataset specifies the edits and also comes with five funniness scores for each edited headline.",
"The simple replacement edits facilitate focused analysis on what causes the humor.",
"We showed how our data supports, in a quantitative way, humor theories about length of joke, incongruity, superiority, and setup/punchline.",
"Finally, we developed baseline classifiers that show how well we can distinguish funny edits from non-funny edits using simple linguistic features.",
"We expect our dataset will facilitate research in humor and natural language processing.",
"Headlines present unique challenges and opportunities, because their humor is largely topical, depending on a knowledge of current events and prominent people and entities.",
"Future work with this data could include deeper features for assessing humor.",
"We expect that humor detection would likely improve using features that incorporate world knowledge and common sense.",
"Likewise, there may be something to learn by analyzing topical jokes from professional comedians.",
"With our single-word edits, this analysis becomes easier, because we are looking at the minimal change in a headline to make it funny.",
"Additionally, if we can better understand what makes a headline funny, we may be able to automatically generate funny headlines and even personalize them to particular readers.",
"We thank Daniel Gildea for carefully reviewing our paper and for his advice on the machine learning experiments.",
"We also thank the NAACL reviewers for their various helpful suggestions."
]
| [
"objective",
"method",
"method",
"method",
"result",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"objective",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"other",
"other"
]
|
[
"Question Answering (QA) naturally reduces to an entailment problem, namely, verifying whether some text entails the answer to a question.",
"However, for multi-hop QA tasks, which require reasoning with multiple sentences, it remains unclear how best to utilize entailment models pre-trained on large scale datasets such as SNLI, which are based on sentence pairs.",
"We introduce Multee, a general architecture that can effectively use entailment models for multi-hop QA tasks.",
"Multee uses",
"(i) a local module that helps locate important sentences, thereby avoiding distracting information, and",
"(ii) a global module that aggregates information by effectively incorporating importance weights.",
"Importantly, we show that both modules can use entailment functions pre-trained on a large scale NLI datasets.",
"We evaluate performance on MultiRC and OpenBookQA, two multihop QA datasets.",
"When using an entailment function pre-trained on NLI datasets, Multee outperforms QA models trained only on the target QA datasets and the OpenAI transformer models.",
"How can we effectively use textual entailment models for question answering?",
"Previous attempts at this have resulted in limited success (Harabagiu and Hickl, 2006; Sacaleanu et al., 2008; Clark et al., 2012).",
"With recent large scale entailment datasets (Bowman et al., 2015; Williams et al., 2018; Khot et al., 2018) pushing entailment models to high accuracies (Chen et al., 2017; Parikh et al., 2016; Wang et al., 2017), we re-visit this challenge and propose a novel method for repurposing neural entailment models for QA.",
"A key difficulty in using entailment models for QA turns out to be the mismatch between the inputs to the two tasks: large-scale entailment datasets are typically framed at a sentence Figure 1: An example illustrating the challenges in using sentence-level entailment model for multi-sentence reasoning needed for QA, and the high-level approach used in Multee.",
"level , whereas question answering requires verifying whether multiple sentences , taken together as a premise, entail a hypothesis.",
"There are two straightforward ways to address this mismatch: (1) aggregate independent entailment decisions over each premise sentence, or (2) make a single entailment decision after concatenating all premise sentences.",
"Neither approach is fully satisfactory.",
"To understand why, consider the set of premises in Figure 1, which entail the hypothesis H c .",
"Specifically, the combined information in P 1 and P 3 entails H c , which corresponds to the correct answer Cambridge .",
"On one hand, aggregating independent decisions will fail because no individual premise entails HC .",
"On the other hand, simply concatenating premises to form a single paragraph will fail because distracting information in P 2 and P 4 can muddle useful information in P 1 and P 3 .",
"An effective approach, therefore, must recognize relevant sentences (i.e., avoid distracting ones) and compose their sentence-level information.",
"Our solution to this challenge is based on the observation that a sentence-level entailment function can be re-purposed for both recognizing relevant sentences, and for computing sentence-level representations.",
"Both tasks require comparing information in a pair of texts, but the objectives of the comparison are different.",
"This means we can take an entailment function that is trained for basic entailment (i.e., comparing information in texts), and adapt it to work for both recognizing relevance and computing representations.",
"Thus, this architecture allows us to incorporate advances in entailment architectures and to leverage pre-trained models obtained using large scale entailment datasets.",
"To this end, we propose a general architecture that uses a (pre-trained) entailment function f e for multi-sentence QA.",
"Given a hypothesis statement H qa representing a candidate answer, and the set of premise sentences { P i } , our proposed architecture uses the same function f e for two components:",
"(a) a sentence relevance module that scores each P i based on its potential relevance to H qa , with the goal of weeding out distractors; and",
"(b) a relevance-weighted aggregator that combines entailment information from multiple P i .",
"Thus, we build effective entailment aware representations of larger contexts (i.e., multiple sentences) from those of small contexts (i.e., individual sentences).",
"The main strength of our approach is that, unlike standard attention mechanisms, the aggregator module uses the attention scores from the relevance module at multiple levels of abstractions (e.g., multiple layers of a neural network) within f e , using join operations that compose representations at each level.",
"We refer to this mu lti-l evel aggregation of te xtual e ntailment representations as Multee (pronounced multi).",
"Our implementation of Multee uses ESIM (Chen et al., 2017), a recent sentence-level entailment model, pre-trained on SNLI and MultiNLI datasets.",
"We demonstrate its effectiveness on two challenging multi-sentence reasoning datasets: MultiRC (Khashabi et al., 2018) and OpenBookQA (Mihaylov et al., 2018).",
"Multee using ELMo contextual embeddings (Peters et al., 2018) matches state-of-the-art results achieved with large transfomer-based models (Radford et al., 2018) that were trained on a sequence of large scale tasks (Sun et al., 2019).",
"Ablation studies demonstrate that both relevance scoring and multi-level aggregation are valuable, and that pre-training on large entailment corpora is particularly helpful for OpenBookQA.",
"This work makes three main contributions:",
"(i) A novel approach to use pre-trained entailment models for question answering.",
"(ii) A model that incorporates local (sentence level) entailment decisions with global (document level) entailment decisions to effectively aggregate information for multi-hop QA task.",
"(iii) An empirical evaluation that shows entailment based QA can achieve state-of-the-art performance on two challenging multihop QA datasets, OpenBookQA and MultiRC.",
"Non-extractive question answering can be seen as a textual entailment problem, where we verify whether a hypothesis constructed out of a question and a candidate answer is entailed by the knowledgea collection of sentences 1 in the source text.",
"The probability of an answer A , given a question Q , can be modeled as the probability of a set of premises { P i } entailing a hypothesis statement H qa constructed from Q and A : Pr[ A | Q, { P i } ] = Pr[ { P i } (cid:15) H qa ] (1) Here we use (cid:15) to denote textual entailment.",
"Given QA training data, we can then learn a model that approximates the entailment probability Pr[ { P i } (cid:15) H qa ] .",
"Can one build an effective QA model g e using an existing entailment model f e that has been pre-trained on a large-scale entailment dataset?",
"Figure 2 illustrates two straightforward ways of doing so, using f e as a black-box function: Figure 2: Black Box Applications of Textual Entailment Model for QA: Max and Concat models",
"(i) Aggregate Local Decisions (Max): Use f e to check how much each sentence P i entails H qa on its own, and aggregate these local entailment decisions, for instance, using a max operation.",
"1 This collection can be a sequence in the case of passage comprehension or a list of sentences, potentially from varied sources, in the case of QA over multiple documents.",
"(ii) Concatenate Premises (Concat): Combine the premise sentences in a sequence to form a single large passage P , and use f e to check whether this passage as a whole entails the hypothesis H qa , making a single entailment decision: g e ( { P i } , H qa ) = f e ( P, H qa ) (3) Our experiments reveal, however, that neither approach is an effective means of using pre-trained entailment models for QA (see Table 1).",
"For the example in Figure 1, Max model would not be able to consider information from P1 and P3 together.",
"Instead, it will pickup Silicon Valley as the answer since P2 is close to H s , Facebook was launched in Silicon Valley .",
"Similarly, Concat would also be muddled by distracting information in P2, which will weaken its confidence in answer Cambridge .",
"Therefore, without careful guidance, simple aggregation can easily add distracting information into the premise representation, causing entailment to fail.",
"This motivates the need for new, effective mechanisms for global reasoning over a collection of premises.",
"We propose a new entailment based QA model, Multee, with two components:",
"(i) a sentence relevance model , which learns to focus on the relevant sentences, and",
"(ii) a multi-layer aggregator , which uses an entailment model to obtain multiple layers of question-relevant representations for the premises and then composes them using the sentence-level scores from the relevance model.",
"Finding relevant sentences is a form of local entailment between each premise and the answer hypothesis, whereas aggregating question-relevant representations is a form of global entailment between all premises and the answer hypothesis.",
"This means, we can effectively re-purpose the same pre-trained entailment function f e for both components.",
"Figure 3 shows an architecture that uses multiple copies of f e to achieve this.",
"The goal of this module is to identify sentences in the paragraph that are important for the given hypothesis.",
"As shown in Figure 1, this helps the global module aggregate relevant content while reducing the chances of picking up distracting information.",
"A sentence is considered important if it contains information that is relevant to answering the question.",
"In other words, the importance of a sentence can be modeled as its entailment probability, i.e., how well the sentence by itself supports the answer hypothesis.",
"We can use a pre-trained entailment model to obtain this.",
"The importance i of a sentence P i can be modeled as: i = f e ( P i , H qa ) (4) This can be further improved by modeling the sentence with its surrounding context.",
"This is especially useful for passage-level QA, where the neighboring sentences provide useful context.",
"Given a premise sentence P i , the entailment function f e computes a single hypothesis-aware representation x i containing information in the premise that is relevant to entailing the answer hypothesis H qa .",
"This is essentially the output of last layer of neural function f e before projecting it to logits.",
"We denote this part of f e that outputs last vector representation as f e v and full f e that gives entailment probability as f e p .",
"We use these hypothesis-aware x i vectors for each sentence as inputs to a BiLSTM producing a contextual representation c i for each premise sentence P i , which is then fed to a feedforward layer that predicts the sentence-level importance as: i = softmax( WT c i + b ) (5) The components for generating x i are part of the original entailment function f e and can be pre-trained on the entailment dataset.",
"The BiLSTM to compute c i and the parameters W and b for computing i are not part of the original entailment function and thus can only be trained on the target QA task.",
"We perform this additional contextual-ization only when sentences form contiguous text.",
"Additionally, for datasets such as MultiRC, where the relevant sentences have been marked, we introduce a loss term based on the true relevance label and predicted weights, i .",
"The goal of this module is to aggregate representations from important sentences in order to make a global entailment decision.",
"There are two key questions to answer: (1) how to combine the sentence-level information into a paragraph-level representation and (2) how to use the sentence relevance weights { i } .",
"Most entailment models include many layers that transform the input premise and the hypothesis.",
"A typical neural entailment stack includes enFigure 3: Multee overview: Multee includes two main components, a relevance module, and a multi-layer aggregator module.",
"Both modules use pre-trained entailment functions ( f e p and f e v ).",
"f e p is the full entailment model that gives entailment probability, and f e v is part of it excluding last projection to logits and softmax.",
"The multi-level aggregator uses multiple copies of entailment function f e v , one for each sub-aggregator performing a join at a different layer.",
"Right part of figure zooms in on one such sub-aggregator joining at layer (cid:96) .",
"coding layers that independently generate contextual representations of the premise and the hypothesis, followed by some cross-attention layer that yields relationships between the premise and hypothesis words, and additional layers that use this cross-attention to generate premise attended representations of the hypothesis and vice versa.",
"The final layers are classification layers which determine entailment based on the representations from the previous layer.",
"Each layer thus generates intermediate representation that captures different type of entailment related information.",
"This presents us with a choice of multiple points for aggregation.",
"Figure 3 illustrates our approach for aggregating sentence-level representations into a single paragraph level representation.",
"For each premise P i in the passage, we first process the pair ( P i , H qa ) through the entailment stack ( f e v ) resulting in a set of intermediate representations { X i(cid:96) } for each layer (cid:96) .",
"We can choose a particular layer (cid:96) to be the aggregation layer.",
"We then compute a weighted combination of the sentence-level outputs at this layer { X i(cid:96) } to produce a passage-level representation Y (cid:96) .",
"The weights for the sentences are obtained from the Sentence Relevance model.",
"We refer to this as a join operation as shown in the Figure 3.",
"Layers of the entailment function f e v that are below the join operate at a sentence-level, while layers above the join now operate over paragraph-wise representations.",
"The final layer (i.e. the top most layer) of f e v thus gives us a vector representation of the entire passage.",
"This type of join can be applied at multiple layers resulting in paragraph vectors that correspond to multiple levels of aggregation.",
"We concatenate these paragraph vectors and pass them through a feedforward network projecting them down to logits, that can be used to compute the final passage wide entailment probabilities.",
"Given a set of sentence-wise outputs from the lower layer { X i } and the corresponding sentence-relevance weights { i } , the join operation combines them into a single passage-level representation Y , which can be directly consumed by the layer above it in the stack.",
"The specifics of the join operation depends on the shape of the outputs from the lower layer, and the shape of the inputs expected by the layer after the join.",
"Here we show four possible join operations, one for each layer.",
"The ones defined for Score Layer and Embedding Layer can be reduced to black-box baselines, while we use the other two in Multee.",
"Score Layer : The score layer outputs the entailment probabilities { s i } for each premise to hypothesis independently, which need to be joined to one entailment score.",
"One way to do this is to simply take a weighted maximum of the individual entailment probabilities.",
"So we have X i = s i i and Y = max i (cid:0) i s i (cid:1) .",
"This reduces to black-box Max model (Equation 2) when using { i } = 1 .",
"Embedding Layer : The embedding layer outputs a sequence of embedded vectors of [ P i ] 2 one sequence for each premise P i and another sequence of embedded vectors [ H qa ] for the answer hypothesis H qa .",
"A join operation in this case scales each embedded vector in a premise by its relevance weight and concatenates them together to 2 We use [ . ] to denote a sequence and .",
"to denote a vector form [ P ] .",
"H qa is passed through unchanged.",
"X i = ([ P i ] , [ H qa ]) i [ P ] = [ 1 [ P 1 ]; 2 [ P 2 ]; . . . ; n [ P n ]] Y = (cid:0) [ P ] , [ H qa ] (cid:1) For non-contextual word embeddings, this reduces to Concat Premises (Eq. 3) when { i } = 1 .",
"Final Layer (FL) : The final layer in the entailment stack usually outputs a single vector h which is then used in a linear layer and softmax to produce label probabilities.",
"The join operation here is a weighted sum of the premise-level vectors.",
"So we have X i = h i i and Y = (cid:80) i i h i .",
"This is similar to a standard attention mechanism, where attended representation is computed by summing the scaled representations.",
"However, such scaled addition is not possible when the outputs from lower layers are not of the same shapes, as in the following case.",
"Cross Attention Layer (CA) : Cross-attention is a standard component of many entailment and reading comprehension models.",
"This layer produces three outputs:",
"(i) For each premise P i , we get a hypothesis to premise cross attention matrix M hp i with shape (h p i ), where h is the number of hypothesis tokens, and p i is the number of tokens in premise P i ;",
"(ii) for each premise P i , we get a sequence of vectors [ P i ] that corresponds to the token sequence of the premise P i ; and",
"(iii) for the hypothesis, we get a single sequence of vectors [ H qa ] that corresponds to its token sequence.",
"M hp i attention matrix was generated by cross attention from [ H qa ] to [ P i ] .",
"The join operation in this layer produces a cross attention matrix that spans the entire passage, i.e., has shape ( h p ), where p is the total number of tokens across all premises.",
"The operation first scales the cross-attention matrices by the sentence-relevance weights { i } in order to tone down the influence of distracting/irrelevant sentences, and then re-normalizes the final matrix: X i = ( M hp i , [ P i ] , [ H qa ]) i M hp = (cid:2) i M hp 1 ; . . . ; i M hp n (cid:3) M hpij = M hpij (cid:80) k M hpik [ P ] = (cid:2) [ P 1 ]; [ P 2 ]; ... ; [ P n ] (cid:3) Y = ( M hp , P , H qa ) where M hpij is i th row and j th column of M hp .",
"Multee's multi-layer aggregator module uses join operations at two levels: Cross Attention Layer (CA) and Final Layer (FL) .",
"The two corresponding aggregators share parameters up till the lower of the two join layers (CA in this case), where they both operate at the sentence level.",
"Above this layer, one aggregator switches to operating at the paragraph level, where it has its own, unshared parameters.",
"In general, if Multee were to aggregate at layers (cid:96) i 1 , (cid:96) i 2 , . . . , (cid:96) ik , then the aggregators with joins at layers (cid:96) and (cid:96) (cid:48) respectively could share parameters at layers 1 , . . . , min { (cid:96), (cid:96) (cid:48) } .",
"Multee uses the ESIM stack as the entailment function pre-trained on SNLI and MultiNLI for both the relevance module and for the multi-layer aggregator module.",
"It uses aggregation at two-levels, one at the cross-attention level (CA) and one at the final layer (FL).",
"All uses of the entailment function in Multee are initialized with the same pre-trained entailment model weights.",
"The embedding layer and the BiLSTM layer process paragraph-level contexts but processing at higher layers are done either at premise level or paragraph-level depending on where the join operation is performed.",
"Datasets: We evaluate Multee on two datasets, OpenBookQA (Mihaylov et al., 2018) and MultiRC (Khashabi et al., 2018), both of which are specifically designed to test reasoning over multiple sentences.",
"MultiRC is paragraph-based multiple-choice QA dataset derived from varying topics where the questions are answerable based on information from the paragraph.",
"In MultiRC, each question can have more than one correct answer choice, and so it can be viewed as a binary classification task (one prediction per answer choice), with 4,848 / 4,583 examples in Dev/Test sets.",
"OpenBookQA, on the other hand, has multiple-choice science questions with exactly one correct answer choice and no associated paragraph.",
"As a result, this dataset requires the relevant facts to be retrieved from auxiliary resources including the open book of facts released with the paper and other sources such as WordNet (Miller, 1995) and ConceptNet (Speer and Havasi, 2012).",
"It contains 500 questions in the Dev and Test sets.",
"Preprocessing: For each question and answer choice, we create an answer hypothesis statement using a modified version of the script used in SciTail (Khot et al., 2018) construction.",
"We wrote a handful of rules to better convert the question and answer to a hypothesis.",
"We also mark the span of answer in the hypothesis with special begin and end tokens, @@@answer and answer@@@ respectively 3 .",
"For MultiRC, we also apply an off-the-shelf coreference resolution model 4 and replace the mentions when they resolve to pronouns occurring in a different sentence 5 .",
"For OpenBookQA, we use the exact same retrieval as released by the authors of OpenBookQA 6 and use the OpenBook and WordNet as the knowledge source with top 5 sentences retrieved per query.",
"Training Multee: For OpenBookQA we use cross entropy loss for labels corresponding to 4 answer choices.",
"For MultiRC, we use binary cross entropy loss for each answer-choice separately since in MultiRC each question can have more than one correct answer choice.",
"The entailment 3 Answer span marking gave substantial gains for all entailment based models including the baselines.",
"4 https://github.com/huggingface/neuralcoref 5 It is hard to learn co-reference, as these target datasets are too small to learn this in an end-to-end fashion.",
"6 https://github.com/allenai/OpenBookQA components are pre-trained on sentence-level entailment tasks and then fine-tuned as part of end-to-end QA training.",
"The MultiRC dataset includes sentence-level relevance labels.",
"We supervise the Sentence Relevance module with a binary cross entropy loss for predicting these relevance labels when available.",
"We used PyTorch (Paszke et al., 2017) and AllenNLP to implement our models and ran them on Beaker 7 .",
"For pre-training we use the same hyper-parameters of ESIM(Chen et al., 2017) as available in implementation of AllenNLP (Gardner et al., 2017) and fine-tune the model parameters.",
"We do not perform any hyper-parameter tuning for any of our models.",
"We fine-tune all layers in ESIM except for the embedding layer.",
"Models Compared: We experiment with Glove (Pennington et al., 2014) and ELMo (Peters et al., 2018) embeddings for Multee and compare with following three types of systems: (A) Baselines using entailment as a black-box We use the pre-trained entailment model as a black-box in two ways: concatenate premises ( Concat ) and aggregate sentence level decisions with a max operation ( Max ).",
"Both models were also pre-trained on SNLI and MultiNLI datasets and fine-tuned on the target QA datasets with same 7 https://beaker.org/ pre-processing.",
"(B) Previously published results: For MultiRC, there are two published baselines: IR (Information Retrieval) and LR (Logistic Regression).",
"These simple models turn out to be strong baselines on this relatively smaller sized dataset.",
"For OpenBookQA, we report published baselines from (Mihaylov et al., 2018): Question Match with ELMo (QM + ELMo), Question to Answer ESIM with ELMo (ESIM + ELMo) and their best result with the Knowledge Enhanced Reader (KER).",
"(C) Large Transformer based models: We compare with OpenAI-Transformer (OFT), pre-trained on large-scale language modeling task and fine-tuned on respective datasets.",
"A contemporaneous work, 8 which published these transformer results, also fine-tuned this transformer further on a large scale reading comprehension dataset, RACE (Lai et al., 2017), before fine-tuning on the target QA datasets with their method, Reading Strategies .",
"Table 1 summarizes the performance of all models.",
"Multee outperforms the black-box entailment baselines (Concat and Max) that were pre-trained on the same data, previously published baselines, OpenAI transformer models.",
"We note that the 95% confidence intervals around baseline accuracy for OpenBookQA and MultiRC are 4.3% and 1.3%, respectively.",
"On OpenBookQA test set, Multee with GloVe outperforms ensemble version of OpenAI transformer by 3.0 points in accuracy.",
"It also outperforms single model version of Reading Strategies system and is comparable to their ensemble version.",
"On MultiRC dev set, Multee with ELMo outperforms ensemble version of OpenAI transformer by 1.9 points in F1a, 2.7 in F1m and 6.3 in EM.",
"It also outperforms single model version of Reading Strategies system and is comparable to their ensemble version.",
"Recall that the Reading Strategies results are reported with an additional fine-tuning on another larger QA dataset, RACE (Lai et al., 2017) aside from the target QA datasets we use here.",
"While ELMo contextual embeddings helped in MultiRC, it did not help OpenBookQA.",
"We believe this is in part due to the mismatch between our ELMo training setup where all sentences are treated as a single sequence, which, while true in 8 Published on arXiv on Oct 31, 2018 (Sun et al., 2019).",
"In general, gains from Multee are more prominent in OpenBookQA than in MultiRC.",
"We hypothesize that a key contributor to this difference is distraction being a lesser challenge in MultiRC, where premise sentences come from a single paragraph whose other sentences are often irrelevant and rarely distract towards incorrect answers.",
"OpenBookQA has a noisier set of sentences, since an equal number of sentences is retrieved for the correct and each incorrect answer choice.",
"Relevance Model Ablation.",
"Table 2 shows the utility of the relevance module.",
"We use the same setting as the full model (aggregation at Cross Attention (CA) and the Final Layer (FL)).",
"As shown in the table, using the relevance module weights ( (cid:51) i ) leads to improved accuracy on both datasets (substantially so in OpenBookQA) as compared to ignoring the module, i.e., setting all weights to 1 ( (cid:55) i ).",
"In MultiRC, we show that the additional supervision for the relevance module leads to even further improvements in score.",
"Multi-Level Aggregator Ablation.",
"Multee performs aggregation at two levels: Cross Attention Layer (CA) and Final Layer (FL).",
"We denote this by CA+FL.",
"To show that multi-level aggregation is better than individual aggregations, we train OpenBookQA MultiRC Accuracy F1a | F1m Snli + MultiNli 55.8 69.9 | 73.6 Snli 50.4 69.3 | 73.3 Scratch 42.2 68.3 | 72.6 Table 4: Effect (on test data) of pre-training the entailment model used in Multee.",
"models with aggregation at only FL and at only CA.",
"Table 3 shows that multi-layer aggregation is better than CA or FL alone on both the datasets.",
"One of the benefits of using entailment based components in a QA model is that we can pre-train them on large scale entailment datasets and fine-tune them as part of the QA model.",
"Table 4 shows that such pre-training is valuable.",
"The model trained from scratch is substantially worse in the case of OpenBookQA, highlighting the benefits of our entailment-based QA model.",
"Multee benefits come from two sources:",
"(i) Re-purposing of entailment function for multi-sentence question answering, and",
"(ii) transferring from a large-scale entailment task.",
"In the case of OpenBookQA, both are helpful.",
"For MultiRC, only the first is a significant contributor.",
"Table 5 shows that re-purposing was a bigger factor for MultiRC, since Max and Concat models do not work well when trained from scratch.",
"Relevance Loss.",
"The sentence-level relevance model provides a way to dig deeper into the overall QA model's behavior.",
"When sentence-level supervision is available, as in the case of MultiRC, we can analyze the impact of different auxiliary losses for the relevance module.",
"Table 6 shows the QA performance with different relevance losses, and Figure 5 shows a visualization of attention F1a precision F1a recall IR Sum Loss 59.5 68.5 BCE Loss 58.0 83.2 Table 6: F1a precision and recall on MultiRC Dev with 2 kinds of relevance losses.",
"scores for a question in MultiRC.",
"Overall, we find that two types of behaviors emerge from different loss functions.",
"For instance, trying to minimize the sum of attention probability mass on irrelevant sentences i.e. (cid:80) i i (1 y i ) , called IR Sum Loss , causes the attention scores to become peaky i.e, high for one or two sentences, and close to zero for others.",
"This leads to higher precision but at significantly lower recall for the QA system, as it now uses information from fewer but highly relevant sentences.",
"Binary cross entropy loss (BCE) allows the model to attend to more relevant sentences thereby increasing recall without too much drop in precision.",
"Failure Cases.",
"As Figure 5 shows, our model with BCE loss tends to distribute the attention, especially to sentences close to the relevant ones.",
"We hypothesize that the model is learning to use the contextualized BiLSTM representations to incorporate information from neighboring sentences, which is useful for this task and for passage understanding in general.",
"For example, more than 60% of Dev questions in MultiRC have at least one adjacent relevant sentence pair.",
"Figure 4a illustrates this behavior.",
"On the other hand, if the relevant sentences are far apart, the model finds it difficult to handle such long-range cross sentence dependencies in its contextualized representations.",
"As a result, it ends up focusing attention on the most relevant sentence, missing out on other relevant sentences (Figure 4b).",
"When these unattended but relevant sentences contain the answer, the model fails.",
"Entailment systems have been applied to question-answering before but have only had limited success (Harabagiu and Hickl, 2006; Sacaleanu et al., 2008; Clark et al., 2012) in part because of the small size of the early entailment datasets (Da-gan et al., 2006, 2013).",
"Recent large scale entailment datasets such as SNLI (Bowman et al.,",
"(a) Positive Example",
"(b) Negative Example Figure 4: Success and failure examples of Multee from MultiRC.",
"R : annotated relevant sentences.",
"Green/yellow: high/low predicted relevance.",
"2015) and MultiNLI (Williams et al., 2018) have led to many new powerful neural entailment models that are not only more effective, but also produce better representations of sentences (Conneau et al., 2017).",
"Models such as Decomposable Attention (Parikh et al., 2016) and ESIM (Chen et al., 2017), on the other hand, find alignments between the hypothesis and premise words through cross-attention.",
"However, these improvements in entailment models have not yet translated to improvements in end tasks such as question answering.",
"SciTail (Khot et al., 2018) was created from a science QA task to push for models with a direct impact on QA.",
"Entailment models trained on this dataset show minor improvements on the Aristo Reasoning Challenge (Clark et al., 2018; Musa et al., 2018).",
"However, these QA systems make independent predictions and can not combine information from multiple supporting sentences.",
"Combining information from multiple sentences is a key problem in language understanding.",
"Recent Reading comprehension datasets (Welbl et al., 2018; Khashabi et al., 2018; Yang et al., 2018; Mihaylov et al., 2018) explicitly evaluate a system's ability to perform such reasoning through questions that need information from multiple sentences in a passage.",
"Most approaches on these tasks perform simple attention-based aggregation (Mihaylov et al., 2018; Song et al., 2018; Cao et al., 2018) and do not exploit the entailment models trained on large scale datasets.",
"Using entailment for question answering has seen limited success.",
"Neural entailment models are designed and trained on tasks defined over sentence pairs, whereas QA often requires reasoning over longer texts spanning multiple sentences.",
"We propose Multee, a novel QA model that addresses this mismatch.",
"It uses an existing entailment model to both focus on relevant sentences and aggregate information from these sentences.",
"Results on two challenging QA datasets, as well as our ablation study, indicate that entailment based QA can achieve state-of-the-art performance and is a promising direction for further research.",
"This work is supported in part by the National Science Foundation under Grant IIS-1815358.",
"The computations on beaker.org were supported in part by credits from Google Cloud."
]
| [
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"result",
"objective",
"objective",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"other",
"other"
]
|
[
"Voumya Vanyal 2 Yang 1",
"Nnowledge Jraph Fompletion (NJF) aims at automatically predicting missing links for large-scale knowledge graphs.",
"D vast number of state-of-the-art NJF techniques have got published at top conferences in several research fields, including data mining, machine learning, and natural language processing.",
"Kowever, we notice that several recent papers report very high performance, which largely outperforms previous state-of-the-art methods.",
"Ln this paper, we find that this can be attributed to the inappropriate evaluation protocol used by them and propose a simple evaluation protocol to address this problem.",
"Whe proposed protocol is robust to handle bias in the model, which can substantially affect the final results.",
"Ze conduct extensive experiments and report performance of several existing methods using our protocol.",
"Whe reproducible code has been made publicly available.",
"Ueal-world knowledge bases are usually expressed as multi-relational graphs, which are collections of factual triplets, where each triplet ( h, r, t ) represents a relation r between a head entity h and a tail entity t .",
"Kowever, real-word knowledge bases are usually incomplete (Gong et al., 2014), which motivates the research of automatically predicting missing links.",
"D popular approach for Nnowledge Jraph Fompletion (NJF) is to embed entities and relations into continuous vector or matrix space, and use a well-designed score function f ( h, r, t ) to measure the plausibility of the triplet ( h, r, t ) .",
"Post of the previous methods use translation distance based (Eordes et al., 2013; Zang et al., 2014; Xiao et al., 2016; Vun et al., 2019) and semantic matching based (Qickel and Wresp, 2013; Yang et al., 2014; Qickel et al., 2016; Wrouillon et al., 2016; Hqual contribution. Oiu et al., 2017) scoring functions which are easy to analyze.",
"Kowever, recently, a vast number of neural network-based methods have been proposed.",
"Whey have complex score functions which utilize black-box neural networks including Fonvolutional Qeural Qetworks (FQQs) (Gettmers et al., 2018; Qguyen et al., 2018), Uecurrent Qeural Qetworks (UQQs) (Oin et al., 2015; Zang et al., 2018), Jraph Qeural Qetworks (JQQs) (Vchlichtkrull et al., 2017; Vhang et al., 2019), and Fapsule Qetworks (Qguyen et al., 2019).",
"Zhile some of them report state-of-the-art performance on several benchmark datasets that are competitive to previous embedding-based approaches, a considerable portion of recent neural network-based papers report very high performance gains which are not consistent across different datasets.",
"Poreover, most of these unusual behaviors are not at all analyzed.",
"Vuch a pattern has become prominent and is misleading the whole community.",
"Ln this paper, we investigate this problem and find that this is attributed to the inappropriate evaluation protocol used by these approaches.",
"Ze demonstrate that their evaluation protocol gives a perfect score to a model that always outputs a constant irrespective of the input.",
"Whis has lead to artificial inflation of performance of several models.",
"Ior this, we find a simple evaluation protocol that creates a fair comparison environment for all types of score functions.",
"Ze conduct extensive experiments to re-examine some recent methods and fairly compare them with existing approaches.",
"Whe source code of the paper has been publicly available at http://github.com/svjan5/kg-reeval .",
"Wable 1: Fhanges in PUU for different methods on IE15k-237 and ZQ18UU datasets with respect to FonvH show inconsistent improvements.",
"note the set of entities and relations and T = { ( h, r, t ) | h, t E , r R} is the set of triplets (facts), the task of Nnowledge Jraph Fompletion (NJF) involves inferring missing facts based on the known facts.",
"Post the existing methods de-fine an embedding for each entity and relation in G , i.e., e h , e r h E , r R and a score function f ( h, r, t ) : E R E R which assigns a high score for valid triplets than the invalid ones.",
"NJF Hvaluation Guring NJF evaluation, for predicting t in a given triplet ( h, r, t ) , a NJF model scores all the triplets in the set T (cid:48) = { ( h, r, t (cid:48) ) | t (cid:48) E} .",
"Eased on the score, the model first sorts all the triplets and subsequently finds the rank of the valid triplet ( h, r, t ) in the list.",
"Ln a more relaxed setting called filtered setting , all the known correct triplets (from train, valid, and test triplets) are removed from T (cid:48) except the one being evaluated (Eordes et al., 2013).",
"Whe triplets in T (cid:48) { t } are called negative samples.",
"Uelated Zork Srior to our work, Nadlec et al. (2017) cast doubt on the claim that performance improvement of several models is due to architectural changes as opposed to hyperparameter tuning or different training objective.",
"Ln our work, we raise similar concerns but through a different angle by highlighting issues with the evaluation procedure used by several recent methods.",
"Fhandrahas et al. (2018) analyze the geometry of NJ embeddings and its correlation with task performance while Qayyeri et al. (2019) examine the effect of different loss functions on performance.",
"Kowever, their analysis is restricted to non-neural approaches.",
"Iigure 1: Vorted score distribution of FonvNE for an example valid triplet and its negative samples.",
"Whe score is normalized into [0 , 1] (lower the better).",
"Got-ted line indicate the score for the valid triplet.",
"Ze find that in this example, around 58.5% negative sampled triplets obtain the exact same score as the valid triplet.",
"Ln this section, we first describe our observations and concerns and then investigate the reason behind.",
"Veveral recently proposed methods report high performance gains on a particular dataset.",
"Kow-ever, their performance on another dataset is not consistently improved.",
"Ln Wable 1, we report change in PUU score on IE15k-237 (Woutanova and Fhen, 2015) and ZQ18UU (Gettmers et al., 2018) datasets with respect to FonvH (Gettmers et al., 2018) for different methods including UotatH (Vun et al., 2019), WuckHU (Ealaevic et al., 2019), FonvNE (Qguyen et al., 2018), FapsH (Qguyen et al., 2019), NEDW (Qathani et al., 2019), and WransJate (Yuan et al., 2019).",
"Rverall, we find that for a few recent QQ based methods, there are inconsistent gains on these two datasets.",
"Ior instance, in FonvNE, there is a 21.8% improvement over FonvH on IE15k-237, but a degradation of 42.3% on ZQ18UU, which is surprising given the method is claimed to be better than FonvH.",
"Rn the other hand, methods like UotatH and WuckHU give consistent improvement across both benchmark datasets.",
"Vcore distribution Zhen evaluating NJF methods, for a given triplet ( h, r, t ) , the ranking of t given h and r is computed by scoring all the triplets of form { ( h, r, t (cid:48) ) | t (cid:48) E} , where E is the set of",
"Iigure 2: Slot shows the frequency of the number of negative triplets with the same assigned score as the valid triplet during evaluation on IE15k-237 dataset.",
"Whe results show that for methods like FonvNE and FapsH, a large number of negative triplets get the same score as the valid triplets whereas for methods like FonvH such occurrences are rare.",
"all entities.",
"Rn investing a few recent QQ based approaches, we find that they have unusual score distribution, where some negatively sampled triplets have the same score as the valid triplet.",
"Dn instance of IE15k-237 dataset is presented in Iigure",
"1. Kere, out of 14,541 negatively sampled triplets, 8,520 have the exact same score as the valid triplet.",
"Vtatistics on the whole dataset Ln Iigure 2, we report the total number of triplets with the exact same score over the entire dataset for FonvNE (Qguyen et al., 2018) and FapsH (Qguyen et al., 2019) and compare them with FonvH (Gettmers et al., 2018) which does not suffer from this issue.",
"Ze find that both FonvNE and FapsH have multiple occurrences of such unusual score distribution.",
"Rn average, FonvNE and FapsH have 125 and 197 entities with exactly same score as the valid triplet over the entire evaluation dataset of IE15k-237, whereas FonvH has around 0.002, which is almost negligible.",
"Ln Vection 4, we demonstrate how this leads to massive performance gain for methods like FonvNE and FapsH.",
"Uoot of the problem Iurther, we investigate the cause behind such unusual score distribution.",
"Ln Iigure 3, we plot the ratio of neurons becoming zero after UeOX activation for the valid triplets vs. their normalized frequency on IE15k-237 dataset.",
"Whe results show that in FonvNE and FapsH, a large fraction (87.3% and 92.2% respectively) of the neurons become zeros after applying UeOX 0.0 0.2 0.4 0.6 0.8 1.0 Ratio of Neurons becoming zero 0 5 10 15 20 25 30 N o r m a li z e d F r e q u e n c y ConvKB CapsE ConvE Iigure 3: Gistribution of ratio of neurons becoming zero after UeOX activation in different methods for the valid triplets in IE15k-237 dataset.",
"Ze find that for FonvNE and FapsH an unusually large fraction of neurons become zero after UeOX activation whereas the does not hold with FonvH.",
"activation.",
"Kowever, with FonvH, this count is substantially less (around 41.1%).",
"Eecause of the zeroing of nearly all neurons (at least 14.2% for FonvNE and 22.0% for FapsH), the representation of several triplets become very similar during for-ward pass and thus leading to obtaining the exact same score.",
"Ln this section, we present different evaluation protocols that can be adopted in knowledge graph completion.",
"Ze further show that inappropriate evaluation protocol is the key reason behind the unusual behavior of some recent QQ-based methods.",
"Kow to deal with the same scoresB Dn essential aspect of the evaluation method is to decide how to break ties for triplets with the same score.",
"Pore concretely, while scoring the candidate set T (cid:48) , if there are multiple triplets with the same score from the model, one should decide which triplet to pick.",
"Dssuming that the triplets are sorted in a stable manner, we design a general evaluation scheme for NJF, which consists of the following three different protocols: WRS : Ln this setting, the correct triplet is inserted in the beginning of T (cid:48) .",
"ERWWRP : Kere, the correct triplet is inserted at the end of T (cid:48) .",
"UDQGRP : Ln this, the correct triplet is placed randomly in T (cid:48) .",
"Wable 2: Hffect of different evaluation protocols on recent NJ embedding methods on IE15k-237 dataset.",
"Ior WRS and ERWWRP , we report changes in performance with respect to UDQGRP protocol.",
"Slease refer to Vection 5.4 for details.",
": NEDW has test data leakage in their original implementation, which is fixed in our experiments.",
"Giscussion Eased on the definition of the three evaluation protocols, it is clear that WRS evaluation protocol does not evaluate the model rigorously.",
"Lt gives the models that have a bias to provide the same score for different triplets, an inappropriate advantage.",
"Rn the other hand, ERWWRP evaluation protocol can be unfair to the model during inference time because it penalizes the model for giving the same score to multiple triplets, i.e., if many triplets have the same score as the correct triple, the correct triplet gets the least rank possible.",
"Ds a result, UDQGRP is the best evaluation technique which is both rigorous and fair to the model.",
"Lt is in line with the situation we meet in the real world: given several same scored candidates, the only option is to select one of them randomly.",
"Kence, we propose to use UDQGRP evaluation scheme for all model performance comparisons.",
"Ln this section, we conduct extensive experiments using our proposed evaluation protocols and make a fair comparison for several existing methods.",
"Ze evaluate the proposed protocols on IE15k-237 (Woutanova and Fhen, 2015) dataset 1 , which is a subset of IE15k (Eordes et al., 2013) with inverse relations deleted to prevent direct inference of test triples from training.",
"Ln our experiments, we categorize existing NJF methods into the following two categories:",
"Qon-Dffected: Whis includes methods which give consistent performance under different evaluation protocols.",
"Ior experiments in this paper, we consider three such methods FonvH, UotatH, and WuckHU.",
"Dffected: Whis category consists of recently proposed neural-network based methods whose performance is affected by different evaluation protocols.",
"FonvNE, FapsH, WransJate 2 , and NEDW are methods in this category.",
"Ior all the methods, we use the code and the hyper-parameters provided by the authors in their respective papers.",
"Podel performance is evaluated by Pean Ueciprocal Uank (PUU), Pean Uank (PU) and KitsC10 (KC10) on the filtered setting (Eor-des et al., 2013).",
"Wo analyze the effect of different evaluation protocols described in Vection 4, we study the performance variation of the models listed in Vection 5.2.",
"Ze study the effect of using WRS and ERWWRP protocols and compare them to UDQGRP protocol.",
"Ln their original paper, FonvH, UotatH, and WuckHU use a strategy similar to the proposed UDQGRP protocol, while FonvNE, FapsH, and NEDW use WRS protocol.",
"Ze also study the random error in UDQGRP protocol with multiple runs, where we report the average and standard deviation on 5 runs with different random seeds.",
"Whe results are presented in Wables",
"2. 2 Vince we cannot find any open-source implementation of WransJate, we leave the re-evaluation of WransJate as our future work.",
"Ze observe that for Qon-Dffected methods like FonvH, UotatH, and WuckHU, the performance remains consistent across different evaluation protocols.",
"Kowever, with Dffected methods, there is a considerable variation in performance.",
"Vpecifi-cally, we can observe that these models perform best when evaluated using WRS and worst when evaluated using ERWWRP 3 .",
"Iinally, we find that the proposed UDQGRP protocol is very robust to different random seeds.",
"Dlthough the theoretic upper and lower bounds of a UDQGRP score are WRS and ERWWRP scores respectively, when we evaluate knowledge graph completion for real-world large-scale knowledge graphs, the randomness doesn't affect the evaluation results much.",
"Ln this paper, we performed an extensive reexamination study of recent neural network based NJF techniques.",
"Ze find that many such models have issues with their score functions.",
"Fombined with inappropriate evaluation protocol, such methods reported inflated performance.",
"Eased on our observations, we propose UDQGRP evaluation protocol that can clearly distinguish between these affected methods from others.",
"Ze also strongly encourage the research community to follow the UDQGRP evaluation protocol for all NJF evaluation purposes.",
"Ze thank the reviewers for their helpful comments.",
"Whis work is supported in part by the Qational Vcience Ioundation (QVI) under grant LLV-1546329 and Joogle ShG Iellowship.",
"Lvana Ealaevic, Farl Dllen, and Kospedales.",
"2019.",
"Wucker: Wensor factorization for knowledge graph completion.",
"Ln Hmpirical Pethods in Qatural Oanguage Srocessing .",
"Vystems 26 , pages 27872795.",
"Furran Dssociates, Lnc.",
"Fhandrahas, Dditya Vharma, and Sartha Walukdar.",
"2018.",
"Wowards understanding the geometry of knowledge graph embeddings.",
"Ln Sroceedings of the 56th Dnnual Peeting of the Dssociation for Fomputational Oinguistics (Yolume 1: Oong Sapers) , pages 122131, Pelbourne, Dustralia.",
"Dssociation for Fomputational Oinguistics.",
"Wim Gettmers, Pinervini Sasquale, Vtenetorp Son-tus, and Vebastian Uiedel.",
"2018.",
"Fonvolutional 2d knowledge graph embeddings.",
"Ln Sroceedings of the 32th DDDL Fonference on Drtificial Lntelligence , pages 18111818.",
"Uudolf Nadlec, Rndrej Eajgar, and Man Nleindienst.",
"2017.",
"Nnowledge base completion: Easelines strike back.",
"Ln Sroceedings of the 2nd Zorkshop on Uep-resentation Oearning for QOS , pages 6974, Yancou-ver, Fanada.",
"Dssociation for Fomputational Oinguistics.",
"Gai Tuoc Qguyen, Wu Ginh Qguyen, Gat Tuoc Qguyen, and Ginh Shung.",
"Dntoine Eordes, Qicolas Xsunier, Dlberto Jarcia-Guran, Mason Zeston, and Rksana Yakhnenko.",
"2013.",
"Wranslating embeddings for modeling multi-relational data.",
"Ln F. M. F. Eurges, O. Eottou, P. Zelling, Z. Jhahramani, and N. T. Zeinberger, editors, Ddvances in Qeural Lnformation Srocessing 3 NEDW incorporates FonvNE in the last layer of its model architecture, which should be affected by different evaluation protocols.",
"Eut we find another bug on the leakage of test triples during negative sampling in the reported model, which results in more significant performance degradation.",
"Xin Gong, Hvgeniy Jabrilovich, Jeremy Keitz, Zilko Korn, Qi Oao, Nevin Purphy, Whomas Vtrohmann, Vhaohua Vun, and Zei Zhang.",
"2014.",
"Nnowledge vault: D web-scale approach to probabilistic knowledge fusion.",
"Ln Sroceedings of the 20th DFP VLJNGG Lnternational Fonference on Nnowledge Giscovery and Gata Pining , NGG '14, pages 601 610, Qew York, QY, XVD.",
"DFP.",
"Yankai Oin, Zhiyuan Oiu, Kuanbo Ouan, Paosong Vun, Viwei Uao, and Vong Oiu.",
"2015.",
"Podeling relation paths for representation learning of knowledge bases.",
"Ln Sroceedings of the 2015 Fonference on Hmpirical Pethods in Qatural Oanguage Srocessing , pages 705714, Oisbon, Sortugal.",
"Dssociation for Fomputational Oinguistics.",
"Kanxiao Oiu, Yuexin Zu, and Yiming Yang.",
"2017.",
"Dnalogical inference for multi-relational embeddings.",
"Ln Sroceedings of the 34th Lnternational Fonference on Pachine Oearning , volume 70 of Sroceedings of Pachine Oearning Uesearch , pages 21682178, Lnternational Fonvention Fentre, Vyd-ney, Dustralia.",
"SPOU.",
"Geepak Qathani, Matin Fhauhan, Fharu Vharma, and Panohar Naul.",
"2019.",
"Oearning attention-based embeddings for relation prediction in knowledge graphs.",
"Ln Sroceedings of the 57th Dnnual Peeting of the Dssociation for Fomputational Oinguistics .",
"Dssociation for Fomputational Oinguistics.",
"Pojtaba Qayyeri, Fhengjin Xu, Yadollah Yaghoobzadeh, Kamed Vhariat Yazdi, and Mens Oehmann.",
"2019.",
"Woward Xnderstanding Whe Hffect Rf Ooss function Rn When Serformance Rf Nnowledge Jraph Hmbedding.",
"arXiv e-prints , page arXiv:1909.00519.",
"2018.",
"D novel embedding model for knowledge base completion based on convolutional neural network.",
"Ln Sroceedings of the 2018 Fonference of the Qorth Dmerican Fhapter of the Dssociation for Fomputational Oinguistics: Kuman Oanguage Wechnologies, Yolume 2 (Vhort Sapers) , pages 327333.",
"Dssociation for Fomputational Oinguistics.",
"Gai Tuoc Qguyen, Whanh Yu, Wu Ginh Qguyen, Gat Tuoc Qguyen, and Ginh Shung.",
"2019.",
"D Fapsule Qetwork-based Hmbedding Podel for Nnowledge Jraph Fompletion and Vearch Sersonalization.",
"Ln Sroceedings of the 2019 Dnnual Fonference of the Qorth Dmerican Fhapter of the Dssociation for Fomputational Oinguistics: Kuman Oanguage Wech-nologies (QDDFO-KOW) , pages 21802189.",
"Paximilian Qickel, Oorenzo Uosasco, and Womaso Soggio.",
"2016.",
"Kolographic embeddings of knowledge graphs.",
"Ln Sroceedings of the Whirtieth DDDL Fonference on Drtificial Lntelligence , DDDL'16, pages 19551961.",
"DDDL Sress.",
"Paximilian Qickel and Yolker Wresp.",
"2013.",
"Wensor factorization for multi-relational learning.",
"Ln Pachine Oearning and Nnowledge Giscovery in Gatabases , pages 617621, Eerlin, Keidelberg.",
"Vpringer Eerlin Keidelberg.",
"Meffrey Sennington, Uichard Vocher, and Fhristopher Panning.",
"2014.",
"Jlove: Jlobal vectors for word representation.",
"Ln Sroceedings of the 2014 conference on empirical methods in natural language processing (HPQOS) , pages 15321543.",
"Pichael Vchlichtkrull, Whomas Q Nipf, Seter Eloem, Uianne van den Eerg, Lvan Witov, and Pax Zelling.",
"2017.",
"Podeling relational data with graph convolutional networks.",
"arXiv preprint arXiv:1703.06103 .",
"Fhao Vhang, Yun Wang, Ming Kuang, Minbo Ei, Xiaodong Ke, and Eowen Zhou.",
"2019.",
"Hnd-to-end structure-aware convolutional networks for knowledge base completion.",
"Zhiqing Vun, Zhi-Kong Geng, Mian-Yun Qie, and Mian Wang.",
"2019.",
"Uotate: Nnowledge graph embedding by relational rotation in complex space.",
"Ln Lnternational Fonference on Oearning Uepresentations .",
"Nristina Woutanova and Ganqi Fhen.",
"2015.",
"Rbserved versus latent features for knowledge base and text inference.",
"Ln Sroceedings of the 3rd Zorkshop on Fontinuous Yector Vpace Podels and their Fompo-sitionality , pages 5766.",
"Who Wrouillon, Mohannes Zelbl, Vebastian Uiedel, ric Jaussier, and Juillaume Eouchard.",
"2016.",
"Fomplex embeddings for simple link prediction.",
"Ln Sroceedings of the 33rd Lnternational Fonference on Lnternational Fonference on Pachine Oearning Yolume 48 , LFPO'16, pages 20712080.",
"MPOU.org.",
"Kaoyu Zang, Yivek Nulkarni, and Zilliam Yang Zang.",
"2018.",
"GRORUHV: deep contextualized knowledge graph embeddings.",
"FoUU , abs/1811.00147.",
"Zhen Zang, Mianwen Zhang, Mianlin Ieng, and Zheng Fhen.",
"2014.",
"Nnowledge graph embedding by translating on hyperplanes.",
"Ln Sroceedings of the Wwenty-Highth DDDL Fonference on Drtificial Lntelligence , DDDL'14, pages 11121119.",
"DDDL Sress.",
"Kan Xiao, Pinlie Kuang, and Xiaoyan Zhu.",
"2016.",
"Wransg : D generative model for knowledge graph embedding.",
"Ln Sroceedings of the 54th Dnnual Peeting of the Dssociation for Fomputational Oinguistics (Yolume 1: Oong Sapers) , pages 23162325.",
"Dssociation for Fomputational Oinguistics.",
"Eishan Yang, Zen-tau Yih, Xiaodong Ke, Mianfeng Jao, and Oi Geng.",
"2014.",
"Hmbedding entities and relations for learning and inference in knowledge bases.",
"FoUU , abs/1412.6575.",
"Eesides IE15k-237, we also evaluate the proposed protocols on ZQ18UU (Gettmers et al., 2018) dataset, which is a subset of ZQ18 (Eordes et al., 2013) containing lexical relations between words.",
"Vimilar to IE15k-237, inverse relations are removed in ZQ18UU.",
"Whe results on ZQ18UU are shown in Wable",
"3. Irom these results, we can draw similar conclusions as in Vection 5.",
"Ze also show the total number of triplets with the exact same score over the entire ZQ18UU dataset for FonvNE, FapsH and FonvH in Iigure 4.",
"Wable 3: Serformance comparison under different evaluation protocols on ZQ18UU dataset.",
"Ior WRS and ERWWRP , we report changes in performance with respect to UDQGRP protocol.",
": FapsH uses the pre-trained 100-dimensional Jlove (Sennington et al., 2014) word embeddings for initialization on ZQ18UU dataset, which makes the comparison on ZQ18UU still unfair.",
": NEDW has test data leakage in their original implementation, which is fixed in our experiments.",
"Iigure 4: Slot shows the frequency of the number of negative triplets with the same assigned score as the valid triplet during evaluation on ZQ18UU dataset.",
"Whe results show that Xnlike IE15k-237, in this dataset, only FonvNE has a large number of negative triplets get the same score as the valid triplets."
]
| [
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
]
|
[
"We formulate the novel task of automatically updating an existing natural language comment based on changes in the body of code it accompanies.",
"We propose an approach that learns to correlate changes across two distinct language representations, to generate a sequence of edits that are applied to the existing comment to reflect the source code modifica-tions.",
"We train and evaluate our model using a dataset that we collected from commit histories of open-source software projects, with each example consisting of a concurrent update to a method and its corresponding comment.",
"We compare our approach against multiple baselines using both automatic metrics and human evaluation.",
"Results reflect the challenge of this task and that our model outperforms baselines with respect to making edits.",
"Software developers include natural language comments alongside source code as a way to document various aspects of the code such as functionality, use cases, pre-conditions, and post-conditions.",
"With the growing popularity of open-source software that is widely used and jointly developed, the need for efficient communication among developers about code details has increased.",
"Consequently, comments have assumed a vital role in the development cycle.",
"With developers regularly refactor-ing and iteratively incorporating new functionality, source code is constantly evolving; however, the accompanying comments are not always updated to reflect the code changes (Tan et al., 2007; Ratol and Robillard, 2017).",
"Inconsistency between code and comments can not only lead time-wasting confusion in tight project schedules (Hu et al., 2018) but can also result in bugs (Tan et al., 2007).",
"To address this problem, we propose an approach that can automatically suggest comment updates when the associated methods are changed.",
"Prior work explored rule-based approaches for detecting inconsistencies for a limited set of cases; however, they do not present ways to automatically fix these inconsistencies (Tan et al., 2007; Ratol and Robillard, 2017).",
"Recent work in automatic comment generation aims to generate a comment given a code representation (Liang and Zhu, 2018; Hu et al., 2018; Fernandes et al., 2019); although these techniques could be used to produce a completely new comment that corresponds to the most recent version of the code, this could potentially discard salient content from the existing comment that should be retained.",
"To the best of our knowledge, we are the first to formulate the task of automatically updating an existing comment when the corresponding body of code is modified .",
"This task is intended to align with how developers edit a comment when they introduce changes in the corresponding method.",
"Rather than deleting it and starting from scratch, they would likely only modify the specific parts relevant to the code updates.",
"For example, Figure 1 shows the getRotX method being modified to have the return value parsed into degrees.",
"Within the same commit, the corresponding comment is revised to indicate this, without imposing changes on parts of the comment that pertain to other aspects of the return value.",
"We replicate this process through a novel approach which is designed to correlate edits across two distinct language representations: source code and natural language comments.",
"Namely, our model is trained to generate a sequence of edit actions , which are to be applied to the existing comment, by conditioning on learned representations of the code edits and existing comment.",
"We additionally incorporate linguistic and lexical features to guide the model in determining where edits should be made in the existing comment.",
"Furthermore, we develop an output reranking scheme that aims to produce edited comments that are fluent, preserve content that should not be changed, and maintain stylistic properties of the existing comment.",
"We train and evaluate our system on a corpus constructed from open-source Java projects on GitHub, by mining their commit histories and extracting examples from consecutive commits in which there was a change to both the code within a method as well as the corresponding Javadoc comment, specifically, the @return Javadoc tag.",
"These comments, which have been previously studied for learning associations between comment and code entities (Panthaplackel et al., 2020), follow a well-defined structure and describe characteristics of the output of a method.",
"For this reason, as an initial step, we focus on @return comments in this work.",
"Our evaluation consists of several automatic metrics that are used to evaluate language generation tasks as well as tasks that relate to editing natural language text.",
"We also conduct human evaluation, and assess whether human judgments correlate with the automatic metrics.",
"The main contributions of this work include (1) the task of automatically updating an existing comment based on source code changes and (2) a novel approach for learning to relate edits between source code and natural language that outperforms multiple baselines on several automatic metrics and human evaluation.",
"Our implementation and data are publicly available.",
"1 2 Task Given a method, its corresponding comment, and an updated version of the method, the task is to update the comment so that it is consistent with the code in the new method.",
"For the example in Figure 1, we want to generate @return double the roll euler angle in degrees. based on the changes between the two versions of the method and the existing comment @return double the roll euler angle.",
"Concretely, given (M old , C old ) and M new , 1 https://github.com/panthap2/ LearningToUpdateNLComments Figure 2: High-level overview of our system.",
"where M old and M new denote the old and new versions of the method, and C old signifies the previous version of the comment, the task is to produce C new , the updated version of the comment.",
"We design a system that examines source code changes and how they relate to the existing comment in order to produce an updated comment that reflects the code modifications.",
"Since C old and C new are closely related, training a model to directly generate C new risks having it learn to just copy C old .",
"To explicitly inform the model of edits, we define the target output as a sequence of edit actions , C edit , to indicate how the existing comment should be revised (e.g., for C old = ABC , C edit = <Delete>A<DeleteEnd> implies that A should be deleted to produce C new = BC ).",
"Furthermore, in order to better correlate these edits with changes in the code, we unify M old and M new into a single diff sequence that explicitly identifies code edits, M edit .",
"We discuss in more detail how M edit and the training C edit are constructed in 4.",
"Figure 2 shows a high-level overview of our system.",
"We design an encoder-decoder architecture consisting of three components: a two-layer, bidirectional GRU (Cho et al., 2014) that encodes the code changes (M edit ), another two-layer, bidirectional GRU that encodes the existing comment (C old ), and a GRU that is trained to decode a sequence of edit actions (C edit ).",
"2 We concatenate the 2 We refrain from using the self-attention model (Vaswani et al., 2017) because prior work (Fernandes et al., 2019) suggests that it yields lower performance for comment generation.",
"final states of the two encoders to form a vector that summarizes the content in M edit and C old , and use this vector as the initial state of the decoder.",
"The decoder essentially has three subtasks: (1) identify edit locations in C old ; (2) determine parts of M edit that pertain to making these edits; and (3) apply updates in the given locations based on the relevant code changes.",
"We rely on an attention mechanism (Luong et al., 2015) over the hidden states of the two encoders to accomplish the first two goals.",
"At every decoding step, rather than aligning the current decoder state with all the encoder hidden states jointly, we align it with the hidden states of the two encoders separately.",
"We concatenate the two resulting context vectors to form a unified context vector that is used in the final step of computing attention, ensuring that we incorporate pertinent content from both input sequences.",
"Consequently, the resulting attention vector carries information relating to the current decoder state as well as knowledge aggregated from relevant portions of C old and M edit .",
"Using this information, the decoder performs the third subtask, which requires reasoning across language representations.",
"Specifically, it must determine how the source code changes that are relevant to the current decoding step should manifest as natural language updates to the relevant portions of C old .",
"At each step, it decides whether it should begin a new edit action by generating an edit start keyword, continue the present action by generating a comment token, or terminate the present action by generating an end-edit keyword.",
"Because actions relating to deletions will include tokens in C old , and actions relating to insertions are likely to include tokens in M edit , we equip the decoder with a pointer network (Vinyals et al., 2015) to accommodate copying tokens from C old and M edit .",
"The decoder generates a sequence of edit actions, which will have to be parsed into a comment (4.4).",
"Here we define the edit lexicon that is used to construct the input code edit sequence, M edit , and the target comment edit sequence, C edit .",
"ries of edit actions; each edit action is structured as",
"4 We define four types of edit actions: Insert , Delete , Replace , and Keep .",
"<Action> [span of tokens] <ActionEnd> .",
"Because the Replace action must simultaneously incorporate distinct content from two versions (i.e., tokens in the old version that will be replaced, and tokens in the new version that will take their place), it follows a slightly different structure: <ReplaceOld> [span of old tokens] <ReplaceNew> [span of new tokens] <ReplaceEnd> 4.2 Code Edits We extract the edits between M old and M new using the edit lexicon to construct M edit , the code edit sequence used as input in one of the encoders.",
"Figure 2 (top right) shows the M edit corresponding to code changes in Figure",
"1. In contrast to line-level code diffs that are commonly used for commit message generation (Loy-ola et al., 2017; Jiang et al., 2017; Xu et al., 2019), this representation allows us to explicitly capture more fine-grained edits.",
"While we could exploit the abstract syntax tree (AST) structure of source code and represent the changes between the ASTs corresponding to the two versions of code, prior work suggests that such techniques do not always lead to improved performance (Yin et al., 2019).",
"We leave it to future work to investigate how the AST structure can be leveraged for this task.",
"We identify the changes between C old and C new to construct C edit , the target comment edit sequence.",
"During inference, the output comment is produced by parsing the predicted edit sequence (4.4).",
"We introduce a slightly modified set of specifications that disregards the Keep type when constructing the sequence of edit actions, referred to as the condensed edit sequence .",
"The intuition for disregarding Keep and the span of tokens to which it applies is that we can simply copy the content that is retained between C old and C new , instead of generating it anew.",
"By doing posthoc copying, we simplify learning for the model since it has to only learn what to change rather than also having to learn what to keep .",
"We design a method to deterministically place edits in their correct positions in the absence of 4 Preliminary experiments showed that this performed better than structuring edits at the token-level as in other tasks (Shin et al., 2018; Li et al., 2018; Dong et al., 2019; Awasthi et al., 2019).",
"Keep spans.",
"For the example in Figure 1, the raw sequence <Insert>in degrees<InsertEnd> does not encode information as to where in degrees should be inserted.",
"To address this, we bind an insert sequence with the minimum number of words (aka anchors) such that the place of insertion can be uniquely identified.",
"This results in the structure that is shown for C edit in Figure",
"2. Here angle serves as the anchor point, identifying the insert location.",
"Following the structure of Replace , this sequence indicates that angle should be replaced with angle in degrees, effectively inserting in degrees and keeping angle from C old , which appears immediately before the insert location.",
"See Appendix A for details on this procedure.",
"Since the decoder is trained to predict a sequence of edit actions, we must align it with C old and copy unchanged tokens in order to produce the edited comment.",
"We denote the predicted edit sequence as C' edit and the corresponding parsed output as C' new .",
"This procedure entails simultaneously following pointers, left-to-right, on C old and C' edit , which we refer to as P old and P edit respectively.",
"P old is advanced, copying the current token into C' new at each point, until an edit location is reached.",
"The edit action corresponding to the current position of P edit is then applied, and the tokens from its relevant span are copied into C' new if applicable.",
"Finally, P edit is advanced to the next action, and P old is also advanced to the appropriate position in cases involving deletions and replacements.",
"This process repeats until both pointers reach the end of their respective sequences.",
"We extract linguistic and lexical features for tokens in M edit and C edit , many of which were shown to improve learning associations between @return comment and source code entities in our prior work (Panthaplackel et al., 2020).",
"We incorporate these features into the network as one-hot vectors that are concatenated to M edit and C edit embeddings and then passed through a linear layer.",
"These vectors are provided as inputs to the two encoders.",
"All sequences are subtokenized, e.g., camelCase camel , case .",
"Features specific to M edit : We aim to take advantage of common patterns among different types of code tokens by incorporating features that identify certain categories: edit keywords, Java keywords, and operators.",
"If a token is not an edit keyword, we have indicator features for whether it is part of a Insert , Delete , ReplaceNew , ReplaceOld , or Keep span.",
"We believe this will be particularly helpful for longer spans since edit keywords only appear at either the beginning or end of a span.",
"Finally, we include a feature to indicate whether the token matches a token in C old .",
"This is intended to help the model identify locations in M edit that may be relevant to editing C old .",
"Features specific to C old : We include whether a token matches a code token that is inserted, deleted, or replaced in M edit .",
"These help align parts of C old with code edits, assisting the model in determining where edits should be made.",
"In order to exploit common patterns for different types of tokens, we incorporate features that identify whether the token appears more than once in C old or is a stop word, and its part-of-speech.",
"Shared features: We include whether the token is a subtoken that was originally part of a larger token and its index if so (e.g., split from camelCase , camel and case are subtokens with indices 0 and 1 respectively).",
"These features aim to encode important relationships between adjacent tokens that are lost once the body of code and comment are transformed into a single, subtokenized sequences.",
"Additionally, because we focus on @return comments, we introduce features intended to guide the model in identifying relevant tokens in M edit and C old .",
"Namely, we include whether a given token matches a token in a return statement that is unique to M old , unique to M new , or present in both.",
"Similarly, we indicate whether the token matches a token in the subtokenized return type that is unique to M old , unique to M new , or present in both.",
"Reranking allows the incorporation of additional priors that are difficult to back-propagate, by re-scoring candidate sequences during beam search (Neubig et al., 2015; Ko et al., 2019; Kriz et al., 2019).",
"We incorporate two heuristics to rescore the candidates: 1) generation likelihood and 2) similarity to C old .",
"These heuristics are computed after parsing the candidate edit sequences (4.4).",
"Generation likelihood.",
"Since the edit model is trained on edit actions only, it does not globally score the resulting comment in terms of aspects such as fluency and overall suitability for the updated method.",
"To this end, we make use of a pre-trained comment generation model (8.2) that is Train Valid Test Examples 5,791 712 736 Projects 526 274 281 Edit Actions 8,350 1,038 1,046 Sim (M old , M new ) 0.773 0.778 0.759 Sim (C old , C new ) 0.623 0.645 0.635 Code Unique 7,271 2,473 2,690 Mean 86.4 87.4 97.4 Median 46 49 50 Comm.",
"trained on a substantial amount of data for generating C new given only M new .",
"We compute the length-normalized probability of this model generating the parsed candidate comment, C' new , (i.e., P ( C (cid:48) new | M new ) 1 /N where N is the number of tokens in C' new ).",
"This model gives preference to comments that are more likely for M new and are more consistent with the general style of comments.",
"5 Similarity to C old .",
"So far, our model is mainly trained to produce accurate edits; however, we also follow intuitions that edits should be minimal (as an analogy, the use of Levenshtein distance in spelling correction).",
"To give preference to predictions that accurately update the comment with minimal mod-ifications, we use similarity to C old as a heuristic for reranking.",
"We measure similarity between the parsed candidate prediction and C old using METEOR (Banerjee and Lavie, 2005).",
"Reranking score.",
"The reranking score for each candidate is a linear combination of the original beam score, the generation likelihood, and the similarity to C old with coefficients 0.5, 0.3, and 0.2 respectively (tuned on validation data).",
"We extracted examples from popular, open-source Java projects using GitHub's commit history.",
"We extract pairs of the form (method, comment) for the same method across two consecutive commits where there is a simultaneous change to both the code and comment.",
"This creates somewhat noisy data for the task of comment update; Appendix B describes filtering techniques to reduce this noise.",
"5 We attempted to integrate this model into the training procedure of the edit model through joint training; however, this deteriorated performance.",
"We first tokenize M old and M new using the javalang 6 library.",
"We subtokenize based on camelCase and snake_case, as in previous work (Allamanis et al., 2016; Alon et al., 2019; Fernandes et al., 2019).",
"We then form M edit from the subtokenized forms of M old and M new .",
"We tokenize C old and C new by splitting by space and punctuation.",
"We remove HTML tags and the @return that precedes all comments, and also subtokenize tokens since code tokens may appear in comments as well.",
"The gold edit action sequence, C edit , is computed from these processed forms of C old and C new .",
"To avoid having examples that closely resemble one another in training and test, the projects in the training, test, and validation sets are disjoint, similar to Movshovitz-Attias and Cohen (2013).",
"Table 1 gives dataset statistics.",
"Of the 7,239 examples in our final dataset, 833 of them were extracted from the diffs used in Panthaplackel et al. (2020).",
"Including code and comment tokens that appear at least twice in the training data as well as the predefined edit keywords, the code and comment vocabulary sizes are 5,945 and 3,642 respectively.",
"We evaluate our approach against multiple rule-based baselines and comment generation models.",
"Copy: Since much of the content of C old is typically retained in the update, we include a baseline that merely copies C old as the prediction for C new .",
"Return type substitution: The return type of a method often appears in its @return comment.",
"If the return type of M old appears in C old and the return type is updated in the code, we substitute the new return type while copying all other parts of C old .",
"Otherwise, C old is copied as the prediction.",
"Return type substitution w/ null handling: As an addition to the previous method, we also check whether the token null is added to either a return statement or if statement in the code.",
"If so, we copy C old and append the string or null if null , otherwise, we simply copy C old .",
"This baseline addresses a pattern we observed in the data in which ways to handle null input or cases that could result in null output were added.",
"One of our main hypotheses is that modeling edit sequences is better suited for this task than generating comments from scratch.",
"However, a counter argument could be that a comment generation model could be trained from substantially more data, since it is much easier to obtain parallel data in the form (method, comment), without the constraints of simultaneous code/comment edits.",
"Hence the power of large-scale training could out-weigh edit modeling.",
"To this end, we compare with a generation model trained on 103,473 method/ @return comment pairs collected from GitHub.",
"We use the same underlying neural architecture as our edit model to make sure that the difference in results comes from the amount of training data and from using edit of representations only: a two-layer, bi-directional GRU that encodes the sequence of tokens in the method, and an attention-based GRU decoder with a copy mechanism that decodes a sequence of comment tokens.",
"We expect the incorporation of more complicated architectures, e.g., tree-based (Alon et al., 2019) and graph-based (Fer-nandes et al., 2019) encoders which exploit AST structure, can be applied to both an edit model and a generation model, which we leave for future work.",
"Evaluation is based on the 736 (M new , C new ) pairs in the test set described in 7.",
"We ensure that the projects from which training examples are extracted are disjoint from those in the test set.",
"In order to allow the generation model to exploit the old comment, this system uses similarity to C old (cf. 6) as a heuristic for reranking the top candidates from the previous model.",
"The reranking score is a linear combination of the original beam score and the METEOR score between the candidate prediction and C old , both with coefficient 0.5 (tuned on validation data).",
"Model parameters are identical across the edit model and generation model, tuned on validation data.",
"Encoders have hidden dimension 64, the decoder has hidden dimension 128, and the dimension for code and comment embeddings is 64.",
"The embeddings used in the edit model are initialized using the pre-trained embedding vectors from the generation model.",
"We use a dropout rate of 0.6, a batch size of 100, an initial learning rate of 0.001, and Adam optimizer.",
"Models are trained to minimize negative log likelihood, and we terminate training if the validation loss does not decrease for ten consecutive epochs.",
"During inference, we use beam search with beam width=20.",
"Metrics: We compute exact match, i.e., the percentage of examples for which the model prediction is identical to the reference comment C new .",
"This is often used to evaluate tasks involving source code edits (Shin et al., 2018; Yin et al., 2019).",
"We also report two prevailing language generation metrics: METEOR (Banerjee and Lavie, 2005), and average sentence-level BLEU-4 (Papineni et al., 2002) that is previously used in code-language tasks (Iyer et al., 2016; Loyola et al., 2017).",
"Previous work suggests that BLEU-4 fails to accurately capture performance for tasks related to edits, such as text simplification (Xu et al., 2016), grammatical error correction (Napoles et al., 2015), and style transfer (Sudhakar et al., 2019), since a system that merely copies the input text often achieves a high score.",
"Therefore, we also include two text-editing metrics to measure how well our system learns to edit : SARI (Xu et al., 2016), originally proposed to evaluate text simplification, is essentially the average of N-gram F1 scores corresponding to add, delete, and keep edit operations; 7 GLEU (Napoles et al., 2015), used in grammatical error correction and style transfer, takes into account the source sentence and deviates from BLEU by giving more importance to n-grams that have been correctly changed.",
"Results: We report automatic metrics averaged across three random initializations for all learned models, and use bootstrap tests (Berg-Kirkpatrick et al., 2012) for statistical significance.",
"Table 2 presents the results.",
"While reranking using C old appears to help the generation model, it still substantially underperforms all other models, across all metrics.",
"Although this model is trained on considerably more data, it does not have access to C old during training and uses fewer inputs and consequently has less context than the edit model.",
"Reranking slightly deteriorates the edit model's 7 Although the original formulation only used precision for the delete operation, more recent work computes F1 for this as well (Dong et al., 2019; Alva-Manchego et al., 2019).",
"performance with respect to SARI; however, it provides statistically significant improvements on most other metrics.",
"Although two of the baselines achieve slightly higher BLEU-4 scores than our best model, these differences are not statistically significant, and our model is better at editing comments, as shown by the results on exact match, SARI, and GLEU.",
"In particular, our edit models beat all other models with wide, statistically significant, margins on SARI, which explicitly measures performance on edit operations.",
"Furthermore, merely copying C old , yields a relatively high BLEU-4 score of 46.218.",
"The return type substitution and return type substitution w/ null handling baselines produce predictions that are identical to C old for 74.73% and 65.76% of the test examples, respectively, while it is only 9.33% for the reranked edit model.",
"In other words, the baselines attain high scores on automatic metrics and even beat our model on BLEU-4, without actually performing edits on the majority of examples.",
"This further underlines the shortcomings of some of these metrics and the importance of conducting human evaluation for this task.",
"Automatic metrics often fail to incorporate semantic meaning and sentence structure in evaluation as well as accurately capture performance when there is only one gold-standard reference; indeed, these metrics do not align with human judgment in other generation tasks like grammatical error correction (Napoles et al., 2015) and dialogue generation (Liu et al., 2016).",
"Since automatic metrics have not yet been explored in the context of the new task we are proposing, we find it necessary to conduct human evaluation and study whether these metrics are consistent with human judgment.",
"User study design: Our study aims to reflect how a comment update system would be used in practice, such as in an Integrated Development En-Baseline Generation Edit None 18.4% 12.4% 30.2% 55.0% Table 3: Percentage of annotations for which users selected comment suggestions produced by each model.",
"vironment (IDE).",
"When developers change code, they would be shown suggestions for updating the existing comment.",
"If they think the comment needs to be updated to reflect the code changes, they could select the one that is most suitable for the new version of the code or edit the existing comment themselves if none of the options are appropriate.",
"We simulated this setting by asking a user to select the most appropriate updated comment from a list of suggestions, given C old as well as the diff between M old and M new displayed using GitHub's diff interface.",
"The user can select multiple options if they are equally good or a separate None option if no update is needed or all suggestions are poor.",
"The list of suggestions consists of up to three comments, predicted by the strongest benchmarks and our model : (1) return type substitution w/ null handling, (2) reranked generation model, and (3) reranked edit model, arranged in randomized order.",
"We collapse identical predictions into a single suggestion and reward all associated models if the user selects that comment.",
"Additionally, we remove any prediction that is identical to C old to avoid confusion as the user should never select such a suggestion.",
"We excluded 6 examples from the test set for which all three models predicted C old for the updated comment.",
"Nine students (8 graduate/1 undergraduate) and one full-time developer at a large software company, all with 2+ years of Java experience, participated in our study.",
"To measure inter-annotator agreement, we ensured that every example was evaluated by two users.",
"We conducted a total of 500 evaluations, across 250 distinct test examples.",
"Results: Table 3 presents the percentage of annotations (out of 500) for which users selected /** @return item in given position*/ public Complex getComplex( final int i) { return get(i); } Previous Version /** @return item in first position*/ public Complex getComplex() { return get(); } Updated Version Figure 3: Changes in the getComplex method and its corresponding @return comment between two subsequent commits of the eclipse-january project, available on GitHub.",
"comment suggestions that were produced by each model.",
"Using Krippendorff's (Krippendorff, 2011) with MASI distance (Passonneau, 2006) (which accommodates our multi-label setting), inter-annotator agreement is 0.64, indicating satisfactory agreement.",
"The reranked edit model beats the strongest baseline and reranked generation by wide statistically-significant margins.",
"From rationales provided by two annotators, we observe that some options were not selected because they removed relevant information from the existing comment, and not surprisingly, these options often corresponded to the comment generation model.",
"Users selected none of the suggested comments 55% of the time, indicating there are many cases for which either the existing comment did not need updating, or comments produced by all models were poor.",
"Based on our inspection of a sample these, we observe that in a large portion of these cases, the comment did not warrant an update.",
"This is consistent with prior work in sentence simplification which shows that, very often, there are sentences that do not need to be simplified (Li and Nenkova, 2015).",
"Despite our efforts to minimize such cases in our dataset through rule-based filtering techniques, we found that many remain.",
"This suggests that it would be beneficial to train a classi-fier that first determines whether a comment needs to be updated before proposing a revision.",
"Furthermore, the cases for which the existing comment does need to be updated but none of the models produce reasonable predictions illustrate the scope for improvement for our proposed task.",
"We find that our model performs poorly in cases requiring external knowledge and more context than that provided by the given method.",
"For instance, correctly updating the comment shown in Figure 3 requires knowing that get returns the item in the first position if no argument is provided.",
"Our model does not have access to this information, and it fails to generate a reasonable update: @return complex in given position.\"",
"On the other hand, the reranked generation model produces @return the complex value\" which is arguably reasonable for the given context. This suggests that incorporating more code context could be beneficial for both models. Furthermore, we find that our model tends to make more mistakes when it must reason about a large amount of code change between M old and M new , and we found that in many such cases, the output of the reranked generation model was better. This suggests that when there are substantial code changes, M new effectively becomes a new method, and generating a comment from scratch may be more appropriate. Ensembling generation with our system through a regression model that predicts the extent of editing that is needed may lead to a more generalizable approach that can accommodate such cases. Sample outputs are given in Appendix C. 11 Ablations We empirically study the effect of training the network to encode explicit code edits and decode explicit comment edits. As discussed in Section 3, the edit model consists of two encoders, one that encodes C old and another that encodes the code representation, M edit . We conduct experiments in which the code representation instead consists of either (1) M new or (2) both M old and M new (encoded separately and hidden states concatenated). Additionally, rather than having the decoder generate comment edits in the form C edit , we introduce experiments in which it directly generates C new , with no intermediate edit sequence. For this, we use only the underlying architecture of the edit model (without features or reranking). The performance for various combinations of input code and target comment representations are shown in Table 4. By comparing performance across combinations consisting of the same input code representation and varying target comment representations, the importance of training the decoder to generate a sequence of edit actions rather than the full updated comment is very evident. Furthermore, comparing across varying code representations under the C edit target comment representation, it is clear that explicitly encoding the code changes, as M edit , leads to significant improvements across most metrics. We further ablate the features introduced in 5. As shown in Table 5, these features improve performance by wide margins, across all metrics. Inputs Output xM (%) METEOR BLEU-4 SARI GLEUC old , M new C new 5.707 29.259 33.534 28.024 30.000 C edit 4.755 33.796 43.315 35.516 37.970 (cid:107) C old , M old , M new C new 3.714 18.729 20.060 23.914 21.956 C edit 5.163 34.895 44.006 33.479 37.618 (cid:107) C old , M edit C new 6.114 29.968 34.164 28.980 30.491 C edit 8.922 36.229 44.283 40.538 39.879 Table 4: Exact match, METEOR, BLEU-4, SARI, and GLEU for various combinations of code input and target comment output configurations. Features and reranking are disabled for all models. Scores for which the difference in performance is not statistically significant (p < 0.05) are indicated with matching symbols. Model xM (%) METEOR BLEU-4 SARI GLEU Models Edit 17.663 42.222 48.217 46.376 45.060 feats. 8.922 36.229 44.283 40.538 39.879 Reranked models Edit 18.433 44.698 50.717 45.486 46.118 feats. 8.877 38.446 46.665 36.924 40.317 Table 5: Exact match, METEOR, BLEU-4, SARI, and GLEU scores of ablated models. Scores for which the difference in performance is not statistically significant (p < 0.05) are indicated with matching symbols. 12 Related Work Learning from source code changes: Lee et al. 
"Zhai et al. (2020) propose a technique for updating incomplete and buggy comments by propagating comments from different code elements (e.g., variables, methods, classes) based on program analysis and several heuristics.",
"Rather than simply copying a related comment, we aim to revise an outdated comment by reasoning about code changes.",
"Yin et al. (2019) present an approach for learning structural and semantic properties of source code edits so that they can be generalized to new code inputs.",
"Similar to their work, we learn vector representations from source code changes; however, unlike their setting, we apply these representations to natural language.",
"Prior work in automatic commit message generation aims to learn from code changes in order to generate a natural language summary of these changes (Loyola et al., 2017; Jiang et al., 2017; Xu et al., 2019).",
"Instead of generating natural language content from scratch as done in their work, we focus on applying edits to existing natural language text.",
"We also show that generating a comment from scratch does not perform as well as our proposed edit model for the comment update setting.",
"Editing natural language text: Approaches for editing natural language text have been studied extensively through tasks such as sentence simplification (Dong et al., 2019), style transfer (Li et al., 2018), grammatical error correction (Awasthi et al., 2019), and language modeling (Guu et al., 2018).",
"The focus of this prior work is to revise sentences to conform to stylistic and grammatical conventions, and it does not generally consider broader contextual constraints.",
"On the contrary, our goal is not to make cosmetic revisions to a given span of text, but rather to amend its semantic meaning to be in sync with the content of a separate body of information on which it is dependent.",
"More recently, Shah et al. (2020) proposed an approach for rewriting an outdated sentence based on a sentence stating a new factual claim, which is more closely aligned with our task.",
"However, in our case, the separate body of information is not natural language and is generally much longer than a single sentence.",
"13 Conclusion: We have addressed the novel task of automatically updating an existing programming comment based on changes to the related code.",
"We designed a new approach for this task which aims to correlate cross-modal edits in order to generate a sequence of edit actions specifying how the comment should be updated.",
"We find that our model outperforms multiple rule-based baselines and comment generation models, with respect to several automatic metrics and human evaluation.",
"Acknowledgements: We thank the reviewers for their feedback on this work and the participants of our user study for their time.",
"This work was partially supported by a Google Faculty Research Award and the US National Science Foundation under Grant Nos. CCF-1652517 and IIS-1850153.",
"References: Miltiadis Allamanis. 2019. The adverse effects of code duplication in machine learning models of code. In SPLASH, Onward!, pages 143–153.",
"Miltiadis Allamanis, Hao Peng, and Charles Sutton. 2016. A convolutional attention network for extreme summarization of source code. In International Conference on Machine Learning, pages 2091–2100.",
"Uri Alon, Shaked Brody, Omer Levy, and Eran Yahav. 2019. code2seq: Generating sequences from structured representations of code. In International Conference on Learning Representations.",
code2seq: Generating sequences from structured representations of code. In International Conference on Learning Representations.

Fernando Alva-Manchego, Louis Martin, Carolina Scarton, and Lucia Specia. 2019. EASSE: Easier automatic sentence simplification evaluation. In Conference on Empirical Methods in Natural Language Processing and International Joint Conference on Natural Language Processing: System Demonstrations, pages 49-54.

Abhijeet Awasthi, Sunita Sarawagi, Rasna Goyal, Sabyasachi Ghosh, and Vihari Piratla. 2019. Parallel iterative edit models for local sequence transduction. In Conference on Empirical Methods in Natural Language Processing and International Joint Conference on Natural Language Processing, pages 4251-4261.

Satanjeev Banerjee and Alon Lavie. 2005. Meteor: An automatic metric for MT evaluation with improved correlation with human judgments. In Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65-72.

Taylor Berg-Kirkpatrick, David Burkett, and Dan Klein. 2012. An empirical investigation of statistical significance in NLP. In Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 995-1005.

Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Conference on Empirical Methods in Natural Language Processing, pages 1724-1734.

Yue Dong, Zichao Li, Mehdi Rezagholizadeh, and Jackie Chi Kit Cheung. 2019. EditNTS: An neural programmer-interpreter model for sentence simplification through explicit editing. In Annual Meeting of the Association for Computational Linguistics, pages 3393-3402.

Patrick Fernandes, Miltiadis Allamanis, and Marc Brockschmidt. 2019. Structured neural summarization. In International Conference on Learning Representations.

Kelvin Guu, Tatsunori B. Hashimoto, Yonatan Oren, and Percy Liang. 2018. Generating sentences by editing prototypes. Transactions of the Association for Computational Linguistics, 6:437-450.

Xing Hu, Ge Li, Xin Xia, David Lo, and Zhi Jin. 2018. Deep code comment generation. In International Conference on Program Comprehension, pages 200-210.

Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, and Luke Zettlemoyer. 2016. Summarizing source code using a neural attention model. In Annual Meeting of the Association for Computational Linguistics, pages 2073-2083.

Siyuan Jiang, Ameer Armaly, and Collin McMillan. 2017. Automatically generating commit messages from diffs using neural machine translation. In International Conference on Automated Software Engineering, pages 135-146.

Wei-Jen Ko, Greg Durrett, and Junyi Jessy Li. 2019. Linguistically-informed specificity and semantic plausibility for dialogue generation. In North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3456-3466.

Klaus Krippendorff. 2011. Computing Krippendorff's alpha reliability. Technical report, University of Pennsylvania.

Reno Kriz, João Sedoc, Marianna Apidianaki, Carolina Zheng, Gaurav Kumar, Eleni Miltsakaki, and Chris Callison-Burch. 2019. Complexity-weighted loss and diverse reranking for sentence simplification. In North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3137-3147.

Seonah Lee, Rongxin Wu, S.C. Cheung, and Sungwon Kang. 2019.
Automatic detection and update suggestion for outdated API names in documentation. Transactions on Software Engineering.

Juncen Li, Robin Jia, He He, and Percy Liang. 2018. Delete, retrieve, generate: a simple approach to sentiment and style transfer. In North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1865-1874.

Junyi Jessy Li and Ani Nenkova. 2015. Fast and accurate prediction of sentence specificity. In AAAI Conference on Artificial Intelligence, pages 2281-2287.

Yuding Liang and Kenny Q. Zhu. 2018. Automatic generation of text descriptive comments for code blocks. In AAAI Conference on Artificial Intelligence, pages 5229-5236.

Chia-Wei Liu, Ryan Lowe, Iulian Serban, Mike Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How NOT to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In Conference on Empirical Methods in Natural Language Processing, pages 2122-2132.

Pablo Loyola, Edison Marrese-Taylor, and Yutaka Matsuo. 2017. A neural architecture for generating natural language descriptions from source code changes. In Annual Meeting of the Association for Computational Linguistics, pages 287-292.

Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Conference on Empirical Methods in Natural Language Processing, pages 1412-1421.

Dana Movshovitz-Attias and William W. Cohen. 2013. Natural language models for predicting programming comments. In Annual Meeting of the Association for Computational Linguistics, pages 35-40.

Courtney Napoles, Keisuke Sakaguchi, Matt Post, and Joel Tetreault. 2015. Ground truth for grammatical error correction metrics. In Annual Meeting of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing, pages 588-593.

Graham Neubig, Makoto Morishita, and Satoshi Nakamura. 2015. Neural reranking improves subjective quality of machine translation: NAIST at WAT2015. In Workshop on Asian Translation, pages 35-41.

Sheena Panthaplackel, Milos Gligoric, Raymond J. Mooney, and Junyi Jessy Li. 2020. Associating natural language comment and source code entities. In AAAI Conference on Artificial Intelligence.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Annual Meeting of the Association for Computational Linguistics, pages 311-318.

Rebecca Passonneau. 2006. Measuring agreement on set-valued items (MASI) for semantic and pragmatic annotation. In International Conference on Language Resources and Evaluation.

Inderjot Kaur Ratol and Martin P. Robillard. 2017. Detecting fragile comments. In International Conference on Automated Software Engineering, pages 112-122.

Darsh J. Shah, Tal Schuster, and Regina Barzilay. 2020. Automatic fact-guided sentence modification. In AAAI Conference on Artificial Intelligence.

Richard Shin, Illia Polosukhin, and Dawn Song. 2018. Towards specification-directed program repair. In International Conference on Learning Representations Workshop.

Akhilesh Sudhakar, Bhargav Upadhyay, and Arjun Maheswaran. 2019. Transforming delete, retrieve, generate approach for controlled text style transfer."
]
| [
"objective",
"objective",
"method",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"objective",
"method",
"abstain",
"method",
"method",
"method",
"objective",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"result",
"method",
"abstain",
"abstain",
"method",
"result",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other"
]
|
[
"Abstract",
"Text representation models are prone to exhibit a range of societal biases, reflecting the noncontrolled and biased nature of the underlying pretraining data, which consequently leads to severe ethical issues and even bias amplifica-tion.",
"Recent work has predominantly focused on measuring and mitigating bias in pretrained language models.",
"Surprisingly, the landscape of bias measurements and mitigation resources and methods for conversational language models is still very scarce: it is limited to only a few types of bias, artificially constructed resources, and completely ignores the impact that debiasing methods may have on the final performance in dialog tasks, e.g., conversational response generation.",
"In this work, we present REDDITBIAS , the first conversational data set grounded in the actual human conversations from Reddit, allowing for bias measurement and mitigation across four important bias dimensions: gender , race , religion , and queerness .",
"Further, we develop an evaluation framework which simultaneously 1) measures bias on the developed REDDITBIAS resource, and 2) evaluates model capability in dialog tasks after model debiasing.",
"We use the evaluation framework to benchmark the widely used conversational DialoGPT model along with the adaptations of four debiasing methods.",
"Our results indicate that DialoGPT is biased with respect to religious groups and that some debiasing techniques can remove this bias while preserving downstream task performance.",
"Pretrained language models and their corresponding contextualized representation spaces (Peters et al., 2018; Devlin et al., 2019) have recently been shown to encode and amplify a range of stereotypical human biases (e.g., gender or racial biases) (Zhao et al., 2019; Basta et al., 2019; Liang et al., 2020a,b), much like their static embedding predecessors",
"predecessors (Bolukbasi et al., 2016; Caliskan et al., 2017; Dev and Phillips, 2019; Gonen and Goldberg, 2019; Lauscher et al., 2020a, inter alia ).",
"Having models that capture or even amplify human biases brings about further ethical challenges to the society (Henderson et al., 2018), since stereotyping minoritized groups is a representational harm that perpetuates societal inequalities and unfairness (Blodgett et al., 2020).",
"Human biases are in all likelihood especially harmful if encoded in conversational AI systems, like the recent DialoGPT model (Zhang et al., 2020), which directly interact with humans, possibly even taking part in intimate and personal conversations (Utami et al., 2017).",
"Given the increasing presence of dialog systems and chatbots in everyday life, the body of work that focuses on detecting and mitigating biases in conversational systems is surprisingly limited (Lee et al., 2019; Liu et al., 2020a,b; Dinan et al., 2020a,b), albeit some more research has recently emerged in the wider context of biases in general-purpose language generation models (Qian et al., 2019; Sheng et al., 2019; Nadeem et al., 2020; Yeo and Chen, 2020).",
"Most of these efforts 1) focus on a single bias dimension (predominantly gender bias), 2) operate on artificial data (i.e., not real-world dialog interactions), and with the isolated exception of Liu et al. (2020b) 3) completely neglect to analyze the potential effects of debiasing on model performance in dialog (sub-)tasks (e.g., dialog state tracking).",
"In this work, we aim to close all these gaps by introducing REDDITBIAS , the first 'real-world' data set for measuring and mitigating biases in dialog models, together with an evaluation framework that couples bias measures with downstream evaluation on dialog tasks.",
"Contributions.",
"The contributions of this work are threefold: 1) we construct REDDITBIAS , a resource for multi-dimensional bias evaluation and mitigation dedicated to conversational AI.",
"Unlike other bias evaluation resources, REDDITBIAS is created from real-world conversations collected from the popular online discussion platform Reddit and manually annotated for multiple societal bias dimensions:",
"(i) religion , with two bias analysis subdimensions ( Jews , Christians ) and ( Muslims , Christians ),",
"(ii) race ( African , American ),",
"(iii) gender ( female , male ), and",
"(iv) queerness ( LGBTQ , straight ); 2) Along with the resource, we propose a dialog-oriented bias evaluation framework: it couples",
"(i) a perplexity-based bias measure meant to quantify the amount of bias in generative language models with",
"(ii) performance measures on two concrete downstream dialogue tasks dialog state tracking (DST) and conversational response generation (CRG).",
"Such a setup allows to test whether bias mitigation comes at the expense of deteriorated downstream dialog performance; 3) Finally, we adapt four bias mitigation methods from the literature and profile their debiasing and downstream effects on conversational language models with our evaluation framework.",
"Acknowledging the conversational nature of REDDITBIAS , we resort to the recently proposed DialoGPT model (Zhang et al., 2020) for our comparative evaluation study.",
"Our experimental results indicate that",
"(i) DialoGPT is significantly biased along two (out of five) bias evaluation dimensions and",
"(ii) that some of the employed debiasing methods (see 4) manage to reduce the bias, at the same time preserving DialoGPT's conversational capabilities.",
"We release REDDITBIAS together with all code online at: https://github.com/umanlp/RedditBias .",
"We first describe the process of REDDITBIAS creation, carried out in three steps: 1) creation of bias specifications for multiple bias dimensions, 2) retrieval of candidates for biased comments based on the bias specifications, and 3) manual annotation of candidate comments for the presence of bias.",
"Unlike prior work, which mostly focuses on one or two bias dimensions, our study encompasses five types of bias from four dimensions: (1) religion (two different bias types), (2) race , (3) gender , and (4) queerness .",
"To measure or mitigate a bias, one must first formalize (i.e., specify) it.",
"To this end, we start from the concept of an explicit bias specification (Caliskan et al., 2017; Lauscher et al., 2020a): an explicit bias specification BE = ( T 1 , T 2 , A 1 , A 2 ) consists of two sets of target terms or phrases T 1 and T 2 between which a bias is expected to exist w.r.t. two sets of attribute terms or phrases A 1 , and A 2 .",
"Further, we opt for bias specifications that reflect the inequality between groups in power, i.e., dominant groups, and discriminated groups, i.e., minoritized groups : 1 for each BE , the set T 1 consists of terms describing a minoritized group with (negative) stereotypical terms in A 1 , while T 2 consists of terms describing a dominant group with (positive) stereotypical terms in A 2 .",
"We compile bias specifications as follows.",
"The two target lists T 1 and T 2 are created by manually compiling small sets of near-synonymous expressions that unambiguously refer to the minoritized and dominant groups, respectively (e.g., for dimension religion and Muslims as the minoritized group, we compile T 1 = { muslims , arabs , islamic people , islam , islamic culture } ).",
"We then collect the list A 1 of stereotypical negative descriptors by engaging with sociological literature relating to the minoritized groups (Welch, 2007; Shaw, 2012; Black, 2015).",
"2 Finally, we create the corresponding list A 2 of positive descriptors by looking for (loose) antonyms of expressions in A 1 (e.g., if Jewish people T 1 are stereotypically greedy A 1 , we would then place generous into A 2 ).",
"Note that designing bias specifications is a crucial step in most of the current debiasing approaches and that there exists a trade-off between employing a bigger set of specification terms and keeping the bias specifications clean.",
"In this work, we generally focus on smaller and more precise term sets.",
"We show partial term lists from our bias specifications in Table 1 and provide the full lists in the Appendix.",
"Starting from the compiled bias specifications, we next retrieve candidates for stereotypical comments from Reddit using the Pushshift API.",
"3 To this end, we generate query strings by coupling each term from the target set T 1 identifying the minoritized group with each term from the corresponding stereotypical attribute set A 1 this gives a query 1 We borrow the terminology (i.e., minoritized groups vs. dominant groups or groups in power ) from the feminist discourse (e.g., D'Ignazio and Klein, 2020) 2 For example, Welch (2007) lists stereotypical negatives such as violent , drug dealer , or prison as strongly associated with African Americans.",
"set Q = T 1 A 1 .",
"4 We then run each query from Q against the API with a search period of 3 .",
"33 years.",
"In a postprocessing step, we clean the retrieved data by removing URLs, user names, and extra white spaces and by lower-casing the comments.",
"We retain only the retrieved comments that are shorter than 150 characters.",
"In many cases we observed that, while comments as a whole are not biased, the part of the comment that connects t T 1 and a A 1 , if taken out of context, is biased (e.g., he just thinks all blacks are criminals ).",
"To capture more biased phrases, we also extract a narrower context of + / 7 tokens from the target term t T 1 .",
"We then annotate for bias both (1) the whole comment and (2) this narrower context window around the target term extracted from the comment (as a standalone text).",
"4 To increase the likelihood that retrieved comments do express the bias of interest, we couple T 1 terms with correct forms of the verb to be (e.g., jews are instead of jews or husband is instead of husband ), as such phrases are more likely to introduce a biased statement.",
"(i.e., phrases).",
"Human annotators then assign a binary label indicating if a negative stereotypical bias is expressed to each comment and each corresponding phrase.",
"5 After an initial training of the annotators, we first carried out a small calibration study during which we refined the annotation guidelines 6 and identified corner cases, e.g., comments involving sarcasm or comments quoting an earlier (biased) comment.",
"We then split all the retrieved candidate comments for all five bias types between the three annotators (without overlap) and let them carry out the annotation work.",
"Table 3 reveals the total number of annotated and positive (i.e., biased) instances at the comment and phrase level for each of the five bias types.",
"Finally, we measure the inter-annotator agreement (IAA) by letting an additional annotator 7 label 100 randomly selected candidates for biased comments (20 per each of the five bias types).",
"We measure an IAA of .65 Krippendorff's (nomi-nal) on the comment level and .67 on the phrase 5 We hired three annotators with diverse gender and diverse religious and cultural backgrounds; they all have an University degree in Computer Science and speak English fluently.",
"level.",
"We did not observe significant differences in agreement across the individual bias types.",
"For the purposes of training and evaluating bias mitigation methods (which we adapt from the literature for conversational LMs in 4), we split the obtained biased phrases into train, development, and test portions; their sizes are also shown in Table 3.",
"We further show examples of comments labeled as biased for all five bias types in Table 2.",
"We now describe our framework for bias evaluation in conversational language models (LMs), which couples (1) a bias measure computed on the test portions of REDDITBIAS with (2) task-specific performance on downstream dialog tasks.",
"The latter aims to capture potential negative effects that debiasing techniques may have on downstream dialog performance of conversational LMs.",
"We estimate bias in conversational LMs by measuring if (and how much) likelier the LM is to generate a stereotypically biased phrase compared to a corresponding inversely biased phrase in which we replace t 1 T 1 with a t 2 T 2 .",
"To this end, we start from a bias specification BE = ( T 1 , T 2 , A 1 , A 2 ) and a set of the corresponding biased phrases X ( T 1 ,A 1 ) from the test portion of REDDITBIAS related to this bias dimension.",
"We first build pairs of corresponding terms between the { t 1 , t 2 } T 1 T 2 .",
"8 We list all pairs in the Appendix.",
"We then follow the principle of counterfactual data augmentation (Zhao et al., 2018) and for each biased phrase x ( t 1 ,a 1 ) X ( T 1 ,A 1) (e.g., everyone knows jews are greedy) create a corresponding inversely biased phrase x ( t 2 ,a 1 ) (e.g., everyone knows christians are greedy).",
"Let ( X ( T 1 ,A 1 ) , X ( T 2 ,A 1 ) ) = { ( x ( i ) ( t 1 ,a 1 ) , x ( i ) ( t 2 ,a 1 ) ) } Ni =1 be 8 For instance, for the bias type Religion #1 , we pair ( jew , christian ), ( judaism , christianity ), etc. a set of N such counterfactual pairs.",
"Our bias measure relies on the significance of mean perplexity differences between biased expressions x ( i ) ( t 1 ,a 1 ) and their counterfactual counterparts x ( i ) ( t 2 ,a 1 ) .",
"Since the reliability of such significance may be negatively affected by outliers (Pollet and van der Meij, 2017), we first reduce noise by removing pairs in which either x ( i ) ( t 1 ,a 1 ) or x ( i ) ( t 2 ,a 1 ) have very high perplexity, i.e., if they are not within the interval [( x + 3 s ) , ( x 3 s )] , where x is the mean perplexity of the sample and s the corresponding standard deviation.",
"Finally, we quantify and report the bias effect as the t -value of the Student's two-tailed test between two ordered sets of corresponding perplexity scores PP ( X ( T 1 ,A 1 ) ) and PP ( X ( T 2 ,A 1 ) ) obtained after eliminating the outlier pairs.",
"In this setup, a negative t value indicates the presence of a (negative) stereotypical bias.",
"The bias is then statistically significant if the corresponding p -value of the test is within the given confidence interval (in this study set to = 0 . 05 ).",
"Successful bias mitigation should ideally have no negative effect on the downstream performance of the LM in dialog tasks.",
"We therefore couple the LMB evaluation (3.1) with measures of performance on 1) the original (intrinsic) measurement of in-domain perplexity on Reddit utterances (Zhang et al., 2020), and two dialog tasks: 2) dialog state tracking on MultiWoZ (Budzianowski et al., 2018), and 3) conversational response generation on DSTC-7 (Yoshino et al., 2019).",
"Language Model Perplexity (LMP).",
"Following the original DialoGPT evaluation, we measure the perplexity of the model before and after we subject it to the bias mitigation methods from 4 on the reference data set consisting of 6 K examples extracted from Reddit by Zhang et al. (2020).",
"9 Dialog State Tracking (DST).",
"Resorting to one of the central subtasks of task-oriented dialog, we evaluate the models' performances on DST.",
"Here, the goal is to maintain an accurate account of the dialog belief state (i.e., information slots and their values provided by the user) at each turn of the conversation, combining the information from the current user utterance and the conversation history (Henderson et al., 2014; Mrksic et al., 2017).",
"We 9 github.com/microsoft/DialoGPT/blob/ master/data/human.ref.6k.txt evaluate the DST performance on the MultiWoZ 2.0 data set (Budzianowski et al., 2018).",
"10 As in the original work, DST is cast into a binary prediction task: given the dialog history and the current user utterance, predict for each slot-value combination whether it should be part of the current dialog belief state.",
"As input to DialogGPT, we concatenate the tokens from",
"(i) the previous system output,",
"(ii) the current user utterance, and",
"(iii) the MultiWoZ domain, the slot, and value tokens.",
"We couple the DialoGPT's transformer with a simple feed-forward classifier to which we feed the transformed representation of the last input token.",
"We train the whole model using the binary cross-entropy loss.",
"Conversational Response Generation (CRG).",
"Finally, like the original DialoGPT paper, we evaluate the model before and after bias mitigation on the sentence generation task from the Dialog System Technology Challenge 7 (DSTC-7; Yoshino et al., 2019).",
"The models receive",
"(a) a conversational input which includes k most recent preceding turns, and",
"(b) facts external pieces of texts containing knowledge relevant to the conversation, and are challenged to generate an interesting response that is relevant w.r.t. the dialog history.",
"For simplicity, here we use only the conversational context as input for DialoGPT and ignore the facts.",
"Starting from the transformed representation of the last context token, we then simply fine-tune DialoGPT (transformer encoder plus the LM head) on the train portion of the DSTC-7 data set via causal language modeling, generating the correct response from the data set.",
"The multi-reference test portion of the data set, also created from Reddit, has 5 gold (human) responses for each instance.",
"For evaluating biases and benchmarking bias mitigation effects on REDDITBIAS , we selected the well-known DialoGPT (Zhang et al., 2020) as the conversational LM.",
"Besides being one of the most well-known conversational LMs, it is additionally suitable for evaluation with REDDITBIAS because it was pretrained on Reddit data.",
"We subject DialoGPT to several bias mitigation approaches, which we here adapt in order to make them applicable to conversational LMs.",
"Qian et al. (2019) reduce the gender bias in recurrent LMs by extending the LM loss of the model with an auxiliary term which penalizes differences in probabilities assigned to words from gender pairs, e.g., woman and man .",
"For each of the five bias types (2) and their corresponding bias specifications BE = ( T 1 , T 2 , A 1 , A 2 ) , we manually compile a set of pairs P = { ( t 1 i , t 2 i ) } i T 1 T 2 for which an unbiased language model should assign equal probability to t 1 i T 1 and t 2 i T 2 at the position of any occurrence of either t 1 i or t 2 i .",
"Target terms from both T 1 and T 2 may participate in multiple pairs in P .",
"11 Let P t P be the set of pairs in which some target term t (from either T 1 or T 2 ) participates.",
"At every position in which any term t from P occurs, we augment the LM loss with the following debiasing loss: LLMD = 1 | P t | (cid:88) ( t 1 ,t 2) P i | log y t 1 y t 2 | , (1) where y is the predicted probability for a term, with the probability distribution computed only over the reduced vocabulary consisting of terms from P .",
"For positions where any terms from P appears, the overall loss is the weighted sum between the causal LM loss LLM and LLMD : L = LMLLM + DLLMD , (2) with the ratio between hyperparameters LM and D regulating the trade-off between the language modeling capability and bias mitigation.",
"Inspired by the DebiasNet approach of Lauscher et al. (2020a), applied in the context of debiasing static word embeddings, we devise a debiasing loss that aims to equalize the distance of terms from T 1 and T 2 w.r.t. the stereotypical attribute terms from the attribute set A 1 .",
"For each bias specification, we start from the same set P = { ( t 1 i , t 2 i ) } i T 1 T 2 of manually created term pairs between the target lists as in the case of LMD.",
"However, this time we focus on occurrences of attribute terms a A 1 .",
"At every position at which any of the terms from A 1 appears, we augment the LM loss with the 11 E.g., for the bias type Religion #2, we created the following pairs: ( muslim , christian ), ( islamic , christian ), ( islam , christianity ), ( arabs , americans ), ( islamism , christianity ).",
"We list the pairs for all other bias types in the Appendix.",
"Here, a is the transformed vector representation of the token a and t1 and t 2 are vector representations of t 1 and t 2 from the output LM layer (i.e., output embeddings of t 1 and t 2 ), 12 and cos denotes the cosine similarity.",
"ADD forces the output representations of target terms from the dominant group (e.g., christian ) to be equally distant to the representation of a stereotypical attribute for the minoritized group (e.g., dangerous ) as the representations of corresponding target terms denoting the minoritized group (e.g., muslim ).",
"Similar to LMD, for all occurrences of a A 1 , the final loss is the weighted sum of LLM and LADD , see Eq.",
"(2).",
"Similar to Bordia and Bowman (2019), we next devise a loss based on the idea of hard debiasing from Bolukbasi et al. (2016).",
"We compute this loss in two steps: (1) identification of the bias subspace, and (2) neutralization of the attribute words w.r.t. to the previously identified bias subspace.",
"(1) Bias Subspace Identification.",
"We start from the same set of manually curated target term pairs P as in LMD and ADD.",
"Let t be the output vector of some term t from the LM head.",
"We then obtain partial bias vectors b i for pairs ( t 1 i , t 2 i ) P by computing the differences between t1 i and t2 i : b i = ( t1 i t2 i ) / 2 .",
"We then stack the partial bias vectors b i to form a matrix C .",
"The bias subspace B then consists of the top k columns of V , obtained via SVD of C (i.e., SVD ( C ) = UV (cid:62) ), with k as the smallest number of singular values that explain at least 50% of the variance of the squared Frobenius norm of the matrix C .",
"(2) Attribute Neutralization.",
"In the second step, we neutralize the contextualized representations of attributes a A 1 with respect to the bias subspace B computed in the first step.",
"For each occurrence of any a A 1 , we augment the language modeling loss LLM with the following debiasing loss: LHD = k (cid:88) j =1 | b j (cid:104) a , b j (cid:105)| , (4) 12 For attributes and targets consisting of multiple subword tokens, we average their respective subword vectors.",
"where (cid:104) , (cid:105) denotes the dot product, a is the transformed vector of the input attribute token a , and b j denotes the j -th column of the bias subspace B .",
"The hard debiasing loss forces the transformer network of the language model to produce contextualized representations for stereotypical attributes (e.g., dangerous ) that are orthogonal to k most prominent bias directions.",
"Again, like in LMD and ADD, the total loss for some input token a A 1 is the weighted sum of the debiasing loss LHD and the language modeling loss LLM .",
"In contrast to the previous three debiasing methods, all of which introduce some type of additional debiasing loss, in CDA (Zhao et al., 2018) we modify the input data on which we fine-tune the DialoGPT via standard causal LM training.",
"The general idea is to break stereotypical associations of the model by duplicating each stereotypical (i.e., biased) instance and then replacing the term denoting the minoritized group with the corresponding term denoting the dominant group.",
"We again start from the manually created set of paired terms P = { ( t 1 i , t 2 i ) } i T 1 T 2 .",
"For each utterance in the training portion of REDDITBIAS which contains an association between t 1 i T 1 and a A 1 (e.g., that Muslim is dangerous ) we create a corresponding counterfactual utterance by replacing t 1 i with its pair t 2 i (e.g., that Christian is dangerous ).",
"We then simply further fine-tune DialoGPT by minimizing the causal LM loss LLM on both the original and counterfactual utterances.",
"In our experiments, we benchmark DialoGPT, a variant of GPT2 (Radford et al., 2019) pretrained on Reddit conversations with the objective to learn to generate responses that are coherent with the contextual prompt.",
"The model is pretrained on a data set containing 147M comment-response pairs spanning the time period from 2005 to 2017.",
"The corpus on which DialoGPT was trained had been preprocessed by removing offensive phrases from a large blacklist.",
"Consequently, DialoGPT is expected to exhibit fewer societal biases than general-purpose language models.",
"We validate this with our evaluation framework based on REDDITBIAS .",
"For each of the five bias types (2) we evaluate in terms of bias effect and downstream dialog performance (3) the original DialoGPT and its four debiased variants produced by applying one of the adapted debiasing method (4).",
"Data Splits.",
"For each bias type, we split the set of bias phrases from REDDITBIAS into training, development, and test portions, see Table 3 again.",
"We carry out the debiasing using the training and compute LMB on the test portions of REDDITBIAS .",
"13 Training and Optimization Details.",
"In all experiments, we use DialoGPT small ( 12 layers, 117 M parameters).",
"For each debiasing run, we train for 2 epochs, and optimize the parameters using Adam (Kingma and Ba, 2015) with the following configu-ration: learning rate = 5 10 5 , weight decay = 0 , beta1 = 0 .",
"9 , beta2 = 0 .",
"999 , epsilon = 1 10 8 .",
"In the loss-based debiasing procedures (LMD, ADD, HD) we optimize the hyperparameters on the respective validation portion of REDDITBIAS , searching the following grid: batch size { 4 , 8 , 16 } , gradient accumulation steps { 1 , 5 , 8 } , LM { 0 .",
"001 , 0 .",
"01 } , and D { 10 , 50 , 100 } .",
"We train the downstream models for DST and CRG (3) for a single epoch.",
"We optimize the models using Adam optimizer with the learning rate set to 5 10 5 and epsilon set to 1 10 8 .",
"We limit the input sequences to 128 (subword) tokens.",
"For DST, we train in batches of 48 instances, whereas for CRG, we set the batch size to 80 .",
"Figures 1a and 1b and Tables 4 and 5 summarize our evaluation results.",
"For brevity, we show only F1 scores for DST and Bleu-4 for CRG.",
"14 13 Note that for CDA, due to the augmentation procedure, we effectively train on two times more utterances.",
"14 Alternative performance measures, available in the Appendix, show similar trends in results.",
"Stereotypical Bias.",
"As shown in Figure 1a, according to our stereotypical bias measure (LMB), the original DialoGPT model still exhibits significant bias along the dimension of religion, for both Religion #1 ( jews , christians ), and Religion #2 ( muslims , christians ), despite the reported heuristic removal of offensive language from the pretraining data (Zhang et al., 2020).",
"This is most likely due to the more subtle nature of religious stereotypes, which manifest themselves not only in openly offensive text but also in latent co-occurrences of target and attribute terms (e.g., Islam being radical or Jews playing violins ).",
"The bias effect for the Gender dimension is also in the stereotypical direction (i.e., the t-value is negative), but the effect size is insignificant.",
"For Race and Queerness , DialoGPT exhibits insignificant bias effects in the direction opposite from the stereotypical one.",
"We believe that the biases in these two dimensions are most frequently associated with explicit and offensive language, much of which was eliminated in DialoGPT's preprocessing.",
"For the two Religion bias types, in which DialoGPT exhibits significant biases, only two of the four debiasing methods HD and CDA are able to remove the stereotypical bias for both bias specifications statistically significantly.",
"LMD and ADD each make the bias insignificant only in one of two cases (LMD for Religion #2 , ADD for Religion #1 ), although they do attenuate the original bias effect for the other specification as well.",
"Interestingly, for the dimensions in which DialoGPT does not exhibit significant stereotypical bias in the first place ( Race , Gender , Orientation ), all four debiasing methods tend to lead to an anti-stereotypical bias effect, i.e., to more strongly (and in a few cases statistically significantly) associated negative stereotypical attributes with the dominant group.",
"For example, criminal gets associated with caucasian , nurse with father or sinful with heterosexual ).",
"This finding stresses the utmost impor-religion1 religion2 race gender orientation 4 2 0 2 4 t _ v a l u e * * * * * * * * DialoGPT LMD ADD HD CDA",
"Downstream Dialog Performance.",
"Encouragingly, none of the four debiasing methods in our study seem to diminish DialoGPT's capabilities in downstream dialog tasks DST and response generation (see Tables 4 and 5).",
"15 Interestingly, while LMD drastically increases the perplexity on Reddit utterances (Figure 1b; see LMP in 3) this does not have negative consequences on DST and CRG.",
"To summarize, from the benchmarked debiasing methods, HD and CDA are able to significantly reduce the bias and preserve conversational capabilities; Our results suggest that the dialog performance would remain unaffected even if HD and CDA are to be applied more than once, in order to mitigate multiple bias types.",
"For a comprehensive overview of work on bias in NLP, we refer the reader to (Sun et al., 2019; Blodgett et al., 2020; Shah et al., 2020).",
"Here, we provide (1) a brief overview of bias measures and mitigation methods and their usage in (2) language generation and, specifically, in (3) dialog.",
"(1) Bias in NLP.",
"Resources, measures, and mitigation methods largely target static word embedding models: with their famous analogy man is to computer programmer as woman is to home-maker , Bolukbasi et al. (2016) first drew attention 15 Two exceptions, which requires further investigation are DST performance drops of LMD when debiasing for Race and of ADD when debiasing for Gender .",
"to the issue.",
"Caliskan et al. (2017) presented the Word Embedding Association Test (WEAT), quantifying the bias between two sets of target terms towards two sets of attribute terms.",
"Subsequent work proposed extensions to further embedding models (Liang et al., 2020a,b) and languages (e.g., McCurdy and Serbetci, 2020; Lauscher and Glavas, 2019; Lauscher et al., 2020b; May et al., 2019), analyses of the proposed measures (e.g., Gonen and Goldberg, 2019; Ethayarajh et al., 2019), more comprehensive evaluation frameworks (Lauscher et al., 2020a), new debiasing approaches (Dev and Phillips, 2019; Karve et al., 2019) and task-specific bias measures and resources for tasks like coreference resolution (Zhao et al., 2018), machine translation (Stanovsky et al., 2019) and natural language inference (Dev et al., 2020).",
"In our work, we similarly acknowledge the importance of understanding bias w.r.t. downstream tasks, but focus on dialog systems, for which the landscape of research efforts is surprisingly scarce.",
"(2) Bias in Language Generation.",
"Dialog systems crucially depend on natural language generation (NLG) models.",
"Yeo and Chen (2020) experimented with gender bias in word embeddings for NLG.",
"Sheng et al. (2019) introduce the notion of a regard for a demographic, and compile a data set and devise a bias classification model based on that notion.",
"Webster et al. (2020) proposed Discovery of Correlation (DisCo), a template-based method for gender bias detection which considers an LM's three highest-ranked predictions for a blank text position.",
"Nadeem et al. (2020) introduce StereoSet, a crowdsourced data set for associative contexts at two levels (intra-sentence and inter-sentence) for four bias dimensions.",
"Nangia et al. (2020) present CrowS-Pairs, a data set for measuring bias in masked LMs focusing on nine bias types.",
"However, they don't measure task-oriented model performance, which may degrade as a result of the debiasing procedure (Lauscher et al., 2020a).",
"Qian et al. (2019) reduce gender bias in recurrent LMs with a loss function based on HD (Bolukbasi et al., 2016) we adapt this method for debiasing conversational LMs (see 4).",
"(3) Bias in Dialog.",
"The landscape of research on bias in dialog systems is scarce: the existing efforts mostly focus on measuring and mitigating gender bias only and do not measure downstream dialog performance of debiased models.",
"Dinan et al. (2020b) focus on multi-dimensional gender bias classification and controlled mitigation.",
"Dinan et al. (2020a) analyze existing dialog data sets for gender bias and extend LIGHT (Urbanek et al., 2019), a resource for grounded dialog, with crowdsourced gender-balanced utterances.",
"Both Lee et al. (2019) and Liu et al. (2020a) add racial bias as a second dimension for bias analysis of dialog models.",
"While Lee et al. (2019) classify whether chatbots agree or disagree with stereotypical statements, Liu et al. (2020a) explore several measures for evaluating bias in dialog systems, including diversity in response generation this is similar to the work of Liu et al. (2020b) who also include generation quality measures.",
"Overall, these efforts focus only on the two bias dimensions ( gender and race ) and fail to thoroughly analyze the effects of debiasing on performance in dialog tasks such as slot-value extraction, DST, and CRG which are paramount in task-oriented dialog systems.",
"Stereotypical societal biases may lead to the generation of unfair and unethical responses in dialog systems.",
"We presented REDDITBIAS , a comprehensive resource for bias evaluation and debiasing of conversational LMs.",
"Consisting of manually-annotated biased comments from Reddit, REDDITBIAS is the first real-world resource dedicated to multi-dimensional analysis ( gender , race , religion , queerness ) of biases in dialog models.",
"We benchmarked the well-known DialogGPT on REDDITBIAS and analyzed the effects that different debiasing methods (adapted from previous work) have on it.",
"Despite dedicated bias mitigation preprocessing of DialogGPT's pretraining data, it still exhibits prominent religious biases.",
"The benchmarked debiasing methods, however, mostly manage to mitigate those biases, while at the same time retaining the model performance in dialog-oriented downstream tasks (e.g., dialog state tracking).",
"We hope that REDDITBIAS catalyzes research efforts on fair and ethical dialog systems and conversational AI.",
"The work of Anne Lauscher and Goran Glavas has been supported by the Multi2ConvAI Grant (Mehrsprachige und Domanen-ubergreifende Conversational AI) of the Baden-Wurttemberg Ministry of Economy, Labor, and Housing (KI-Innovation).",
"The work of Ivan Vulic has been supported by the ERC Consolidator Grant LEXICAL: Lexical Acquisition Across Languages (no. 648909) and the ERC PoC Grant MultiConvAI: Enabling Multilingual Conversational AI (no. 957356).",
"Acknowledging the ethical dimension of our work, we like to point the reader to the following limitations and potential implications.",
"(i) Gender is a spectrum and we fully acknowledge the importance of the inclusion of all gender identities , e.g., nonbinary, gender fluid, polygen-der, etc. in language technologies.",
"Note that in our gender bias specification, however, we follow a more classic notion in-line with our focus on the discrepancy between a dominant and a minoritized group.",
"We capture gender identities beyond the binary conception in our LGBTQ bias specification under the notion of queerness .",
"(ii) Similarly important is the intersectional-ity (Crenshaw, 1989) of stereotyping due to the individual composition and interaction of identity chracteristics, e.g., social class and gender (Degaetano-Ortlieb, 2018).",
"Due to its complexity, we do not address the topic in this work.",
"(iii) As we demonstrate in our work, debiasing technologies can, beyond its intended use, be used to increase bias and create biased models.",
"We think that this finding stresses our responsibility to reach out and to raise awareness w.r.t. the impact of language technology among decision makers and users, to establish a broader discourse, and to include ethical aspects in current data science curricula (Bender et al., 2020)."
]
| [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method"
]
|
[
"Probabilistic context-free grammars (PCFGs) with neural parameterization have been shown to be effective in unsupervised phrase-structure grammar induction.",
"However, due to the cubic computational complexity of PCFG representation and parsing, previous approaches cannot scale up to a relatively large number of (nonterminal and preterminal) symbols.",
"In this work, we present a new parameterization form of PCFGs based on tensor decomposition, which has at most quadratic computational complexity in the symbol number and therefore allows us to use a much larger number of symbols.",
"We further use neural parameterization for the new form to improve unsupervised parsing performance.",
"We evaluate our model across ten languages and empirically demonstrate the effectiveness of using more symbols.",
"1 1 Introduction Unsupervised constituency parsing is the task of inducing phrase-structure grammars from raw text without using parse tree annotations.",
"Early work induces probabilistic context-free grammars (PCFGs) via the Expectation Maximation algorithm and finds the result unsatisfactory (Lari and Young, 1990; Carroll and Charniak, 1992).",
"Recently, PCFGs with neural parameterization (i.e., using neural networks to generate rule probabilities) have been shown to achieve good results in unsupervised constituency parsing (Kim et al., 2019a; Jin et al., 2019; Zhu et al., 2020).",
"However, due to the cubic computational complexity of PCFG representation and parsing, these approaches learn PCFGs with relatively small numbers of nonterminals and preterminals.",
"For example, Jin et al. (2019) use 30 Corresponding Author 1 Our code: https://github.com/sustcsonglin/TN-PCFG nonterminals (with no distinction between preterminals and other nonterminals) and Kim et al. (2019a) use 30 nonterminals and 60 preterminals.",
"In this paper, we study PCFG induction with a much larger number of nonterminal and preterminal symbols.",
"We are partly motivated by the classic work of latent variable grammars in supervised constituency parsing (Matsuzaki et al., 2005; Petrov et al., 2006; Liang et al., 2007; Cohen et al., 2012; Zhao et al., 2018).",
"While the Penn treebank grammar contains only tens of nonterminals and preterminals, it has been found that dividing them into subtypes could significantly improves the parsing accuracy of the grammar.",
"For example, the best model from Petrov et al. (2006) contains over 1000 nonterminal and preterminal symbols.",
"We are also motivated by the recent work of Buhai et al. (2019) who show that when learning latent variable models, increasing the number of hidden states is often helpful; and by Chiu and Rush (2020) who show that a neural hidden Markov model with up to 2 16 hidden states can achieve surprisingly good performance in language modeling.",
"A major challenge in employing a large number of nonterminal and preterminal symbols is that representing and parsing with a PCFG requires a computational complexity that is cubic in its symbol number.",
"To resolve the issue, we rely on a new parameterization form of PCFGs based on tensor decomposition, which reduces the computational complexity from cubic to at most quadratic.",
"Furthermore, we apply neural parameterization to the new form, which is crucial for boosting unsupervised parsing performance of PCFGs as shown by Kim et al. (2019a).",
"We empirically evaluate our approach across ten languages.",
"On English WSJ, our best model with 500 preterminals and 250 nonterminals improves over the model with 60 preterminals and 30 nonterminals by 6.3% mean F1 score, and we also observe consistent decrease in perplexity and overall in-crease in F1 score with more symbols in our model, thus confirming the effectiveness of using more symbols.",
"Our best model also surpasses the strong baseline Compound PCFGs (Kim et al., 2019a) by 1.4% mean F1.",
"We further conduct multilingual evaluation on nine additional languages.",
"The evaluation results suggest good generalizability of our approach on languages beyond English.",
"Our key contributions can be summarized as follows: (1) We propose a new parameterization form of PCFGs based on tensor decomposition, which enables us to use a large number of symbols in PCFGs.",
"(2) We further apply neural parameterization to improve unsupervised parsing performance.",
"(3) We evaluate our model across ten languages and empirically show the effectiveness of our approach.",
"Grammar induction using neural networks: There is a recent resurgence of interest in unsupervised constituency parsing, mostly driven by neural network based methods (Shen et al., 2018a, 2019; Drozdov et al., 2019, 2020; Kim et al., 2019a,b; Jin et al., 2019; Zhu et al., 2020).",
"These methods can be categorized into two major groups: those built on top of a generative grammar and those without a grammar component.",
"The approaches most related to ours belong to the first category, which use neural networks to produce grammar rule probabilities.",
"Jin et al. (2019) use an invertible neural projection network (a.k.a. normalizing flow (Rezende and Mohamed, 2015)) to parameterize the preterminal rules of a PCFG.",
"Kim et al. (2019a) use neural networks to parameterize all the PCFG rules.",
"Zhu et al. (2020) extend their work to lexicalized PCFGs, which are more expressive than PCFGs and can model both dependency and constituency parse trees simultaneously.",
"In other unsupervised syntactic induction tasks, there is also a trend to use neural networks to produce grammar rule probabilities.",
"In unsupervised dependency parsing, the Dependency Model with Valence (DMV) (Klein and Manning, 2004) has been parameterized neurally to achieve higher induction accuracy (Jiang et al., 2016; Yang et al., 2020).",
"In part-of-speech (POS) induction, neurally parameterized Hidden Markov Models (HMM) also achieve state-of-the-art results (Tran et al., 2016; He et al., 2018).",
"Tensor decomposition on PCFGs: Our work is closely related to Cohen et al. (2013) in that both use tensor decomposition to parameterize the probabilities of binary rules for the purpose of reducing the time complexity of the inside algorithm.",
"However, Cohen et al. (2013) use this technique to speed up inference of an existing PCFG, and they need to actually perform tensor decomposition on the rule probability tensor of the PCFG.",
"In contrast, we draw inspiration from this technique to design a new parameterization form of PCFG that can be directly learned from data.",
"Since we do not have a probability tensor to start with, additional tricks have to be inserted in order to ensure validity of the parameterization, as will be discussed later.",
"PCFGs build upon context-free grammars (CFGs).",
"We start by introducing CFGs and establishing notations.",
"A CFG is defined as a 5-tuple G = ( S , N , P , , R ) where S is the start symbol, N is a finite set of nonterminal symbols, P is a finite set of preterminal symbols, 2 is a finite set of terminal symbols, and R is a set of rules in the following form: S A A N A BC, A N , B, C N P T w, T P , w PCFGs extend CFGs by associating each rule r R with a probability r .",
"Denote n , p , and q as the number of symbols in N , P , and , respectively.",
"It is convenient to represent the probabilities of the binary rules in the tensor form: T h A ,h B ,h C = A BC , T R n m m , where T is an order-3 tensor, m = n + p , and h A [ 0 , n ) and h B , h C [ 0 , m ) are symbol indices. For the convenience of computation, we assign indices [ 0 , n ) to nonterminals in N and [ n, m ) to preterminals in P . Similarly, for a preterminal rule we define Q h T ,h w = T w , Q R p q . 2 Strictly, CFGs do not distinguish nonterminals N (con-stituent labels) from preterminals P (part-of-speech tags). They are both treated as nonterminals. N , P , satisfy N P = and ( N P ) = . Again, h T and h w are the preterminal index and the terminal index, respectively. Finally, for a start rule we define r h A = S A , r R n . Generative learning of PCFGs involves maximizing the log-likelihood of every observed sentence w = w 1 , . . . , w l : log p ( w ) = log t TG ( w ) p ( t ) , where TG ( w ) contains all the parse trees of the sentence w under a PCFG G . The probability of a parse tree t TG is defined as p ( t ) = r t R r , where t R is the set of rules used in the derivation of t. log p ( w ) can be estimated efficiently through the inside algorithm, which is fully differentiable and amenable to gradient optimization methods. 3.2 Tensor form of the inside algorithm We first pad T , Q , and r with zeros such that T R m m m , Q R m q , r R m , and all of them can be indexed by both nonterminals and preterminals. The inside algorithm computes the probability of a symbol A spanning a substring w i,j = w i , . . . , w j in a recursive manner ( 0 i < j < l ): s Ai,j = j 1 k = i B,C A BC s Bi,k s Ck + 1 ,j . (1) Base Case: s Ti,i = T w i , 0 i < l . We use the tensor form of PCFGs to rewrite Equation 1 as: s h A i,j = j 1 k = i h B ,h CT h A ,h B ,h C s h B i,k s h C k + 1 ,j = j 1 k = i ( T h A s k + 1 ,j ) s i,k , (2) where s i,j , s i,k , and s k + 1 ,j are all m -dimensional vectors; the dimension h A corresponds to the symbol A . Thus s i,j = j 1 k = i ( T s k + 1 ,j ) s i,k . (3) Equation 3 represents the core computation of the inside algorithm as tensor-vector dot product. It is amenable to be accelerated on a parallel computing device such as GPUs. However, the time and space complexity is cubic in m , which makes it impractical to use a large number of nonterminals and preterminals. 4 Parameterizing PCFGs based on tensor decomposition The tensor form of the inside algorithm has a high computational complexity of O ( m 3 l 3 ) . It hinders the algorithm from scaling to a large m . To resolve the issue, we resort to a new parameterization form of PCFGs based on tensor decomposition (TD-PCFGs) (Cohen et al., 2013). As discussed in Section 2, while Cohen et al. (2013) use a TD-PCFG to approximate an existing PCFG for speedup in parsing, we regard a TD-PCFG as a stand-alone model and learn it directly from data. The basic idea behind TD-PCFGs is using Kruskal decomposition of the order-3 tensor T . Specifically, we require T to be in the Kruskal form, T = d l = 1 T ( l ) , T ( l ) = u ( l ) v ( l ) w ( l ) , (4) where u ( l ) R n is a column vector of a matrix U R n d ; v ( l ) , w ( l ) R m are column vectors of matrices V , W R m d , respectively; indicates Kronecker product. Thus T ( l ) R n m m is an order-3 tensor and T ( l ) i,j,k = u ( l ) i v ( l ) j w ( l ) k . The Kruskal form of the tensor T is crucial for reducing the computation of Equation 3. 
"To show this, we let $x = s_{i,k}$, $y = s_{k+1,j}$, and let $z$ be any summand on the right-hand side of Equation 3, so we have: $z = (T \cdot y) \cdot x$ (5). Substituting $T$ from Equation 4 into Equation 5 and considering the $i$-th dimension of $z$: $z_i = (T_i \cdot y) \cdot x = \sum_{j=1}^{m} \sum_{k=1}^{m} \sum_{l=1}^{d} T^{(l)}_{i,j,k} x_j y_k = \sum_{j=1}^{m} \sum_{k=1}^{m} \sum_{l=1}^{d} u^{(l)}_i v^{(l)}_j w^{(l)}_k x_j y_k = \sum_{l=1}^{d} u^{(l)}_i \big(\sum_{j=1}^{m} v^{(l)}_j x_j\big) \big(\sum_{k=1}^{m} w^{(l)}_k y_k\big) = \sum_{l=1}^{d} u^{(l)}_i (x^T v^{(l)}) (y^T w^{(l)}) = (e_i^T U) ((V^T x) \odot (W^T y))$ (6), where $\odot$ indicates the Hadamard (element-wise) product and $e_i \in \mathbb{R}^m$ is a one-hot vector that selects the $i$-th row of $U$. We have padded $U$ with zeros such that $U \in \mathbb{R}^{m \times d}$ and the last $m - n$ rows are all zeros. Thus $z = U ((V^T x) \odot (W^T y))$ (7), and accordingly, $s_{i,j} = U \sum_{k=i}^{j-1} ((V^T s_{i,k}) \odot (W^T s_{k+1,j}))$ (8). Equation 8 computes the inside probabilities using TD-PCFGs and has a time complexity of $O(md)$. By caching $V^T s_{i,k}$ and $W^T s_{k+1,j}$, the time complexity of the inside algorithm becomes $O(d l^3 + m d l^2)$ (Cohen et al., 2013), which is at most quadratic in $m$ since we typically set $d = O(m)$. Interestingly, Equation 8 has a similar form to recursive neural networks (Socher et al., 2013) if we treat inside score vectors as span embeddings. One problem with TD-PCFGs is that, since we use three matrices $U$, $V$, and $W$ to represent the tensor $T$ of binary rule probabilities, we must ensure that $T$ is non-negative and properly normalized, i.e., $\sum_{j,k} T_{h_A, j, k} = 1$ for a given left-hand-side symbol $A$. Simply reconstructing $T$ from $U$, $V$, and $W$ and then performing normalization would take $O(m^3)$ time, thus defeating the purpose of TD-PCFGs. Our solution is to require that the three matrices be non-negative, with $U$ row-normalized and $V$ and $W$ column-normalized (Shen et al., 2018b). Theorem 1: Given non-negative matrices $U \in \mathbb{R}^{n \times d}$ and $V, W \in \mathbb{R}^{m \times d}$, if $U$ is row-normalized and $V$ and $W$ are column-normalized, then $U$, $V$, and $W$ form a Kruskal decomposition of a tensor $T \in \mathbb{R}^{n \times m \times m}$ where $T_{i,j,k} \in [0, 1]$ and $T_i$ is normalized such that $\sum_{j,k} T_{i,j,k} = 1$.",
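A matching sketch of Equation 8, which replaces the cubic tensor contraction with the decomposed form. As above, this is a hedged illustration with assumed shapes, not the paper's implementation; note that $T$ is never materialized.

```python
import torch

def inside_td(U, V, W, Q, r, word_ids):
    # U: (m, d) row-normalized (preterminal rows zero-padded),
    # V, W: (m, d) column-normalized; Q, r as in the naive version.
    l, d = len(word_ids), U.shape[1]
    s = torch.zeros(l, l, U.shape[0])
    for i, w in enumerate(word_ids):
        s[i, i] = Q[:, w]
    for width in range(1, l):
        for i in range(l - width):
            j = i + width
            acc = torch.zeros(d)
            for k in range(i, j):       # (V^T s_{i,k}) * (W^T s_{k+1,j})
                acc += (V.T @ s[i, k]) * (W.T @ s[k + 1, j])
            s[i, j] = U @ acc           # Equation 8
    return (r * s[0, l - 1]).sum()
```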
"We use neural parameterization for TD-PCFGs as it has demonstrated its effectiveness in inducing PCFGs (Kim et al., 2019a).",
"In a neurally parameterized TD-PCFGs, the original TD-PCFG parameters are generated by neural networks, rather than being learned directly; parameters of the neural network will thus be the parameters to be optimized.",
"This modeling approach breaks the parameter number limit of the original TD-PCFG, so we can control the total number of parameters flexibly.",
"When the total number of symbols is small, we can over-parameterize the model as over-parameterization has been shown to ease optimization (Arora et al., 2018; Xu et al., 2018; Du et al., 2019).",
"On the other hand, when the total number of symbols is huge, we can decrease the number of parameters to save GPU memories and speed up training.",
"The resulting model is referred to as neural PCFGs based on tensor decomposition (TN-PCFGs).",
"We start with the neural parameterization of U R n d and V , W R m d .",
"We use shared symbol embeddings E s R m k ( k is the symbol embedding dimension) in which each row is the embedding of a nonterminal or preterminal.",
"We first compute an unnormalized U by applying a neural network f u ( ) to symbol embeddings E s : U = f u ( E s ) = ( ReLU ( E s M ( 1 ) u )) M ( 2 ) u , where M ( 1 ) u R k k and M ( 2 ) u R k d are learnable parameters of f u ( ) .",
"For simplicity, we omit the learnable bias terms.",
"We compute unnormalized V and W in a similar way.",
"Note that only E s is shared in computing the three unnormalized matrices.",
"Then we apply the Softmax activation function to each row of U and to each column of V and W , and obtain normalized U , V , and W .",
"For preterminal-rule probabilities Q R p q and start-rule probabilities r R n , we follow (Kim et al., 2019a) and define them as: Q h T ,h w = T w = exp ( u Tw f t ( w T )) w exp ( u Tw f t ( w T )) , r h A = S A = exp ( u TA f s ( w S )) A N exp ( u TA f s ( w S )) , where w and u are symbol embeddings; f s ( ) and f t ( ) are neural networks that encode the input into a vector (see details in Kim et al. (2019a)).",
"Note that the symbol embeddings are not shared between preterminal rules and start rules.",
"Typically, the CYK algorithm 3 can be directly used to solve this problem exactly: it first computes the score of the most likely parse; and then automatic differentiation is applied to recover the best tree structure t (Eisner, 2016; Rush, 2020).",
"This, however, relies on the original probability tensor T and is incompatible with our decomposed representation.",
"4 If we reconstruct T from U , V , W and then perform CYK, then the resulting time and space complexity would degrade to O ( m 3 l 3 ) and become unaffordable when m is large.",
"Therefore, we resort to Minimum Bayes-Risk (MBR) style decoding because we can compute the inside probabilities efficiently.",
"Our decoding method consists of two stages.",
"The first stage computes the conditional probability of a substring w i,j being a constituent in a given sentence w (a.k.a. posteriors of spans being a con-stituent): p ( w i,j w ) = 1 p ( w ) t TG ( w ) p ( t ) 1 { w i,j t } .",
"We can estimate the posteriors efficiently by using automatic differentiation after obtaining all the inside probabilities.",
"This has the same time complexity as our improved inside algorithm, which is O ( dl 3 + mdl 2 ) .",
"The second stage uses the CYK algorithm to find the parse tree that has the highest expected number of constituents (Smith and Eisner, 2006): t = arg max t TG ( w ) w i,j t p ( w i,j w ) .",
"3 The CYK algorithm is similar to the inside algorithm.",
"The only difference is that it uses MAX whenever the inside algorithm performs SUM over k and B,C ( cf . Equation 1).",
"4 In Equation 8 all symbols become entangled through VT s i,k and WT s k + 1 ,j .",
"We are unable to perform MAX over B,C as in the CYK algorithm.",
"The time complexity of the second stage is O ( l 3 ) , so the overall time complexity of our decoding method is O ( dl 3 + mdl 2 ) , which is much faster than O ( m 3 l 3 ) in general.",
"We evaluate TN-PCFGs across ten languages.",
"We use the Wall Street Journal (WSJ) corpus of the Penn Treebank (Marcus et al., 1994) for English, the Penn Chinese Treebank 5.1 (CTB) (Xue et al., 2005) for Chinese, and the SPRML dataset (Seddah et al., 2014) for the other eight morphology-rich languages.",
"We use a unified data preprocessing pipeline 5 provided by Zhao and Titov (2021).",
"The same pipeline has been used in several recent papers (Shen et al., 2018a, 2019; Kim et al., 2019a; Zhao and Titov, 2020).",
"Specifically, for every treebank, punctuation is removed from all data splits and the top 10,000 frequent words in the training data are used as the vocabulary.",
"For baseline models we use the best configurations reported by the authors.",
"For example, we use 30 nonterminals and 60 preterminals for N-PCFGs and C-PCFGs.",
"We implement TN-PCFGs and reimplement N-PCFGs and C-PCFGs using automatic differentiation (Eisner, 2016) and we borrow the idea of Zhang et al. (2020) to batchify the inside algorithm.",
"Inspired by Kim et al. (2019a), for TN-PCFGs we set n / p , the ratio of the nonterminal number to the preterminal number, to 1 / 2 .",
"For U R n d and V , W R m d we set d = p when there are more than 200 preterminals and d = 200 otherwise.",
"The symbol embedding dimension k is set to 256.",
"We optimize TN-PCFGs using the Adam optimizer (Kingma and Ba, 2015) with 1 = 0 .",
"75 , 2 = 0 .",
"999 , and learning rate 0.001 with batch size 4.",
"We use the unit Gaussian distribution to initialize embedding parameters.",
"We do not use the curriculum learning strategy that is used by Kim et al. (2019a) when training TN-PCFGs.",
"Following Kim et al. (2019a), we train a TN-PCFG for each treebank separately.",
"For each setting we run the TN-PCFG and the baselines four times with different random seeds and for ten epochs each 5 https://github.com/zhaoyanpeng/xcfg.",
"time.",
"Early stopping is performed based on the perplexity on the development data.",
"The best model in each run is selected according to the perplexity on the development data.",
"We tune model hyperparameters only on the development data of WSJ and use the same model configurations on the other treebanks.",
"6 We report average sentence-level F1 score 7 as well as their biased standard deviations.",
"We evaluate our models mainly on WSJ (Sec-tion 8.1-8.3).",
"We first give an overview of model 6 Shi et al. (2020) suggest not using the gold parses of the development data for hyperparameter tuning and model selection in unsupervised parsing.",
"Here we still use the gold parses of the WSJ development set for the English experiments in order to conduct fair comparison with previous work.",
"No gold parse is used in the experiments of any other language.",
"7 Following Kim et al. (2019a), we remove all trivial spans (single-word spans and sentence-level spans).",
"Sentence-level means that we compute F1 for each sentence and then average over all sentences.",
"performance in Section 8.1 and then conduct ablation study of TN-PCFGs in Section 8.2.",
"We quantitatively and qualitatively analyze constituent labels induced by TN-PCFGs in Section 8.3.",
"In Section 8.4, we conduct a multilingual evaluation over nine additional languages.",
"Our best TN-PCFG model uses 500 preterminals ( p = 500 ).",
"We compare it with a wide range of recent unsupervised parsing models (see the top section of Table 1).",
"Since we use MBR decoding for TN-PCFGs, which produces higher F1-measure than the CYK decoding (Goodman, 1996), for fair comparison we also use MBR decoding for our reimplemented N-PCFGs and C-PCFGs (see the middle section of Table 1).",
"We draw three key observations from Table 1: (1) TN-PCFG ( p = 500 ) achieves the best mean and max F1 score.",
"Notebly, it outperforms the strong baseline model C-PCFG by 1.4% mean F1.",
"Compared with TN-PCFG ( p = 60 ), TN-PCFG ( p = 500 ) brings a 6.3% mean F1 improvement, demonstrating the effectiveness of using more symbols.",
"(2) Our reimplementations of N-PCFGs and C-PCFGs are comparable to those of Kim et al. (2019a), (3) MBR decoding indeed gives higher F1 scores (+1.4% mean F1 for N-PCFG and +0.9% mean F1 for C-PCFG).",
"In Table 1 we also show the results of Constituent test (CT) (Cao et al., 2020) and DIORA (Drozdov et al., 2019, 2020), two recent state-of-the-art approaches.",
"However, our work is not directly comparable to these approaches.",
"CT relies on pretrained language models (RoBERTa) and DIORA relies on pretrained word embeddings (con-text insensitive ELMo).",
"In contrast, our model and the other approaches do not use pretrained word embeddings and instead learn word embeddings from scratch.",
"We are also aware of URNNG (Kim et al., 2019b), which has a max F1 score of 45.4%, but it uses punctuation and hence is not directly comparable to the models listed in the table.",
"We report the average running time 8 per epoch and the parameter numbers of different models in Table 2.",
"We can see that TN-PCFG ( p = 500 ), which uses a much larger number of symbols, has even fewer parameters and is not significantly slower than N-PCFG.",
"Figure 1 illustrates the change of F1 scores and perplexities as the number of nonterminals and preterminals increase.",
"We can see that, as the symbol number increases, the perplexities decrease while F1 scores tend to increase.",
"We analyze model performance by breaking down recall numbers by constituent labels (Table 3).",
"We use the top six frequent constituent labels in the WSJ test data (NP, VP, PP, SBAR, ADJP, and ADVP).",
"We first observe that the right-branching baseline remains competitive.",
"It achieves the highest recall on VPs and SBARs.",
"TN-PCFG ( p = 500 ) displays a relatively even performance across the six labels.",
"Specifically, it performs best on NPs and PPs among all the labels and it beats all the other models on ADJPs.",
"Compared with TN-PCFG ( p = 60 ), TN-PCFG ( p = 500 ) results in the largest improvement on VPs (+19.5% recall), which are usually long (with an average length of 11) in comparison with the other types of constituents.",
"As NPs and VPs cover about 54% of the total constituents in the WSJ test data, it is not surprising that models which are accurate on these labels have high F1 scores (e.g., C-PCFGs and TN-PCFGs ( p = 500 )).",
"We further analyze the correspondence between the nonterminals of trained models and gold constituent labels.",
"For each model, we look at all the correctly-predicted constituents in the test set and estimate the empirical posterior distribution of nonterminals assigned to a constituent given the gold label of the constituent (see Figure 2).",
"Compared with the other three models, in TN-PCFG ( p = 500 ), the most frequent nonterminals are more likely to correspond to a single gold label.",
"One possible explanation is that it contains much more nonterminals and therefore constituents of different labels are less likely to compete for the same nonterminal.",
"Figure 2d (TN-PCFG ( p = 500 )) also illustrates that a gold label may correspond to multiple nonterminals.",
"A natural question that follows is: do these nonterminals capture different subtypes of the gold label?",
"We find it is indeed the case for some nonterminals.",
"Take the gold label NPs (noun phrases), while not all the nonterminals have clear interpretation, we find that NT-3 corresponds to constituents which represent a company name; NT-99 corresponds to constituents which contain a possessive affix (e.g., 's in the market 's decline); NT-94 represents constituents preceded by an in-definite article.",
"We further look into the gold label PPs (preposition phrases).",
"Interestingly, NT-108, NT-175, and NT-218 roughly divided preposition phrases into three groups starting with with, by, from, to', in, on, for', and of', respectively.",
"See Appendix for more examples.",
"In order to understand the generalizability of TD-PCFGs on languages beyond English, we conduct a multilingual evaluation of TD-PCFGs on CTB and SPMRL.",
"We use the best model configurations obtained on the English development data and do not perform any further tuning on CTB and SPMRL.",
"We compare TN-PCFGs with N-PCFGs and C-PCFGs and use MBR decoding by default.",
"The results are shown in Table 4.",
"In terms of the average F1 over the nine languages, all the three models beat trivial leftand right-branching baselines by a large margin, which suggests they have good generalizability on languages beyond English.",
"Among the three models, TN-PCFG ( p = 500 ) fares best.",
"It achieves the highest F1 score on six out of nine treebanks.",
"On Swedish, N-PCFG is worse than the right-branching baseline (-13.4% F1), while Model NP VP PP SBAR ADJP ADVP S-F1 Left Branching 10.4 0.5 5.0 5.3 2.5 8.0 8.7 Right Branching 24.1 71.5 42.4 68.7 27.7 38.1 39.5 Random Trees 22.5 0 .",
"TN-PCFG ( p = 500 ) surpasses the right-branching baseline by 9.6% F1.",
"9 Discussions In our experiments, we do not find it beneficial to use the compound trick (Kim et al., 2019a) in TN-PCFGs, which is commonly used in previous work of PCFG induction (Kim et al., 2019a; Zhao and Titov, 2020; Zhu et al., 2020).",
"We speculate that the additional expressiveness brought by compound parameterization may not be necessary for a TN-PCFG with many symbols which is already suf-ficiently expressive; on the other hand, compound parameterization makes learning harder when we use more symbols.",
"We also find neural parameterization and the choice of nonlinear activation functions greatly influence the performance.",
"Without using neural parameterization, TD-PCFGs have only around 30% S-F1 scores on WSJ, which are even worse than the right-branching baseline.",
"Activation functions other than ReLU (such as tanh and sigmoid) result in much worse performance.",
"It is an interesting open question why ReLU and neural parameterization are crucial in PCFG induction.",
"When evaluating our model with a large number of symbols, we find that only a small fraction of the symbols are predicted in the parse trees (for example, when our model uses 250 nonterminals, only tens of them are found in the predicted parse trees of the test corpus).",
"We expect that our models can benefit from regularization techniques such as state dropout (Chiu and Rush, 2020).",
"We have presented TD-PCFGs, a new parameterization form of PCFGs based on tensor decomposition.",
"TD-PCFGs rely on Kruskal decomposition of the binary-rule probability tensor to reduce the computational complexity of PCFG representation and parsing from cubic to at most quadratic in the symbol number, which allows us to scale up TD-PCFGs to a much larger number of (nonterminal and preterminal) symbols.",
"We further propose neurally parameterized TD-PCFGs (TN-PCFGs) and learn neural networks to produce the parameters of TD-PCFGs.",
"On WSJ test data, TN-PCFGs outperform strong baseline models; we empirically show that using more nonterminal and preterminal symbols contributes to the high unsupervised parsing performance of TN-PCFGs.",
"Our multiligual evaluation on nine additional languages further reveals the capability of TN-PCFGs to generalize to languages beyond English.",
"This work was supported by the National Natural Science Foundation of China (61976139)."
]
| [
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"result",
"result",
"method",
"result",
"objective",
"result",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"objective",
"abstain",
"other",
"method",
"other",
"other",
"other",
"abstain",
"other",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"other",
"method",
"method",
"other",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"method",
"objective",
"objective",
"result",
"other"
]
|
[
"There are two major classes of natural language grammars the dependency grammar that models one-to-one correspondences between words and the constituency grammar that models the assembly of one or several corresponded words.",
"While previous unsupervised parsing methods mostly focus on only inducing one class of grammars, we introduce a novel model, StructFormer, that can simultaneously induce dependency and constituency structure.",
"To achieve this, we propose a new parsing framework that can jointly generate a constituency tree and dependency graph.",
"Then we integrate the induced dependency relations into the transformer, in a differentiable manner, through a novel dependency-constrained self-attention mechanism.",
"Experimental results show that our model can achieve strong results on unsupervised constituency parsing, unsupervised dependency parsing, and masked language modeling at the same time.",
"Human languages have a rich latent structure.",
"This structure is multifaceted, with the two major classes of grammar being dependency and constituency structures.",
"There has been an exciting breath of recent work targeted at learning this structure in a data-driven unsupervised fashion (Klein and Manning, 2002; Klein, 2005; Le and Zuidema, 2015; Shen et al., 2018c; Kim et al., 2019a).",
"The core principle behind recent methods that induce structure from data is simple provide an inductive bias that is conducive for structure to emerge as a byproduct of some self-supervised training, e.g., language modeling.",
"To this end, a wide range of models have been proposed that are able to successfully learn grammar structures (Shen et al., 2018a,c; Corresponding author: [email protected] . Work done while interning at Google Reseach. Wang et al., 2019; Kim et al., 2019b,a).",
"However, most of these works focus on inducing either constituency or dependency structures alone.",
"In this paper, we make two important technical contributions.",
"First, we introduce a new neural model, StructFormer, that is able to simultaneously induce both dependency structure and constituency structure.",
"Specifically, our approach aims to unify latent structure induction of different types of grammar within the same framework.",
"Second, StructFormer is able to induce dependency structures from raw data in an end-to-end unsupervised fashion.",
"Most existing approaches induce dependency structures from other syntactic information like gold POS tags (Klein and Manning, 2004; Cohen and Smith, 2009; Jiang et al., 2016).",
"Previous works, having trained from words alone, often requires additional information, like pre-trained word clustering (Spitkovsky et al., 2011), pre-trained word embedding (He et al., 2018), acoustic cues (Pate and Goldwater, 2013), or annotated data from related languages (Cohen et al., 2011).",
"We introduce a new inductive bias that enables the Transformer models to induce a directed dependency graph in a fully unsupervised manner.",
"To avoid the necessity of using grammar labels during training, we use a distance-based parsing mechanism.",
"The parsing mechanism predicts a sequence of Syntactic Distances T (Shen et al., 2018b) and a sequence of Syntactic Heights (Luo et al., 2019) to represent dependency graphs and constituency trees at the same time.",
"Examples of and T are illustrated in Figure 1a.",
"Based on the syntactic distances ( T ) and syntactic heights ( ), we provide a new dependency-constrained self-attention layer to replace the multi-head self-attention layer in standard transformer model.",
"More specifically, the new attention head can only attend its parent (to avoid confusion with self-attention head, we use parent to denote head in dependency graph) or",
"(b) Two types of dependency relations.",
"The parent distribution allows each token to attend on its parent.",
"The dependent distribution allows each token to attend on its dependents.",
"For example the parent of cats is like .",
"Cats and I are dependents of like Each attention head will receive a different weighted sum of these relations.",
"its dependents in the predicted dependency structure, through a weighted sum of relations shown in Figure 1b.",
"In this way, we replace the complete graph in the standard transformer model with a differentiable directed dependency graph.",
"During the process of training on a downstream task (e.g. masked language model), the model will gradually converge to a reasonable dependency graph via gradient descent.",
"Incorporating the new parsing mechanism, the dependency-constrained self-attention, and the Transformer architecture, we introduce a new model named StructFormer.",
"The proposed model can perform unsupervised dependency and constituency parsing at the same time, and can leverage the parsing results to achieve strong performance on masked language model tasks.",
"Previous works on unsupervised dependency parsing are primarily based on the dependency model with valence (DMV) (Klein and Manning, 2004) and its extension (Daume III, 2009; Gillenwater et al., 2010).",
"To effectively learn the DMV model for better parsing accuracy, a variety of inductive biases and handcrafted features, such as correlations between parameters of grammar rules involving different part-of-speech (POS) tags, have been proposed to incorporate prior information into learning.",
"The most recent progress is the neural DMV model (Jiang et al., 2016), which uses a neural network model to predict the grammar rule probabilities based on the distributed representation of POS tags.",
"However, most existing unsupervised dependency parsing algorithms require the gold POS tags to ge provided as inputs.",
"These gold POS tags are labeled by humans and can be potentially difficult (or prohibitively expensive) to obtain for large corpora.",
"Spitkovsky et al. (2011) proposed to overcome this problem with unsupervised word clustering that can dynamically assign tags to each word considering its context.",
"He et al. (2018) overcame the problem by combining DMV model with invertible neural network to jointly model discrete syntactic structure and continuous word representations.",
"Unsupervised constituency parsing has recently received more attention.",
"PRPN (Shen et al., 2018a) and ON-LSTM (Shen et al., 2018c) induce tree structure by introducing an inductive bias to recurrent neural networks.",
"PRPN proposes a parsing network to compute the syntactic distance of all word pairs, while a reading network uses the syntactic structure to attend to relevant memories.",
"ON-LSTM allows hidden neurons to learn long-term or short-term information by a novel gating mechanism and activation function.",
"In URNNG (Kim et al., 2019b), amortized variational inference was applied between a recurrent neural network grammar (RNNG) (Dyer et al., 2016) decoder and a tree structure inference network, which encourages the decoder to generate reasonable tree structures.",
"DIORA (Drozdov et al., 2019) proposed using inside-outside dynamic programming to compose latent representations from all possible binary trees.",
"The representations of inside and outside passes from the same sentences are optimized to be close to each other.",
"The compound PCFG (Kim et al., 2019a) achieves grammar induction by maximizing the marginal likelihood of the sentences which are generated by a probabilistic context-free grammar (PCFG).",
"Tree Transformer (Wang et al., 2019) adds extra locality constraints to the Transformer encoder's self-attention to encourage the attention heads to follow a tree structure such that each token can only attend on nearby neighbors in lower layers and gradually extend the attention field to further tokens when climbing to higher layers.",
"Neural L-PCFG (Zhu et al., 2020) demonstrated that PCFG can benefit from modeling lexical dependencies.",
"Similar to StructFormer, the Neural L-PCFG induces both constituents and dependencies within a single model.",
"Though large scale pre-trained models have dominated most natural language processing tasks, some recent work indicates that neural network models can see accuracy gains by leveraging syntactic information rather than ignoring it (Marcheg-giani and Titov, 2017; Strubell et al., 2018).",
"Strubell et al. (2018) introduces syntactically-informed self-attention that force one attention head to attend on the syntactic governor of the input token.",
"Omote et al. (2019) and Deguchi et al. (2019) argue that dependency-informed self-attention can improve Transformer's performance on machine translation.",
"Kuncoro et al. (2020) shows that syntactic biases help large scale pre-trained models, like BERT, to achieve better language understanding.",
"In this section, we first reintroduce the concepts of syntactic distance and height, then discuss their relations in the context of StructFormer.",
"Syntactic distance is proposed in Shen et al. (2018b) to quantify the process of splitting sentences into smaller constituents.",
"Definition 3.1.",
"Let T be a constituency tree for sentence ( w 1 , ..., w n ) .",
"The height of the lowest common ancestor for consecutive words x i and x i +1 is i .",
"Syntactic distances T = ( 1 , ..., n 1 ) are defined as a sequence of n 1 real scalars that share the same rank as ( 1 , ..., n 1 ) .",
"In other words, each syntactic distance d i is associated with a split point ( i, i + 1) and specify the relative order in which the sentence will be split into smaller components.",
"Thus, any sequence of n 1 real values can unambiguously map to an unlabeled binary constituency tree with n leaves through the Algorithm 1 (Shen et al., 2018b).",
"As Shen et al. (2018c,a); Wang et al. (2019) pointed out, the syntactic distance reflects the information communication between constituents.",
"More concretely, a large syntactic distance i represents that short-term or local information should not be communicated between ( x i ) and ( x >i ) .",
"While cooperating with appropriate neural network architectures, we can leverage this feature to build unsupervised dependency parsing models.",
"Algorithm 1 Distance to binary constituency tree 1: function CONSTITUENT ( w , d ) 2: if d = [] then 3: T Leaf( w ) 4: else 5: i arg max i ( d ) 6: child l Constituent( w i , d <i ) 7: child r Constituent( w >i , d >i ) 8: T Node(child l , child r ) 9: return T Algorithm 2 Converting binary constituency tree to dependency graph 1: function DEPENDENT ( T , ) 2: if T = w then 3: D [] , parent w 4: else 5: child l , child r T 6: D l , parent l Dependent(child l , ) 7: D r , parent r Dependent(child r , ) 8: D Union( D l , D r ) 9: if (parent l ) > (parent r ) then 10: D .",
"Syntactic height is proposed in Luo et al. (2019), where it is used to capture the distance to the root node in a dependency graph.",
"A word with high syntactic height means it is close to the root node.",
"In this paper, to match the definition of syntactic distance, we redefine syntactic height as: Definition 3.2.",
"Let D be a dependency graph for sentence ( w 1 , ..., w n ) .",
"The height of a token w i in D is i .",
"The syntactic heights of D can be any sequence of n real scalars = ( 1 , ..., n ) that share the same rank as ( 1 , ..., n ) .",
"Although the syntactic height is defined based on the dependency structure, we cannot rebuild the original dependency structure by syntactic heights alone, since there is no information about whether a token should be attached to the left side or the right side.",
"However, given an unlabelled constituent tree, we can convert it into a dependency graph with the help of syntactic distance.",
"The converting process is similar to the standard process of converting constituency treebank to dependency treebank (Gel-bukh et al., 2005).",
"Instead of using the constituent labels and POS tags to identify the parent of each constituent, we simply assign the token with the largest syntactic height as the parent of each constituent.",
"The conversion algorithm is described in Algorithm 2.",
"In Appendix A.1, we also propose a joint algorithm, that takes T and as inputs and jointly outputs a constituency tree and dependency graph.",
"As discussed previously, the syntactic distance controls information communication between the two sides of the split point.",
"The syntactic height quanti-fies the centrality of each token in the dependency graph.",
"A token with large syntactic height tends to have more long-term dependency relations to connect different parts of the sentence together.",
"In StructFormer, we quantify the syntactic distance and height on the same scale.",
"Given a split point ( i, i + 1) and it's syntactic distance i , only tokens",
"x j with j > i can attend across the split point ( i, i + 1) .",
"Thus tokens with small syntactic height are limited to attend to nearby tokens.",
"Figure 2 provides an example of T , and respective dependency graph D .",
"However, if the left and right boundary syntactic distance of a constituent [ l, r ] are too large, all words in [ l, r ] will be forced to only attend to other words in [ l, r ] .",
"Their contextual embedding will not be able to encode the full context.",
"To avoid this phenomena, we propose calibrating T according to in Appendix A.2 4",
"In this section, we present the StructFormer model.",
"Figure 3a shows the architecture of StructFormer, which includes a parser network and a Transformer module.",
"The parser network predicts T and , then passes them to a set of differentiable functions to generate dependency distributions.",
"The Transformer module takes these distributions and the sentence as input to computes a contextual embedding for each position.",
"The StructFormer can be trained in an end-to-end fashion on a Masked Language Model task.",
"In this setting, the gradient back propagates through the relation distributions into the parser.",
"As shown in Figure 3b, the parsing network takes word embeddings as input and feeds them into several convolution layers:",
"where s l,i is the output of l -th layer at i -th position, s 0 ,i is the input embedding of token w i , and 2 W +1 is the convolution kernel size.",
"Given the output of the convolution stack s N,i , we parameterize the syntactic distance T as: i = W 1 tanh (cid:18) W 2 (cid:20) s N,i s N,i +1 (cid:21)(cid:19) , 1 i n 1 , i = 0 or i = n (2) where i is the contextualized distance for the i th split point between token w i and w i +1 .",
"The syntactic height is parameterized in a similar way: i = W 1 tanh (cid:16) W 2 s N,i + b 2 (cid:17) + b 1 (3) 4.2 Estimate the Dependency Distribution Given T and , we now explain how to estimate the probability p ( x j | x i ) such that the j -th token is the parent of the i -th token.",
"The first step is identifying the smallest legal constituent C ( x i ) , that contains x i and x i is not C ( x i ) 's parent.",
"The second step is identifying the parent of the constituent x j = Pr ( C ( x i )) .",
"Given the discussion in section 3.2, the parent of C ( x i ) must be the parent of x i .",
"Thus, the two-stages of identifying the parent of x i can be formulated as: D ( x i ) = Pr ( C ( x i )) (4) In StructFormer, C ( x i ) is represented as constituent [ l, r ] , where l is the starting index ( l i ) of C ( x i ) and r is the ending index ( r i ) of C ( x i ) .",
"In a dependency graph, x i is only connected to its parent and dependents.",
"This means that x i does not have direct connection to the outside of C ( x i ) .",
"In other words, C ( x i ) = [ l, r ] is the smallest constituent that satisfies: i < l 1 , i < r (5) where l 1 is the first <i that is larger then i while looking backward, and r is the first i that is larger then i while looking forward.",
"For example, in Figure 2, 4 = 3 .",
"5 , 3 = 4 > 4 and 8 = > 4 , thus C ( x 4 ) = [4 , 8] .",
"To make this process differentiable, we define k as a real value and i as a probability distribution p ( i ) .",
"For the simplicity and efficiency of computation, we directly parameterize the cumulative distribution function p ( i > k ) with sigmoid function: p ( i > k ) = (( i k ) / 1 ) (6) where is the sigmoid function, i is the mean of distribution p ( i ) and 1 is a learnable temperature term.",
"Thus the probability that the l -th ( l < i ) token is inside C ( x i ) is equal to the probability that i is larger then the maximum distance between l and i : p ( l C ( x i )) = p ( i > max( i 1 , ..., l )) (7) = (( i max( l , ..., i 1 )) / ) Then we can compute the probability distribution for l : p ( l | i ) = p ( l C ( x i )) p ( l 1 C ( x i )) = (( i max( l , ..., i 1 )) / ) (( i max( l 1 , ..., i 1 )) / ) (8) Similarly, we can compute the probability distribution for r : p ( r | i ) = (( i max( i , ..., r 1 )) / ) (( i max( i , ..., r )) / ) (9) The probability distribution for [ l, r ] = C ( x i ) can be computed as: p C ([ l, r ] | i ) = (cid:26) p ( l | i ) p ( r | i ) , l i r 0 , otherwise (10) The second step is to identify the parent of [ l, r ] .",
"For any constituent [ l, r ] , we choose the j = argmax k [ l,r ] ( k ) as the parent of [ l, r ] .",
"In the previous example, given constituent [4 , 8] , the maximum syntactic height is 6 = 4 .",
"5 , thus Pr ([4 , 8]) = x 6 .",
"We use softmax function to parameterize the probability p Pr ( j | [ l, r ]) : p Pr ( j | [ l, r ]) = (cid:40) exp( h j / 2 ) (cid:80) l k r exp( h k / 2 ) , l t r 0 , otherwise (11) Given probability p ( j | [ l, r ]) and p ([ l, r ] | i ) , we can compute the probability that x j is the parent of x i : p D ( j | i ) = (cid:26)(cid:80) [ l,r ] p Pr ( j | [ l, r ]) p C ([ l, r ] | i ) , i (cid:54) = j 0 , i = j (12) 4.3 Dependency-Constrained Multi-head Self-Attention The multi-head self-attention in the transformer can be seen as a information propagation mechanism on the complete graph G = ( X, E ) , where the set of vertices X contains all n tokens in the sentence, and the set of edges E contains all possible word pairs ( x i , x j ) .",
"StructFormer replace the complete graph G with a soft dependency graph D = ( X, A ) , where A is the matrix of n n probabilities.",
"A ij = p D ( j | i ) is the probability of the j -th token depending on the i -th token.",
"The reason that we called it a directed edge is that each specific head is only allow to propagate information either from parent to dependent or from from dependent to parent.",
"To do so, structformer associate each attention head with a probability distribution over parent or dependent relation.",
"p parent = exp( w parent ) exp( w parent ) + exp( w dep ) (13) p dep = exp( w dep ) exp( w parent ) + exp( w dep ) (14) where w parent and w dep are learnable parameters that associated with each attention head, p parent is the probability that this head will propagate information from parent to dependent, vice versa.",
"The model will learn to assign this association from the downstream task via gradient descent.",
"Then we can compute the probability that information can be propagated from node j to node i via this head: p i,j = p parent p D ( j | i ) + p dep p D ( i | j ) (15) However, Htut et al. (2019) pointed out that different heads tend to associate with different type of universal dependency relations (including nsubj , obj , advmod , etc), but there is no generalist head can that work with all different relations.",
"To accommodate this observation, we compute a individual probability for each head and pair of tokens ( x i , x j ) : q i,j = sigmoid (cid:18) QKT d k (cid:19) (16) where Q and K are query and key matrix in a standard transformer model and d k is the dimension of attention head.",
"The equation is inspired by the scaled dot-product attention in transformer.",
"We replace the original softmax function with a sigmoid function, so q i,j became an independent probability that indicates whether x i should attend on x j through the current attention head.",
"In the end, we propose to replace transformer's scaled dot-product attention with our dependency-constrained self-attention: Attention( Q i , K j , V j , D ) = p i,j q i,j V j (17) 5 Experiments We evaluate the proposed model on three tasks: Masked Language Modeling, Unsupervised Constituency Parsing and Unsupervised Dependency Parsing.",
"Our implementation of StructFormer is close to the original Transformer encoder (Vaswani et al., 2017).",
"Except that we put the layer normalization in front of each layer, similar to the T5 model (Raf-fel et al., 2019).",
"We found that this modification allows the model to converges faster.",
"For all experiments, we set the number of layers L = 8 , the embedding size and hidden size to be d model = 512 , the number of self-attention heads h = 8 , the feed-forward size d ff = 2048 , dropout rate as 0 .",
"1 , and the number of convolution layers in the parsing network as L p = 3 .",
"Masked Language Modeling (MLM) has been widely used as a pretraining object for larger-scale pretraining models.",
"In BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019), authors found that MLM perplexities on held-out evaluation set have a positive correlation with the end-task performance.",
"We trained and evaluated our model on 2 different datasets: the Penn TreeBank (PTB) and BLLIP.",
"In our MLM experiments, each token has an independent chance to be replaced by a mask token <mask> , except that we never replace < unk > token.",
"The training and evaluation object for Masked Language Model is to predict the replaced tokens.",
"The performance of MLM is evaluated by measuring perplexity on masked words.",
"PTB is a standard dataset for language modeling (Mikolov et al., 2012) and unsupervised constituency parsing (Shen et al., 2018c; Kim et al., 2019a).",
"Following the setting proposed in Shen et al. (2018c), we use Mikolov et al. (2012)'s prepossessing process, which removes all punc-tuations, and replaces low frequency tokens with <unk> .",
"The preprocessing results in a vocabulary size of 10001 (including <unk> , <pad> and <mask> ).",
"For PTB, we use a 30% mask rate.",
"BLLIP is a large Penn Treebank-style parsed corpus of approximately 24 million sentences.",
"We train and evaluate StructFormer on three splits of BLLIP: BLLIP-XS (40k sentences, 1M tokens), BLLIP-SM (200K sentences, 5M tokens), and BLLIP-MD (600K sentences, 14M tokens).",
"They are obtained by randomly sampling sections from Model PTB BLLIP BLLIP BLLIP -XS -SM -MD Transformer 64.05 93.90 19.92 14.31 StructFormer 60.94 57.28 18.70 13.70 Table 1: Masked Language Model perplexities on different datasets.",
"BLLIP 1987-89 Corpus Release 1.",
"All models are tested on a shared held-out test set (20k sentences, 500k tokens).",
"Following the settings provided in (Hu et al., 2020), we use subword-level vocabulary extracted from the GPT-2 pre-trained model rather than the BLLIP training corpora.",
"For BLLIP, we use a 15% mask rate.",
"The masked language model results are shown in Table 1.",
"StructFormer consistently outperforms our Transformer baseline.",
"This result aligns with previous observations that linguistically informed self-attention can help Transformers achieve stronger performance.",
"We also observe that StructFormer converges much faster than the standard Transformer model.",
"The unsupervised constituency parsing task compares the latent tree structure induced by the model with those annotated by human experts.",
"We use the Algorithm 1 to predict the constituency trees from T predicted by StructFormer.",
"Following the experiment settings proposed in Shen et al. (2018c), we take the model trained on PTB dataset and evaluate it on WSJ test set.",
"The WSJ test set is section 23 of WSJ corpus, it contains 2416 human expert labeled sentences.",
"Punctuation is ignored during the evaluation.",
"Table 2 shows that our model achieves strong results on unsupervised constituency parsing.",
"While PRPN ON C-PCFG Tree-T Ours SBAR 50.0% 52.5% 56.1% 36.4% 48.7% NP 59.2% 64.5% 74.7% 67.6% 72.1% VP 46.7% 41.0% 41.7% 38.5% 43.0% PP 57.2% 54.4% 68.8% 52.3% 74.1% ADJP 44.3% 38.1% 40.4% 24.7% 51.9% ADVP 32.8% 31.6% 52.5% 55.1% 69.5% Table 3: Fraction of ground truth constituents that were predicted as a constituent by the models broken down by label (i.e. label recall) the C-PCFG (Kim et al., 2019a) achieve a stronger parsing performance with its strong linguistic constraints (e.g. a finite set of production rules), StructFormer may have a border domain of application.",
"For example, it can replace the standard transformer encoder in most of the popular large-scale pre-trained language models (e.g. BERT and Re-BERTa) and transformer based machine translation models.",
"Different from the transformer-based Tree-T (Wang et al., 2019), we did not directly use constituents to restrict the self-attention receptive field.",
"But StructFormer achieves a stronger constituency parsing performance.",
"This result may suggest that dependency relations are more suitable for grammar induction in transformer-based models.",
"Table 3 shows that our model achieves strong accuracy while predicting Noun Phrase (NP), Preposition Phrase (PP), Adjective Phrase (ADJP), and Adverb Phrase (ADVP).",
"The unsupervised dependency parsing evaluation compares the induced dependency relations with those in the reference dependency graph.",
"The most common metric is the Unlabeled Attachment Score (UAS), which measures the percentage that a token is correctly attached to its parent in the reference tree.",
"Another widely used metric for unsupervised dependency parsing is Undirected Unlabeled Attachment Score (UUAS) measures the percentage that the reference undirected and unlabeled connections are recovered by the induced tree.",
"Similar to the unsupervised constituency parsing, we take the model trained on PTB dataset and evaluate it on WSJ test set (section 23).",
"For the WSJ test set, reference dependency graphs are converted from its human-annotated constituency trees.",
"However, there are two different sets of rules for the conversion: the Stanford dependencies and the CoNLL dependencies.",
"While Stanford dependencies are used as reference dependencies in previous unsupervised Relations MLM Constituency Stanford Conll PPL UF1 UAS UUAS UAS UUAS parent+dep 60.9 (1.0) 54.0 (0.3) 46.2 (0.4) 61.6 (0.4) 36.2 (0.1) 56.3 (0.2) parent 63.0 (1.2) 40.2 (3.5) 32.4 (5.6) 49.1 (5.7) 30.0 (3.7) 50.0 (5.3) dep 63.2 (0.6) 51.8 (2.4) 15.2 (18.2) 41.6 (16.8) 20.2 (12.2) 44.7 (13.9) Table 4: The performance of StructFormer with different combinations of attention masks.",
"parsing papers, we noticed that our model sometimes output dependency structures that are closer to the CoNLL dependencies.",
"Therefore, we report UAS and UUAS for both Stanford and CoNLL dependencies.",
"Following the setting of previous papers (Jiang et al., 2016), we ignored the punctuation during evaluation.",
"To obtain the dependency relation from our model, we compute the argmax for dependency distribution: k = argmax j (cid:54) = i p D ( j | i ) (18) and assign the k -th token as the parent of i -th token.",
"Table 5 shows that our model achieves competitive dependency parsing performance while comparing to other models that do not require gold POS tags.",
"While most of the baseline models still rely on some kind of latent POS tags or pre-trained word embeddings, StructFormer can be seen as an easy-to-use alternative that works in an end-to-end fashion.",
"Table 6 shows that our model recovers 61.6% of undirected dependency relations.",
"Given the strong performances on both dependency parsing and masked language modeling, we believe that the dependency graph schema could be a viable substitute for the complete graph schema used in the standard transformer.",
"Appendix A.4 provides examples of parent distribution.",
"Since our model uses a mixture of the relation probability distribution for each self-attention head, we also studied how different combinations of relations affect the performance of our model.",
"Table 6 shows that the model can achieve the best performance while using both parent and dependent relations.",
"The model suffers more on dependency parsing if the parent relation is removed.",
"And if the dependent relationship is removed, the model will suffer more on the constituency parsing.",
"Appendix A.3 shows the weight for parent and dependent relations learnt from MLM tasks.",
"It's interesting to observe that Structformer tends to focus on the parent relations in the first layer, and start to use both relations from the second layer.",
"In this paper, we introduce a novel dependency and constituency joint parsing framework.",
"Based on the framework, we propose StructFormer, a new unsupervised parsing algorithm that does unsupervised dependency and constituency parsing at the same time.",
"We also introduced a novel dependency-constrained self-attention mechanism that allows each attention head to focus on a specific mixture of dependency relations.",
"This brings Transformers closer to modeling a directed dependency graph.",
"The experiments show promising results that StructFormer can induce meaningful dependency and constituency structures and achieve better performance on masked language model tasks.",
"This research provides a new path to build more linguistic bias into a pre-trained language model."
]
| [
"abstain",
"objective",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain"
]
|
[
"Abstract",
"Medical named entity recognition (NER) and normalization (NEN) are fundamental for constructing knowledge graphs and building QA systems.",
"Existing implementations for medical NER and NEN are suffered from the error propagation between the two tasks.",
"The mispredicted mentions from NER will directly influence the results of NEN.",
"Therefore, the NER module is the bottleneck of the whole system.",
"Besides, the learnable features for both tasks are beneficial to improving the model performance.",
"To avoid the disadvantages of existing models and exploit the generalized representation across the two tasks, we design an end-to-end progressive multi-task learning model for jointly modeling medical NER and NEN in an effective way.",
"There are three level tasks with progressive difficulty in the framework.",
"The progressive tasks can reduce the error propagation with the incremental task settings which implies the lower level tasks gain the supervised signals other than errors from the higher level tasks to improve their performances.",
"Besides, the context features are exploited to enrich the semantic information of entity mentions extracted by NER.",
"The performance of NEN profits from the enhanced entity mention features.",
"The standard entities from knowledge bases are introduced into the NER module for extracting corresponding entity mentions correctly.",
"The empirical results on two publicly available medical literature datasets demonstrate the superiority of our method over nine typical methods.",
"To dig into the large amount of electronic medical records, there has been an increasing interest in applying information extraction to them.",
"These techniques can generate tremendous benefit for corresponding research and applications, such as medCorresponding author.",
"ical knowledge graph (Wu et al., 2019) and QA systems (Lamurias and Couto, 2019).",
"Among the medical text mining tasks, medical named entity recognition and normalization are the most fundamental tasks.",
"Named entity recognition tries to find the boundaries of mentions from the medical texts.",
"And named entity normalization maps mentions extracted from the medical text to standard identifiers, such as MeSH and OMIM (Zhao et al., 2019).",
"The initial pipeline implementations for medical NER and NEN have a main limitation: error extractions from NER cascade into NEN which result in normalization errors.",
"Besides, the mutual use between recognition and normalization is not utilized in the pipeline models.",
"To alleviate the limitations and achieve a higher performance, some researchers focused on jointly modeling these two tasks.",
"Leaman and Lu (2016) proposed a joint scoring function for medical NER and NEN.",
"Lou et al. (2017) casted the output construction process of the two tasks as a state transition process to perform medical named entity recognition and normalization.",
"To capture the semantic features of two tasks, Zhao et al. (2019) proposed a multi-task learning framework with an explicit feedback strategy for medical NER and NEN.",
"As shown in Figure 1, there are two common frameworks: pipeline and parallel multi-task framework.",
"The former one is formulated to maximize the posterior probabilities p ( y NER | x ) and p ( y NEN | m, e ) where x is the medical text, m is the medical mentions extracted by a recognition model, e is the standard entity, y NER and y NEN are the labels.",
"The latter one tries to maximize the posterior probabilities p ( y NER , y NEN | x ) (Zhao et al., 2019).",
"Both of these are struggled with the bottleneck that is named entity recognition.",
"In the above frameworks, the NER module is trained to memorize the medial mentions in the training set.",
"However, the medical mentions are various and there is a gap between the training and test set.",
"It is natural that the unseen mentions in training set are hard to recognize during the testing phase.",
"Therefore, the conventional frameworks do not gain more ideal generalization ability.",
"To overcome the disadvantage mentioned above, we reconsidered the process of medical named entity recognition and normalization.",
"The ultimate goal is to map the extracted medical mentions to the standard entity base.",
"Therefore, the target standard entity base can be regarded as a dictionary.",
"The initial process of NEN and NER can be reconsidered as detecting whether the medical text contains the candidate standard entity and finding the mentions should be replaced.",
"Based on this idea, we propose an e nd-toe nd progressive multi-task learning framework for m edical named e ntity r ecognition and n ormalization ( E2EMERN 1 ).",
"Compared with ordinary multi-task learning, progressive multi-task learning focuses on the aggregation logic of tasks' specific features (Hong et al., 2020).",
"A difficult target is divided into a few tasks that are interconnected through the combination of features.",
"To take full advantage of the data attributes, we propose the framework including three tasks with progressive difficulty extended from the conventional NER and NEN tasks.",
"The low-level task is the traditional NER which tries to extract all entities in the medical text.",
"The mid-level task is defined to iden-1 When ready, the code will be published at https:// github.com/zhoubaohang/E2EMERN tify whether there exist medical mentions in the text that should be mapped to the candidate standard entity.",
"The high-level task combines the first two level tasks, and targets to extract the mentions which should be mapped to the candidate standard entity.",
"Unlike the existing frameworks, E2EMERN exploits the progressive tasks to learn the fine-grained representations.",
"The mid-level and high-level tasks facilitate the framework learning the corresponding features between the medical mentions and standard entities.",
"The low-level task can gain the supervised signals from the higher level tasks to extract medical mentions corresponded to standard entities in the knowledge bases more exactly.",
"Our contributions in this manuscript can be summarized as follows:",
"1. We reconsider the process of the NER and NEN tasks, and firstly propose to exploit the three tasks with progressive difficulty to train the end-to-end medical named entity recognition and normalization framework.",
"2. The experimental results on two medical benchmarks demonstrate that our framework outperforms the existing medical named entity recognition and normalization models.",
"And we conducted detailed analysis on the framework to represent its superiority.",
"Medical named entity recognition and normalization are two basic tasks for the medical text mining.",
"The conventional pipeline frameworks contains the NER model and NEN one separately (Vazquez et al., 2008; Leaman and Lu, 2014; Sahu and Anand, 2016; Zhou et al., 2020).",
"NER models extract medical mentions in texts and then NEN models map these mentions to standard entity identifiers.",
"To reduce the error propagation in the pipeline frameworks, some researchers proposed to model NER and NEN jointly.",
"Leaman et al. (2015) combined two traditional machine learning models as an ensemble NER and NEN model.",
"And to learn the joint probability distribution of the NER and NEN tasks, a semi-markov based model was proposed by Leaman and Lu (2016).",
"However, traditional methods depend on the human-based feature engineering.",
"With the development of the deep learning, recurrent neural networks (RNN) have replaced human effort and been utilized to extract features of raw texts.",
"Zhao et al. (2019) designed an RNN-based network architecture with feedback strategy to model the two tasks jointly.",
"Recently, the pre-trained models, such as BERT (Devlin et al., 2019), BioBERT (Lee et al., 2020), make impressive progress in the natural language processing (NLP) area.",
"Xiong et al. (2020) used BERT as the base module and proposed a machine reading comprehension framework to solve the NER and NEN problems jointly.",
"Named entity recognition can be regarded as a sequence labeling problem.",
"Sequence labeling was explored extensively as a basic task in NLP.",
"Probabilistic graphical models, such as: hidden markov model (Xiao et al., 2005) and conditional random fields (CRF) (Lafferty et al., 2001) are the typical methods to solve the problem.",
"With deep learning modules gradually replacing manual feature engineering, long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) network stacked with CRF (Xu et al., 2008) has been a benchmark model for sequence labeling (Lample et al., 2016).",
"Some researchers utilized multi-task learning to model relevant NLP tasks and gained better performances on these tasks including sequence labeling (Aguilar et al., 2017; Cao et al., 2018).",
"Besides, the attributes of the data themselves are used to design the multi-task learning model.",
"Considering whether sentences contain entities, Wang et al. (2019) proposed the multitask learning model to predict whether input data have entities and then extract corresponding entities.",
"Kruengkrai et al. (2020) exploited sentence-level labels and token-level labels to propose a joint model supporting multi-class classification.",
"Named entity normalization is formulated as a short text matching problem.",
"The information retrieval method, such as: BM25 (Robertson et al., 1994), is a universal model to solve this problem.",
"With the development of neural language model, text semantic is exploited to model the similarity between two short texts.",
"The distributed representations of texts, such as: Word2Vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014), are utilized to calculate the similarity distance between two texts.",
"Some medical named entity normalization models are based on this method (Leaman and Lu, 2014; Zhou et al., 2020).",
"Considering local texts are more important than global ones, some researchers utilized convolution neural networks (CNN) to extract local features and exploited interactive attention mechanism to match the semantic similarity of two texts (Yin et al., 2016; Chen et al., 2018).",
"We introduce the notations about NER and NEN before getting into the details of the framework.",
"For NER task, we denote { ( X i , y i ) } N s i =1 as a training set with N s samples, where X i is the medical text and y i is the NER label.",
"Given a sentence with N w words, the medical text can be formulated as X = { x 1 , x 2 , . . . , x N w } and the NER label is y = { y 1 , y 2 , . . . , y N w } .",
"To solve the NER task, we try to maximize the posterior probability p ( y | X ) .",
"According to the NER label, we can extract the medical mentions { m i } N m i =1 from the medical text, where N m is the number of the mentions.",
"For NEN task, we need to map each mention m to a standard entity e in the entity base B = { e i } N e i =1 .",
"We formulate the object of NEN task as a posterior probability p ( e | m , B ) , and e is the standard entity which the mention m should be mapped to.",
"With the help of NER and NEN, we can map medical mentions in the raw texts to the corresponding standard entities.",
"Traditional pipeline implementations for the two tasks are composed of the individual NER and NEN models.",
"The simple partitioning of the two models leads to the error propagation between them.",
"Considering the correlation between the two tasks, Zhao et al. (2019) proposed the parallel task framework to improve the performance of the model.",
"However, the intuitive feedback strategy for the output layers of two tasks is not beneficial to modeling the fine-grained features between two tasks.",
"The above implementations lack thinking about the learning process.",
"The process of human learning often goes from easy to difficult (Xu et al., 2020).",
"Especially for the correlated tasks, humans can dig into the hidden knowledge and extract them from the easy tasks for completing the hard ones.",
"Based on this idea, we reconsider the process of conventional NER and NEN tasks, and propose three correlated tasks with progressive difficulty.",
"As shown in Figure 2, we take a medical text from the real dataset NCBI (Dogan et al., 2014) Attention Mechanism BERT most colon cancers arise from mutations Tokenize ID: D003110 Name: Colonic Neoplasms Content: Tumors or cancer ID: D003110 Name: Colonic Neoplasms Content: Tumors or cancer BERT 0/1 Dense Layer O B-Disease O O O I-Disease High-level Task Mid-level Task Low-level Task Dense Layer O B-Disease O O B-Disease I-Disease Gate Mechanism Standard Entity Base ( MeSH / OMIM ) Input medical text: Familial Mediterranean fever isarecessi vedisorder.",
"as an example to describe the tasks.",
"The medical text is Familial Mediterranean fever is a recessive disorder and its corresponding NER label is B-Disease I-Disease I-Disease O O B-Disease I-Disease.",
"Among the tokens, medical mentions Familial Mediterranean fever and recessive disorder are mapped to the standard entity identifiers D010505 and D030342 respectively.",
"Low-level task is defined to memorize all medical mentions seen in training set.",
"Given the medical text mentioned above, this task needs to predict the NER label and extract the mentions Familial Mediterranean fever and recessive disorder.",
"Similar to the process of human learning vocabulary, the low-level task forces the framework to learn the medical mentions indiscriminantly.",
"However, the final target is to map mentions to standard entities.",
"We should continue to bridge the gap between medical mentions in raw texts and standard entities in the database.",
"Mid-level task targets to determine whether medical texts implicit the query standard entities.",
"With the above medical text and the standard entity D010505 as input, this task should inference the text contains this entity.",
"Through this task, the framework establishes the coarse-grained relationship between the mentions with contexts and the query standard entities.",
"However, the mentions are incomplete correspondence to the query standard entities.",
"Because there is more than one mention in the raw text which should be extracted and mapped to the corresponding standard entities.",
"We need to specify which mention in the text should be mapped to the input standard entity.",
"High-level task is proposed to extract the mentions which should be mapped to the query standard entity.",
"After acquiring the above medical text and the standard entity D030342, this task should extract the mention recessive disorder.",
"If the input text contains no mention which should be mapped to the query entity, the output of this task is empty.",
"The effect of this task is the same as that of NEN, but it is harder than NEN.",
"To accomplish the high-level task, we need to build on the first two tasks.",
"The low-level task provides the representations of the medical mentions with contexts which is beneficial to locating them in raw texts.",
"The mid-level task forces the model to learn the correlated features between mentions with standard entities.",
"With the help of two pre-tasks, the high-level task can be accomplished in an effective way.",
"We build on the progressive tasks to implement the framework E2EMERN as shown in Figure",
"2. Considering the logic of feature aggregation and the strategies for training different tasks, we need to give detailed explanations by the level of tasks.",
"For a given sentence X = { x 1 , x 2 , . . . , x N w } , we need to map it to the dense vector representations.",
"With the impressive performances of pre-trained models, we utilize BERT (Devlin et al., 2019) as feature extractors to acquire the distributed representations of sentences.",
"The BERT architecture is composed by the transformer networks and its weights are trained with large number of corpus.",
"The feature extraction process is sim-plified as BERT ( X ) = { h 1 , h 2 , . . . , h N w } , where h R 1024 1 .",
"The low-level task is defined as the same as NER, and we utilize the NER labels as the target.",
"The sentence features { h i } N w i =1 are fed into the softmax layer, and we can compute the prediction probabilities of low-level task as: y i = softmax ( W l h i + b l ) where W l and b l are trainable parameters.",
"For training, we utilize the cross-entropy loss as the objective function.",
"The loss function of low-level task is defined as follows: L low = N w (cid:88) i =1 y i log y i .",
"(1) The sample for the mid-level task is defined as a tuple ( X , e , y m ) .",
"If the text X contain the mentions which should be mapped to the entity e , y m is assigned 1 otherwise 0 .",
"To bridge the gap between the mentions and standard entities in the mid-level task, we need also to extract the features of standard entities.",
"The standard entity e is described with the specific name and some medical contents.",
"We feed the name (or contents) of the entity into the BERT and perform the average pooling on the output of BERT.",
"The feature vector of i -th standard entity in the database is defined as h ei .",
"Considering the words of mentions in raw texts are more correlated to the standard entity, we adopt the attention mechanism (Zhou et al., 2016) to focus on the local words of sentences.",
"The attention weighted average feature can be calculated as: h a = (cid:80) N w i =1 i x i .",
"And the attention score is defined as: i = exp( s ( x i , h e )) (cid:80) Nwi =1 exp( s ( x i , h e )) where s ( x i , h e ) = W a [ x i ; h e ] + b a .",
"W a and b a are trainable weights in the attention module.",
"After acquiring the entity-attention feature h a and standard entity feature h e , we can calculate the prediction probabilities y m = ( W m [ h e ; h a ] + b m ) where is the sigmoid function.",
"The loss function for the mid-level task is formulated as the cross-entropy: L mid = ( y m log y m + (1 y m ) log(1 y m )) .",
"We define the tuple ( X , e , y h ) as the sample for the high-level task where y h = { y hi } N w i =1 .",
"Given that the medical text X is Familial Mediterranean fever is a recessive disorder. and standard en-Familial Mediterranean fever is a recessive disorder B-Disease I-Disease I-Disease O O B-Disease I-Disease :: D010505 D030342 Standard Entity: Original Sample Extended Sample 1.(,,D010505,1,`B-Disease I-Disease I-Disease O O O O (cid:4593) ) 2.(,,D030342,1,`O O O O O B-Disease I-Disease (cid:4593) ) 3.(,,D016870,0,`O O O O O O O (cid:4593) ) Figure 3: The original sample is from the dataset NCBI.",
"tity e is D030342, the label sequence y h should be O O O O O B-Disease I-Disease.",
"To take advantage of the pre-tasks, we propose the gate mechanism to aggregate the different features for solving this task.",
"The sentence feature { h i } N w i =1 implicit the medical mentions while the entity attention feature h a contains clearer locations of the corresponding mentions.",
"Therefore, we propose the gate mechanism to focus on the fine-grained feature dimensions.",
"The formulation of the gate mechanism is G ( H , H a ) = ( W g [ H ; H a ] + b g ) where H = { h i } N w i =1 and H a = [ h a ; . . . ; h a ] R 1024 N w .",
"Considering the semantic difference between the mentions and corresponding standard entities, we exploit the gate mechanism to fuse the standard entity feature with the sentence feature.",
"The fusion sentence feature is formulated as: H f = H (cid:12) (1 G ( H , H a ))+ H e (cid:12) G ( H , H a ) where (cid:12) is the element-wise production, H f = { h fi } N w i =1 and H e = [ h e ; . . . ; h e ] R 1024 N w .",
"We feed the fusion feature into the softmax layer to predict the probabilities y hi = softmax ( W h h fi + b h ) .",
"As the same as the low-level task, we utilize the cross-entropy loss function as follows: L high = N w (cid:88) i =1 y hi log y hi .",
"For the framework, we denote the training sample as ( X , y , e , y m , y h ) .",
"According to the definitions of the three tasks, we can generate the task labels corresponding to the input sentence.",
"The example is shown in Figure",
"3. Given the medical text X , the label y for the low-level task is the same as the original NER label.",
"We use the standard entities which the mentions { m i } N m i =1 should be mapped to as the input entity e respectively.",
"The high-level task label y h is based on y , and it only keeps the original labels of y which are correlated to the input e .",
"Besides, we adopt the negative sampling strategy to select the standard entity which is not related to the input sentence X as the input entity e .",
"To tackle the three level tasks at once, we introduce two hyper-parameters to sum Eqn.",
"1, Eqn.",
"2 and Eqn.",
"3. The overall loss function for the framework is defined as follows: L = L low + L mid + L high (4) where and are hyper-parameters for balancing different task losses.",
"After generating samples, we feed them into the model and then calculate the loss according to Eqn.",
"4. Following the back-propagation method, we update the weights of the networks with the acquired loss.",
"After every epoch of training, we re-sample the training samples for better generalization of the model.",
"We compare our framework with the existing methods on two medical benchmark datasets.",
"Table 1 presents the detailed statistical information of the two datasets.",
"There are 798 public medical abstracts in the NCBI dataset (Dogan et al., 2014).",
"Each medical mention in the text is annotated with MeSH/OMIM identifiers.",
"BC5CDR dataset (Li et al., 2016) contains 1500 public medical abstracts which are also annotated with MeSH identifiers.",
"We split each abstract into sentence samples with an average of 40 words according to the ends of sentences.",
"The padding char is used for filling the unequal length samples to the fixed length.",
"During the training process, we first train the model on the training set and test it on the development set for searching the best hyper-parameters.",
"Then, we fix the best hyper-parameters and train the model on the set composed of the training and development sets.",
"Before the model is trained to the searched maximum number of epochs, we take the F1 score as the reported result when the loss gets the lowest.",
"In our experiments, we set the hyper-parameters , and learning rate to 0 .",
"125 , 0 .",
"1 and 1e-5 respectively.",
"To train the model, we use the ADAM (Kingma and Ba, 2015) algorithm to update the weights.",
"And all experiments are accelerated by the two NVIDIA GTX 2080Ti devices.",
"Dnorm (Leaman et al., 2013) is the pipeline model for medical NER and NEN.",
"It utilizes the TF-IDF feature to learn the bilinear mapping matrix for the normalization task.",
"LeadMine (Lowe et al., 2015) considers Wikipedia as dictionary features for normalizing the medical mentions.",
"TaggerOne (Leaman and Lu, 2016) is the semi-Markov based model for jointly modeling medical NER and NEN.",
"Transition-based model (Lou et al., 2017) consists of the state transformation function for the output of NER and NEN.",
"To reduce human feature engineering, researchers focus on the deep learning for modeling NER and NEN.",
"IDCNN (Strubell et al., 2017) was proposed with an improved CNN module for NER.",
"MCNN (Zhao et al., 2017) was composed of the multiple-label CNN modules for better performances on NER.",
"CollaboNet (Yoon et al., 2019) exploited the multi-source datasets for training the multi-task model and gained better results on all benchmark datasets.",
"MTL-MERN (Zhao et al., 2019) consists of the NER and NEN parallel framework and utilizes the feedback strategy to improve the performances on two tasks.",
"With the impressive performance of pre-trained models, BioBERT (Lee et al., 2020) is built on the BERT (Devlin et al., 2019) and trained with a large medical corpus.",
"And it achieves state-of-the-art results on medical NER datasets.",
"Therefore, we use the BioBERT as the feature extractor and compare it with our framework.",
"We compare E2EMERN with the baseline methods on the named entity recognition and normalization.",
"The detailed experiment results on NCBI and BC5CDR are shown in Table",
"2. The first Method NCBI BC5CDR Recognition Normalization Recognition Normalization Dnorm (Leaman et al., 2013) 0.7980 0.7820 -0.8064 LeadMine (Lowe et al., 2015) --0.8612 TaggerOne (Leaman and Lu, 2016) 0.8290 0.8070 0.8260 0.8370 Transition-based Model (Lou et al., 2017) 0.8205 0.8262 0.8382 0.8562 IDCNN (Strubell et al., 2017) 0.7983 0.7425 0.8011 0.8107 MCNN (Zhao et al., 2017) 0.8517 -0.8783 CollaboNet (Yoon et al., 2019) 0.8636 -0.8818 -MTL-MERN (Zhao et al., 2019) 0.8743 0.8823 0.8763 0.8645 BioBERT (Lee et al., 2020) 0.8971 -0.9029 E2EMERN 0.9151 0.8901 0.9175 0.8965 w/o mid-level task 0.8733 0.8890 0.9073 0.8600 w/o high-level task 0.8862 -0.9065 w/o gate mechanism 0.8885 0.8224 0.9100 0.8681 w/o attention mechanism 0.8767 0.8675 0.9092 0.8676 Table 2: The F1 scores of the models on NCBI and BC5CDR.",
"four in the table is the traditional machine learning methods.",
"Among them, the joint models, such as TaggerOne and Transition-based Model, outperform the pipeline ones including Dnorm and LeadMine.",
"When deep learning was introduced into the pipeline frameworks, IDCNN can make a progress over conventional methods, such as Dnorm.",
"Compared with MCNN, CollaboNet utilizes the multi-source dataset as input and performs multi-task learning to improve the performances on NER task.",
"MTL-MERN takes full advantage of multi-task learning and deep semantic representations and outperforms the above methods.",
"By virtue of the dynamic language features, BioBERT can better model the language semantics and outperform the above NER models.",
"Compared with baseline methods, E2EMERN can always achieve the best results on NER and NEN.",
"The NER results of E2EMERN increase by 1% 2% over BioBERT.",
"Because our framework takes full advantage of the correlation between NER and NEN.",
"Unlike the simple strategy of MTL-MERN, E2EMERN consists of three progressive tasks that are well-designed for modeling the fine-grained features between medical mentions in raw texts and standard entities.",
"The standard entity information of NEN is introduced into the NER module by the mechanisms in our framework.",
"With the help of the dynamic language features and progressive multi-task learning, the framework can extract the medical mentions more exactly and map them to standard entities.",
"And the semantic correlation between medical mentions and standard entities is built on the three progressive tasks from low to high.",
"The rich semantics captured by the progressive tasks are beneficial to NER and NEN.",
"To dig into the framework, we conduct the detailed analysis for presenting it in different aspects.",
"The ablation study is conducted to present the effectiveness of the mechanisms proposed in the framework.",
"Besides the supervised learning, our framework exploits the standard entity information in the NER task and is potential in a zero-shot scenario compared with BioBERT.",
"We conduct the case study to analyze the prediction results and visualize the attention mechanism to prove its effectiveness.",
"As shown in Table 2, we conduct the ablation study to present the effectiveness of the progressive tasks and different mechanisms.",
"When free from completing the midor high-level tasks, E2EMERN gains worse results on NER and NEN.",
"The progressive tasks improves the ability of the framework to learn the multi-grained features between original texts and standard entities.",
"Besides, we replace the gate and attention mechanisms with the simple feature concatenation strategy as compared methods.",
"When removed the attention mech-Text1: the von hippel lindau tumor suppressor gene is required for cell cycle exit upon serum withdrawal .",
"anism, E2EMERN achieves worse results on two tasks.",
"It proves that the supervised signals from mid-level task are beneficial to the low-task.",
"And the entity-attention feature generated by the mechanism contributes to the high-level task.",
"E2EMERN without the gate mechanism gains the worse results on NEN.",
"Because the mechanism aggregates the features from lower level tasks which provides the multi-grained information between mentions and standard entities.",
"The ablation study proves the importance of the two mechanisms to E2EMERN.",
"We conduct the statistic analysis on the test set of NCBI and BC5CDR.",
"As shown in Figure 4, there are about 40% 50% samples contain the words or medial mentions which do not appear in the training set.",
"Therefore, we need to evaluate the generalization ability of models on the unseen samples.",
"We compare E2EMERN with BioBERT on the unseen samples in the test set.",
"To a certain extent, our framework can outperform the existing state-of-the-art NER model.",
"Compared with BioBERT, E2EMERN introduces the standard entity base into the framework.",
"The fine-grained location information of medical mentions from the high-level task is propagated to the low-level task.",
"With the help of standard entity information and progressive multi-task learning, E2EMERN can gain the better generalization ability on unseen samples.",
"We present the case study results in Table",
"3. Compared with BioBERT, our framework can extract the medical mentions which BioBERT can not extract.",
"We draw the label results of E2EMERN with Seen 47.1% Unseen 52.9% NCBI Seen 61.4% Unseen 38.6% BC5CDR NCBI BC5CDR 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 F 1 0.4521 0.5284 0.7622 0.7092 NER BioBERT E2EMERN Figure 4: The results on unseen samples.",
"the heat map.",
"As the color deepens, the importance of the token in the sentence increases.",
"The visualization results prove that the attention mechanism in E2EMERN focuses on the tokens which make of medical mentions.",
"Although Text2 and Text4 are unseen samples, E2EMERN can also extract the mentions in them.",
"The token convul-sions is paid more attention than seizures in Text3.",
"But convulsion is the symptom of seizures.",
"With the help of medical correlation between them, E2EMERN can extract the token seizures as medical mention.",
"To some extent, the effectiveness of E2EMERN can be proved by the case study.",
"In this paper, we reconsider the process of NER and NEN and propose the end-to-end progressive multitask learning framework for medical named entity recognition and normalization.",
"Compared with existing methods, the framework consists of three tasks with progressive difficulty which contributes to modeling the fine-grained features between medical mentions in raw texts and standard entities.",
"Furthermore, the detailed analysis of E2EMERN proves its effectiveness.",
"Considering the medical area is various, we will try to adapt the framework to the cross domain problem.",
"We would like to thank three anonymous reviewers for their insightful comments.",
"This research is supported by the Chinese Scientific and Technical Innovation Project 2030 (2018AAA0102100), NSFC-General Technology Joint Fund for Basic Research (No. U1936206), NSFC-Xinjiang Joint Fund (No. U1903128), National Natural Science Foundation of China (No. 62002178, No. 62077031), and Natural Science Foundation of Tianjin, China (No. 20JCQNJC01730)."
]
| [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"other",
"other"
]
|
[
"Kaustubh D. Dhole Amelia Science RnD, IPsoft New York, NY 10004 [email protected]",
"Christopher D. Manning Department of Computer Science Stanford University Stanford, CA 94305 [email protected]",
"Abstract",
"A set of crowd-sourced evaluations shows that our system can generate a larger number of highly grammatical and relevant questions than previous QG systems and that back-translation drastically improves grammaticality at a slight cost of generating irrelevant questions.",
"1 Introduction Automatic Question Generation (QG) is the task of generating question-answer pairs from a declarative sentence.",
"Question Generation (QG) is fundamentally a simple syntactic transformation; however, many aspects of semantics influence what questions are good to form.",
"We implement this observation by developing Syn-QG, a set of transparent syntactic rules leveraging universal dependencies, shallow semantic parsing, lexical resources, and custom rules which transform declarative sentences into question-answer pairs.",
"We utilize PropBank argument descriptions and VerbNet state predicates to incorporate shallow semantic content, which helps generate questions of a descriptive nature and produce inferential and semantically richer questions than existing systems.",
"In order to improve syntactic fluency and eliminate grammatically incorrect questions, we employ back-translation over the output of these syntactic rules.",
"However, successful, fluent question generation requires more than just understanding syntactic question transformations, since felicitous questions must also observe various semantic and RETRACTED This paper was retracted.",
"It has direct use in education and generating engagement, where a system automatically generates questions about passages that someone has read.",
"A more recent secondary use is for automatic generation of questions as a data augmentation approach for training Question Answering (QA) systems.",
"QG was initially approached by syntactic rules for question-generation, followed by some form of statistical ranking of goodness, e.g., (Heilman and Smith, 2009, 2010).",
"In recent years, as in most areas of NLP, the dominant approach has been neural network generation (Du et al., 2017), Figure 1: The SRL structure is leveraged to invoke a template, and a simple rearrangement of the modifying arguments is performed.",
"in particular using a sequence-to-sequence architecture, which exploits the data in the rapidly growing number of large QA data sets.",
"Previous rule-based approaches suffer from a significant lack of variety in the questions they generate, sticking to a few simple and reliable syntactic transformation patterns.",
"Neural architectures provide a pathway to solving this limitation since they can exploit QA datasets to learn the broad array of human question types, providing the usual neural network advantages of a data-exploiting, end-to-end trainable architecture.",
"Nevertheless, we observe that the quality of current neural QG systems is still lacking: The generated questions lack syntactic fluency, and the models lack transparency and an easy way to improve them.",
"We argue that in essence QG can be governed by simple syntactic question transformations while the implementation details vary, this is in accord with all major linguistic viewpoints, such as Construction Grammar and Chomskyan Generative Grammar, which emphasize grammatical rules and the existence of finite ways to create novel utterances.",
"pragmatic constraints.",
"We approach these by making use of semantic role labelers (SRL), previously unexploited linguistic semantic resources like VerbNet's predicates (Figure 2) and PropBank's rolesets and custom rules like implications, allowing us to generate a broader range of questions of a descriptive and inferential nature.",
"A simple transformation commonly used in rule-based QG is also displayed in Figure",
"1. Figure 2: VerbNet Predicate Question Generation.",
"We evaluate our QG framework, Syn-QG against three QG systems on a mixture of Wikipedia and commercial text sentences outperforming existing approaches in grammaticality and relevance in a crowd-sourced human evaluation while simultaneously generating more types of questions.",
"We also notice that back-translated questions are grammatically superior but are sometimes slightly irrelevant as compared to their original counterparts.",
"The Java code is publicly available at https://bitbucket.org/kaustubhdhole/syn-qg/.",
"Moreover, there is plenty of availability of core linguistic resources like VerbNet and PropBank, which provide RETRACTED This paper was retracted.",
"2 Related Work With the advent of large-scale QA datasets (Ra-jpurkar et al., 2016; Nguyen et al., 2016), recent work in QG (Du et al., 2017; Zhou et al., 2017) has primarily focused on training sequence-to-sequence and attention-based architectures.",
"Dong et al. (2019) fine-tuned the question generation task by taking advantage of a large pre-trained language model.",
"Success in reinforcement learning has inspired teacher-student frameworks (Wang et al., 2017; Tang et al., 2017) treating QA and QG as complementary tasks and performing joint training by using results from QA as rewards for the QG task.",
"Yuan et al. (2017); Hosking and Riedel (2019); Zhang and Bansal (2019) used evaluation metrics like BLEU, sentence perplexity, and QA probability as rewards for dealing with exposure bias.",
"Chen et al. (2019) trained a reinforcement learning based graph-to-sequence architecture by embedding the passage via a novel gated bi-directional graph neural network and generating the question via a recurrent neural network.",
"To estimate the positions of copied words, Liu et al. (2019) used a graph convolution network and convolved over the nodes of the dependency parse of the passage.",
"Li et al. (2019) jointly modeled OpenIE relations along with the passage using a gated-attention mechanism and a dual copy mechanism.",
"Traditionally, question generation has been tackled by numerous rule-based approaches (Heilman and Smith, 2009; Mostow and Chen, 2009; Yao and Zhang, 2010; Lindberg et al., 2013; Labutov et al., 2015).",
"Heilman and Smith (2009, 2010) introduced an overgenerate-and-rank approach that generated multiple questions via rule-based tree transformations of the constituency parse of a declarative sentence and then ranked them using a logistic-regression ranker with manually designed features.",
"Yao and Zhang (2010) described transformations of Minimal Recursion Semantics representations guaranteeing grammaticality.",
"Other transformations have been in the past defined in terms of templates (Mazidi and Nielsen, 2014, 2015; Mazidi and Tarau, 2016; Flor and Riordan, 2018), or explicitly performed (Heilman and Smith, 2009) by searching tree patterns via Tregex, followed by their manipulation using Tsurgeon (Levy and Andrew, 2006).",
"Kurdi et al. (2020) provide a comprehensive summary of QG, analysing and comparing approaches before and after 2014.",
"Vis-`a-vis current neural question generators, rule-based architectures are highly transparent, easily extensible, and generate well-formed questions since they perform clearly defined syntactic transformations like subject-auxiliary inversion and WH-movement over parse structures whilst leveraging fundamental NLP annotations like named entities, co-reference, temporal entities, etc.",
"However, most of the existing rule-based systems have lacked diversity, being mostly focused on generating What -type and boolean questions and have mainly exploited parse structures which are not semantically informed.",
"Mazidi and Tarau (2016); Flor and Riordan (2018) use Dependency, SRL, and NER templates but do not handle modalities and negation in a robust manner.",
"Syn-QG is a rule-based framework which generates questions by identifying potential short answers in 1) the nodes of crucial dependency relations 2) the modifying arguments of each predicate in the form of semantic roles 3) named entities and other generic entities 4) the states of VerbNet's thematic roles in the form of semantic predicates and 5) PropBank roleset specific natural language descriptions.",
"Each of the five heuristics works independently, generating a combined set of question-answer pairs, which are eventually back-translated.",
"We describe each of these five sources.",
"Dependency trees are syntactic tree structures, wherein syntactic units in the form of words are connected via directed links.",
"The finite verb is considered as the structural root of the tree, and all other syntactic units are either directly ( nsubj, dobj, , etc.) or indirectly ( xcomp, iobj, etc.) dependent on this finite verb.",
"We keep the order of arguments as they appear in the original RETRACTED This paper was retracted.",
"We present rules over such dependency trees annotated according to the Universal Dependencies (UD) format (de Marneffe et al., 2014).",
"To extract dependency structures, we use the parser of Gardner et al. (2018).",
"We make use of PropBank's predicate-argument structure (SRL) for clausal extraction of the verb headed by a select few dependency nodes which can serve as answers.",
"These rules treat the clause as a combination of a subject, an object, the head verb and other non-core arguments.",
"The clause is further refined with modals, auxiliaries and negations if found around the verb.",
"Finally, we make use of a set of predefined handwritten templates, a few of which are described in Table",
"1. In each of the templates, we convert What to Who/Whom , When or Where depending on the named entity of the potential answer and do to does or did according to the tense and number of the subject to ensure subject-verb agreement.",
"The pseudo code is described in Algorithm 2 of the Appendix.",
"While dependency representations are perhaps the most popular syntactic method for automatically extracting relationships between words, they lack sufficient semantic detail.",
"Being able to answer Who did what to whom and how, why, when and where has been a central focus in understanding language.",
"In recent decades, shallow semantic parsing has been a prominent choice in understanding these relationships and has been extensively used in question generation (Mazidi and Tarau, 2016; Flor and Riordan, 2018).",
"PropBank-style frames provide semantically motivated roles that arguments around a verb play.",
"Moreover, highly accurate semantic role labeling models are being developed owing to corpora like PropBank and FrameNet.",
"We take advantage of the SRL model of Gardner et al. (2018) for extracting the roles of each verb in the sentence.",
"We succinctly describe the steps taken in Algorithm",
"1. We first filter out all the predicates which have an Agent or a Patient and at least one other modifier like Extent, Manner, Direction, etc.",
"These modifiers would serve as our short answers.",
"We make use of a set of predefined handwritten templates described in Table 2, which rearrange the arguments within the fact to convert it into an interrogative statement depending on the modifier.",
"In Figure 1, the predicate won is modified by a Patient New Mexico, an Agent Obama, an Extent modifier by a margin of 5% and a Temporal modifier in 2008.",
"For Extent as a short answer, we fill a pre-defined template By how much mainAux nsubj otherAux verb obj modifiers ? to get the above question-answer pair.",
"When there are multiple auxiliaries, we only invert the first auxiliary while the second and further auxiliaries remain as they are just before the main verb.",
"We make the question auxiliary finite and agree with the subject.",
"We ensure that the object is kept immediately after the verb.",
"For passive cases, subj-verb-obj is changed to obj-verb-by-subj .",
"sentence.",
"The templates are described in Table",
"We create separate templates when any numbered SRL argument contains common named entities like Person , Location , Organization etc.",
"Like Flor and Riordan (2018), we add specific rules in the form of regexes to address special cases to differentiate between phrases like For how long and Till when instead of a generic When question type.",
"Some of the templates are described in Table 7 in the Appendix.",
"The approach is described in Algorithm 3 in the Appendix.",
"We also use WordNet (Miller, 1998) hypernyms of all potential short answers and replace What with the bigram Which hypernym .",
"So, for a sentence like Hermione plays badminton at the venue, we generate a question Which sport does Hermione play at the venue?.",
"For computing the hypernym, we use the sense disambiguation implementation of Tan (2014).",
"While supersenses do display a richer lexical variety, sense definitions don't always fit well.",
"During explicit inversion of the verb and arguments around it via our templates, we tried to ensure that the positions of auxiliaries are set, and negations are correctly treated.",
"We define a few simple rules to ensure that.",
"Previous rule-based approaches (Mazidi and Tarau, 2016; Flor and Riordan, 2018) have used the NEG dependency label to identify polarity.",
"But such an approach would suffer whenever polarities would be hierarchically entailed from their parent clauses in cases like Picard did not fail to X where the entailed polarity of X is, in fact, positive.",
"Moreover, in one-way implications like Bojack hesitated to X, it would be best not to generate a question for unsure cases since it is open-ended if Bojack did or did not X. A similar example is displayed in Figure 5.",
"For each verb representing a subordinate clause, we compute its entailed truth or falsity from its parent clause using the set of one-way and two-way implicative verbs, and verb-noun collocations provided by Karttunen (2012).",
"For example, the two-way implicative construction forget to X entails that X did not happen, so it would be wrong to ask questions about X.",
"Karttunen (2012) provides simple implications in the form of 92 verbs and phrasal implications in the form of 9 sets of verbs and 8 sets of nouns making 1002 verb-noun collocations.",
"clause can be either TRUE, FALSE, or UNSURE 1 .",
"For FALSE clauses, we only generate a boolean question with a NO answer.",
"For UNSURE clauses, we do not generate any question.",
"For TRUE clauses and verbs and collocations not present in the above set, we rely on the NEG label.",
"While SRL's event-based representations have permitted us to generate questions that talk about the roles participants of an event play, we exploit VerbNet's sub-event representation to ask questions on",
"1 Unsure clauses appear in one-way implicatives when it's unclear if the clause is true or false under either an affirmative or a negative parent clause.",
"how participants' states change across the time frame of the event.",
"In Figure 2, the event murder (VerbNet class murder-42.1 ) results in a final state in which the participant Julius Caesar is in a not-alive state.",
"Each class in VerbNet (Schuler, 2005; Brown et al., 2019) includes a set of member verbs, the thematic roles used in the predicate-argument structure, accompanied with flat syntactic patterns and their corresponding semantic predicates represented in neo-Davidsonian first-order-logic formulation.",
"These semantic predicates bring forth a temporal sequencing of sub-events tracking how participants' states change over the course of the event.",
"For example, in the sentence, Brutus murdered Julius Caesar, the event murder-42.1 entails a final state of death or the Patient participant not being alive at the end of the event.",
"So, we construct a template mainAux the Patient otherAux not alive?.",
"Similarly, the event pay-68-1 results in a final state in which the Recipient Perry has possession of $100 and the Agent John has possession of the car, against which we define the templates as shown in Figure 3.",
"RETRACTED This paper was retracted.",
"We formulate two sets of questions: boolean type and which-type questions asking specifically about these states.",
"We create templates for VerbNet's stateful predicates like has location, has possession, has information, seem, has state, cost, desire, harmed, has organization role, together, social interaction, authority relationship, etc. which are present in 64.4% of the member verbs in VerbNet 2 .",
"We outline a few of the templates in Table 3.",
"During inference time, we first compute the VerbNet sense, the associated thematic role mapping, 2 Out of 4854 member verbs, there are 3128 members whose syntactic frame contains at least one of these predicates.",
"and syntactic frame (along with the predicates) with the help of Brown et al. (2019)'s parser.",
"VerbNet's predicates are governed by the sub-events in which they occur.",
"Although VerbNet's representation lays out a sequence of sub-events, no sub-event is explicitly mentioned as the final one 3 .",
"We choose all the predicates of those sub-events which are preceded by other sub-events which possess at least one process-oriented predicate.",
"4 3.7 PropBank Argument Descriptions PropBank rolesets' course-grained annotation of verb-specific argument definitions (killer, payer, etc.) to represent semantic roles offers robustly specific natural language descriptions to ask questions about the exact roles participants play.",
"Nonetheless, not all descriptions are suitable to be utilized directly in rigid templates.",
"So, we incorporate back-translation to 1) get rid of grammatical errors propagated from incorrect parsing and template restrictions, and 2) eliminate rarely used Prop-Bank descriptions and generate highly probable questions.",
"While previous work in rule-based QG has used SRL templates and WordNet senses to describe the roles arguments around a verb play, previous SRL templates have always been verb-agnostic, and we believe there is a great deal of potential in PropBank descriptions.",
"Moreover, WordNet supersenses do not always give rise to acceptable questions.",
"On manual evaluation, question relevance decreased after incorporating templates with WordNet supersenses.",
"Instead, we make use of PropBank's verb-specific natural language argument descriptions to create an additional set of templates.",
"VerbNet senses have a one-to-one mapping with PropBank rolesets via the SemLink project (Palmer, 2009).",
"We hence make use of Brown et al. (2019)'s parser to find the appropriate PropBank roleset for a sentence.",
"However, we observed that a lot of PropBank descriptions were noisy and made use of phrases which would be unarguably rare in ordinary parlance like breather or truster.",
"To eliminate such descriptions, we computed the mean Google N-gram probabilities (Lin et al., 2012) of all the PropBank phrases in the timespan of the last 100 3 or a sub-event, which is an outcome of a process 4 Out of 174 VerbNet predicates, we manually categorize 84 predicates like HAS LOCATION, HAS POSSESSION as stateful predicates and the remaining ones like DESCRIBE, TRANSFER, etc. as process-oriented predicates.",
"Most of the prior QG studies have evaluated the performance of the generated questions using automatic evaluation metrics used in the machine translation literature.",
"We use the traditional BLEU scores (Papineni et al., 2002) and compare the performance of Syn-QG on the SQuAD (Rajpurkar et al., 2016) test split created by Zhou et al. (2017).",
"BLEU measures the average n-gram precision on a set of reference sentences.",
"A question lexically and syntactically similar to a human question would score high on such n-gram metrics.",
"Despite not utilizing any training data, Syn-QG performs better than the previous SOTA on two evaluation metrics BLEU-3 and BLEU-4 and close to SOTA on BLEU-1 and BLEU-2 (Table 4) at the time of submission.",
"The high scores obtained without conducting any training arguably shed a little light on the predictable nature of the SQuAD dataset too.",
"Syn-QG's questions also arise from VerbNet's predicates and PropBank's descriptions, which indeed by nature describe events not mentioned explicitly within the fact.",
"Like in Figure 3, the sentence with the event paid results in a question with a stateful event of cost.",
"Deducible questions like these have a good chance of having a distribution of n-grams quite different from the source sentences, possibly exposing the weakness of traditional ngram metrics and rendering them less useful for a task like QG.",
"In order to have a complete and more reliable evaluation to gauge the system, we also carry out a human evaluation using two of the metrics used in QG-STEC Task B (Rus et al., 2012), namely grammaticality, and relevance which we define below.",
"We compared the questions generated from our system against the constituency-based H&S (Heilman and Smith, 2009), a neural system NQG (Du et al., 2017) which does not depend on a separate answer extractor and QPP&QAP 5 (Zhang and Bansal, 2019) which has outperformed existing methods.",
"We fed a total of 100 facts randomly picked from Wikipedia and 5 commercial domains (IT, Healthcare, Sports, Banking and Politics) combined, to each of the four systems.",
"We then conducted a crowd-sourced evaluation over Amazon Mechanical Turk for the generated questions.",
"5 Since the QPP&QAP model does not have a separate answer extractor, we use the answer spans computed from Syn-QG (412 in total after discarding overlaps).",
"it is or how syntactically fluent it is, disregarding its underlying meaning.",
"Relevance Score : Raters had to give a score on how relevant the generated question is to the given fact.",
"The relevance score helps us gauge whether the question should have been generated or not irrespective of its grammaticality.",
"6 Each question was evaluated by three people scoring grammaticality and relevance on a 5 point Lik-ert scale.",
"The inter-rater agreement (Krippendorff's co-efficient) among human evaluations was 0.72.",
"The instructions given to the Mturk raters are provided in the Appendix Figure 7.",
"The results of the evaluation are shown in Table 5.",
"Syn-QG generates a larger number of questions than H&S and performs strongly on grammaticality ratings.",
"Syn-QG is also able to generate highly relevant questions without the use of a ranker.",
"Also, rule-based approaches seem to be much better at generating relevant questions than neural ones.",
"QG-STEC also used variety and question types as their evaluation criteria and rewarded systems to generate questions meeting a range of specific question types.",
"Syn-QG's questions cover each of those question types.",
"Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming RETRACTED This paper was retracted.",
"Since many times, despite the ability to paraphrase (Table 6), back-translated outputs tend to change the meaning of the original sentence, we also measured back-translation's impact on the above QG metrics.",
"We considered questions generated from 50 facts of Wikipedia measuring the grammaticality and relevance before and after backtranslation.",
"While grammaticality increased from 3.54 to 4.11, question relevance fell a bit from 3.96 to 3.88.",
"This observation, along with the performance of QPP&QAP shown in Table 4, accentuates that while neural models are learning syntactic structures well, there is still some progress to be made to generate relevant questions.",
"We introduced Syn-QG, a set of broad coverage rules leveraging event-based and sub-event based sentence views along with verb-specific argument descriptions.",
"Automatic and manual evaluations 6 In cases when the grammaticality is extremely low like 1 or 2, the relevance score will also tend to be low.",
"Otherwise, we assume that minor grammatical variations can be ignored while gauging relevance.",
"show that Syn-QG is able to generate a large number of diverse and highly relevant questions with better fluency.",
"Verb-focused rules help approach long-distance dependencies and reduce the need for explicit sentence simplification by breaking down a sentence into clauses while custom rules like implications serve a purpose similar to a re-ranker to discard irrelevant questions but with increased determinism.",
"While our work focuses on sentence-level QG, it would be interesting to see how questions generated from VerbNet predicates would have an impact on multi-sentence or passage level QG, where the verb-agnostic states of the participants would change as a function of multiple verbs.",
"The larger goal of QG is currently far from being solved.",
"Understanding abstract representations, leveraging world knowledge, and reasoning about them is crucial.",
"However, we believe that with an extensible and transparent architecture, it is very much possible to keep improving the system continuously in order to achieve this larger goal.",
"Acknowledgments We thank the three anonymous reviewers for their helpful comments and invaluable suggestions.",
"We also thank the members of Amelia Science, RnD IPsoft, India Manjunath Hegde, Anant Khandel-wal, Ashish Shrivastava for their work in QG and especially Viswa Teja Ravi, for helping in replicating Mazidi and Tarau (2016)'s work.",
"We also thank Uday Chinta and IPsoft, India, for supporting and providing access to Amazon Mechanical Turk."
]
| [
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"objective",
"method",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"result",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method"
]
|
[
"The patterns in which the syntax of different languages converges and diverges are often used to inform work on cross-lingual transfer.",
"Nevertheless, little empirical work has been done on quantifying the prevalence of different syntactic divergences across language pairs.",
"We propose a framework for extracting divergence patterns for any language pair from a parallel corpus, building on Universal Dependencies (UD; Nivre et al., 2016).",
"We show that our framework provides a detailed picture of cross-language divergences, generalizes previous approaches, and lends itself to full automation.",
"We further present a novel dataset, a manually word-aligned subset of the Parallel UD corpus in five languages, and use it to perform a detailed corpus study.",
"We demonstrate the usefulness of the resulting analysis by showing that it can help account for performance patterns of a cross-lingual parser.",
"The assumption that the syntactic structure of a sentence is predictably related to the syntactic structure of its translation has deep roots in NLP, notably in cross-lingual transfer methods, such as annotation projection and multi-lingual parsing (Hwa et al., 2005; McDonald et al., 2011; Kozhevnikov and Titov, 2013; Rasooli and Collins, 2017, inter alia ), as well as in syntax-aware machine translation (MT; Birch et al., 2008; Williams et al., 2016; Bastings et al., 2017).",
"Relatedly, typological parameters that provide information on the dimensions of similarity between grammars of different languages were found useful for a variety of NLP applications (Ponti et al., 2019).",
"For example, neural MT in low-resource settings has been shown to benefit from bridgWork mostly done while at the Hebrew University of Jerusalem.",
"ing morphosyntactic differences in parallel training data by different types of preprocessing, such as reordering (Zhou et al., 2019) and hand-coded syntactic manipulations (Ponti et al., 2018).",
"Nevertheless, little empirical work has been done on systematically quantifying the type and prevalence of syntactic divergences across languages.",
"Moreover, previous work generally clas-sified divergences into a small set of divergence classes, often based on theoretical considerations (Dorr, 1994) or on categorical (hard) typological features selected in an ad-hoc manner, and left basic questions, such as how often POS tags are preserved in translation and what syntactic structures are likely correspondents of different syntactic relations, largely unaddressed.",
"See 2.",
"We propose a language-neutral, fine-grained definition of cross-linguistic morphosyntactic divergences (CLMD) that allows for their extraction using a syntactically annotated, content-word-aligned parallel corpus.",
"Concretely, we classify CLMD based on the edge labels on the dependency paths between corresponding pairs of content words ( 3.2).",
"See Figure 1 for an example.",
"1 1 It may appear that divergences recoverable by means of UD edge labels are purely syntactic and not morphosyntactic.",
"However, this is not the case: the domain of pure syntax is not well defined in a non-theoretical perspective, and many phenomena we are dealing with, e.g. a switch from a direct-We further conduct a detailed corpus study, manually aligning content words in a subset of the PUD corpus (Zeman et al., 2017) over five language pairsEnglish-French (En-Fr), English-Russian (En-Ru), English-Chinese (En-Zh), English-Korean (En-Ko), and English-Japanese (En-Jp) ( 3.1)and analyze the prevalence of divergences by types ( 4).",
"The resulting resource can be useful for MT research, by guiding the creation of challenge sets focusing on particular constructions known to be cross-linguistically divergent, as well as by guiding preprocessing of source-side sentences for better MT performance.",
"2 The emerging CLMD provide information not only on the macro-structure of the grammar (e.g., whether the language is pro-dropping), but also on parameters specific to certain lexical classes (e.g., modal verbs) and probabilistic tendencies (e.g., Japanese tends to translate sequences of events, expressed in English using coordinating conjunctions, with subordinate clauses).",
"See 5. Further experiments demonstrate the methodol-ogy's applicative potential.",
"First, we show that the proposed methodology can be straightforwardly automated by replacing manual parses and alignments with automatically induced ones ( 7).",
"We present a study done on a larger En-Zh corpus, which yields results similar to those obtained manually.",
"Secondly, we show that the reported distribution over divergence types is predictive of the performance patterns of a zero-shot parser ( 8).",
"Comparing syntactic and semantic structures over parallel corpora is the subject of much previous work.",
"Dorr et al. (2010) compiled a multiply-parallel corpus and annotated it with increasingly refined categories in an attempt to abstract away from syntactic detail but did not report any systematic measurement of the distribution of divergences.",
"indlerov et al. (2013), Xue et al. (2014), Sulem et al. (2015), and Damonte and Cohen (2018) studied divergences over semantic graphs and argument-structure phenomena, while a related line of work examined divergences in discourse phenomena (otaric et al., 2018).",
"Other works studied the ability of a given grammar formalism to capture CLMD in a parallel corpus (e.g., object construction to an oblique construction, often involve morphological processes, such as adding a case ending. 2 The resource can be found at https://github. com/macleginn/exploring-clmd-divergences Sgaard and Wu, 2009).",
"However, none of these works defined a general methodology for extracting and classifying CLMD.",
"The only previous work we are aware of to use UD for identifying CLMD is (Wong et al., 2017), which addresses Mandarin-Cantonese divergences by comparing the marginal distribution of syntactic categories on both sides (with-out alignment).",
"Relatedly, Deng and Xue (DX17; 2017) aligned phrase structure trees over an En-Zh parallel corpus.",
"Notwithstanding the similarity in the general approach, we differ from DX17 in",
"(i) specifically targeting content words,",
"(ii) relying on UD, which is standardized cross-linguistically and allows to simplify the alignment process by focusing on the level of words, 3 and",
"(iii) addressing multiple language pairs.",
"It should be noted that the classification of divergences presented in DX17 is rather coarse-grained.",
"Of the seven classes in their study, five (Transitiv-ity, Absence of function words, Category mismatch, Reordering, and Dropped elements) reflect local syntactic differences; one (Lexical encoding) covers many-to-one/one-to-many alignments and non-literal word translations; and the remaining residual type (Structural paraphrase) indiscriminately covers more substantial CLMD.",
"We address this limitation and propose a methodology that automatically derives fine-grained CLMD from aligned annotated corpora and enables straightforward computation of their type statistics.",
"In this section, we present a novel cross-linguistic dataset that provides a high-resolution overview of morphosyntactic differences between pairs of languages and a formal definition of morphosyntactic divergences formulated based on it.",
"Divergences in the syntax of sentences and their translations can stem from a number of reasons.",
"Setting aside semantic divergences, which are differences in the content expressed by the source and the target (Carpuat et al., 2017; Vyas et al., 2018), the remaining varieties of divergences are essentially different ways to express the same content (Fisiak, 1984; Boas, 2010), which we call CLMD.",
"We define CLMD empirically to be recurrent divergence patterns in the syntactic structures of sentences and their translations.",
"While content 3 Their alignment process involved bottom-up and a top-down passes, sometimes yielding contradictory results.",
"differences may account for some of the observed syntactic divergences, by aiming for recurring patterns we expect to filter out most such cases, as they are subject to fewer grammatical constraints and should thus not yield systematic patterns of morphosyntactic divergence.",
"It is harder to distinguish between translation artifacts and CLMD in translated sentences that are due to the genuine differences between grammar and usage.",
"However, translated texts are usually characterized by a higher degree of morphosyntactic transfer and rarely portray the target language as more different from the source language than it needs to be (Koppel and Ordan, 2011; Volansky et al., 2015).",
"Therefore, we do not expect to find spurious recurrent morphosyntactic-divergence patterns introduced by the process of translation.",
"Universal Dependencies (UD) is a framework for treebank annotation, whose objectives include satisfactory analyses of individual languages, providing a suitable basis for bringing out cross-linguistic parallelism, suitability for rapid consistent annotation and accurate automatic parsing, ease of comprehension by non-linguists, and effective support for downstream tasks.",
"See Appendix A for a glossary of UD terms.",
"An important feature of the dependency analysis in UD is that content words are considered the principal components of dependency relations.",
"Within this framework, function words are generally dependents of the content word they relate to most closely.",
"The primacy of content words brings out cross-linguistic parallels that would be harder to detect with other annotation frameworks since function words are highly variable across languages.",
"Importantly, dependency paths between content words do not generally contain function words.",
"As a result, by comparing paths across languages, differences in the surface realization are often masked, and argument structure and linkage differences emphasized.",
"For example, a preposition accompanying a verb may be dropped in translation if the corresponding verb is transitive (cf. went around the world in En vs. oboshel went.around mir world in Ru).",
"As prepositions modify the head noun in UD prepositional phrases, the dependency path between the verb and the head noun is not altered.",
"The Parallel Universal Dependencies (PUD) corpus consists of 1000 sentences translated into various languages by professional translators.",
"4 In this paper, we study the Russian, French, Chinese, Japanese, and Korean versions of the PUD corpus, which were each aligned with the corresponding English corpus.",
"5 Each parallel corpus was aligned by a human annotator, proficient in the language of the corpus and in English.",
"The UD tokenization is adopted in all cases.",
"Due to the difficulty in finding annotators proficient in pairs of these languages, our annotation takes English as the source language.",
"However, it is possible to obtain an approximate alignment between any pair of these languages, pivoting through English.",
"Only content words are aligned, so as to sidestep the inherently ambiguous nature of aligning function words across divergent constructions.",
"For details on the function/content distinction we apply to words, see Appendix B. We restrict the alignment to include connected components of the following types: (1) one-to-one alignments, i.e., where a single content word is aligned with another single content word; (2) many-to-one alignments, where multiple source words are aligned with a single target word; (3) one-to-many alignments, where a single source word is aligned with multiple target words.",
"Where a source multi-word expression is translated with a target multi-word expression, we align their headwords, to indicate that their subtrees are in correspondence with one another (e.g., English with this and French par consquent ).",
"Most of the content words in the corpora were aligned in a one-to-one alignment, which accounts for around 90% of aligned En tokens across the corpora.",
"We present a framework for defining and investigating translation divergences across a variety of language pairs using UD.",
"The framework operates on a sentence-aligned parallel corpus, where both sides are annotated with UD and content words in corresponding sentences are aligned.",
"Let T s = ( V s , E s ) , T t = ( V t , E t ) be a pair 4 Half of the sentences in the corpus are taken from news articles and the other half from Wikipedia.",
"750 of the sentences were originally in English, 100 in German, 50 in French, 50 in Italian, and 50 in Spanish.",
"All sentences were translated to other languages via English.",
"5 Occasionally highly divergent translations prohibited constructing an alignment.",
"999 sentences were aligned for En-Fr, En-Zh, and En-Jp, 995 for En-Ru, and 884 for En-Ko.",
"of UD trees over corresponding sentences, and let CW s V s and CW t V t be the sets of content words in T s and T t respectively.",
"Let A CW s CW t be a token-level alignment, consisting of one-to-one, many-to-one, and one-to-many alignments.",
"There are two ways to restrict the definition of correspondences between nodes and edges in T s and T t : (1) by considering only one-to-one edges or (2) by defining a one-to-one correspondence A (cid:48) A by traversing all many-to-one alignments C = { ( v 1 , u ) , ( v 2 , u ) , ..., ( v k , u ) } A , and selecting for A (cid:48) only ( v i , u ) , where v i is the highest node in T s among the nodes in C .",
"The same is done for one-to-many alignments.",
"6 The first approach is preferable for analyzing syntactic-path correspondences and was followed in this presentation.",
"The second approach is more suitable for analysis of POS mappings, where headwords are more prominent.",
"We then define Corresponding Syntactic Relations (CSR) as a pair ( R s , R t ) such that R s and R t are dependency paths in T s and T t , and such that the origin and endpoint of R s are in CW s and the origin and endpoint of R t are their aligned tokens in CW t according to A (cid:48) .",
"If the origin or the endpoint of R s do not have a corresponding node in T t , R s does not have a corresponding relation in T t .",
"The types of R s and R t are the sequence of labels on the edges of the paths, optionally along with their directionality in the tree (linear order is not taken into account).",
"Without loss of generality, we assume that R s begins at the leftmost word of the pair in the En sentence, and R t by definition begins at the target word corresponding to the leftmost source word.",
"For brevity, we only present results where directionality is not taken into account.",
"Relations are thus written as sequences of UD edge labels separated by the +' sign.",
"Token pairs that do not share a POS tag and CSR not of the same type are said to form a divergence.",
"One-to-many and many-to-one alignments are another form of divergence.",
"We apply the proposed methodology to the aligned PUD.",
"We compare syntactic relations, analyzing correspondences between POS tags as well as cor-6 Cases of non-unique highest nodes are generally rare in PUD and are thus excluded to simplify the analysis.",
"The only frequent case is the Fr discontiguous negation marker ne... pas , generally corresponding to not .",
"respondences between single-edge relations in English and target-side dependency paths.",
"We begin by examining the mappings of the POS of corresponding tokens (see Appendix C.1 for the full percentage and count matrices).",
"We find that En POS tags of content words are mostly stable in translations to Fr and Ru (sums of the values on the main diagonals account for 78 and 77% of the total number of word pairs respectively).",
"Notable exceptions are the negative particle not , which is in a one-to-many alignment in French with ne and pas , certain types of auxiliaries analyzed as verbs in both Ru and Fr, and proper nouns, which often get mapped to Fr nouns (cf. 9 and discussion in [Samardic et al., 2010]).",
"The En-Zh matrix presents more divergences with only 65% of the alignment edges connecting tokens with the same POS.",
"11% of nouns were translated as verbs (the reverse mapping is found, albeit to a lesser extent, in all three corpora).",
"Most of such cases involve names of actions and agents ( borrowing , ruler , etc.).",
"En negative particles are split between Zh adverbs, verbs, and auxiliaries; adjectives are quite often mapped to nouns, which form parts of compounds (e.g., social media shjiao miti , lit. social-interaction media').",
"Adpositions involving spatial relations (the only type of adpositions we consider as content words) are predominantly mapped to adverbs.",
"The En-Ko matrix is even more divergent: only 62% of the alignment edges connect matching POS.",
"The most striking property of the En-Ko POS matrix is that NOUN serves as a sink for other POS: 27% of En adverbs, 56% of En adjectives, and 60.5% of En verbs correspond to Ko nouns.",
"For example, En trying (to do something) corresponds to Ko misu attempt'.",
"As we will show in the next section, this is due to drastic syntactic divergences in En-Ko.",
"The En-Jp matrix is similar: 62.4% of the edges connect matching POS.",
"Verbs are mostly translated as verbs (58.1%), which shows more affinity between En and Jp basic clause structure.",
"However, adjectives still mostly turn into nouns (53.7%), and adverbs are quite likely to get translated by a noun (16.4% vs. 25.8% for adverb adverb).",
"Both Ko and especially Jp tend to leave En pronouns unaligned (15% and 59% respectively), upholding their reputations as radically pro-drop languages (Neeleman and Szendroi, 2007).",
"Interestingly, Zh, another classical example of this phenomenon, loses only 9% of the pronouns.",
"Ru, a mildly pro-drop language, loses 4% of the pronouns, while the non-pro-drop Fr loses only 2%.",
"This demonstrates the fine granularity of distinctions an empirical approach to CLMD can yield.",
"Table 2 presents the matrices of target-side syntactic relations that correspond to single-edge source-side relations in the five parallel corpora.",
"Several observations can be made.",
"First, the En-Fr and En-Ru matrices are similar and are dominated by the elements on the main diagonal (60% of the total number of edges in En-Fr and 55% in En-Ru).",
"An exception are compound s (which in En are mostly noun compounds), as Ru does not have a truly productive nominal compounding process and Fr compounds are annotated as other relations in UD (Kahane et al., 2017).",
"The other three matrices are less dominated by the entries on the main diagonal (46% of the alignments in En-Zh, 32% in En-Ko, 25.8% in En-Jp) and show higher entropy in most rows, especially in nmod , amod , obl , and xcomp , compound again being a notable exception (entropy matrices for all relations can be found in Appendix D).",
"Adverbial clauses ( advcl ) have relatively low values on main diagonals and a high percentage of single edges corresponding to multi-edge paths.",
"This reflects the wide semantic range of advcl : in addition to modifying the matrix predicate ( died by drowning ), they can also denote sequential and parallel events ( published a paper sparking a debate ).",
"The latter two cases naturally give rise to conj and complement clauses (cf. published a paper and sparked a debate / published a paper to spark a debate ), the most common other path in En-Ru and En-Fr respectively.",
"As we show in 5.2, there is also a converse phenomenon: sequences of events represented using coordinated clauses, ccomp , or xcomp in En are translated with advcl in East Asian languages.",
"Of particular interest are the differences between En-Ko and En-Jp confusion matrices.",
"Japanese and Korean are largely similar from the point of view of language typology (SOV word order, topic prominence, agglutinative morphol-ogy), but there are also important differences on the level of usage.",
"Thus, the adjective class in Korean is less productive, and translations often resort to relative clauses for the purposes of nominal modification.",
"Another difference is the fact that Japanese has few compounds as those are usually translated as nmod with a genitive particle, while Korean translates nearly all En compounds as compounds.",
"See the discussion of further differences in the next section.",
"In this section, we analyze prominent cases of divergences revealed by applying our method, attempting to demonstrate how fine-grained CLMD may be detected from the confusion matrices and shedding light on what challenges are involved in bridging these divergences (e.g., for the purposes of MT or cross-lingual transfer).",
"Some of the divergences arise due to real differences between grammars; others are largely due to inconsistent application of the UD methodology.",
"When inspecting the translation correspondents of adjectives, we find that while in En-Fr and En-Ru the adjective classes are mostly overlapping, this is not the case for Zh, Ko and Jp.",
"Instead, translation into these languages shifts probability mass from adjectives to nouns: nouns are hardly ever translated to adjectives, but adjectives are more likely to be translated to nouns than remain adjectives.",
"This trend is related to a preference to translate adjectives into possessives (e.g., Korean company Jp: Kankoku no kaisha lit. a company of Ko-rea') or compounds (e.g., European market Jp: Oshu ichiba lit. Europe market').",
"The confusion matrix shows that En nsubj demonstrates very different multi-edge mappings into European languages (Fr and Ru) as opposed to East Asian ones (Zh, Jp, and Ko).",
"The most common other path for both Russian and French is xcomp+nsubj , which is easy to explain: PUD corpora of these languages demote fewer auxiliary predicates than English (criteria for demotion are formulated in terms of superficial syntax and differ between languages) and more often place the dependent predicates as the root.",
"Therefore, in constructions like he could do something the direct edge between the subject and the verb of the dependent clause is replaced with two edges going through the modal predicate.",
"7 In Zh, Ko and Jp, however, there is another issue: sequential events described using coordinated conjuncts and xcomp in En are analyzed as being described with temporal or causal subordinate clauses ( Kipling met and fell in love with Florence Garrard Ko: Kipeulring manna meet.subordinate sarange in.love ppajyeosseumyeo fell , lit. having met, fell in love').",
"This makes the direct nsubj edge in En correspond to an Ko nsubj edge within a subordinated clause, and thus a nsubj+advcl path.",
"Given that not all coordinated verbs are translated using a subordinate clause in Ko and Zh, bridging these divergences is likely to require more than a simple tree-transformation rule but possibly refinement of UD's categories to more abstract linkage types.",
"UD treats En modal verbs, such as can or may , as aux , which are dependent on the lexical verb (e.g., could aux do).",
"Corresponding verbs in other languages are often treated as simple verbs (for example, all Ru modal verbs are simple verbs in UD).",
"Even more drastically, Ko routinely expresses this semantics by using an existential construction with the literal meaning (for) X there was a possibility of doing Y' (instead of X could do Y ), which converts the En aux into nsubj + acl .",
"In this case, a tree transformation seems to be sufficient to bridge this divergence.",
"Ko also differs from other languages in the extent that it uses relative clauses for nominal modifica-tion.",
"Table 2 shows that nmod has a high percentage for other mappings (48%).",
"Investigation of this long tail shows that to a large extent it consists of acl -based constructions: acl+advmod , acl+nsubj , acl+obj .",
"Added to acl , the cumulative share of acl -based constructions is on par with compound , the main correspondent of this relation for non-possessive nmod (possessive nmod are the only ones that map to nmod in Ko).",
"This discrepancy is due to the fact that Ko nearly 7 Cf.",
"also 23 En nsubj edges mapped to Ru nsubj+obl .",
"Inspection of these sentences reveals that the CLMD can be ascribed to metaphorical usage (e.g., the sense of read employed in the post reads has no direct correspondent in Ru).",
"Some such cases can be disambiguated using existing annotation schemes.",
"obligatorily adds contextually-predictable predicates to oblique relations such as actions [taken] in Crimea or people [being] without children .",
"The Korean PUD does not demote these verbs to functional-word status (such an approach is advocated for in [Gerdes and Kahane, 2016]) but turns them into clause-heading verbs, thus yielding an acl+X divergence.",
"8 Language ru fr zh ko jp Thematic Full 25 4 8 5 5 nsubj to obj / obl 78 57 43 25 53 Promotional 0 0 0 0 0 Demotional 10 2 4 19 1 Structural 83 67 17 0 35 Conflational 10 5 5 6 2 Categorical nsubj + obj 8 12 23 11 4 nsubj +",
"We quote the original formulations of the divergences illustrated through English-Spanish or English-German examples.",
"Thematic divergence: E: I like Mary S: Mara me gusta a m Mary pleases me.' In UD, this corresponds to the situation when the original obj or obl becomes the nsubj and vice versa.",
"The divergence will correspond to a CSR of type ( nsubj, obj ) or ( nsubj, obl ) .",
"A full thematic divergence will also involve the inverse divergence ( obj, subj ) or ( obl, subj ) .",
"8 The list of examples we can discuss goes on.",
"For example, while investigating the cross-linguistic patterning of English advcl , we noticed that it often gets mapped to ccomp in French and acl in Russian.",
"Both divergent annotations seem to be erroneous as the sentences they appear in are covered by the definition of advcl provided in the UD manual.",
"However, the French case is interesting in that the source advcl in question can be characterized semantically: instead of denoting a secondary action, they reflect a sequence of events or parallel scenes (e.g., Columbus sailed across the Atlantic... sparking a period of European exploration of the Americas ).",
"Another problem is presented by multi-word expressions analyzed as proper nouns where all tokens have the same POS tag.",
"The UD manual advises to retain the original parts of speech in proper nouns consisting of phrases (e.g., Cat on a Hot Tin Roof ) but allows to treat words that are etymologically adjectives as PROPN in names such as the Yellow Pages .",
"When such names are translated, PROPN get reanalyzed as ADJ, NOUN, etc., producing spurious CLMD.",
"Promotional divergence: E: John usually goes home S: Juan suele ir a casa John tends to go home.' This corresponds to the situation where the original root predicate becomes an xcomp , and the original advmod takes its place as the root.",
"Corresponding CSR type: ( advmod, xcomp ) .",
"Demotional divergence: E: I like eating G: Ich esse gern I eat likingly.' The original xcomp becomes the root predicate, and the original root predicate is demoted to the position of an advmod .",
"The relevant CSR type is ( xcomp, advmod ) .",
"Structural divergence: E: John entered the house S: Juan entr en la casa John entered in the house.' The original obj becomes an obl .",
"CSR type: ( obj, obl ) .",
"Conflational divergence: E: I stabbed John S: Yo le di pualadas a Juan I gave knife-wounds to John.' The original root predicate is in a one-to-many alignment with a combination of a root predicate and its obj .",
"Categorial divergence: E: I am hungry G: Ich habe Hunger I have hunger.' The original root predicate becomes an obj .",
"CSR type: ( nsubj, nsubj + obj ) .",
"Lexical divergence: E: John broke into the room S: Juan forz la entrada al cuarto John forced (the) entry to the room.' Divergences of this type arise whenever aligned words have at best partially overlapping semantic content and never appear on their own but always with other divergences.",
"The information necessary to ascertain the degree of word-meaning overlap is not embedded into UD or any other cross-lingual annotation scheme; therefore we were unable to provide a formal interpretation of this type of divergence.",
"Frequencies of Dorr's Divergences in PUD are presented in Table 1 (except for Lexical divergences, which are hard to formalize).",
"It is evident that these types only account for a small portion of the encountered divergences, the point already made for En-Zh in DX17.",
"It seems then that hand-crafted translation divergences, however insightful they may be, receive attention disproportionate to their empirical frequency.",
"One of the strengths of our approach is that it only relies on UD parses and alignments, for which automatic tools exist in many languages.",
"To demonstrate the feasibility of an automated protocol, we conducted an analysis of the WMT14 En-Zh News Commentary corpus.",
"9 We used TsinghuaAligner (Liu and Sun, 2015) and pretrained English and Chinese UD parsers from the StanfordNLP toolkit (Qi et al., 2018).",
"To verify that the aligner we are using is adequate for the task, we aligned the En-Zh PUD corpus pair and checked the resulting precision and recall of the edges corresponding to content words.",
"10 The results (P= 0 . 86 , R= 0 . 32 ) indicate that the automated approach is able to recover around a third of the information obtained through manual alignment with reasonable precision.",
"Importantly, we find recall to be nearly uniform for all source edge types, which suggests that the low recall can be mitigated by using a larger corpus without biasing the results.",
"The POS and edge-type confusion matrices built from this experiment are very similar to the ones reported in this paper (save for compound , which is not produced by the Stanford Zh parser), and are not reproduced here (they can be found in the Supplementary Materials).",
"We come to demonstrate the applicability of our method for analyzing the performance of a downstream cross-lingual transfer task.",
"We consider zero-shot cross-lingual parsing (Ammar et al., 2016; Schuster et al., 2019) as a test case and investigate to what extent the performance of a zero-shot parser on a given dependency label can be predicted from its stability in translation.",
"As test sets we use the test sets of GSD UD corpora for the five languages (Ru, Fr, Zh, Ko, and Jp), as well as the corresponding PUD corpora.",
"We train a parser following the setup of Mulcaire et al. (2019) and use a pretrained multilingual BERT (Devlin et al., 2019), feeding its output embeddings into a biaffine-attention neural UD parser (Dozat and Manning, 2017) trained on the English EWT corpus.",
"We evaluate the parser's ability to predict relation types by computing F-scores for each de-9 http://www.statmt.org/wmt14/ 10 These were defined here as edges with the following labels: root , nsubj , amod , nmod , advmod , nummod , acl , advcl , xcomp , compound , flat , obj , obl .",
"pendency label (save for labels corresponding to function words that were generally not aligned).",
"Appendix E gives full implementation details.",
"We start by computing Spearman correlations between F-scores and the PRESERVATION indices, defined as the proportion of identity mappings in the confusion matrices for each corpus (e.g., PRESERVATION for acl in Ru is 0.48, while in Jp it is 0.37).",
"The correlations are very strong for some languages and noticeable for others ( = 0 . 62 , 0 . 75 , 0 . 31 , 0 . 42 , 0 . 77 for Ru, Fr, Zh, Ko, and Jp respectively on GSD test sets, and = 0 . 7 , 0 . 82 , 0 . 72 , 0 . 84 , 0 . 68 on PUD).",
"We hypothesize that the preservation of a relation in translation is related to the ability of a zero-shot parser to predict it.",
"In order to control for obvious covariates, we introduce two control variables: (1) SOURCE-SIDE HARDNESS (test-set F-scores attained by the parser on English dependency relations) and (2) TARGET-SIDE HARDNESS (F-scores attained by a parser trained on the target-language UD GSD corpus on the target-language test set).",
"We use a mixed-effects model with PRESERVATION , SOURCE-SIDE HARDNESS , and TARGET-SIDE HARDNESS as fixed effects, random intercepts for language, and F-scores for dependency relations as the dependent variable.",
"We then used likelihood-ratio test to compute p -values for the difference in predictive power between the model without PRESERVATION and one with it.",
"The p -value (using Holm correction) is highly significant ( < 0.001) for the PUD corpora, and for GSD it is significant with p = 0 .",
"02 .",
"These results suggest that morphosyntactic differences between languages, as uncovered by our method, play a role in the transferability of parsers across languages.",
"This also underscores the potential utility of bridging CLMD for improving syntactic transfer across languages.",
"The presented methodology gives easy access to different levels of analysis.",
"On one hand, by focusing on content words, the approach abstracts away from much local-syntactic detail (such as reordering or adding/removing function words).",
"At the same time, the methodology and datasets provide means to investigate essentially any kind of well-defined CLMD.",
"Indeed, since function words in UD tend to be dependents of content words, we may analyze the former by considering the distribution of function word types that each type of content word has.",
"Moreover, sub-typing dependency paths based on their linear direction can allow investigating word-order differences.",
"11 Other than informing the development of crosslingual transfer learning, our analysis directly supports the validation of UD annotation.",
"For example, we reveal inconsistencies in the treatment of multi-word expressions across languages.",
"Thus, the translation of many NPs with adjectival modifiers, such as Celtic sea or episcopal church , are analyzed as compound s.",
"Languages such as Ru, lacking a truly productive nominal-compound relation, carve this class up based mostly on the POS of the dependent element (e.g., episcopal corresponds to a Ru amod ), its semantic class (e.g., compounds with cardinal directions are Ru amod s), and whether the dependent element itself has dependents (these mostly correspond to Ru nmod s).",
"Our method can be used to detect and bridge such inconsistencies.",
"In conclusion we note that our analysis suggests that considerable entropy in the mapping between the syntactic relations of the source and target sides can be reduced by removing inconsistencies in the application of UD, and perhaps more importantly by refining UD with semantic distinctions that will normalize corresponding constructions across languages to have a similar annotation.",
"This will simultaneously advance UD's stated goal of bringing out cross-linguistic parallelism across languages and, as our results on zero-shot parsing suggest, make it more useful for cross-linguistic transfer.",
"We thank Nathan Schneider for helpful comments and anonymous reviewers for useful feedback.",
"This work was supported by the Israel Science Foundation (grant no. 929/17)."
]
| [
"abstain",
"abstain",
"abstain",
"result",
"objective",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"result",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"other"
]
|
[
"Unlike literal expressions, idioms' meanings do not directly follow from their parts, posing a challenge for neural machine translation (NMT).",
"NMT models are often unable to translate idioms accurately and over-generate compositional, literal translations.",
"In this work, we investigate whether the non-compositionality of idioms is reflected in the mechanics of the dominant NMT model, Transformer, by analysing the hidden states and attention patterns for models with English as source language and one of seven European languages as target language.",
"When Transformer emits a non-literal translation i.e. identifies the expression as idiomatic the encoder processes idioms more strongly as single lexical units compared to literal expressions.",
"This manifests in idioms' parts being grouped through attention and in reduced interaction between idioms and their context.",
"In the decoder's cross-attention, figurative inputs result in reduced attention on source-side tokens.",
"These results suggest that Transformer's tendency to process idioms as compositional expressions contributes to literal translations of idioms.",
"An idiom is a group of words of which the figurative meaning differs from the literal reading, such as kick the bucket, which means to die, instead of physically kicking a bucket.",
"An idiom's figurative meaning is established by convention and is typically non-compositional i.e. the meaning cannot be computed from the meanings of the idiom's parts.",
"Idioms are challenging for the task of neural machine translation (NMT) (Barreiro et al., 2013; Isabelle et al., 2017; Constant et al., 2017; Avramidis et al., 2019).",
"On the one hand, figures of speech are ubiquitous in natural language (Colson, 2019).",
"On the other hand, idioms occur much less frequently than their parts, their meanings need to be memorised due to the non-compositionality, context idiom translatedidiom 1 2 3 4 more attention for literal more attention for figurative </s> translatedcontext cross-attentionself-attention Figure 1: How do attention patterns of figurative PIEs that are paraphrased by the model compare to attention patterns of literal PIEs that are translated word for word?",
"and they require disambiguation before translation.",
"After all, not all potentially idiomatic expressions (PIEs) are figurative e.g. consider When I kicked the bucket, it fell over.",
"Whether PIEs should receive a figurative or literal translation depends on the context.",
"Yet, little is known about neural mechanisms enabling idiomatic translations and methods for improving them, other than data annotation (Za-ninello and Birch, 2020).",
"Related work studies how idioms are represented by Transformer-based language models (e.g. Garca et al., 2021a,b), but those models are not required to output a discrete representation of the idiom's meaning, which is a complicating factor for NMT models.",
"In this work, we analyse idiom processing for pre-trained NMT Transformer models (Vaswani et al., 2017) for seven European languages by comparing literal and figurative occurrences of PIEs.",
"The comparison can help identify mechanics that underlie neural idiom processing to pave the way for methods that improve idiomatic translations.",
"Large-scale analyses of idiom translations suffer 3608 from a lack of parallel corpora (Fadaee et al., 2018).",
"We, therefore, use a monolingual corpus, heuristically label Transformer's translations, and verify the heuristic works as intended through human evaluation, as described in 3.",
"To understand how idioms are represented in Transformer, we firstly apply interpretability techniques to measure the impact of PIEs on the encoder's self-attention and the cross-attention mechanisms (4), as well as the encoder's hidden representations (5).",
"Afterwards, in 6, we intervene in the models while they process idiomatic expressions to show that one can change non-compositional translations into compositional ones.",
"The results indicate that Transformer typically translates idioms in a too compositional manner, providing a word-for-word translation.",
"Analyses of attention patterns summarised in Figure 1 and hidden representations point to the encoder as the mechanism grouping components of figurative PIEs.",
"Increased attention within the PIE is accompanied by reduced attention to context.",
"When translating figurative PIEs, the decoder relies less on the encoder's output than for literal PIEs.",
"These patterns are stronger for figurative PIEs that the model paraphrases than for sentences that receive an overly compositional translation and hold across the seven European languages.",
"Considering that a recent trend in NLP is to encourage even more compositional processing in NMT (Raunak et al., 2019; Chaabouni et al., 2021; Li et al., 2021, i.a.), we recommend caution.",
"It may be beneficial to evaluate the effect of compositionality-favouring techniques on non-compositional phenomena like idioms to ensure their effect is not detrimental.",
"This section summarises work discussing human idiom comprehension, interpretability studies for NMT, and literature about figurative language processing in Transformer.",
"Idiom comprehension Historically, idioms were considered non-compositional units (Swinney and Cutler, 1979).",
"Two main views ( literal first and direct access ) existed for how humans interpreted them.",
"The former suggests humans attempt a compositional interpretation before considering the figurative interpretation in case of a contextual discrepancy (Bobrow and Bell, 1973; Grice, 1975, 1989).",
"The latter view suggests one can immediately retrieve the non-compositional meaning (Gibbs Jr et al., 1994).",
"The more recent hybrid view posits that idioms are simultaneously processed as a whole primed by a superlemma (Kuiper et al., 2007) and word for word (Caillies and Butcher, 2007).",
"The processing speed and retrieval of the figurative meaning depend on the idiom's semantic properties and the context (Cain et al., 2009; Vulchanova et al., 2019).",
"Examples of semantic properties are the conventionality and decomposability of idioms (Nunberg et al., 1994).",
"We do not expect processes in Transformer to resemble idiom processing in humans.",
"Nonetheless, this work helps us determine our focus of study on the role of the surrounding context and the extent to which idioms' parts are processed as a whole.",
"Translating PIEs that are used figuratively is not always straightforward.",
"Baker et al. (1992) discuss strategies for human translators:",
"(i) Using an idiom from the target language of similar meaning and form,",
"(ii) using an idiom from the target language with a similar meaning and a different form,",
"(iii) copying the idiom to the translation,",
"(iv) paraphrasing the idiom or",
"(v) omitting it.",
"In the absence of idioms with similar meanings across languages,",
"(iv) is the most common strategy.",
"Our main focus is on literal translations ( word-for-word transla-tions), and paraphrases .",
"Interpreting Transformer Analyses of Transformer for NMT studied the encoder's hidden representations and self-attention mechanism (e.g. Raganato and Tiedemann, 2018; Tang et al., 2019b; Voita et al., 2019), the cross-attention (e.g. Tang et al., 2019a) and the decoder (e.g. Yang et al., 2020).",
"The encoder is particularly important for the contextualisation of tokens from the source sentence; it acts as a feature extractor (Tang et al., 2019b).",
"The encoder's bottom three layers better represent low-level syntactic features, whereas the top three layers better capture semantic features (Raganato and Tiedemann, 2018).",
"As a result, one would expect the representations in higher layers to be more representative of idiomaticity.",
"Idioms are a specific kind of ambiguity, and whether a word is ambiguous can accurately be predicted from the encoder's hidden representations, as shown by Tang et al. (2019a) for ambiguous nouns.",
"Transformer's cross-attention is not crucial for disambiguating word senses (Tang et al., 2018), but the encoder's self-attention does reflect ambiguity through more distributed attention for ambiguous nouns (Tang et al., 2019a).",
"Tropes in Transformer Various studies examine the Transformer-based language model BERT's (Devlin et al., 2019) ability to capture tropes like metonyms (Pedinotti and Lenci, 2020), idioms (Kurfal and stling, 2020), and multiple types of figurative language (Shwartz and Dagan, 2019).",
"Kurfal and stling (2020) detect idioms based on the dissimilarity of BERT's representations of a PIE and its context, assuming that contextual discrepancies indicate figurative usage.",
"Pedinotti and Lenci (2020) measure whether BERT detects meaning shift for metonymic expressions but find cloze probabilities more indicative than vector similarities.",
"Shwartz and Dagan (2019) find that BERT is better at detecting figurative meaning shift than at predicting implicit meaning e.g. predicting that a hot argument does not involve temperature.",
"The most recent work studies properties of hidden representations of noun-noun compounds (NCs) and verb-noun compounds (VCs): Garca et al. (2021b) examine (contextualised) word embeddings, including BERT, to compare figurative and literal NC types .",
"They investigate the similarities between (1) NCs and their synonyms, (2) NCs and their components, (3) in-context and out-of-context representations, and (4) the impact of replacing one component in the NC.",
"Surprisingly, idiomatic NCs are quite similar to their components and are less similar to their synonym compared to literal NCs.",
"Moreover, the context of the NC hardly contributes to how indicative its representation is of idiomaticity, which was also shown by Garca et al. (2021a), who measured the correlation between token -level idiomaticity scores and NCs' similarity inand out-of-context.",
"In search of the idiomatic key of VCs (the part of the input that cues idiomatic usage), Nedumpozhimana and Kelleher (2021) train a probing classifier to distinguish literal usage from figurative usage.",
"They then compare the impact of masking the PIE to masking the context on the classifier's performance and conclude that the idiomatic key mainly lies within the PIE itself, although there is some information coming from the surrounding context.",
"We use Transformer models (Vaswani et al., 2017) with English as the source language and one of seven languages as the target language (Dutch, German, Swedish, Danish, French, Italian, Span-ish).",
"Span-ish).",
"1 Transformer contains encoder and decoder networks with six self-attention layers each and eight heads per attention mechanism.",
"The models are pre-trained by Tiedemann and Thottingal (2020) with the Marian-MT framework (Junczys-Dowmunt et al., 2018) on a collection of corpora (OPUS) (Tiedemann and Thottingal, 2020).",
"2 We extract hidden states and attention patterns for sentences with PIEs.",
"The analyses presented are detailed for Dutch, after which we explain how the results for the other languages compare to Dutch.",
"3 Parallel PIE corpora are rare, exist for a handful of languages only, and are limited in size (Fadaee et al., 2018).",
"Rather than rely on a small parallel corpus, we use the largest corpus of English PIEs to date and annotate the translations heuristically.",
"This section provides corpus statistics and discusses the heuristic annotation method.",
"MAGPIE corpus The MAGPIE corpus presented by Haagsma et al. (2020) contains 1756 English idioms from the Oxford Dictionary of English with 57k occurrences.",
"MAGPIE contains identical PIE matches and morphological and syntactic variants, through the inclusion of common modifications of PIEs, such as passivisation (the beans were spilled) and word insertions (spill all the beans).",
"4 We use 37k samples annotated as fully figurative or literal , for 1482 idioms that contain nouns, numerals or adjectives that are colours (which we refer to as keywords ).",
"Because idioms show syntactic and morphological variability, we focus mostly on the nouns.",
"Verbs and their translation are harder to identify due to the variability.",
"Moreover, idiom indexes are also typically organised based on the nominal constituents, instead of the verbs (Piirainen, 2013).",
"Only the PIE and its sentential context are presented to the model.",
"We distinguish between PIEs and their context using the corpus's word-level annotations.",
"Heuristic annotation method The MAGPIE sentences are translated by the models with beam search and a beam size of five.",
"The translations are labelled heuristically.",
"In the presence of a literal translation of at least one of the idiom's keywords, 1 Our figures refer to these languages using their ISO 639-1 codes, that are nl , de , sv , da , fr , it and es , respectively.",
"the entire translation is labelled as a word-for-word translation, where the literal translations of keywords are extracted from the model and Google translate.",
"When a literally translated keyword is not present, it is considered a paraphrase .",
"5 Shao et al. (2018) previously analysed NMT translations of 50 Chinese idioms using a similar method and manually curated lists of literal translations of idioms' words to detect literal translation errors.",
"Dankers et al. (2022) use a similar method for 20 English idioms, to track when a word-for-word translation changes into a paraphrased one during training for an English-Dutch ( En-Nl ) NMT model.",
"Table 1 summarises the distribution of these categories for all languages, for the subsets of figurative and literal examples from MAGPIE.",
"Generally, paraphrased translations of figurative PIEs are more appropriate than word-for-word translations, whereas literal PIEs can be translated word for word (Baker et al., 1992).",
"The vast majority of literal PIEs indeed result in word-for-word translations.",
"The subset of figurative samples results in more paraphrases, but 76% is still a word-for-word translation, dependent on the language.",
"Although the statistics are similar across languages, there are differences in which examples are paraphrased.",
"Figure 2 illustrates the agreement 5 The annotation does not evaluate whether paraphrases are correct, which requires expert idiom knowledge in both languages.",
"A paraphrase being provided is a first step to adequately translating idioms and, at present, the only way to detect how the model approaches the task for large datasets.",
"by computing the F 1 -score when using the predictions for figurative instances of one language as the target, and comparing them to predictions from another language.",
"The agreement positively correlates with genetic similarity as computed using the Uriel database (Littell et al., 2017).",
"To assess the quality of the heuristic method, one (near) native speaker per target language annotated 350 samples, where they were instructed to focus on one PIE keyword in the English sentence.",
"Annotators were asked whether (1) the English word was present in the translation (initially referred to as copy), (2) whether there was a literal translation for the word, or (3) whether neither of those options were suited, referred to as the paraphrase.",
"6 Due to the presence of cognates in the copy category, that category was merged with the word for word category after the annotation.",
"Table 2 summarises the accuracies obtained.",
"Of particular interest are samples that are figurative and paraphrased, since they represent the translations that are treated non-compositionally by the model, as well as instances that are literal and translated word for word, since they represent the compositional translations for non-idiomatic PIE occurrences.",
"These categories have annotation accuracies of 75% and 89% , respectively.",
"During preliminary analyses, an annotation study was conducted for Dutch by annotators from the crowd-sourcing platform Prolific.",
"The annotators and the heuristic method agreed in 83% of the annotated examples, and for 77% of the samples an average of 4 annotators agreed on the label unanimously (see Appendix A for more details).",
"Sentences containing idioms typically yield lower BLEU scores (Fadaee et al., 2018).",
"MAGPIE is a monolingual corpus and does not allow us to compute BLEU scores, but we refer the reader to Appendix G for an exploratory investigation for MAGPIE's idioms using the En-Nl training corpus.",
"We now turn to comparing how literal and figurative PIEs are processed by Transformer.",
"Whether a PIE is figurative depends on the context e.g. compare in culinary school, I felt at sea to the sailors were at sea .",
"Within Transformer, contextualisation of input tokens is achieved through the attention mechanisms, which is why they are expected to combine the representations of the idioms' tokens and embed the idiom in its context.",
"This section discusses the impact of PIEs on the encoder's self-attention and the encoder-decoder cross-attention.",
"To assert that the conclusions drawn in this section are not simply explained by shallow statistics of the data used, we recompute the results in Appendix C for (1) a data subset excluding variations of PIEs' standard surface forms, (2) a data subset that includes PIEs that appear in both figurative and literal contexts, (3) a data subset that controls for the number of tokens within a PIE.",
"Qualitatively, these results lead to the same findings.",
"Attention within the PIE For the En-Nl Transformer, Figure 3a visualises the distribution of attention weights in the encoder's self-attention mechanism for incoming weights to one noun contained in the PIE from the remaining PIE tokens.",
"Throughout the figures in the paper, we refer to the subset of sentences that have a figurative PIE and a 1 2 3 4 5 6 layer 0.0 0.2 0.4 0.6 0.8 1.0 a tt e n t i o n",
"paraphrased translation as fig-par '.",
"The subset of sentences with a literal PIE and a word-for-word translation are indicated by lit-wfw '.",
"We compare those two subsets, as well as all instances of figurative PIEs ( fig ') to all instances of literal PIEs ( lit ') using the labels from the MAGPIE dataset.",
"Overall, there is increased attention in figurative occurrences of PIEs compared to literal instances.",
"This difference is amplified for the subset of figurative PIEs yielding paraphrased translations.",
"This pattern is consistent for all languages, as is displayed in Figure 3d that presents the difference between the mean attention weights of the figurative, paraphrased instances, and the mean weights of the literal instances translated word for word.",
"7 In other words, figurative PIEs are grouped more strongly than their literal counterparts.",
"Attention between PIEs and context To examine the interaction between a PIE and its context, we obtain the attention weights from tokens within the PIE to nouns in the surrounding context of size 10 (Figure 3b).",
"8 Similarly, the attention from the surrounding context to PIE nouns is measured (Fig-ure 3c).",
"There is reduced attention from PIEs to context for figurative instances, which mirrors the effect observed in Figure 3a: increased attention 7 Appendix D details results per language per layer.",
"8 Throughout the paper, a context size of 10 to the left and 10 to the right or smaller is used, as sentence length permits.",
"within the PIE is accompanied by reduced attention to the context.",
"This pattern is consistent across languages (Figure 3d).",
"From the context to the PIE, the average weight is slightly higher for literal PIEs, but the effect size is small, indicating only a minor impact of figurativeness on the context's attention weights.",
"This will be further investigated in 5.",
"Cross-attention To analyse the encoder-decoder interaction, we decode translations with beam size five, and extract the cross-attention weights for those translations.",
"Afterwards, alignments are computed for the models' predictions by, together with 1M sentences from the OPUS corpus per target language, aligning them using the eflomal toolkit (stling et al., 2016).",
"The alignment is used to measure attention from a token aligned to a PIE's noun to that noun on the source side.",
"9 Figure 4a presents the attention distribution for the weights that go from the noun's translation to that PIE noun on the source side, for the En-Nl model.",
"There is a stark difference between figurative and literal PIEs, through reduced attention on the source-side noun for figurative PIEs.",
"This difference is particularly strong for the figurative sentences that are paraphrased during the translation: when paraphrasing the model appears to rely less on the source-side noun than when translating word for word.",
"Where does the attention flow, instead?",
"To some extent, to the remaining PIE tokens (Figure 4b).",
"A more pronounced pattern of increased attention on the </s> token is shown in Figure 4c.",
"Similar behaviour has been observed by Clark et al. (2019) for BERT's [SEP] token, who suggest that this indicates a no-operation .",
"In Transformer's cross-attention mechanism, this would mean that the decoder collects little information from the source side.",
"Figure 4d compares the mean attention weights of the seven languages for the figurative inputs that are paraphrased to the literal samples that are translated word for word, confirming that these patterns are not specific to En-Nl translation.",
"Collectively, the results provide the observations depicted in Figure 1.",
"When paraphrasing a figu-9 Automated alignments may be less accurate for paraphrases, and, therefore, we inspect the fig-par alignments: for all languages 34% of those sentences has no aligned word for the PIE noun.",
"Those sentences are excluded.",
"We manually inspect the most frequently aligned words for Dutch, that cover 48% of the fig-par subcategory in Ap.",
"B, and are all accurate.",
"rative PIE, the model groups idioms' parts more strongly than it would otherwise i.e. it captures the PIE more as one unit.",
"A lack of grouping all figurative PIEs could be a cause of too compositional translations.",
"Increased attention within the PIE is accompanied by reduced interaction with context, indicating that the PIE is translated in a stand-alone manner, contrary to what is expected, namely that contextualisation can resolve the figurative versus literal ambiguity.",
"There is less cross-attention on the source-side PIE and more attention on the </s> token when the model emits the translation of figurative (paraphrased) PIEs.",
"This suggests that even though the encoder cues figurative usage, the decoder retrieves a PIE's paraphrase and generates its translation more as a language model would.",
"Within Transformer, the encoder's upper layers have previously been found to encode semantic information (e.g. Raganato and Tiedemann, 2018).",
"PIEs' hidden states are expected to transform over layers due to contextualisation, and become increasingly more indicative of figurativeness.",
"This section focuses on the impact of PIEs on the hidden states of Transformer's encoder.",
"We firstly discuss how much these hidden states change between layers.",
"Secondly, we measure the influence of a token by masking it out in the attention and analysing the degree of change in the hidden representations of its neighbouring tokens.",
"This analysis is performed to consolidate findings from 4, since the extent to which attention can explain model behaviour is a topic of debate (Jain and Wallace, 2019; Wiegreffe and Pinter, 2019).",
"To compare representations from different layers, we apply canonical correlation analysis (CCA) (Hotelling, 1936), using an implementation from Raghu et al. (2017).",
"Assume matrices A R d A N and B R d B N , that are representations for N data points, drawn from two different sources with dimensionalities d A and d B e.g. different layers of one network.",
"CCA linearly transforms these subspaces A (cid:48) = W A , B (cid:48) = V B such as to maximise the correlations { 1 , . . . , min ( d A ,d B ) } of the transformed subspaces.",
"We perform CCA using > 60 k random token vectors for a previously unused subset of the MAGPIE corpus the subset of sentences that did not contain nouns in the PIEs 3613 1-2 2-3 3-4 4-5 5-6 layers 0.75 0.80 0.85 CCA s i m il a r i t y",
"to compute the CCA projection matrices W and V .",
"W and V are then used to project new data points before measuring the data points' correlation.",
"The CCA similarity reported in the graphs is the average correlation of projected data points.",
"We do not perform CCA separately per data subset due to the small subset sizes and the impact of vocabulary sizes on CCA correlations for small datasets (see Appendix E).",
"10 We compute the CCA similarity for hidden states from adjacent layers for PIE and non-PIE nouns.",
"Figurative PIEs in layer l are typically less similar to their representation in layer l 1 compared to literal instances (shown in Figures 5b and 5c).",
"The results for non-PIE nouns (Figure 5a for the En-Nl Transformer) do not differ across data subsets, suggesting that changes observed for figurative PIEs are indeed due to figurativeness.",
"We now compute similarities of representations for the model in two setups: with and without one token masked in the attention mechanism, as suggested by Voita et al. (2019).",
"Masking a token means that other tokens are forbidden to attend to the chosen one.",
"This can reveal whether the attention patterns discussed in 4 are indicative of the 10 Extensions of CCA have been proposed that limit the number of CCA directions over which the correlation is computed, to only include directions that explain a large portion of the variance (Raghu et al., 2017; Morcos et al., 2018).",
"We do not remove directions such as to avoid removing smaller variance components that could still cue figurativeness (the focus of our work).",
"influence tokens have on each other's hidden representations.",
"The first representation is the hidden representation from layer l for a token encoded as usual.",
"The second one is the hidden representation of layer l when applying the first l 1 layers as usual and masking one token in the l th layer.",
"CCA is again performed on separate data, where a non-PIE noun is masked, to provide the projection matrices applied before computing similarities in the remainder of this subsection.",
"Masking a PIE token To estimate the influence of PIE nouns, we first compute the CCA similarity between two representations of tokens from the PIE's context while masking one PIE noun in the attention for one of those representations.",
"Similarly, we measure the influence on other tokens within the PIE when masking one PIE noun.",
"Within the PIE, the impact is the largest for figurative instances (see Figure 6a for En-Nl and 6e for averages over layers for all languages).",
"This is in line with the attention pattern observed.",
"However, whether the impact is the largest on context tokens from figurative or literal instances is dependent on the layer 3614 0.6 0.8 1.0 m a c r o F 1 nl de sv da e 1 2 3 4 5 6 layers 0.6 0.8 1.0 m a c r o F 1 fr e 1 2 3 4 5 6 layers es e 1 2 3 4 5 6 layers it fig-par/lit-wfw fig/lit Figure 7: Macro F 1 -score for probes predicting PIEs' labels.",
"(Figure 6b), suggesting that the slight difference in attention from the context to the PIE observed in 4 need not represent a difference in impact between figurative and literal PIEs.",
"Masking a context token Lastly, we measure the influence of masking a noun in the context of the PIE on PIE tokens and non-PIE tokens.",
"Within the PIE, as shown in Figures 6c and 6e, figurative instances are less affected by the masked context noun compared to literal occurrences of PIEs.",
"Again, this mirrors the patterns observed for attention where there was less attention on the context for figurative PIEs.",
"When masking a non-PIE noun and measuring the impact on non-PIE tokens, one would hardly expect any differences between data subsets, as is confirmed in Figures 6d and 6e.",
"In summary, these analyses confirm most of the trends noted for attention patterns.",
"Intercepting in the attention through masking indicated that for PIE tokens, there is less interaction with the context.",
"However, this does not necessarily mean that the context interacts less with figurative PIEs compared to literal PIEs, even if there was a slight difference in attention (see 4).",
"The CCA analyses furthermore showed that figurative PIEs are distinct from typical tokens in how they change over layers.",
"The previous analyses compared the hidden states for figurative and literal PIEs, but do not use these labels, otherwise.",
"We now train logistic regression probing classifiers (Conneau et al., 2018) to predict the label from hidden representations.",
"The probes' inputs are the hidden states of PIE tokens, and the F 1 -scores are averaged over five folds.",
"All samples from one PIE are in the same fold, such that the classifier is evaluated on PIEs that were absent from its training data.",
"The results (Figure 7) indicate figurativeness can be predicted from these encodings, with performance increasing until the top layer for all languages.",
"F 1 -scores for the embeddings already exceed a random baseline, indicating some idioms are recognisable independent of context.",
"Finally, we use probing classifiers to change models' PIE translations through amnesic probing (Elazar et al., 2021): removing features from hidden states with iterative null-space projection (INLP) (Ravfogel et al., 2020) and measuring the influence of these interventions.",
"INLP trains k classifiers to predict a property from vectors.",
"After training probe i , parametrised by W i , the vectors are projected onto the nullspace of W i .",
"The projection matrix of the intersection of all k null spaces can then remove features found by these classifiers.",
"Using INLP, we train 50 classifiers to distinguish figurative PIEs that will be paraphrased from those to be translated word for word.",
"Afterwards, we run the previously paraphrased PIE occurrences through the model while removing information from the PIE's hidden states using INLP i.e. information that could be captured by linear classifiers, which need not be the only features relevant to idiomatic translations.",
"Per idiom, we record the percentage of translations that are no longer paraphrased.",
"We report the scores for idioms from four folds and BLEU scores comparing translations that changed label before and after INLP.",
"A fifth fold is used for parameter estimation (Appendix F).",
"Table 3 presents the results.",
"When intervening in the hidden states for all layers l { 0 , 1 , 2 , 3 , 4 } , the average success rate per PIE ranges from 27% (for Swedish) to 40% (for Spanish).",
"The interventions yield reduced attention within the PIE and increased interaction with the context (see Table 3b for Dutch).",
"Table 3 also provides results for a baseline probe predicting whether the half-harmonic mean of the zipf-frequency of PIE tokens is below or above average.",
"This probe is successful too, 3615 Dutch German Swedish Danish French Italian Spanish Then, brisk again, ' I 'll bear it in mind. ' Entonces, rpido de nuevo, ' Lo tendr en cuenta. ' Entonces, anmate de nuevo, 'Lo tendr en mente'.",
"emphasising how brittle idiomatic translations are: when removing information from the hidden states, the model reverts to compositional translations.",
"Figure 8 provides example translations before and after the application of INLP, while indicating how the attention on the underlined noun changes.",
"Generally, the attention on that noun reduces for tokens other than itself.",
"In summary, when applying INLP to hidden states, the attention patterns resemble patterns for literal tokens more, confirming a causal connection between the model paraphrasing figurative PIEs and the attention.",
"However, amnesic probing cannot change the paraphrases for all idioms; thus, figurativeness is not merely linearly encoded in the hidden states.",
"The probing accuracies differed across layers and suggested figurativeness is more easily detectable in higher layers, which is in line with the changes across layers observed in 5.",
"Idioms are challenging for NMT models that often generate overly compositional idiom translations.",
"To understand why this occurs, we analysed idiom processing in Transformer, using an English idiom corpus and heuristically labelled translations in seven target languages.",
"We compared hidden states and attention patterns for figurative and literal PIEs.",
"In the encoder, figurative PIEs are grouped more strongly as one lexical unit than literal instances and interact less with their context.",
"The effect is stronger for paraphrased translations, suggesting that capturing idioms as single units and translating them in a stand-alone manner aids idiom processing.",
"This finding agrees with results from Zaninello and Birch (2020), who ascertain that encoding an idiom as one word improves translations.",
"It also agrees with the INLP application causing more compositional translations whilst changing the attention.",
"By relying less on the encoder's output, the decoder determines the meaning of figurative PIEs more independently than for literal ones.",
"To improve idiomatic translations, future work could use these insights to make architectural changes to improve the grouping of idioms as single units by training specific attention heads to capture multiword expressions or by penalising overly compositional translations in the training objective.",
"Although we learnt about mechanics involved in idiomatic translations, the vast majority of translations was still word for word, indicating that non-compositional processing does not emerge well (enough) in Transformer.",
"Paradoxically, a recent trend is to encourage more compositional processing in NMT (Chaabouni et al., 2021; Li et al., 2021; Raunak et al., 2019, i.a.).",
"We recommend caution since this inductive bias may harm idiom translations.",
"It may be beneficial to evaluate the effect of compositionality-favouring techniques on non-compositional phenomena to ensure their effect is not detrimental.",
"We are grateful to Rico Sennrich for providing feedback on an earlier version of the paper.",
"Many thanks to Agostina Calabrese, Matthias Lindemann, Gautier Dagan, Irene Winther, Ronald Cardenas, Helena Fabricius-Vieira and Emelie van de Vreken for data annotation and assistance with queries about their native languages.",
"VD is supported by the UKRI Centre for Doctoral Training in Natural Language Processing, funded by the UKRI (grant EP/S022481/1) and the University of Edinburgh, School of Informatics and School of Philosophy, Psychology & Language Sciences.",
"IT acknowledges the support of the European Research Council (ERC StG BroadSem 678254) and the 3616 Dutch National Science Foundation (NWO Vidi 639.022.518).",
"Raymond W Gibbs Jr, Raymond W Gibbs, and Jr Gibbs.",
"1994.",
"The poetics of mind: Figurative thought, language, and understanding .",
"Cambridge University Press.",
"H. Paul Grice.",
"1975.",
"Logic and conversation.",
"Syntax and Semantics , 3:4158.",
"H. Paul Grice.",
"1989.",
"Studies in the Way of Words .",
"Harvard University Press."
]
| [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
]
|
[
"We present a fast and scalable architecture called Explicit Modular Decomposition (EMD), in which we incorporate both classification-based and extraction-based methods and design four modules (for classification and sequence labelling) to jointly extract dialogue states.",
"Experimental results based on the MultiWoz 2.0 dataset validates the superiority of our proposed model in terms of both complexity and scalability when compared to the state-of-the-art methods, especially in the scenario of multi-domain dialogues entangled with many turns of utterances.",
"Dialogue state tracking (DST), responsible for extracting user goals/intentions from dialogues, is a core component in task-oriented dialogue sys-tems (Young et al., 2013).",
"A dialogue state is commonly represented as a ( DOMAIN , SLOT TYPE , SLOT VALUE ) triplet, e.g., (hotel, people, 3).",
"We show an illustrated example of a multi-domain dialogue in Figure 1, which involves two domains, i.e., TRAIN and HOTEL .",
"Previous approaches for DST usually fall into the following four categories: (1) adopt encoder-decoder models to generates states (Kim et al., 2020; Ren et al., 2019; Li et al., 2019; Lee et al., 2019; Wu et al., 2019) ; (2) cast DST as a multi-label classification task when a full candidate-value list is available (Shan et al., 2020; Ramadan et al., 2018; Zhong et al., 2018; Ren et al., 2018); (3) employ span-based methods to directly extract the states (Chao and Lane, 2019; Gao et al., 2019); and (4) combine both classification-based and span-based methods to jointly complete the dialogue state extraction (Zhang et al., 2019).",
"The most related work to ours is DS-DST (Zhang et al., 2019), a joint model which highlights the problem that using classification-based or span-Figure 1: A multi-domain dialogue example extracted from MultiWoz 2.0.",
"The S-type slot values are marked in bold and the arrow points to a pair of C-type slots and its corresponding value.",
"The domain discussed changes from train to hotel at the fourth turn.",
"Refer to Section 2 for the definitions of C-type and S-type .",
"based approach alone is insufficient to cover all cases of DST in the task-oriented dialogue.",
"While DS-DST has achieved some promising result on dialogue state tracking and demonstrated the utility of combining these two types of methods, some problems still remain unaddressed.",
"On one hand, since the model is conditioned on domain-slot pairs, the computational complexity is not constant and will grow as the number of domains and slots involved in dialogues increases.",
"To be more specific, if there are 1000 domain-slot pairs, the model needs to run 1000 times to obtain the expected dialogue states for the current turn at each time, which is a huge computational overhead.",
"On the other hand, previous works usually directly concatenate the history content and the current utterance as input, which is difficult to scale in the multi-turn scenarios, especially when the number of turns of a dialogue is large.",
"Furthermore, we observe that generative approaches may generate some domain outlier 1 triplets due to lack of domain constraints.",
"To tackle these issues, we propose a fast and 1 We refer a predicted result as domain outlier when slot types are out of the domain pertaining to current utterances.",
"scalable method called EMD, where we decompose DST into three classification modules and one sequence labeling module to jointly extract the dialogue states.",
"The benefits of our approach are summarised below: Efficient : Different to the previous work, we employ a sequence labeling approach to directly annotate the domain-slot values in the utterance instead of iterating over all domain-slot pairs one by one, and thus greatly reduce the model complexity.",
"Constrained output : To effectively model the relationship between the predicted domain and its associated slots, as well as to reduce the occurrence of domain outlier results, we propose a list-wise global ranking approach which uses Kullback-Leibler divergence to formulate the training objective.",
"Scalable : Based on turn-level utterances rather than the whole history dialogue content, our proposed model offers better scalability, especially in tackling dialogues with multiple turns.",
"Additionally, we employ a correction module to handle the changes of the states as the dialogue proceeds.",
"Formally, a multi-turn dialogue is represented as T = { ( s 1 , u 1 , d 1 ) , ( s 2 , u 2 , d 2 ) , , ( s n , u n , d n ) } , d i D , where s i , u i and d i refer to the system utterance, the user utterance, and the domain at turn i , respectively 2 , and D represents the set of all domains in the training dataset.",
"The overall architecture of our model is shown in Figure",
"2. In our proposed model, we choose MT-DNN (Liu et al., 2019), pretrained model which has the same architecture as BERT but trained on multiple GLUE tasks (Wang et al., 2019).",
"MT-DNN has been shown to be a better contextual feature extractor for downstream NLP tasks.",
"Given dialogue utterances as input, we represent the output of MT-DNN as { H [ CLS ] , H 1 , H 2 , , H n } , where n is the length of the concatenation of the system and user utterances.",
"As a sentence-level representation, H [ CLS ] is expected to encode the information of the whole input sequence (Devlin et al., 2019; Liu et al., 2019).",
"Based on these contextual representations, we predict the domain (see 2.1) and belief 2 We assume that the turn-level utterances only contain one domain, and the Multiwoz 2.0 dataset we use in this paper also conforms to this assumption.",
"Figure 1 shows a typical multi-domain dialogue example, from which we can observe that some slot values can be directly found from utterances (e.g. cambridge and london ), while other slot values are implicit which are more challenging to discover, e.g., requiring classification to infer the values (e.g. internet:Yes ).",
"We divide slots into two categories that are handled by two two separate modules: S-type slots whose values could be extracted from dialogue utterances, and C-type slots whose values do not appear in utterances and are chosen from one of the three values { yes, no, don't care } .",
"In a multi-domain dialogue, the target domain may change as the dialogue proceeds.",
"Different from some previous works (Chen et al., 2019; Castel-lucci et al., 2019), which directly use the first hidden state ( H [ CLS ] ), in our model, apart from H [ CLS ] , we additionally incorporate D l , the domain result of the last turn into the our domain prediction module.",
"The rationale behind is that when the domain of current utterances is not explicit, D l can provide useful reference information for domain identification.",
"Formally, the domain is predicted as: y d = softmax ( W d [ H [ CLS ] ; E ( D l )]) (1) D c = arg max( y d ) , D c D (2) where ; denotes the concatenation operation and E ( ) embeds a word into a distributed representation using fixed MT-DNN (Liu et al., 2019).",
"D c is the predicted domain result.",
"Domain-slot-matching constraints R To prevent our model from predicting some slots not belonging to the current domain, we generate a domain constrained contextual record R R 1 ( s +1) , where s is number of S-type slots of all domains 3 .",
"Concretely speaking, R is a distribution over all S-type slots and [EMPTY] using R = softmax ( WR [ H [ CLS ] ; E ( D l ]) (3) 3 We add a [EMPTY] , the value of which is expected to be 1 when there is no slot needed to be predicted.",
"In particular, LR , the loss for R is defined as the Kullback-Leibler (KL) divergence between Div ( R real || R ) , where distribution R real from the ground truth is computed as follows: If there is no slot required to be predicted, R real [ EMPTY ] receives a probability mass of 1 for the special slot [EMPTY] .",
"If the number of slots needed to be predicted is k ( 1) , then corresponding k slot positions receive an equal probability mass of 1 /k .",
"Next, we employ a sequence labeling approach to directly annotate the domain-slot values in the utterance instead of iterating over all domain-slot pairs one by one.",
"Specifically, to tag S-type slots of the given input, we feed the final hidden states of H 1 , H 2 , , H n into a softmax layer to classify all the S-type slots, y si = softmax ( W s H i ) , i [1 , 2 , , N ] (4) Instead of directly predicting S-type slot results based on y si , we introduce a domain-slot-matching constraint R , which helps avoid generating S-type slots that do not belong to the predicted domain.",
"The multiplication operation is given below, y si = R (cid:12) y si (5) where (cid:12) is the element-wise multiplication.",
"Given the currently predicted domain result D c we build a set CD c which contains all C-type slots from all domains D .",
"If CD c is empty, it indicates that there is no C-type slot needed to be predicted in the current domain.",
"Otherwise, we classify each slot c D c i in CD into one of the following following categories, i.e., {yes, no, don't care}, with the classification function below.",
"Previous models such as TRADE (Wu et al., 2019) and COMER (Ren et al., 2019) requires that all dialogue states need to be predicted from scratch at each turn, including those dialogue states that have already been predicted at previous turns.",
"This poses a big challenge to the model in terms of scalability, especially when the number of dialogue turns increases.",
"Conversely, the input of our model consists of the system utterance and the user utterance at the current turn, so our model only outputs the estimates of the dialogue states for the current turn, and the previous dialogues are directly included where no re-prediction is needed.",
"However, there is an issue with direct inclusion of previously predicted results in that some states may need to be updated or removed as the dialogue proceeds.",
"For example, a user firstly looks for a hotel located in the center area, then a state (hotel, area, center) is estimated.",
"Subsequently, the user utters a specified hotel name, e.g. I wanna the King House , then the previous state (hotel, area, center) is outdated and should be removed.",
"To this end, we design the dialogue state correction module to update previously predicted results in order to improve the precision of the outputted dialogues states at each turn.",
"Similar to the C-type classification module, we cast this situation as a classification task, and for each triple tuple p from the previous dialogue states, the classifier is formulated as y p = sigmoid ( W p [ p ; E ( D l ); H [ CLS ] ]) (7) Here each item in p is embedded using E ( ) and p is the embedding sum of the three items in p .",
"During training, we use cross entropy loss for y d , y c , y s and y p , which are represented as L y d , L y c , L y s and L y p , respectively.",
"The loss for R (denoted as LR ) is defined as Kullback-Leibler (KL) divergence between R real and R (i.e, KL ( R real || R ) ).",
"All parameters are jointly trained by minimizing the weighted-sum of five losses ( , , , , (cid:15) are hyper-parameters), Loss = L y d + L y c + L y s + L y p + (cid:15)L R (8) 2.5 Analysis of model complexity Table 1 reports the Inference Time Complexity (ITC) proposed by (Ren et al., 2019), which is used to measure the model complexity.",
"ITC calculates how many times inference must be performed to complete a prediction of the belief state in a dialogue turn.",
"By comparison, we can observe that our model achieves the lowest complexity, O (1) , attributed to the modular decomposition and the usage of the sequence label based model.",
"Dataset We evaluate our model performance based on the MultiWoZ 2.0 dataset (Budzianowski et al., 2018), which contains 10 , 000 dialogues of 7 domains and 35 domain-slot pairs.",
"Detailed dataset statistics is summarised in Table",
"2. Evaluation metrics We utilize joint goal accuracy (JGA) (Henderson et al., 2014) to evaluate the model performance.",
"Joint goal accuracy is the accuracy of the dialogue state of each turn and a dialogue state is regarded as correct only if all the values of slots are correctly predicted.",
"Implementation details The hyper-parameters of our model go as follows: both the embedding and the hidden size is 1024 ; we used a learning rate of 0.0001 with a gradient clip of 2.0, mini-batch SGD with a batch size of 32 , and Adam optimizer (Kingma and Ba, 2014) for 50 epoch training.",
"We set a value of 1 to the five weighted hyper-parameters: , , , , (cid:15) .",
"Overall comparison We compare our models against six strong baselines on the multi-domain dataset MultiWoz.",
"Results are reported in Table 3 based on joint goal accuracy (JGA).",
"Our model achieves the best performance of 50 .",
"18% in the multi-domain testset, while the accuracy achieved in the single-domain is on par with the state-of-the-art results, which demonstrates the superiority of our model.",
"Analysis of model scalability We select 200 samples from the testing dataset, in which each dialogue has more than 8 turns of utterances between the system and the user.",
"Then, taking the turn number 6 as a threshold, we divide the dialogue content into two categories, i.e., COLD and Turn Previous States Domain Target states Predicted states for the current turn COMMER TRADER EMD 1 {} Hotel (hotel, internet, yes) (hotel, internet, yes) (hotel, internet, yes) (hotel, internet, yes) 3 (hotel, internet, yes) (hotel, name, holiday inn) Taxi (hotel, internet, yes) (hotel, name, holiday inn) (taxi, destination, holiday inn) (hotel, internet, yes) (hotel, name, holiday inn) (train, destination, holiday inn) (hotel, internet, yes) (hotel, name, holiday inn) (taxi, destination, holiday inn) (hotel, internet, yes) (hotel, name, holiday inn) (taxi, destination, holiday inn) ... 8 (hotel, internet, yes) (hotel, name, holiday inn) (taxi, destination, holiday inn) Taxi (hotel, internet, yes), (hotel, name, holiday inn), (taxi, destination, holiday inn) (hotel, internet, yes) (hotel, name, holiday inn) (train, destination, holiday inn) (hotel, internet, no) (hotel, name, holiday inn) (taxi, destination, holiday inn) (hotel, internet, yes) (hotel, name, holiday inn) (taxi, destination, holiday inn) Figure 3: Case study of predicated states by our model and two baselines.",
"From Table 4, we observe that the model performance has a big drop for the four baseline models, but our model achieves a relatively stable performance, achieving 51.01% in HOT and 51 .",
"89% in COLD , respectively.",
"This demonstrates that our model is not only fast in terms of inference speed (cf. 2.5), but also has a good scalability which can maintain a high accuracy even when the dialogue proceeds into more turns and the input length becomes larger.",
"Ablation study We conduct two ablation experiments to investigate the impacts of D l and R .",
"We introduce a metric, called outlierslot ratio (OSR), denoting the proportion of slots predicted by our model that do not belong to the current domain.",
"From Table 5, we notice that adding D l improves the domain accuracy, where one possible reason is that some utterances may not have a clear domain attribute, and thus the incorporated previous domain is believed to provide useful guiding information in domain prediction.",
"Besides, by comparing OSR with and without using R , we can observe that using R reduces the proportion of generating slots that do not align to the predicted domain, which further improves the model performance.",
"itatively, we show an exemplary dialogue and illustrate some generated results by EMD and two baseline models in Figure",
"3. At turn 3 when the dialogue domain change from hotel to taxi , COMMER fails to capture the domain information and generates a domain outlier, train , which does not conform to the current context.",
"Conversely, dialogue generated by our model always conforms to the domain at the current turn, which may benefit from the incorporation of the domain constrained contextual record R .",
"Besides, another observation is that as the dialogue proceeds to the turn 8 when the history dialogue content accumulates, TRADER makes an incorrect prediction in the hotel-internet slot, which is correctly identified at the turn 1 .",
"One possible reason is that it becomes more challenging for the model to correctly predict all dialogue state from scratch when both the history dialogue content and states involved increase.",
"Instead of repeatedly generating those previously predicted states at each turn, our model only outputs the states for the current turn, and updates previous dialogue states with a separate module.",
"In this paper, we propose to decompose DST into multiple submodules to jointly estimate dialogue states.",
"Experimental results based on the MultiWoz 2.0 dataset show that our model not only reduces the model complexity, but also gives high scalability in coping with multi-domain and long task-oriented dialogue scenarios."
]
| [
"method",
"objective",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"method",
"method",
"objective",
"objective",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result"
]
|
[
"Evaluating the quality of a dialogue interaction between two agents is a difficult task, especially in open-domain chit-chat style dialogue.",
"There have been recent efforts to develop automatic dialogue evaluation metrics, but most of them do not generalize to unseen datasets and/or need a human-generated reference response during inference, making it infeasible for online evaluation.",
"Here, we propose an unreferenced automated evaluation metric that uses large pre-trained language models to extract latent representations of utterances, and leverages the temporal transitions that exist between them.",
"We show that our model achieves higher correlation with human annotations in an online setting, while not requiring true responses for comparison during inference.",
"Recent approaches in deep neural language generation have opened new possibilities in dialogue generation (Serban et al., 2017; Weston et al., 2018).",
"Most of the current language generation efforts are centered around language modelling or machine translation (Ott et al., 2018), which are evaluated by comparing directly against the reference sentences.",
"In dialogue, however, comparing with a single reference response is difficult, as there can be many reasonable responses given a context that have nothing to do with each other (Liu et al., 2016).",
"Still, dialogue research papers tend to report scores based on word-overlap metrics from the machine translation literature (e.g. BLEU (Papineni et al., 2002), METEOR (Denkowski and Lavie, 2014)).",
"However word-overlap metrics aggressively penalize the generated response based on lexical differences with the ground truth and correlate poorly to human judgements (Liu et al., 2016).",
"One can build dialogue evaluation metrics in two ways: referenced metrics, which compare the generated response with a provided ground-truth response (such as the above word-overlap metrics), or an unreferenced metrics, which evaluate the generated response without any such comparison.",
"Lowe et al. (2017) propose a learned referenced metric named ADEM, which learns an alignment score between context and response to predict human score annotations.",
"However, since the score is trained to mimic human judgements, it requires collecting large-scale human annotations on the dataset in question and cannot be easily applicable to new datasets (Lowe, 2019).",
"Recently, Tao et al. (2017) proposed a hybrid referenced-unreferenced metric named RUBER, where the metric is trained without requiring human responses by bootstrapping negative samples directly from the dataset.",
"However, referenced metrics (including RUBER, as it is part referenced) are not feasible for evaluation of dialogue models in an online settingwhen the model is pitched against a human agent (model-human) or a model agent (model-model)due to lack of a reference response.",
"In this setting, models are usually evaluated directly by humans, which is costly and requires careful annotator training (Li et al., 2019).",
"The contributions of this paper are (1) a completely unsupervised unreferenced metric MAUDE ( M etric for a utomatic U nreferenced d ialogue e valuation), which leverages state-of-the-art pre-trained language models (Devlin et al., 2018; Sanh et al., 2019), combined with a novel discourse-structure aware text encoder and contrastive training approach; and (2) results showing that MAUDE has good correlation with human judgements.",
"We consider the problem of evaluating the response of a dialogue system, where an agent is provided with a sequence of sentences (or utterances) c = { u 1 , u 2 , ..., u n } (termed as context ) to generate a response r = u n +1 .",
"Each utterance, u i , can be represented as a set of words u i = { w 1 , w 2 , ..., w n } .",
"An utterance u i can be represented as a vector as h i = f e ( u i ) , where f e is an encoder that encodes the words into a fixed vector representation.",
"This work focuses on the evaluation of generative neural dialogue models , which typically consist of an encoder-decoder style architecture that is trained to generate u n +1 word-by-word (Serban et al., 2017).",
"The response of a generative model is typically evaluated by comparing with the ground-truth response using various automatic word-overlap metrics, such as BLEU or METEOR.",
"These metrics, along with ADEM and RUBER, are essentially single-step evaluation metrics, where a score is calculated for each context-response pair.",
"If a dialogue D i contains n utterances, we can extract n 1 context-response pairs : ( c 1 : { u 1 } , r 1 : { u 2 } ) , ( c 2 : { u 1 , u 2 } , r 2 : { u 3 } ) , . . . , ( c n 1 : { u 1 . . . u n 1 } , r n 1 : u n ) .",
"In this paper, we are interested in devising a scalar metric that can evaluate the quality of a context-response pair: score ( c i , r i ) = R (0 , 1) .",
"A key benefit of this approach is that this metric can be used to evaluate online and also for better training and optimization, as it provides partial credit during response generation.",
"We propose a new model, MAUDE , for online unreferenced dialogue evaluation.",
"We first describe the general framework behind MAUDE , which is inspired by the task of measuring alignment in natural language inference (NLI) (Williams et al., 2017).",
"It involves training text encoders via noise contrastive estimation (NCE) to distinguish between valid dialogue responses and carefully generated negative examples.",
"Following this, we introduce our novel text encoder that is designed to leverage the unique structural properties of dialogue.",
"MAUDE is designed to output a scalar score ( c i , r i ) = R (0 , 1) , which measures how appropriate a response r i is given a dialogue context c i .",
"This task is analogous to measuring alignment in NLI, but instead of measuring entailment or contradiction, our notion of alignment aims to quantify the quality of a dialogue response.",
"As in NLI, we approach this task by defining encoders f e ( c ) and f e ( r ) to encode the context and response, a combination function f comb ( . ) to combine the representations, and a final classifier f t ( . ) , which outputs the alignment score: score ( c, r ) = ( f t ( f comb ( f 1 e ( c ) , f 2 e ( r ))) .",
"The key idea behind an unreferenced dialogue metric is the use of Noise Contrastive Estimation (NCE) (Gutmann and Hyvarinen, 2010) for training.",
"Specifically, we train the model to differentiate between a correct response (score ( c, r ) 1 ), and a negative response (score ( c, r ) 0 ), where r represents a candidate false response for the given context c .",
"The loss to minimize contains one positive example and a range of negative examples chosen from a sampling policy P ( r ) : L = log( score ( c, r )) E r P ( r ) log( score ( c, r )) .",
"Syntactic negative samples .",
"We consider three variants of syntax level adversarial samples: word-order (shuffling the ordering of the words of r ), word-drop (dropping x % of words in r ) and word-repeat (randomly repeating words in r ).",
"Semantic negative samples .",
"We also consider three variants of negative samples that are syntactically well formed, but represent corruption in the semantic space.",
"First, we choose a response r j which is chosen at random from a different dialogue such that r j (cid:54) = r i ( random utterance ).",
"Second, we use a pre-trained seq2seq model on the dataset, and pair random seq2seq generated response with r i ( random seq2seq ).",
"Third, to provide a bigger variation of semantically negative samples, for each r i we generate high-quality paraphrases r bi using Back-Translation (Edunov et al., 2018).",
"We pair random Back-Translations r bj with r i as in the above setup ( random back-translation ).",
"We also provide the paired r bi as positive example for the models to learn variation in semantic similarity.",
"We further discuss the effect of different sampling policies in Appendix C. Dialogue-structure aware encoder .",
"Traditional NLI approaches (e.g., Conneau et al. (2017)) use the general setup of Equation 1 to score context-response pairs.",
"The encoder f e is typically a Bidirectional LSTMor, more recently, a BERT-based model (Devlin et al., 2018), which uses a large pre-trained language model.",
"f comb is defined as in Conneau et al. (2017): f comb ( u, v ) = concat ([ u, v, u v, u v ]) .",
"However, the standard text encoders used in these traditional NLI approaches ignore the temporal structure of dialogues, which is critical in our setting where the context is composed of a sequence of distinct utterances, with natural and stereotypical transitions between them.",
"(See Appendix A for a qualitative analysis of these transitions).",
"Thus we propose a specialized text encoder for MAUDE , which uses a BERT-based encoder f BERT e but additionally models dialogue transitions using a recurrent neural network: h u i = D g f BERT e ( u i ) , h (cid:48) u i +1 = f R ( h u i , h (cid:48) u i ) , c i = W .",
"where h u i R d is a downsampled BERT representation of the utterance u i (using a global learned mapping D g RB d ).",
"h (cid:48) u i is the hidden representation of f R for u i , where f R is a Bidirectional LSTM.",
"The final representation of the dialogue context is learned by pooling the individual hidden states of the RNN using max-pool (Equation 4).",
"This context representation is mapped into the response vector space using weight W , to obtain c i .",
"We then learn the alignment score between the context c i and response r i 's representation h r i following Equation 1, by using the combination function f comb being the same as in Equation 3.",
"To empirically evaluate our proposed unreferenced dialogue evaluation metric, we are interested in answering the following key research questions: Q1: How robust is our proposed metric on different types of responses?",
"correlate with human judgements?",
"Datasets .",
"For training MAUDE , we use PersonaChat (Zhang et al., 2018), a large-scale open-domain chit-chat style dataset which is collected by human-human conversations over provided user persona .",
"We extract and process the dataset using ParlAI (Miller et al.) platform.",
"We use the public train split for our training and validation, and the public validation split for testing.",
"We use the human-human and human-model data collected by See et al. (2019) for correlation analysis, where the models themselves are trained on PersonaChat.",
"Baselines .",
"We use InferSent (Conneau et al., 2017) and unreferenced RUBER as LSTM-based baselines.",
"We also compare against BERT-NLI, which is the same as the InferSent model but with the LSTM encoder replaced with a pre-trained BERT encoder.",
"Note that these baselines can be viewed as ablations of the MAUDE framework using sim-plified text encoders, since we use the same NCE training loss to provide a fair comparison.",
"Also, note that in practice, we use DistilBERT (Sanh et al., 2019) instead of BERT in both MAUDE and the BERT-NLI baseline (and thus we refer to the BERT-NLI baseline as DistilBERT-NLI).",
"1 .",
"We first analyze the robustness of MAUDE by comparing with the baselines, by using the same NCE training for all the models for fairness.",
"We evaluate the models on the difference score, = score ( c, r ground-truth ) score ( c, r ) (Table 6).",
"provides an insight on the range of score function.",
"An optimal metric would cover the full range of good and bad responses.",
"We evaluate response r in three settings: Semantic Positive : responses that are semantically equivalent to the ground truth response; Semantic Negative : responses that are semantically opposite to the ground truth response; and Syntactic 1 DistilBERT is the same BERT encoder with significantly reduced memory footprint and training time, which is trained by knowledge distillation (Bucilu et al., 2006; Hinton et al., 2015) on the large pre-trained model of BERT.",
"Negative : responses that have been adversarially modified in the lexical units.",
"Ideally, we would want 1 for semantic and syntactic negative responses, 0 for semantic positive responses.",
"We observe that the MAUDE scores perform robustly across all the setups.",
"RUBER and InferSent baselines are weak, quite understandably so because they cannot leverage the large pre-trained language model data and thus is poor at generalization.",
"DistilBERT-NLI baseline performs significantly better than InferSent and RUBER, while MAUDE scores even better and more consistently overall.",
"We provide a detailed ablation of various training scenarios as well as the absolute raw scores in Appendix C. We also observe both MAUDE and DistilBERT-NLI to be more robust on zero-shot generalization to different datasets, the results of which are available in Appendix B. 4.2 Correlation with human judgements Metrics are evaluated on correlation with human judgements (Lowe et al., 2017; Tao et al., 2017), or by evaluating the responses of a generative model trained on the metric (Wieting et al., 2019), by human evaluation.",
"However, this introduces a bias either during the questionnaire setup or during data post-processing in favor of the proposed metric.",
"In this work, we refrain from collecting human annotations ourselves, but refer to the recent work by See et al. (2019) on PersonaChat dataset.",
"Thus, the evaluation of our metric is less subject to bias.",
"See et al. (2019) conducted a large-scale human evaluation of 28 model configurations to study the effect of controllable attributes in dialogue generation.",
"We use the publicly released model-human and human-human chat logs from See et al. (2019) to generate the scores on our models, and correlate them with the associated human judgement on a Likert scale.",
"See et al. (2019) propose to use a multi-step evaluation methodology, where the hu-R IS DNLI M Fluency 0.322 0.246 0.443 0.37 Engagingness 0.204 0.091 0.192 0.232 Humanness 0.057 -0.108 0.129 0.095 Making Sense 0.0 0.005 0.256 0.208 Inquisitiveness 0.583 0.589 0.598 0.728 Interestingness 0.275 0.119 0.135 0.24 Avoiding Repetition 0.093 -0.118 -0.039 -0.035 Listening 0.061 -0.086 0.124 0.112 Mean 0.199 0.092 0.23 0.244 Table 2: Correlation with calibrated scores between RUBER (R), InferSent (IS), DistilBERT-NLI (DNI) and MAUDE (M) when trained on PersonaChat dataset man annotators rate the entire dialogue and not a context-response pair.",
"On the other hand, our setup is essentially a single-step evaluation method.",
"To align our scores with the multi-turn evaluation, we average the individual turns to get an aggregate score for a given dialogue.",
"We investigate the correlation between the scores and uncalibrated individual human scores from 100 crowdworkers (Fig. 2), as well as aggregated scores released by See et al. (2019) which are adjusted for annotator variance by using Bayesian calibration (Kulikov et al., 2018) (Table 2).",
"In all cases, we report Spearman's correlation coefficients.",
"For uncalibrated human judgements, we observe MAUDE having higher relative correlation in 6 out of 8 quality measures.",
"Interestingly, in case of calibrated human judgements, DistilBERT proves to be better in half of the quality measures.",
"MAUDE achieves marginally better overall correlation for calibrated human judgements, due to significantly strong correlation on specifically two measures: Interestingness and Engagingness.",
"These measures answers the questions How interesting or bor-ing did you find this conversation? and How much did you enjoy talking to this user? .",
"(Re-fer to Appendix B of See et al. (2019) for the full list of questions).",
"In this work, we explore the feasibility of learning an automatic dialogue evaluation metric by leveraging pre-trained language models and the temporal structure of dialogue.",
"We propose MAUDE , which is an unreferenced dialogue evaluation metric that leverages sentence representations from large pre-trained language models, and is trained via Noise Contrastive Estimation.",
"MAUDE also learns a recurrent neural network to model the transition between the utterances in a dialogue, allowing it to correlate better with human annotations.",
"This is a good indication that MAUDE can be used to evaluate online dialogue conversations.",
"Since it provides immediate continuous rewards and at the singlestep level, MAUDE can be also be used to optimize and train better dialogue generation models, which we want to pursue as future work.",
"We would like to thank the ParlAI team (Mar-garet Li, Stephen Roller, Jack Urbanek, Emily Dinan, Kurt Shuster and Jason Weston) for technical help, feedback and encouragement throughout this project.",
"We would like to thank Shagun Sodhani and Alborz Geramifard for helpful feedback on the manuscript.",
"We would also like to thank William Falcon and the entire Pytorch Lightning community for making research code awesome.",
"We are grateful to Facebook AI Research (FAIR) for providing extensive compute / GPU resources and support regarding the project.",
"This research, with respect to Quebec Artificial Intelligence Institute (Mila) and McGill University, was supported by the Canada CIFAR Chairs in AI program."
]
| [
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"other",
"other",
"method",
"other",
"other",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other"
]
|
[
"We propose a data augmentation method for neural machine translation.",
"It works by interpreting language models and phrasal alignment causally.",
"Specifically, it creates augmented parallel translation corpora by generating (path-specific) counterfactual aligned phrases.",
"We generate these by sampling new source phrases from a masked language model, then sampling an aligned counterfactual target phrase by noting that a translation language model can be interpreted as a Gumbel-Max Structural Causal Model (Oberst and Sontag, 2019).",
"Compared to previous work, our method takes both context and alignment into account to maintain the symmetry between source and target sequences.",
"Experiments on IWSLT'15 English Vietnamese, WMT'17 English German, WMT'18 English Turkish, and WMT'19 robust English French show that the method can improve the performance of translation, backtranslation and translation robustness.",
"Neural machine translation (NMT) models (Kalch-brenner and Blunsom, 2013; Bahdanau et al., 2014; Vaswani et al., 2017) have reached state-of-the-art performance on various benchmarks.",
"However, these models frequently rely on large-scale parallel corpora for training, exhibiting degraded performance on low-resource languages (Zoph et al., 2016).",
"Further, modern NMT systems are often brittle, as noises (e.g. grammatical errors) can cause significant mistranslations (Sakaguchi et al., 2017; Michel and Neubig, 2018).",
"Data augmentation is a promising direction to overcome these issues.",
"It works by enlarging the number of data points for training without manually collecting new data.",
"It is widely used to improve diversity and robustness and to avoid overfitting on small datasets.",
"Even though data augmentation (e.g. image flipping, cropping and blurring) has GX 1 <latexit sha1_base64=\"k9WkuKGgiaiqCUB1ED1VEkollNs=\">AAACGHicbVDLSsNAFJ3UV62vqDvdBIvgqiRV0GXBhS4r2Ae0sUymN+3QySTMTIQSAn6HH+BWP8GduHXnF/gbTtIsbOuBgXPPuZd753gRo1LZ9rdRWlldW98ob1a2tnd298z9g7YMY0GgRUIWiq6HJTDKoaWoYtCNBODAY9DxJteZ33kEIWnI79U0AjfAI059SrDS0sA86gdYjT0/uUkHiZM+5CXBLOmmA7Nq1+wc1jJxClJFBZoD86c/DEkcAFeEYSl7jh0pN8FCUcIgrfRjCREmEzyCnqYcByDdJP9Dap1qZWj5odCPKytX/04kOJByGni6MztRLnqZ+J/Xi5V/5SaUR7ECTmaL/JhZKrSyQKwhFUAUm2qCiaD6VouMscBE6djmtkidCwhgWTLOYg7LpF2vOee1+t1FtWEXGZXRMTpBZ8hBl6iBblETtRBBT+gFvaI349l4Nz6Mz1lryShmDtEcjK9fey2hUA==</latexit> GX 2 <latexit sha1_base64=\"rhCGUP7ylxcMUtAWVv78y5IyK28=\">AAACGHicbVDLSsNAFJ3UV62vqDvdBIvgqiRV0GXBhS4r2Ae0sUymN+3QySTMTIQSAn6HH+BWP8GduHXnF/gbTtIsbOuBgXPPuZd753gRo1LZ9rdRWlldW98ob1a2tnd298z9g7YMY0GgRUIWiq6HJTDKoaWoYtCNBODAY9DxJteZ33kEIWnI79U0AjfAI059SrDS0sA86gdYjT0/uUkHST19yEuCWdJNB2bVrtk5rGXiFKSKCjQH5k9/GJI4AK4Iw1L2HDtSboKFooRBWunHEiJMJngEPU05DkC6Sf6H1DrVytDyQ6EfV1au/p1IcCDlNPB0Z3aiXPQy8T+vFyv/yk0oj2IFnMwW+TGzVGhlgVhDKoAoNtUEE0H1rRYZY4GJ0rHNbZE6FxDAsmScxRyWSbtec85r9buLasMuMiqjY3SCzpCDLlED3aImaiGCntALekVvxrPxbnwYn7PWklHMHKI5GF+/fNOhUQ==</latexit> GX 3 <latexit sha1_base64=\"GGmnjJWgBubbJgjXJttnNZcxV6g=\">AAACGHicbVDLSsNAFJ34rPUVdaebYBFclaQVdFlwocsK9gFtLJPpTTt0MgkzE6GEgN/hB7jVT3Anbt35Bf6GkzQL23pg4Nxz7uXeOV7EqFS2/W2srK6tb2yWtsrbO7t7++bBYVuGsSDQIiELRdfDEhjl0FJUMehGAnDgMeh4k+vM7zyCkDTk92oagRvgEac+JVhpaWAe9wOsxp6f3KSDpJ4+5CXBLOmmA7NiV+0c1jJxClJBBZoD86c/DEkcAFeEYSl7jh0pN8FCUcIgLfdjCREmEzyCnqYcByDdJP9Dap1pZWj5odCPKytX/04kOJByGni6MztRLnqZ+J/Xi5V/5SaUR7ECTmaL/JhZKrSyQKwhFUAUm2qCiaD6VouMscBE6djmtkidCwhgWTLOYg7LpF2rOvVq7e6i0rCLjEroBJ2ic+SgS9RAt6iJWoigJ/SCXtGb8Wy8Gx/G56x1xShmjtAcjK9ffnmhUg==</latexit> GX 4 <latexit sha1_base64=\"RQRWpR4wEM/zCK6MyWqydogiQ+w=\">AAACGHicbVDLSsNAFJ34rPUVdaebYBFclaQWdFlwocsK9gFtLJPpTTt0MgkzE6GEgN/hB7jVT3Anbt35Bf6GkzQL23pg4Nxz7uXeOV7EqFS2/W2srK6tb2yWtsrbO7t7++bBYVuGsSDQIiELRdfDEhjl0FJUMehGAnDgMeh4k+vM7zyCkDTk92oagRvgEac+JVhpaWAe9wOsxp6f3KSDpJ4+5CXBLOmmA7NiV+0c1jJxClJBBZoD86c/DEkcAFeEYSl7jh0pN8FCUcIgLfdjCREmEzyCnqYcByDdJP9Dap1pZWj5odCPKytX/04kOJByGni6MztRLnqZ+J/Xi5V/5SaUR7ECTmaL/JhZKrSyQKwhFUAUm2qCiaD6VouMscBE6djmtkidCwhgWTLOYg7LpF2rOhfV2l290rCLjEroBJ2ic+SgS9RAt6iJWoigJ/SCXtGb8Wy8Gx/G56x1xShmjtAcjK9fgB+hUw==</latexit> i would love a sandwich ich wrde ein sandwich lieben GY 1 <latexit sha1_base64=\"z1XM2OhhTgc51FaxMG+3wlSz3zs=\">AAACGHicbVDLSsNAFJ34rPUVdaebwSK4KkkVdFlwocsK9iFtDJPpTTt08mBmIpQQ8Dv8ALf6Ce7ErTu/wN9w0mZhWw8MnHvOvdw7x4s5k8qyvo2l5ZXVtfXSRnlza3tn19zbb8koERSaNOKR6HhEAmchNBVTHDqxABJ4HNre6Cr3248gJIvCOzWOwQnIIGQ+o0RpyTUPewFRQ89PrzM3tbOHSUkJT+8z16xYVWsCvEjsglRQgYZr/vT6EU0CCBXlRMqubcXKSYlQjHLIyr1EQkzoiAygq2lIApBOOvlDhk+00sd+JPQLFZ6ofydSEkg5DjzdmZ8o571c/M/rJsq/dFIWxomCkE4X+QnHKsJ5ILjPBFDFx5oQKpi+FdMhEYQqHdvMFqlzAQE8T8aez2GRtGpV+6xauz2v1K0ioxI6QsfoFNnoAtXRDWqgJqLoCb2gV/RmPBvvxofxOW1dMoqZAzQD4+sXfMehUQ==</latexit> GY 3 <latexit sha1_base64=\"WCQxaEeNXu9Aufbl2yA+fYO1yGg=\">AAACGHicbVDLSsNAFJ3UV62vqDvdBIvgqiStoMuCC11WsA9pY5hMb9qhkwczE6GEgN/hB7jVT3Anbt35Bf6GkzQL23pg4Nxz7uXeOW7EqJCm+a2VVlbX1jfKm5Wt7Z3dPX3/oCPCmBNok5CFvOdiAYwG0JZUMuhFHLDvMui6k6vM7z4CFzQM7uQ0AtvHo4B6lGCpJEc/GvhYjl0vuU6dpJE+5CXBLLlPHb1q1swcxjKxClJFBVqO/jMYhiT2IZCEYSH6lhlJO8FcUsIgrQxiAREmEzyCvqIB9kHYSf6H1DhVytDwQq5eII1c/TuRYF+Iqe+qzuxEsehl4n9eP5bepZ3QIIolBGS2yIuZIUMjC8QYUg5EsqkimHCqbjXIGHNMpIptbotQuQAHliVjLeawTDr1mtWo1W/Pq02zyKiMjtEJOkMWukBNdINaqI0IekIv6BW9ac/au/ahfc5aS1oxc4jmoH39AoAToVM=</latexit> GY 4 <latexit 
sha1_base64=\"V4NqcQT+xKxlby5wn7lpUAxZg8Q=\">AAACGHicbVDLSsNAFJ3UV62vqDvdBIvgqiS1oMuCC11WsA9pY5hMb9qhkwczE6GEgN/hB7jVT3Anbt35Bf6GkzQL23pg4Nxz7uXeOW7EqJCm+a2VVlbX1jfKm5Wt7Z3dPX3/oCPCmBNok5CFvOdiAYwG0JZUMuhFHLDvMui6k6vM7z4CFzQM7uQ0AtvHo4B6lGCpJEc/GvhYjl0vuU6dpJE+5CXBLLlPHb1q1swcxjKxClJFBVqO/jMYhiT2IZCEYSH6lhlJO8FcUsIgrQxiAREmEzyCvqIB9kHYSf6H1DhVytDwQq5eII1c/TuRYF+Iqe+qzuxEsehl4n9eP5bepZ3QIIolBGS2yIuZIUMjC8QYUg5EsqkimHCqbjXIGHNMpIptbotQuQAHliVjLeawTDr1mnVeq982qk2zyKiMjtEJOkMWukBNdINaqI0IekIv6BW9ac/au/ahfc5aS1oxc4jmoH39AoG5oVQ=</latexit> GY 5 <latexit sha1_base64=\"V7US2RzZJ4vJ3zHJ61FCV1eBSc4=\">AAACGHicbVDLSsNAFJ3UV62vqDvdBIvgqiRV0WXBhS4r2Ie0MUymN+3QyYOZiVBCwO/wA9zqJ7gTt+78An/DSZqFbT0wcO4593LvHDdiVEjT/NZKS8srq2vl9crG5tb2jr671xZhzAm0SMhC3nWxAEYDaEkqGXQjDth3GXTc8VXmdx6BCxoGd3ISge3jYUA9SrBUkqMf9H0sR66XXKdOcp4+5CXBLLlPHb1q1swcxiKxClJFBZqO/tMfhCT2IZCEYSF6lhlJO8FcUsIgrfRjAREmYzyEnqIB9kHYSf6H1DhWysDwQq5eII1c/TuRYF+Iie+qzuxEMe9l4n9eL5bepZ3QIIolBGS6yIuZIUMjC8QYUA5EsokimHCqbjXICHNMpIptZotQuQAHliVjzeewSNr1mnVaq9+eVRtmkVEZHaIjdIIsdIEa6AY1UQsR9IRe0Ct60561d+1D+5y2lrRiZh/NQPv6BYNfoVU=</latexit> GX 5 <latexit sha1_base64=\"/6YisMcD2nY5Ql5gU77jQTisf+Y=\">AAACGHicbVDLSsNAFJ34rPUVdaebYBFclaQquiy40GUF+4A2lsn0ph06mYSZiVBCwO/wA9zqJ7gTt+78An/DSZqFbT0wcO4593LvHC9iVCrb/jaWlldW19ZLG+XNre2dXXNvvyXDWBBokpCFouNhCYxyaCqqGHQiATjwGLS98XXmtx9BSBryezWJwA3wkFOfEqy01DcPewFWI89PbtJ+cpE+5CXBLOmkfbNiV+0c1iJxClJBBRp986c3CEkcAFeEYSm7jh0pN8FCUcIgLfdiCREmYzyErqYcByDdJP9Dap1oZWD5odCPKytX/04kOJByEni6MztRznuZ+J/XjZV/5SaUR7ECTqaL/JhZKrSyQKwBFUAUm2iCiaD6VouMsMBE6dhmtkidCwhgWTLOfA6LpFWrOmfV2t15pW4XGZXQETpGp8hBl6iOblEDNRFBT+gFvaI349l4Nz6Mz2nrklHMHKAZGF+/gcWhVA==</latexit> X <latexit sha1_base64=\"DSqt/3Qa7/Ug0Q+grni0IN8OAF8=\">AAACB3icbVDLSgMxFM34rPVVdekmWARXZaYKuiy6cVnBPmA6lEx6pw3NJEOSEcrQD/AD3OonuBO3foZf4G+YaWdhWw8EDufcyz05YcKZNq777aytb2xubZd2yrt7+weHlaPjtpapotCikkvVDYkGzgS0DDMcuokCEoccOuH4Lvc7T6A0k+LRTBIIYjIULGKUGCv5vZiYESU86077lapbc2fAq8QrSBUVaPYrP72BpGkMwlBOtPY9NzFBRpRhlMO03Es1JISOyRB8SwWJQQfZLPIUn1tlgCOp7BMGz9S/GxmJtZ7EoZ3MI+plLxf/8/zURDdBxkSSGhB0fihKOTYS5//HA6aAGj6xhFDFbFZMR0QRamxLC1e07QUU8LwZb7mHVdKu17zLWv3hqtq4LToqoVN0hi6Qh65RA92jJmohiiR6Qa/ozXl23p0P53M+uuYUOydoAc7XL+rvmqg=</latexit> Y <latexit sha1_base64=\"wVsefuj4A71dBp3jPzQ+93dXKxs=\">AAACB3icbVDLSgMxFM3UV62vqks3wSK4KjNV0GXRjcsK9iHtUDLpnTY0kwxJRihDP8APcKuf4E7c+hl+gb9hpp2FbT0QOJxzL/fkBDFn2rjut1NYW9/Y3Cpul3Z29/YPyodHLS0TRaFJJZeqExANnAloGmY4dGIFJAo4tIPxbea3n0BpJsWDmcTgR2QoWMgoMVbq9iJiRpTw9HHaL1fcqjsDXiVeTiooR6Nf/ukNJE0iEIZyonXXc2Pjp0QZRjlMS71EQ0zomAyha6kgEWg/nUWe4jOrDHAolX3C4Jn6dyMlkdaTKLCTWUS97GXif143MeG1nzIRJwYEnR8KE46NxNn/8YApoIZPLCFUMZsV0xFRhBrb0sIVbXsBBTxrxlvuYZW0alXvolq7v6zUb/KOiugEnaJz5KErVEd3qIGaiCKJXtArenOenXfnw/mcjxacfOcYLcD5+gXsiZqp</latexit> GY 2 <latexit sha1_base64=\"6VoUEA0hP001E3j1j8xL/LIcIFc=\">AAACGHicbVDLSsNAFJ34rPUVdaebwSK4KkkVdFlwocsK9iFtLJPpTTt08mBmIpQQ8Dv8ALf6Ce7ErTu/wN9wkmZhWw8MnHvOvdw7x404k8qyvo2l5ZXVtfXSRnlza3tn19zbb8kwFhSaNOSh6LhEAmcBNBVTHDqRAOK7HNru+Crz248gJAuDOzWJwPHJMGAeo0RpqW8e9nyiRq6XXKf9pJY+5CUlPLlP+2bFqlo58CKxC1JBBRp986c3CGnsQ6AoJ1J2bStSTkKEYpRDWu7FEiJCx2QIXU0D4oN0kvwPKT7RygB7odAvUDhX/04kxJdy4ru6MztRznuZ+J/XjZV36SQsiGIFAZ0u8mKOVYizQPCACaCKTzQhVDB9K6YjIghVOraZLVLnAgJ4low9n8MiadWq9lm1dnteqVtFRiV0hI7RKbLRBaqjG9RATUTRE3pBr+jNeDbejQ/jc9q6ZBQzB2gGxtcvfm2hUg==</latexit> Figure 1: We interpret a translation language model p ( Y j |X , Y j ) ( Y j means that phrase Y j has been removed from sequence Y ) as a causal model.",
"become a standard technique in computer vision (Krizhevsky et al., 2012; Huang et al., 2017; Chen et al., 2020), it is non-trivial to apply in machine translation since even a slight modification to a sequence can result in drastic changes in its syntax and semantics.",
"Indeed there is relatively little work in this direction due to these difficulties (Sennrich et al., 2016; Fadaee et al., 2017; Wang et al., 2018; Gao et al., 2019; Xia et al., 2019; Kobayashi, 2018).",
"Further, work based on word replacement either ignores the contexts of replaced words or breaks the alignment between source and target sequences, both detrimental for generating high-quality data.",
"In this paper we observe that a translation language model can be interpreted as a causal model, as described in Figure 1. Doing so allows us to ask counterfactual questions of the form: Given source and target sequences, if a phrase in the source sequence is changed, how would the target sequence change?",
"We propose a data augmentation method for machine translation that generates counterfactual parallel translation data.",
"To ensure these counterfactuals are close to the original data we sample a new source phrase from a masked language model.",
"We then consider the (path-specific) counterfactual target phrase that is aligned to that source phrase (given by an unsupervised phrasal alignment method).",
"The idea is that this augmentation procedure exposes inductive biases in existing language models that enables new translation models to learn more efficiently and exhibit more robust generalisation.",
"Specifically, our augmentation procedure performs the following three steps: 1. We utilize unsupervised phrasal alignment (e.g. Neubig et al. (2011) and Dyer et al. (2013)) to obtain correspondences between source and target phrases.",
"2. A source phrase is removed and then resampled according to a trained masked language model (Devlin et al., 2018; Raffel et al., 2019).",
"3. We perform (path-specific) counterfactual inference on the causal model given by a trained translation language model (Lample and Conneau, 2019) to resample only the aligned target phrase, given the changed source phrase.",
"Different from prior work, our approach takes advantage of both source/target context and alignment for data augmentation.",
"Experiments on IWSLT'15 English Vietnamese, WMT'17 English German, and WMT'18 English Turkish show that our method improves the translation performance on both high-resource and low-resource datasets.",
"We additionally demonstrate that our method complements existing approaches such as backtranslation (Sennrich et al., 2015a).",
"Finally, we demonstrate that our method improves translation robustness (we evaluate this on the WMT'19 English French robustness dataset).",
"Neural machine translation.",
"Given a set of parallel sequences, S = { ( X i , Y i ) } Ni =1 , NMT maximizes the log-likelihood of Y given X , assuming each ( X i , Y i ) pair is independently and identically distributed: max (cid:88) ( X i , Y i ) S log p ( Y i |X i ) .",
"However, paired sequences are usually expensive to collect, as it requires an expert to translate sequences X i into another language Y i .",
"Data augmentation aims to generate new parallel sequences ( X i , Y i ) without manually collecting new data.",
"Phrasal alignment.",
"Phrasal alignment identifies the translation relationships among phrases in parallel sequences.",
"Given a parallel sequence ( X , Y ) , where X = ( X 1 = x 1 , X 2 = x 2 , ..., X |X| = x |X| ) and Y = ( Y 1 = y 1 , Y 2 = y 2 , ..., Y |Y| = y |Y| ) ( X/Y and x/y denote a phrase and its value, re-spectively), phrasal alignment h learns a mapping that projects each position i of X to a position j of Y , i.e. j = h ( i ) .",
"In this paper, we use pialign (Neubig et al., 2011) to obtain alignments.",
"Causal modelling.",
"We formulate causality using the structural causal model (SCM) framework of Pearl (2003).",
"Each SCM is a set of structural equations represented by a graph.",
"The edges of this graph specify the inputs and outputs of the structural equations.",
"Specifically, a variable V i is caused by a set of observable parent variables pa ( V i ) and unobserved variables U i if there exists a (determin-istic or stochastic) structural equation f i : V i = f i ( pa ( V i ) , U i ) .",
"If the structural equations f are identified, it is possible to compute a causal quantity called counterfactuals .",
"Counterfactuals are questions that, given the current state of the world, ask what would have changed if some variable V had been different.",
"For example, Would a person have been able to obtain a visa if they had been born in a different country?.",
"Formally we denote the counterfactual value of a variable V i , had another variable W pa ( V i ) been w (i.e., compared to its observed value w ) as V i,W w .",
"To compute counterfactuals we can follow a three-step procedure (for more details see Chapter 4 of Pearl et al. (2016)): 1. Abduction : Given a prior distribution on unobserved variables p ( U i ) , compute the posterior given all observed variables V = v : p ( U i | V = v ) ; 2. Action : Modify the structural equation for V i , so that W is fixed to the counterfactual value w (the modified equation is denoted as f i, w ); 3. Prediction : Compute the distribution p ( V i,W w | V = v ) using p ( U i | V = v ) , the observed variables v , and the modified structural equation f i, w .",
"3 Method Our goal is to take an input sequence pair ( X , Y ) and create augmented data from it.",
"We aim to do so by removing phrases, resampling them in the source sequence, and computing the counterfactual effect of doing so in the target sequence.",
"We argue that for any such augmentation method for NMT, it is crucial to leverage both contextual and alignment information, for the following reasons.",
"(1) Context : As contextual information is widely used to disambiguate words (Peters et al., 2018) and generate realistic-looking sequences (Zellers et al., 2019), it is critical to utilize contextual information to obtain grammatically-correct and semantically-sound sequences.",
"(2) Alignment : Phrasal alignment plays a critical role in statistical machine translation (Brown et al., 1993; Vogel et al., 1996).",
"As phrasal alignment provides information about which phrase in the source sequence produces a phrase in the target sequence, a data augmentation algorithm which disregards alignment risks breaking the symmetry between source and target sequences.",
"To this end, in Section 3.1, we introduce a technique called Translation-Counterfactual Word Replacement (TCWR) for leveraging both context and alignment to replace phrases in source and target sequences.",
"In Section 3.2, we propose a new data augmentation algorithm based on this replacement technique.",
"In Section 3.3, we describe the architectures used to parameterize the models.",
"Consider the sequence pair ( X , Y ) in Figure 2. A translation language model that learns p ( Y j |X , Y j ) (where Y j indicates the sequence Y with Y j removed) for all j { 1 , ..., |Y|} induces a causal graph on this pair.",
"Specifically it is fully connected, in the following way:",
"(a) all phrases in X cause all phrases in Y ,",
"(b) all phrases in Y cause all other phrases in Y (these connections are signified by the wide gray arrow in Figure 2).",
"Additionally, there are unobserved variables GX i , GY i that cause each individual phrase (more on this be-low).",
"We choose this fully connected structure to take contexts of each phrase into account.",
"Note that this graph is cyclic, yet the counterfactual distribution we care about is identifiable given the posterior of the unobserved variables (which we describe below) and the known equations of the causal model (i.e., the translation language model).",
"Consider that we have an alignment between X and Y , which singles-out the causal effects shown with black arrows in Figure 2. Our idea is to derive a new sequence pair ( X , Y ) by computing a counterfactual.",
"We propose to calculate the counterfactual corresponding to a single alignment, i.e. a path-specific counterfactual: What would Y j have looked like, had X i = x i instead of x i , given that Y j is aligned to X i , and all other phrases X i , Y j had been held constant ?.",
"This allows us to consider 1. Context : By holding all other phrases constant we control for the specific context around the changed phrases x i , y j 1 ; 2. Alignment : The derived counterfactual is based on the direct effect of X i on Y j , where this singled-out link is identified from an alignment.",
"We now outline the three steps to calculate the counterfactual.",
"The example in Figure 2 is used for illustration.",
"The goal is to sample from the following counterfactual distribution: p ( Y 2 , { X 3 x 3 , X 3 x 3 , Y 2 y 2 } |X , Y ) with the translation language model, which describes What would Y 2 have looked like, had X 3 = x 3 instead of x 3 , given that Y 2 is aligned to X 3 , and all other phrases X 3 , Y 2 had been held constant ?.",
"For ease of illustration, we assume both X 3 and Y 2 1 Further, the posterior of unobserved random variables GY j given X , Y will encode additional context w.r.t. the original sequence pair X , Y .",
"contain one token after Byte Pair Encoding (BPE) segmentation (Sennrich et al., 2015b).",
"In Section 3.3, we explain how we use a sequence-to-sequence model to generate phrases containing multiple tokens after segmentation.",
"1. Abduction.",
"The goal of the abduction step is to estimate any unobserved variables that im-pact the counterfactual.",
"As our translation language model specifies a categorical distribution p ( Y 2 |X , Y 2 ) , this unobserved randomness, i.e. the prior of GY 2 , takes the form of a Gumbel random vector.",
"This is due to the fact that random sampling from a categorical distribution can be done via a procedure called the Gumbel-Max Trick (Maddison et al., 2014b).",
"Definition 3.1 (Gumbel-Max Trick) .",
"Two steps are required to sample from a categorical distribution p ( Y ) with K categories: 1. Sample g 1 , . . . , g K Gumbel (0 , 1) .",
"Each g k can be computed as g k = log( log u k ) where u k Uniform (0 , 1) ; 2. Compute y = arg max k =1 ,...,K log p ( Y = k ) + g k .",
"y 2 = arg max k =1 ,..., | V | log p ( Y 2 = k |X , Y 2 ) + g k , s.t. g k Gumbel (0 , 1) .",
"The abduction step samples from the posterior distribution over these Gumbel random variables, given the observed pair ( X , Y ) , i.e., p ( GY 2 |X , Y ) .",
"Fortunately, sampling from the posterior is straightforward to do in two steps (Maddison et al., 2014a; Maddison and Tarlow, 2017): 1. Let y 2 = k .",
"Sample g k Gumbel (0 , 1) ; 2. For the remaining k , compute the probabilities from the model p ( Y 2 = k |X , Y 2 ) , and sample from the distribution g k Gumbel (log p ( Y 2 = k |X , Y 2 ) , 1) truncated within the range ( , g k ) .",
"The resulting samples [ g 1 , . . . , g | V | ] are from the posterior p ( GY 2 |X , Y ) .",
"We describe these steps in more detail in Algorithm 1. 2. Action.",
"In this step, we replace a phrase x 3 in the source sequence with a substitute phrase x 3 .",
"While any replacement leads to a valid counterfactual, we propose to sample x 3 as x 3 p ( X 3 |X 3 ) , where p ( X 3 |X 3 ) is given by a trained masked language model.",
"By sampling from a distribution Algorithm 1: Gumbel Posterior Sampling Input : The observed phrase y j = k Probabilities p ( Y j = k |X , Y j ) for k = 1 , . . . , | V | Output : Sampled Gumbel values g p ( GY j |X , Y ) Sample g k Gumbel(0 , 1) for k 1 to | V | do if k (cid:54) = k then // Sample from truncated Gumbel Sample h k Gumbel (0 , 1) u k = h k + log p ( Y j = k |X , Y j ) g k = log( e u k + e g k ) conditioned on the remaining phrases in X , we sample a realistic replacement word for X 3 .",
"In Figure 2, we sample x 3 = apple' in place of x 3 = house'.",
"3. Prediction.",
"Given the posterior samples [ g 1 , . . . , g | V | ] p ( GY 2 |X , Y ) and the substitute phrase x 3 , we can compute the counterfactual distribution of interest p ( Y 2 , { X 3 x 3 , X 3 x 3 , Y 2 y 2 } |X , Y ) , via the trained translation language model.",
"We do so by computing: y 2 = arg max k =1 ,..., | V | log p ( Y 2 = k | x 3 , X 3 , Y 2 ) + g k .",
"The sample y 2 from the counterfactual distribution is based on the direct effect of X 3 on Y 2 .",
"We remark that the causal model we consider was first introduced by Oberst and Sontag (2019) and called the Gumbel-Max Structural Causal Model .",
"Our insight here is that counterfactuals from this model can be used as an effective data augmentation method for machine translation.",
"Given the above procedure to replace phrases, we propose a new data augmentation method, shown in Algorithm 2. The algorithm takes an input pair of sequences ( X , Y ) and loops through every phrase X i X .",
"At each iteration with probability c it replaces the phrase pair ( x i , y j ) with ( x i , y j ) .",
"We introduce a special [MASK] token (Devlin et al., 2018) to represent a removed phrase for parameterizing both p ( X i |X i ) and p ( Y j |X , Y j ) as:",
"Eq.",
"2 only requires monolingual datasets, which are abundant.",
"On the other hand, Eq.",
"3 requires parallel corpora to train.",
"We parameterize Eq.",
"3 using a variant of the translation language model (Lample and Conneau, 2019).",
"The main difference is that only phrases in target sequences are masked, whereas Lample and Conneau (2019) mask both source and target tokens, with the goal of learning bilingual relations.",
"Another difference is that a phrase with consecutive tokens is masked, while masked tokens in Lample and Conneau (2019) are not necessarily consecutive.",
"To better tackle unknown and rare tokens, we adopt BPE to segment phrases into tokens.",
"As the number of tokens is undetermined during generation, we use a sequence-to-sequence Transformer model (Vaswani et al., 2017) to encode inputs and decode tokens one by one until a special end-of-sequence symbol is encountered.",
"More specifically, given a sequence of N tokens ( t 1 , ..., t N ) , the sequence contains a special [MASK] token signifying a masked phrase.",
"Each token t i is first projected into its embedding e t i , which is a sum of its token embedding, position embedding, and language embedding, inspired by XLM (Lample and Conneau, 2019).",
"Then, a Transformer encoder is applied to encode the tokens into their hidden representations H RN o (where o denotes the hidden size), i.e. H = Encoder ( e t 1 , ..., e t N ) .",
"The hidden representation of [MASK], h [MASK] R o , is fed into a Transformer decoder to predict the tokens of the masked phrase.",
"We learn our models p 1 ( X i |X i ) , p 2 ( Y j |X , Y j ) by maximizing the following objectives: E XD (cid:2) E i Uniform (1 ,..., |X| ) [log p 1 ( X i |X i )] (cid:3) and E ( X , Y ) S (cid:2) E j Uniform (1 ,..., |Y| ) [log p 2 ( Y j |X , Y j )] (cid:3) .",
"We categorize previous work on data augmentation for NMT into two classes, word replacement and backtranslation .",
"Word replacement.",
"WordDropout (Sennrich et al., 2016) randomly zeros out word embeddings in order to introduce noises.",
"BPEDropout (Provilkov et al., 2020) stochastically corrupts the segmentation procedure of BPE, leading to different subword segmentations with the same BPE vocabulary.",
"RAML (Norouzi et al., 2016) applies a reward-augmented maximum likelihood objective, which essentially augments target sequences with sequences sampled based on metrics, such as edit distance and BLEU score (Wang et al., 2018).",
"SwitchOut (Wang et al., 2018) extends RAML, augmenting both source and target sequences by randomly replacing words with noisy words sampled from a uniform distribution.",
"These works do not take context and alignment into account.",
"TDA (Fadaee et al., 2017) first uses two uni-directional language models to replace a word in the source sequence, before replacing the corresponding word based on a bilingual lexicon.",
"TDA does not consider contexts in target sequences and relies on a high-quality bilingual lexicon.",
"SCDA (Gao et al., 2019) uses a soft augmentation approach, where Dataset # Sequences # Words # Chars News Commentary 0.46M 10.05M 63.96M News Crawl 2010 6.8M 0.14B 0.83B Table 1: The statistics of the monolingual datasets.",
"the one-hot representation of a word is replaced by a soft distribution of words given by a language model.",
"DADA (Cheng et al., 2019) uses gradient information to generate adversarial sequences for more robust NMT.",
"AdvAug (Cheng et al., 2020) extends DADA, where embeddings of virtual sequences are sampled from an adversarial distribution for augmentation.",
"SCDA and AdvAug ignore the alignment information, thereby breaking the symmetry of source and target sequences.",
"While DADA takes both context and alignment into account, it replaces multiple words in source and target sequences simultaneously, which risks generating unnatural sequences.",
"In this paper, we utilize both alignment and contextual information to sequentially replace aligned phrases for better performance.",
"Backtranslation.",
"The idea of backtranslation dates back to statistical machine translation (Goutte et al., 2009; Bojar and Tamchyna, 2011).",
"Senrich et al. (2016) use backtranslation, where monolingual sequences in the target language are translated into the source language, and obtain substantial improvements on the WMT and IWSLT tasks.",
"Currey et al. (2017) apply backtranslation to low-resource languages, finding that even low-quality translations due to limited parallel corpora are beneficial.",
"He et al. (2016) propose a dual learning framework, where the primal task (source-to-target translation) and the dual task (target-to-source translation) teach each other through a reinforcement learning process until convergence.",
"Edunov et al. (2018) scale backtranslation to millions of monolingual data and obtain state-of-the-art performance on WMT'14 English German.",
"Xia et al. (2019) use a two-step pivoting method for improving backtranslation on low-resource languages.",
"We show that TCWR can be used together with backtranslation and obtain further improvements.",
"We now describe the improvements with the data augmentation based on TCWR.",
"We use the monolingual training data, including News Commentary and News Crawl 2010, provided by WMT'18, for Eq.",
"2, while the training set of each language pair is used for Eq.",
"3. The statistics of the monolingual and parallel corpora are summarized in Table 1 and 2, respectively.",
"To reduce memory overhead, we train a shared language model for Eq.",
"2 and 3, i.e. 1 and 2 are tied.",
"A language model is trained for each language pair to avoid performing multilingual NMT for a fair comparison with baselines, as jointly training a single model for several language pairs has been shown to be effective for both low-resource and high-resource languages (Aharoni et al., 2019).",
"Therefore, we pre-train four models for En-Tr, EnDe, En-Vi and En-Fr, respectively.",
"The encoder and decoder are composed of six layers.",
"The encoder is initialized with XLM (Lample and Conneau, 2019) pre-trained with the masked language model, while the decoder is randomly initialized.",
"The input-output embeddings are tied for reducing the size of the model (Press and Wolf, 2016).",
"To achieve faster convergence, we apply PreNorm (Nguyen and Salazar, 2019) for getting rid of the warm-up stage of Transformer.",
"The learning rate is set to 1e-5 and is linearly decayed with more training steps.",
"The hidden size o is set to 1024.",
"Same as BERT, the maximum sequence size is set to 512.",
"We use LAMB (You et al., 2019) as the optimizer.",
"GELU (Hendrycks and Gimpel, 2016) is used as the activation function.",
"16 sequences are used at each pre-training step.",
"We train the masked language model for 50% of the time and the left time is used for training the translation language model.",
"After pre-training, we use the pre-trained models to perform data augmentation on training data.",
"Then, the augmented data are combined with training data for training NMT models.",
"We use fairseq 2 to implement the NMT models.",
"The vocabularty size is 37K.",
"Six encoder and decoder layers are applied.",
"The hidden size is set to 1024.",
"16 self-attention heads are employed.",
"We use Adam as the optimizer.",
"The learning rate is initially set to 1e-7 and is gradually increased to 5e-4 with 4K warm-up steps, before applying linear decay.",
"Dropout is set to 0.3.",
"Label smoothing with the smoothing factor 0.1 is used.",
"For decoding, we use beam search, and the beam size is set to 12.",
"SacreBLEU (Post, 2018) is used as the metric.",
"We study the effect of pre-training steps on machine translation quality.",
"We use the language models at different pre-training steps and evaluate these models on the development set of the WMT'18 English Turkish task.",
"The results are shown in Figure 3. The BLEU score improves with more pre-training steps and peaks at around 110K steps.",
"We do not observe better performance with more pre-training steps, as the models become more overfitted on the training sets.",
"We plot the learning curves of the language model for En-Tr with/without XLM initialization.",
"As shown in Figure 4, the model with the XLM initialization converges faster and better compared to the model with the random initialization.",
"As XLM is trained using the masked language model objective on large-scale monolingual data, we draw the conclusion that large-scale pre-training can improve downstream language model pre-training tasks.",
"We further evaluate the models on the development set of the WMT'18 English Turkish task.",
"The model with the XLM initialization also performs better (17.49 BLEU) compared to its counterpart (16.68 BLEU).",
"Thus, the model with the XLM initialization can also generate better data for improving NMT.",
"As shown in Figure 5, we vary the sampling probability in Algorithm 2 and evaluate on the development set of the WMT'18 English Turkish task.",
"We observe that the BLEU score is maximized with sampling probability 0.2.",
"The BLEU scores decrease with larger sampling probabilities.",
"components.",
"We randomly choose a phrase with uniform distribution to replace the original phrase for ablating source and target contexts.",
"For ablating alignment, we randomly choose a position in the target sequence instead of following the alignment given by pialign.",
"We also study removing g k in Eq.",
"1. Since g k comes from the abduction step, which encodes the information from the original pair ( X , Y ) , Eq.",
"1 encourages the model to sample a new pair that is similar to the original pair.",
"Therefore, the model without g k collapses to a probabilistic approach that directly samples phrases from the translation language model, disregarding the information from the original pair.",
"The results are shown in Table 3. We observe that ablating the source context, target context and alignment are negative for translation quality, demonstrating the necessity of considering all these components for data augmentation.",
"The result of ablating g k shows the effectiveness of incorporating the original information from ( X , Y ) .",
"We evaluate the algorithms on WMT'17 English German (En-De), WMT'18 English Turkish (En-Tr) and IWSLT'15 English Vietnamese (En-Vi).",
"As shown in Table 2, En-Tr and En-Vi are two low-resource language pairs, while En-De is a high-resource language pair.",
"For En-Tr, we use newstest17 for validation and newstest18 for testing.",
"For En-De, we use new-stest16 for validation and newstest17 for testing.",
"For En-Vi, we use the TED tst2012 for validation and the TED tst2013 for testing.",
"We compare TCWR with six baselines, WordDropout, BPEDropout, SwitchOut, SCDA, TDA and DADA.",
"For WordDropout and BPEDropout, we perform a range search on its dropout probability from 0 to 1 and select the best one on de-Method En Tr En De En Vi Baseline 15.35 27.54 31.66 +TCWR 17.38 29.37 33.76 +BT 19.24 29.19 33.38 +BT +TCWR 20.19 30.26 35.72 Table 5: The BLEU scores on the testing sets with backtranslation and TCWR.",
"velopment sets.",
"Similarly, we choose the temperature with the highest score on development sets for SwitchOut.",
"For SCDA, we search the replacing probability and set it to 0.15.",
"We follow the official implementation 3 of TDA.",
"We reuse the hyperpa-rameters from Cheng et al. (2019) for DADA.",
"The results on three language pairs are shown in Table 4. Compared to the baseline with no data augmentation, TCWR yields improvements of 2.03, 1.63 and 1.79 BLEU for En-Tr, En-De and En-Vi, respectively.",
"TCWR also outperforms the other augmentation methods, which further con-firms the effectiveness of considering source context, target context, and alignment for NMT data augmentation.",
"Besides, these results demonstrate that TCWR brings consistent improvements to both low-resource and high-resource language pairs.",
"As backtranslation is a widely-used data augmentation method by utilizing monolingual data to generate new parallel pairs, we show how TCWR can be used with backtranslation.",
"To perform backtranslation, we use the monolingual sequences from News Crawl 2017, News Crawl 2010 and VNTC 4 for EnTr, En-De and En-Vi, respectively.",
"Then we perform data augmentation on both training data and backtranslated data.",
"As shown in Table 5, TCWR improves upon backtranslation, demonstrating that TCWR and backtranslation are not mutually exclusive, and TCWR can enhance the performance of backtranslation.",
"Noisy or non-standard input text (e.g. text with spelling errors and code switching) can cause significant degradation in most NMT systems.",
"We use the WMT'19 English French robustness dataset for evaluating translation robustness.",
"As the parallel pairs are scarce for this task, we com-3 https://github.com/marziehf/ DataAugmentationNMT 4 https://github.com/duyvuleo/VNTC En: Kosovo is taking a hard look at its privatisation process in light of recurring [complaints / problems ].",
"bine its training data with the English French pairs from Europarl-v7.",
"The models are validated on the development set of the MTNT dataset and tested on the released test set of the WMT'19 robustness task.",
"The results are shown in Table 7.",
"We observe that TCWR outperforms the baseline without any data augmentation or with the other methods.",
"If we regard the task as adapting from the source dataset with clean text (Europarl-v7) to the target dataset with noisy text (WMT'19 robustness dataset), TCWR helps this adaptation via enlarging training examples with language models trained using noisy and non-standard text.",
"We thereby conclude that TCWR can improve NMT robustness.",
"As shown in Table 6, we perform a case study of TCWR.",
"We observe that TCWR can reasonably substitute words in source sequences based on contexts and modify corresponding target words, which demonstrates the benefits of considering both context and alignment for augmentation.",
"We proposed a data augmentation method for NMT, which introduces a causal inductive bias that takes both context and alignment into account.",
"The method was shown to improve the performance of translation, backtranslation and translation robustness on four NMT benchmarks.",
"We would like to thank the anonymous reviewers for their insightful comments.",
"We also thank Chris Dyer and Jiatao Gu for helpful discussions."
]
| [
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"objective",
"method",
"abstain",
"result",
"objective",
"method",
"method",
"result",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"objective",
"objective",
"method",
"other",
"other",
"other",
"method",
"objective",
"method",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"objective",
"objective",
"method",
"abstain",
"other",
"other",
"method",
"other",
"method",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"objective",
"method",
"method",
"other",
"method",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"objective",
"abstain",
"other",
"other"
]
|
[
"We introduce neural particle smoothing , a sequential Monte Carlo method for sampling annotations of an input string from a given probability model.",
"In contrast to conventional particle filtering algorithms, we train a proposal distribution that looks ahead to the end of the input string by means of a right-to-left LSTM.",
"We demonstrate that this innovation can improve the quality of the sample.",
"To motivate our formal choices, we explain how our neural model and neural sampler can be viewed as low-dimensional but nonlinear approximations to working with HMMs over very large state spaces.",
"Many structured prediction problems in NLP can be reduced to labeling a lengthT input string x with a lengthT sequence y of tags.",
"In some cases, these tags are annotations such as syntactic parts of speech.",
"In other cases, they represent actions that incrementally build an output structure: IOB tags build a chunking of the input (Ramshaw and Marcus, 1999), shift-reduce actions build a tree (Yamada and Matsumoto, 2003), and finite-state transducer arcs build an output string (Pereira and Riley, 1997).",
"One may wish to score the possible taggings using a recurrent neural network, which can learn to be sensitive to complex patterns in the training data.",
"A globally normalized conditional probability model is particularly valuable because it quantifies uncertainty and does not suffer from label bias (Lafferty et al., 2001); also, such models often arise as the predictive conditional distribution p ( y | x ) corresponding to some well-designed generative model p ( x , y ) for the domain.",
"In the neural case, however, inference in such models becomes intractable.",
"It is hard to know what the model actually predicts and hard to compute gradients to improve its predictions.",
"sampling are common.",
"Unfortunately, beam search considers only the approximate-topk taggings from an exponential set (Wiseman and Rush, 2016), and importance sampling requires the construction of a good proposal distribution (Dyer et al., 2016).",
"In this paper we exploit the sequential structure of the tagging problem to do sequential importance sampling, which resembles beam search in that it constructs its proposed samples incrementallyone tag at a time, taking the actual model into account at every step.",
"This method is known as particle filtering (Doucet and Johansen, 2009).",
"We extend it here to take advantage of the fact that the sampler has access to the entire input string as it constructs its tagging, which allows it to look ahead oras we will show to use a neural network to approximate the effect of lookahead.",
"Our resulting method is called neural particle smoothing .",
"denote the prefix x 1 x t and the suffix x t +1 x T .",
"We develop neural particle smoothing a sequential importance sampling method which, given a string x , draws a sample of taggings y from p ( y | x ) .",
"Our method works for any conditional probability model of the quite general form 1 p ( y | x ) def exp GT (1) where G is an incremental stateful global scoring model that recursively defines scores G t of prefixes of ( x , y ) at all times 0 t T : G t def = G t 1 + g ( s t 1 , x t , y t ) (with G 0 def = 0 ) (2) s t def = f ( s t 1 , x t , y t ) (with s 0 given) (3) These quantities implicitly depend on x , y and .",
"Here s t is the model's state after observing the pair of lengtht prefixes ( x : t , y : t ) .",
"G t is the score-so-far 1 A model may require for convenience that each input end with a special end-of-sequence symbol: that is, x T = EOS .",
"of this prefix pair, while GT G t is the score-to-go .",
"The state s t summarizes the prefix pair in the sense that the score-to-go depends only on s t and the length( T t ) suffixes ( x t : , y t : ) .",
"The local scoring function g and state update function f may be any functions parameterized by perhaps neural networks.",
"We assume is fixed and given.",
"This model family is expressive enough to capture any desired p ( y | x ) .",
"Why?",
"Take any distribution p ( x , y ) with this desired conditionalization (e.g., the true joint distribution) and factor it as log p ( x , y )= P Tt =1 log p ( x t , y t | x : t 1 , y : t 1 ) = P Tt =1 log p ( x t , y t | s t 1 ) | {z } use as g ( s t 1 ,x t ,y t ) = GT (4) by making s t include as much information about ( x : t , y : t ) as needed for (4) to hold (possibly s t = ( x : t , y : t ) ).",
"2 Then by defining g as shown in (4), we get p ( x , y ) = exp GT and thus (1) holds for each x .",
"Our method is spelled out in 4 (one may look now).",
"It is a variant of the popular particle filtering method that tracks the state of a physical system in discrete time (Ristic et al., 2004).",
"Our particular proposal distribution for y t can be found in equations (5), (6), (25) and (26).",
"It considers not only past observations x : t as reflected in s t 1 , but also future observations x t : , as summarized by the state s t of a right-to-left recurrent neural network f that we will train: H t def = h ( s t +1 , x t +1 ) + H t +1 (5) s t def = f ( s t +1 , x t +1 ) (with s T given) (6) Conditioning the distribution of y t on future observations x t : means that we are doing smoothing rather than filtering (in signal processing terminol-ogy).",
"Doing so can reduce the bias and variance of our sampler.",
"It is possible so long as x is provided in its entirety before the sampler runswhich is often the case in NLP.",
"Why sample from p at all?",
"Many NLP systems instead simply search for the Viterbi sequence y that maximizes GT and thus maximizes p ( y | x ) .",
"If the space of states s is small, this can be done effi-ciently by dynamic programming (Viterbi, 1967); if 2 Furthermore, s t could even depend on all of x (if s 0 does), allowing direct expression of models such as stacked BiRNNs.",
"not, then A may be an option (see 2).",
"More common is to use an approximate method: beam search, or perhaps a sequential prediction policy trained with reinforcement learning.",
"Past work has already shown how to improve these approximate search algorithms by conditioning on the future (Bahdanau et al., 2017; Wiseman and Rush, 2016).",
"Sampling is essentially a generalization of maximization: sampling from exp GT temperature approaches maximization as temperature 0 .",
"It is a fundamental building block for other algorithms, as it can be used to take expectations over the whole space of possible y values.",
"For unfamiliar readers, Appendix E reviews how sampling is crucially used in minimum-risk decoding, supervised training, unsupervised training, imputation of missing data, pipeline decoding, and inference in graphical models.",
"To develop our method, it is useful to first consider exact samplers.",
"Exact sampling is tractable for only some of the models allowed by 1.1.",
"However, the form and notation of the exact algorithms in 2 will guide our development of approximations in 3.",
"An exact sequential sampler draws y t from p ( y t | x , y : t 1 ) for each t = 1 , . . . , T in sequence.",
"Then y is exactly distributed as p ( y | x ) .",
"For each given x , y : t 1 , observe that p ( y t | x , y : t 1 ) (7) p ( y : t | x ) = P y t : p ( y | x ) (8) P y t : exp GT (9) = exp ( G t + log P y t : exp ( GT G t ) | {z } call this H t ) (10) = exp ( G t 1 + g ( s t 1 , x t , y t ) + H t ) (11) exp ( g ( s t 1 , x t , y t ) + H t ) (12) Thus, we can easily construct the needed distribution (7) by normalizing (12) over all possible values of y t .",
"The challenging part of (12) is to compute H t : as defined in (10), H t involves a sum over exponentially many futures y t : .",
"(See Figure 1.)",
"We chose the symbols G and H in homage to the A search algorithm (Hart et al., 1968).",
"In that algorithm (which could be used to find the Viterbi sequence), g denotes the score-so-far of a partial solution y : t , and h denotes the optimal score-to-go.",
"Thus, g + h would be the score of the best sequence with prefix y : t .",
"Analogously, our G t + 930 x1=On x2=Thursday xt-1=Fed xt=raised xt+1=interest xt+2=rates y1=PREP y2=N yt-1=N yt=ADJ yt=V yt+1=V yt+1=N yt+2=N yt+2=N Ht x y g ( s t-1, xt, yt) Gt-1 Figure 1 : Sampling a single particle from a tagging model.",
"H t is the log of the total exponentiated scores of all sequences with prefix y : t .",
"G t and H t might be called the logprob-so-far and logprob-to-go of y : t .",
"Just as A approximates h with a heuristic h , the next section will approximate H t using a neural estimate H t (equations (5)(6)).",
"However, the specific form of our approximation is inspired by cases where H t can be computed exactly.",
"We consider those in the remainder of this section.",
"A hidden Markov model (HMM) specifies a normalized joint distribution p ( x , y ) = exp GT over state sequence y and observation sequence x , 3 Thus the posterior p ( y | x ) is proportional to exp GT , as required by equation (1).",
"The HMM specifically defines GT by equations (2)(3) with s t = y t and g ( s t 1 , x t , y t ) = log p ( y t | y t 1 ) + log p ( x t | y t ) .",
"4 In this setting, H t can be computed exactly by the backward algorithm (Rabiner, 1989).",
"(Details are given in Appendix A for completeness.) 2.2 Exact sampling from OOHMMs For sequence tagging, a weakness of (first-order) HMMs is that the model state s t = y t may contain little information: only the most recent tag y t is remembered, so the number of possible model states s t is limited by the vocabulary of output tags.",
"We may generalize so that the data generating process is in a latent state u t { 1 , . . . , k } at each time t , and the observed y t along with x t is generated from u t .",
"Now k may be arbitrarily large.",
"The 3 The HMM actually specifies a distribution over a pair of in-finite sequences, but here we consider the marginal distribution over just the lengthT prefixes.",
"model has the form p ( x , y ) = exp GT (13) = X u TY t =1 p ( u t | u t 1 ) p ( x t , y t | u t ) This is essentially a pair HMM (Knudsen and Miyamoto, 2003) without insertions or deletions, also known as an (cid:15) -free or same-length probabilistic finite-state transducer.",
"We refer to it here as an output-output HMM (OOHMM).",
"5 Is this still an example of the general model architecture from 1.1?",
"Yes.",
"Since u t is latent and evolves stochastically, it cannot be used as the state s t in equations (2)(3) or (4).",
"However, we can define s t to be the model's belief state after observing ( x : t , y : t ) .",
"The belief state is the posterior probability distribution over the underlying state u t of the system.",
"That is, s t deterministically keeps track of all possible states that the OOHMM might be injust as the state of a determinized FSA keeps track of all possible states that the original nondeterministic FSA might be in.",
"We may compute the belief state in terms of a vector of forward probabilities that starts at 0 , ( 0 ) u def = ( 1 if u = BOS (see footnote 4) 0 if u = any other state (14) and is updated deterministically for each 0 < t T by the forward algorithm (Rabiner, 1989): ( t ) u def = k X u 0 =1 ( t 1 ) u 0 p ( u | u 0 ) p ( x t , y t | u ) (15) 5 This is by analogy with the input-output HMM (IOHMM) of Bengio and Frasconi (1996), which defines p ( y | x ) directly and conditions the transition to u t on x t .",
"The OOHMM instead defines p ( y | x ) by conditionalizing (13)which avoids the label bias problem (Lafferty et al., 2001) that in the IOHMM, y t is independent of future input x t : (given the past input x : t ).",
"( t ) u can be interpreted as the logprob-so-far if the system is in state u after observing ( x : t , y : t ) .",
"We may express the update rule (15) by > t = > t 1 P where the matrix P depends on ( x t , y t ) , namely P u 0 u def = p ( u | u 0 ) p ( x t , y t | u ) .",
"The belief state s t def = J t K R k simply normalizes t into a probability vector, where J u K def = u / ( u > 1 ) denotes the normalization operator .",
"The state update (15) now takes the form (3) as desired, with f a normalized vector-matrix product: s > t = f ( s t 1 , x t , y t ) def = J s > t 1 P K (16) As in the HMM case, we define G t as the log of the generative prefix probability, G t def = log p ( x : t , y : t ) = log P u ( t ) u (17) which has the form (2) as desired if we put g ( s t 1 , x t , y t ) def = G t G t 1 (18) = log > t 1 P 1 > t 1 1 = log ( s > t 1 P 1 ) Again, exact sampling is possible.",
"It suffices to compute (9).",
"For the OOHMM, this is given by P y t : exp GT = > t t (19) where T def = 1 and the backward algorithm ( t ) v def = p ( x t : | u t = u ) (20) = X u t : , y t : p ( u t : , x t : , y t : | u t = u ) = X u 0 p ( u 0 | u ) p ( x t +1 | u 0 ) | {z } call this P uu 0 ( t +1 ) u 0 for 0 t < T uses dynamic programming to find the total probability of all ways to generate the future observations x t : .",
"Note that t is defined for a specific prefix y : t (though it sums over all u : t ), whereas t sums over all suffixes y t : (and over all u t : ), to achieve the asymmetric summation in (19).",
"Define s t def = J t K R k to be a normalized version of t .",
"The t recurrence (20) can clearly be expressed in the form s t = J P s t +1 K , much like (16).",
"2.3 The logprob-to-go for OOHMMs Let us now work out the definition of H t for OOHMMs (cf.",
"equation (35) in Appendix A for HMMs).",
"We will write it in terms of H t from 1.2.",
"Let us define H t symmetrically to G t (see (17)): H t def = log X u ( t ) u (= log 1 > t ) (21) which has the form (5) as desired if we put h ( s t +1 , x t +1 ) def = H t H t +1 = log ( 1 > P s t +1 ) (22) From equations (10), (17), (19) and (21), we see H t = log (cid:0) X y t : exp GT (cid:1) G t = log > t t ( > t 1 )( 1 > t ) + log ( 1 > t ) = log s > t s t | {z } call this C t + H t (23) where C t R can be regarded as evaluating the compatibility of the state distributions s t and s t .",
"In short, the generic strategy (12) for exact sampling says that for an OOHMM, y t is distributed as p ( y t | x , y : t 1 ) exp ( g ( s t 1 , x t , y t ) + H t ) exp ( g ( s t 1 , x t , y t ) | {z } depends on x : t , y : t + C t |{z} on x , y : t + H t |{z} on x t : ) exp ( g ( s t 1 , x t , y t ) + C t ) (24) This is equivalent to choosing y t in proportion to (19)but we now turn to settings where it is infeasible to compute (19) exactly.",
"There we will use the formulation (24) but approximate C t .",
"For completeness, we will also consider how to approximate H t , which dropped out of the above distribution (because it was the same for all choices of y t ) but may be useful for other algorithms (see 4).",
"The expressivity of an OOHMM is limited by the number of states k .",
"The state u t { 1 , . . . , k } is a bottleneck between the past ( x : t , y : t ) and the future ( x t : , y t : ) , in that past and future are conditionally independent given u t .",
"Thus, the mutual information between past and future is at most log 2 k bits.",
"In many NLP domains, however, the past seems to carry substantial information about the future.",
"The first half of a sentence greatly reduces the uncertainly about the second half, by providing information about topics, referents, syntax, semantics, and discourse.",
"This suggests that an accurate HMM language model p ( x ) would require very large k as would a generative OOHMM model p ( x , y ) of annotated language.",
"The situation is perhaps better for discriminative models p ( y | x ) , since much of 932 the information for predicting y t : might be available in x t : .",
"Still, it is important to let ( x : t , y : t ) contribute enough additional information about y t : : even for short strings, making k too small (giving log 2 k bits) may harm prediction (Dreyer et al., 2008).",
"Of course, (4) says that an OOHMM can express any joint distribution for which the mutual information is finite, 6 by taking k large enough for v t 1 to capture the relevant info from ( x : t 1 , y : t 1 ) .",
"So why not just take k to be largesay, k = 2 30 to allow 30 bits of information?",
"Unfortunately, evaluating GT then becomes very expensiveboth computationally and statistically.",
"As we have seen, if we define s t to be the belief state J t K R k , updating it at each observation ( x t , y t ) (equation (3)) requires multiplication by a k k matrix P .",
"This takes time O ( k 2 ) , and requires enough data to learn O ( k 2 ) transition probabilities.",
"As a solution, we might hope that for the inputs x observed in practice, the very high-dimensional belief states J t K R k might tend to lie near a d dimensional manifold where d (cid:28) k .",
"Then we could take s t to be a vector in R d that compactly encodes the approximate coordinates of J t K relative to the manifold: s t = ( J t K ) , where is the encoder.",
"In this new, nonlinearly warped coordinate system, the functions of s t 1 in (2)(3) are no longer the simple, essentially linear functions given by (16) and (18).",
"They become nonlinear functions operating on the manifold coordinates.",
"( f in (16) should now ensure that s > t ( J ( 1 ( s t 1 )) > PK ) , and g in (18) should now estimate log ( 1 ( s t 1 )) > P 1",
".) In a sense, this is the reverse of the kernel trick (Boser et al., 1992) that converts a low-dimensional nonlinear function to a high-dimensional linear one.",
"Our hope is that s t has enough dimensions d (cid:28) k to capture the useful information from the true J t K , and that has enough dimensions (cid:28) k 2 to capture most of the dynamics of equations (16) and (18).",
"We thus proceed to fit the neural networks f , g directly to the data, without ever knowing the true k or the structure of the original operators P R k k .",
"We regard this as the implicit justification for various published probabilistic sequence models p ( y | x ) that incorporate neural networks.",
"These models usually have the form of 1.1.",
"Most simply, ( f , g ) can be instantiated as one time step in an RNN (Aharoni and Goldberg, 2017), but it is com-6 This is not true for the language of balanced parentheses.",
"mon to use enriched versions such as deep LSTMs.",
"It is also common to have the state s t contain not only a vector of manifold coordinates in R d but also some unboundedly large representation of ( x , y : t ) (cf.",
"equation (4)), so the f neural network can refer to this material with an attentional (Bahdanau et al., 2015) or stack mechanism (Dyer et al., 2015).",
"A few such papers have used globally normalized conditional models that can be viewed as approximating some OOHMM, e.g., the parsers of Dyer et al. (2016) and Andor et al. (2016).",
"That is the case (1.1) that particle smoothing aims to support.",
"Most papers are locally normalized conditional models (e.g., Kann and Schtze, 2016; Aharoni and Goldberg, 2017); these simplify supervised training and can be viewed as approximating IOHMMs (footnote 5).",
"For locally normalized models, H t = 0 by construction, in which case particle filtering (which estimates H t = 0 ) is just as good as particle smoothing.",
"Particle filtering is still useful for these models, but lookahead's inability to help them is an expressive limitation (known as label bias ) of locally normalized models.",
"We hope the existence of particle smoothing (which learns an estimate H t ) will make it easier to adopt, train, and decode globally normalized models, as discussed in 1.3.",
"We can adopt the same neuralization trick to approximate the OOHMM's logprob-to-go H t = C t + H t .",
"We take s t R d on the same theory that it is a low-dimensional reparameterization of J t K , and define ( f , h ) in equations (5)(6) to be neural networks.",
"Finally, we must replace the definition of C t in (23) with another neural network c that works on the low-dimensional approximations: 7 C t def = c ( s t , s t ) (except that CT def = 0 ) (25) The resulting approximation to (24) (which does not actually require h ) will be denoted q , : q , ( y t | x , y : t 1 ) def exp ( g ( s t 1 , x t , y t ) + C t ) (26) The neural networks in the present section are all parameterized by , and are intended to produce an estimate of the logprobto-go H t a function of x t : , which sums over all possible y t : .",
"By contrast, the OOHMM-inspired neural networks suggested in 3.2 were used to specify an 7 CT = 0 is correct according to (23).",
"of x : t and y : t using separate parameters .",
"Arguably has a harder modeling job than because it must implicitly sum over possible futures y t : .",
"We now consider how to get corrected samples from q , even if gives poor estimates of H t , and then how to train to improve those estimates.",
"In this paper, we assume nothing about the given model GT except that it is given in the form of equations (1)(3) (including the parameter vector ).",
"Suppose we run the exact sampling strategy but approximate p in (7) with a proposal distribution q , of the form in (25)(26).",
"Suppressing the subscripts on p and q for brevity, this means we are effectively drawing y not from p ( y | x ) but from q ( y | x ) = TY t =1 q ( y t | x , y : t 1 ) (27) If C t H t + const within each y t draw, then q p .",
"Normalized importance sampling corrects (mostly) for the approximation by drawing many sequences y (1) , . . . y ( M ) IID from (27) and assigning y ( m ) a relative weight of w ( m ) def = p ( y ( m ) | x ) q ( y ( m ) | x ) .",
"This ensemble of weighted particles yields a distribution p ( y ) def = P Mm =1 w ( m ) I ( y = y ( m ) ) P Mm =1 w ( m ) p ( y | x ) (28) that can be used as discussed in 1.3.",
"To compute w ( m ) in practice, we replace the numerator p ( y ( m ) | x ) by the unnormalized version exp GT , which gives the same p .",
"Recall that each GT is a sum P Tt =1 g ( ) .",
"Sequential importance sampling is an equivalent implementation that makes t the outer loop and m the inner loop.",
"It computes a prefix ensemble Y t def = { ( y (1): t , w (1) t ) , . . . , ( y ( M ) : t , w ( M ) t ) } (29) for each 0 t T in sequence.",
"Initially, ( y ( m ) :0 , w ( m ) 0 ) = ( (cid:15), exp C 0 ) for all m .",
"Then for 0 < t T , we extend these particles in parallel: y ( m ) : t = y ( m ) : t 1 y ( m ) t (concatenation) (30) w ( m ) t = w ( m ) t 1 exp ( g ( s t 1 ,x t ,y t )+ C t C t 1 ) q ( y t | x , y : t 1 ) (31) where each y ( m ) t is drawn from (26).",
"Each Y t yields a distribution p t over prefixes y : t , which estimates the distribution p t ( y : t ) def exp ( G t + C t ) .",
"We return p def = p T p T = p .",
"This gives the same p as in (28): the final y ( m ) T are the same, with the same final weights w ( m ) T = exp GT q ( y ( m ) | x ) , where GT was now summed up as C 0 + P Tt =1 g ( ) + C t C t 1 .",
"That is our basic particle smoothing strategy.",
"If we use the naive approximation C t = 0 everywhere, it reduces to particle filtering .",
"In either case, various well-studied improvements become available, such as various resampling schemes (Douc and Capp, 2005) and the particle cascade (Paige et al., 2014).",
"8 An easy improvement is multinomial resampling .",
"After computing each p t , this replaces Y t with a set of M new draws from p t ( p t ) , each of weight 1which tends to drop low-weight particles and duplicate high-weight ones.",
"9 For this to usefully focus the ensemble on good prefixes y : t , p t should be a good approximation to the true marginal p ( y : t | x ) exp ( G t + H t ) from (10).",
"That is why we arranged for p t ( y : t ) exp ( G t + C t ) .",
"Without C t , we would have only p t ( y : t ) exp G t which is fine for the traditional particle filtering setting, but in our setting it ignores future information in x t : (which we have assumed is available) and also favors sequences y that happen to accumulate most of their global score GT early rather than late (which is possible when the globally normalized model (1)(2) is not factored in the generative form (4)).",
"We now consider training the parameters of our sampler.",
"These parameters determine the updates f in (6) and the compatibility function c in (25).",
"As a result, they determine the proposal distribution q used in equations (27) and (31), and thus determine the stochastic choice of p that is returned by the sampler on a given input x .",
"In this paper, we simply try to tune to yield good proposals.",
"Specifically, we try to ensure that q ( y | x ) in equation (27) is close to p ( y | x ) from equation (1).",
"While this may not be necessary for the sampler to perform well downstream, 10 it does 8 The particle cascade would benefit from an estimate of H t , as it (like A search) compares particles of different lengths.",
"9 While resampling mitigates the degeneracy problem, it could also reduce the diversity of particles.",
"In our experiments in this paper, we only do multinomial resampling when the effective sample size of p t is lower than M 2 .",
"Doucet and Johansen (2009) give a more thorough discussion on when to resample.",
"10 In principle, one could attempt to train end-to-end on some downstream objective by using reinforcement learning or the Gumbel-softmax trick (Jang et al., 2017; Maddison et al., 2017).",
"For example, we might try to ensure that p closely matches the model's distribution p (equation (28))the na-934 guarantee it (assuming that the model p is correct). Specifically, we seek to minimize (1 ) KL ( p || q ) + KL ( q || p ) (with [0 , 1] ) (32) averaged over examples x drawn from a training set. 11 (The training set need not provide true y 's.) The inclusive KL divergence KL ( p || q ) is an expectation under p . We estimate it by replacing p with a sample p , which in practice we can obtain with our sampler under the current . (The danger, then, is that p will be biased when is not yet well-trained; this can be mitigated by increasing the sample size M when drawing p for training purposes.) Intuitively, this term tries to encourage q in future to re-propose those y values that turned out to be good and survived into p with high weights.",
"The exclusive KL divergence KL ( q || p ) is an expectation under q .",
"Since we can sample from q exactly, we can get an unbiased estimate of KL ( q || p ) with the likelihood ratio trick (Glynn, 1990).",
"12 (The danger is that such REINFORCE methods tend to suffer from very high variance.) This term is a popular objective for variational approximation.",
"Here, it tries to discourage q from re-proposing bad y values that turned out to have low exp GT relative to their proposal probability.",
"Our experiments balance recall (inclusive) and precision (exclusive) by taking = 12 (which Appendix F compares to { 0 , 1 } ) .",
"Alas, because of our approximation to the inclusive term, neither term's gradient will find and directly encourage good y values that have never been proposed.",
"Appendix B gives further discussion and formulas.",
"To evaluate our methods, we needed pre-trained models p .",
"We experimented on several models.",
"In each case, we trained a generative model p ( x , y ) , so that we could try sampling from its posterior distribution p ( y | x ) .",
"This is a very common setting where particle smoothing should be able to help.",
"Details for replication are given in Appendix C. tural goal of sampling.",
"This objective can tolerate inaccurate local proposal distributions in cases where the algorithm could recover from them through resampling.",
"Looking even farther downstream, we might merely want p which is typically used to compute expectationsto provide accurate guidance to some decision or training process (see Appendix E).",
"This might not require fully matching the model, and might even make it desirable to deviate from an inaccurate model.",
"Training a single approximation q for all x is known as amortized inference .",
"12 The normalizing constant of p from (1) can be ignored because the gradient of a constant is 0.",
"We can regard a tagged sentence ( x , y ) as a string over the pair alphabet X Y .",
"We train an RNN language model over this pair alphabetthis is a neuralized OOHMM as suggested in 3.2: log p ( x , y ) = TX t =1 log p ( x t , y t | s t 1 ) (33) This model is locally normalized, so that log p ( x , y ) (as well as its gradient) is straightforward to compute for a given training pair ( x , y ) .",
"Joint sampling from it would also be easy (3.2).",
"However, p ( y | x ) is globally renormalized (by an unknown partition function that depends on x , namely exp H 0 ).",
"Conditional sampling of y is therefore potentially hard.",
"Choosing y t optimally requires knowledge of H t , which depends on the future x t : .",
"As we noted in 1, many NLP tasks can be seen as tagging problems.",
"In this paper we experiment with two such tasks: English stressed syllable tagging , where the stress of a syllable often depends on the number of remaining syllables, 13 providing good reason to use the lookahead provided by particle smoothing; and Chinese NER , which is a familiar textbook application and reminds the reader that our formal setup (tagging) provides enough machinery to treat other tasks (chunking).",
"English stressed syllable tagging This task tags a sequence of phonemes x , which form a word, with their stress markings y .",
"Our training examples are the stressed words in the CMU pronunciation dictionary (Weide, 1998).",
"We test the sampler on held-out unstressed words.",
"Chinese social media NER This task does named entity recognition in Chinese, by tagging the characters of a Chinese sentence in a way that marks the named entities.",
"We use the dataset from Peng and Dredze (2015), whose tagging scheme is a variant of the BIO scheme mentioned in 1.",
"We test the sampler on held-out sentences.",
"This is an artificial task that provides a discrete analogue of speech source separation (Zibulevsky and Pearlmutter, 2001).",
"The generative model is that J strings (possibly of different lengths) are generated 13 English, like many other languages, assigns stress from right to left (Hayes, 1995).",
"IID from an RNN language model, and are then combined into a single string x according to a random interleaving string y .",
"14 The posterior p ( y | x ) predicts the interleaving string, which suffices to reconstruct the original strings.",
"The interleaving string is selected from the uniform distribution over all possible interleavings (given the J strings' lengths).",
"For example, with J = 2 , a possible generative story is that we first sample two strings Foo and Bar from an RNN language model.",
"We then draw an interleaving string 112122 from the aforementioned uniform distribution, and interleave the J strings deterministically to get FoBoar.",
"p ( x , y ) is proportional to the product of the probabilities of the J strings.",
"The only parameters of p , then, are the parameters of the RNN language model, which we train on clean (non-interleaved) samples from a corpus.",
"We test the sampler on random interleavings of held-out samples.",
"The state s (which is provided as an input to c in (25)) is the concatenation of the J states of the language model as it independently generates the J strings, and g ( s t 1 , x t , y t ) is the log-probability of generating x t as the next character of the y t th string, given that string's language model state within s t 1 .",
"As a special case, x T = EOS (see footnote 1), and g ( s T 1 , EOS , EOS ) is the total log-probability of termination in all J language model states.",
"String source separation has good reason for lookahead: appending character o to a reconstructed string gh is only advisable if s and t are coming up soon to make ghost.",
"It also illustrates a powerful application settingposterior inference under a generative model.",
"This task conveniently allowed us to construct the generative model from a pre-trained language model.",
"Our constructed generative model illustrates that the state s and transition function f can reflect interesting problem-specific structure.",
"CMU Pronunciation dictionary The CMU pronunciation dictionary (already used above) provides sequences of phonemes.",
"Here we use words no longer than 5 phonemes.",
"We interleave the (un-stressed) phonemes of J = 5 words.",
"Penn Treebank The PTB corpus (Marcus et al., 1993) provides English sentences, from which we use only the sentences of length 8 .",
"We interleave the words of J = 2 sentences.",
"In our experiments, we are given a pre-trained scoring model p , and we train the parameters of a particle smoothing algorithm.",
"15 We now show that our proposed neural particle smoothing sampler does better than the particle filtering sampler.",
"To define better, we evaluate samplers on the offset KL divergence from the true posterior.",
"Given x , the natural goal of conditional sampling is for the sample distribution p ( y ) to approximate the true distribution p ( y | x ) = exp GT / exp H 0 from (1).",
"We will therefore reportaveraged over all held-out test examples x the KL divergence KL ( p || p ) = E y p [log p ( y )] (34) ( E y p [log p ( y | x )] log Z ( x )) , where p ( y | x ) denotes the unnormalized distribution given by exp GT in (2), and Z ( x ) denotes its normalizing constant, exp H 0 = P y p ( y | x ) .",
"As we are unable to compute log Z ( x ) in practice, we replace it with an estimate z ( x ) to obtain an offset KL divergence .",
"This change of constant does not change the measured difference between two samplers, KL ( p 1 || p ) KL ( p 2 || p ) .",
"Nonetheless, we try to use a reasonable estimate so that the reported KL divergence is interpretable in an absolute sense.",
"Specifically, we take z ( x ) = log P y Y p ( y | x ) log Z , where Y is the full set of distinct particles y that we ever drew for input x , including samples from the beam search models, while constructing the experimental results graph.",
"16 Thus, the offset KL divergence is a best effort lower bound on the true exclusive KL divergence KL ( p || p ) .",
"In all experiments we compute the offset KL divergence for both the particle filtering samplers and the particle smoothing samplers, for varying ensemble sizes M .",
"We also compare against a beam search baseline that keeps the highest-scoring M particles at each step (scored by exp G t with no lookahead).",
"The results are in Figures 2a2d.",
"15 For the details of the training procedures and the specific neural architectures in our models, see Appendices C and D. 16 Thus, Y was collected across all samplings, iterations,and ensemble sizes M , in an attempt to make the summation over Y as complete as possible.",
"For good measure, we added some extra particles: whenever we drew M particles via particle smoothing, we drew an additional 2 M particles by particle filtering and added them to Y .",
"Figure 2 : Offset KL divergences for the tasks in 6.1 and 6.2.",
"The logarithmic x -axis is the size of particles M ( 8 M 128 ).",
"The y -axis is the offset KL divergence described in 7.1 (in bits per sequence).",
"The smoothing samplers offer considerable speedup: for example, in Figure 2a, the non-resampled smoothing sampler achieves comparable offset KL divergences with only 1 / 4 as many particles as its filtering counterparts.",
"Abbreviations in the legend: PF=particle filtering.",
"PS=particle smoothing.",
"BEAM=beam search.",
":R' suffixes indicate resampled variants.",
"For readability, beam search results are omitted from Figure 2d, but appear in Figure 3 of the appendices.",
"Given a fixed ensemble size, we see the smoothing sampler consistently performs better than the filtering counterpart.",
"It often achieves comparable performance at a fraction of the ensemble size.",
"Beam search on the other hand falls behind on three tasks: stress prediction and the two source separation tasks.",
"It does perform better than the stochastic methods on the Chinese NER task, but only at small beam sizes.",
"Varying the beam size barely affects performance at all, across all tasks.",
"This suggests that beam search is unable to explore the hypothesis space well.",
"We experiment with resampling for both the particle filtering sampler and our smoothing sampler.",
"In source separation and stressed syllable prediction, where the right context contains critical information about how viable a particle is, resampling helps particle filtering almost catch up to particle smoothing.",
"Particle smoothing itself is not further improved by resampling, presumably because its effective sample size is high.",
"The goal of resampling is to kill off low-weight particles (which were overproposed) and reallocate their resources to higher-weight ones.",
"But with particle smoothing, there are fewer low-weight particles, so the benefit of resampling may be outweighted by its cost (namely, increased variance).",
"Much previous work has employed sequential importance sampling for approximate inference of intractable distributions (e.g., Thrun, 2000; Andrews et al., 2017).",
"Some of this work learns adaptive proposal distributions in this setting (e.g. Gu et al., 2015; Paige and Wood, 2016).",
"The key difference in our work is that we consider future inputs, which is impossible in online decision settings such as robotics.",
"Klaas et al. (2006) did do particle smoothing, like us, but they did not learn adaptive proposal distributions.",
"Just as we use a right-to-left RNN to guide posterior sampling of a left-to-right generative model, Krishnan et al. (2017) employed a right-to-left RNN to guide posterior marginal inference in the same sort of model.",
"Serdyuk et al. (2018) used a right-to-left RNN to regularize training of such a model.",
"We have described neural particle smoothing, a sequential Monte Carlo method for approximate sampling from the posterior of incremental neural scoring models.",
"Sequential importance sampling has arguably been underused in the natural language processing community.",
"It is quite a plausible strategy for dealing with rich, globally normalized probability models such as neural modelsparticularly if a good sequential proposal distribution can be found.",
"Our contribution is a neural proposal distribution, which goes beyond particle filtering in that it uses a right-to-left recurrent neural network to look ahead to future symbols of x when proposing each symbol y t .",
"The form of our distribution is well-motivated.",
"There are many possible extensions to the work in this paper.",
"For example, we can learn the generative model and proposal distribution jointly; we can also infuse them with hand-crafted structure, or use more deeply stacked architectures; and we can try training the proposal distribution end-to-end (footnote 10).",
"Another possible extension would be to allow each step of q to propose a sequence of actions, effectively making the tagset size .",
"This extension relaxes our | y | = | x | restriction from 1 and would allow us to do general sequence-to-sequence transduction.",
"This work has been generously supported by a Google Faculty Research Award and by Grant No. 1718846 from the National Science Foundation."
]
| [
"method",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"result",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"result",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other"
]
|
[
"We propose a new approach to generate multiple variants of the target summary with diverse content and varying lengths, then score and select admissible ones according to users' needs.",
"Abstractive summarizers trained on single reference summaries may struggle to produce outputs that achieve multiple desirable properties, i.e., capturing the most important information, being faithful to the original, grammatical and fluent.",
"In this paper, we propose a two-staged strategy to generate a diverse set of candidate summaries from the source text in stage one, then score and select admissible ones in stage two.",
"Importantly, our generator gives a precise control over the length of the summary, which is especially well-suited when space is limited.",
"Our selectors are designed to predict the optimal summary length and put special emphasis on faithfulness to the original text.",
"Both stages can be effectively trained, optimized and evaluated.",
"Our experiments on benchmark summarization datasets suggest that this paradigm can achieve state-of-the-art performance.",
"The learning objective of a modern abstractive summarizer is to produce system outputs that resemble reference summaries on a word-to-word basis.",
"It does not promote outputs that possess multiple desirable properties, i.e., capturing the most important information, being faithful to the original text, grammatical and fluent, though some of these properties are exhibited by system abstracts as a natural outcome of a learned summarizer (See et al., 2017; Takase et al., 2016; Tan et al., 2017; Chen and Bansal, 2018; Celikyilmaz et al., 2018; Gehrmann et al., 2018; Liu and Lapata, 2019; Lebanoff et al., 2019b; Fabbri et al., 2019; Brainskas et al., 2020).",
"Without direct optimization of desired properties, system abstracts often change the meaning of the original document or fail to convey the main concepts (Kryscinski et al., 2020).",
"Source Text Police arrested five anti-nuclear protesters Thursday after they sought to disrupt loading of a French Antarctic research and supply vessel, a spokesman for the protesters said.",
"In this paper, we propose a new approach to over-generate and select admissible summaries, which allows a summarizer to juggle multiple objectives and strike a good balance between them (Belz and Reiter, 2006).",
"Our approach consists of two stages.",
"Given a source text, a generator explores the space of all possible lengths to produce multiple variants of the target summary that contain diverse content.",
"We then devise selectors to validate the quality of alternative summaries to predict whether they are admissible.",
"Our selection mechanism can be customized to suit particular needs without changing the generation space.",
"Both stages can be effectively trained, optimized and evaluated.",
"Crucially, we take a confidence-driven approach to summary generation rather than using a left-to-right order.",
"Beginning writers and language learners do not write in a strict sequential manner.",
"In a similar vein, our generator produces a summary by filling-in-the-blanks with appropriate words.",
"The most confident words are generated first, less vital ones later.",
"With confidence-driven generation, our summarizer learns to dynamically add or remove content, and even paraphrase to produce a summary of a given length.",
"In Table 2, we show an example illustrating the difference between our method and left-to-right generation.",
"Our method dramatically enhances the capability of the generator, making it possible to explore summaries of varying lengths.",
"Identifying admissible summaries with desired properties is critical for a summarizer.",
"Summaries of very short lengths may fail to capture the main concepts, and this kind of incomplete or partial information can lead to false assumptions about the original content.",
"Moreover, summaries of moderate lengths may still contain hallucinated content that is nonexistent in the source text (Maynez et al., 2020).",
"We present two summary selectors to combat these issues.",
"Our first selector aims to predict what summary length is most suitable for a source text, whereas a second selector puts special emphasis on the overall quality of the system summary, in particular its faithfulness to the original text (Falke et al., 2019; Durmus et al., 2020).",
"A novel dataset has been introduced in this work where we associate a source text with multiple summaries, and admissible ones are manually labelled by human annotators.",
"Not only can the dataset be used to judge the effectiveness of summary selectors, but it provides a new testbed for future summarizers to compare their outputs against multiple reference summaries, which is key to improve the reliability of evaluation results (Louis and Nenkova, 2013).",
"We have focused on generating abstractive summaries from single source sentences, but the insights gained from this study could inform the design of summarizers of all forms.",
"Our method also has a great potential to incorporate human-in-the-loop to teach the model to select the best summary.",
"The main contributions of this paper are: We propose a new approach to generate multiple variants of the target summary that have varying lengths, then score and select the best summaries according to our needs.",
"Our generator controls over the length of the summary, which is especially well-suited when space is limited.",
"Our selectors are designed to predict the optimal summary length and put special emphasis on faithfulness to the original text.",
"Our experiments on benchmark summarization datasets suggest that this paradigm can surpass results of previous studies or rival state-of-the-art.",
"We conclude with a discussion of our key findings, which has implications for the development of robust abstractive summarizers.",
"1 2 Related Work It is important for neural abstractive summarizers to produce summaries that are faithful to the original texts (Cao et al., 2017; Kryscinski et al., 2019; Lebanoff et al., 2019a; Wang et al., 2020; Dong et al., 2020; Zhang et al., 2020b).",
"However, it remains questionable as to whether a summarizer must acquire that ability by learning from human reference summaries, or possibly through external resources such as textual entailment predictions (Falke et al., 2019).",
"In this paper, we present a two-stage strategy to over-generate, then score system summaries externally for faithfulness and overall quality.",
"Previous work has sought to control various aspects of the generated summary, including the style, length and amount of reused text (Kikuchi et al., 2016; Hu et al., 2017; Fan et al., 2018; Keskar et al., 2019; Makino et al., 2019; Song et al., 2020).",
"In contrast, our generator focuses on producing multiple variants of the target summary that have diverse content and varying lengths.",
"It offers precise control over the length of the summary, which has an important implication for fair comparison between different summarization systems (Napoles et al., 2011; Shapira et al., 2018).",
"Our methodology allows for greater flexibility in designing summary selectors.",
"The selectors may allow multiple admissible summaries to be identi-1 Our code and annotated data are made available on Github at https://github.com/ucfnlp/varying-length-summ [CLS] [SEP] [SEP] [MASK] [MASK] [MASK] 0.51 0.22 0.27 0.02 0.07 0.12 0.80 0.08 0.91 [CLS] [SEP] [SEP] [MASK] dog [MASK] 0.01 0.12 0.87 0.22 0.20 0.68 [CLS] [SEP] [SEP] the dog [MASK] 0.10 0.38 0.52 Step 1 Step 2 Step 3 Vocabulary Vocabulary Vocabulary S ou r ce S u mm a r y barks the Figure 1: An illustration of the generation process.",
"fied for any source input according to users' needs.",
"On the contrary, post-editing of system summaries through a set of basic operations such as insertion and deletion (Gu et al., 2019; Malmi et al., 2019; Dong et al., 2019b; Correia and Martins, 2019) may have intrinsic limitations by learning from single reference summaries to produce single outputs.",
"In this paper, we provide a new dataset where each source text is associated with multiple admissible summaries to encourage diverse outputs.",
"Our generator is inspired by unsupervised pretraining of deep neural models (Peters et al., 2018; Radford et al., 2019; Devlin et al., 2019; Yan et al., 2020; Zhang et al., 2020a; Lewis et al., 2020) and non-autoregressive machine translation (Gu et al., 2018; Ghazvininejad et al., 2019).",
"Distinct from these is our confidence-driven generation that goes beyond left-to-right order.",
"It uses a denoising objective during training and is conveniently transformed into a semi-autoregressive generator at test time.",
"We introduce a customized beam search algorithm to promote the generation of diverse outputs.",
"In the following section, we describe in detail our two-step strategy.",
"We seek to produce a highly diverse set of alternative summaries from any source input, but standard neural language generators with beam search only produce high-likelihood sequences rather than diverse ones (Ippolito et al., 2019).",
"To address this limitation, we devise a new generator that is capable of producing summaries of varying lengths.",
"A long summary can cover more important information of the source text, whereas a short summary is easy-to-read.",
"Moreover, it produces a summary having the exact given length and with a proper endpoint.",
"This is achieved by shifting away from left-to-right generation but building a summary using a confidence-driven approach.",
"Our generator is illustrated in Figure 1.",
"To generate a summary of L tokens, we place a number of [ MASK ] tokens following the source text, which serve as placeholders for summary tokens.",
"Importantly, our generator simultaneously predicts the most probable tokens for all positions , as opposed to predicting only the most probable next token in an autoregressive setting.",
"We obtain the token that has the highest probability across all positions, and use it to replace the [ MASK ] token of that position.",
"Next, the model continues to make predictions for all remaining positions, conditioned on the source text and the summary tokens seen thus far of varying positions.",
"Let x = { x i } Ni =1 be the source and y = { y j } Mj =1 the summary sequence.",
"Our confidence-driven generation process defines a new order of summary tokens, o = { o j } Mj =1 , o j [ M ] , according to which P ( y | x ) is factorized into a product of conditional probabilities P ( y o j | y o <j , x ) (Eq.",
"(1)), where are model parameters to be optimized during training.",
"Our learning objective is to minimize the negative data log-likelihood (Eq.",
"(2)) to predict missing tokens y o j conditioned on the source text x and the summary tokens seen thus far y o <j .",
"P ( y | x ; o ) = M (cid:89) j =1 P ( y o j | y o <j , x ) (1) L ( ) = M (cid:88) j =1 log P ( y o j | y o <j , x ) (2) Our generator is trained with a denoising objective.",
"It consists of a decoder-only architecture with 12 Transformer blocks (Dong et al., 2019a).",
"Given Input The Bank of Japan appealed to financial markets to remain calm Friday following the US decision to order Daiwa Bank Ltd. to close its US operations.",
"L=6 BoJ 2 , 5 calls 4 for 6 calm.",
"3 , 1 L=7 BoJ 3 , 7 calls 4 for 5 market 6 calm.",
"2 , 1 L=8 BoJ 5 , 7 urges 6 markets 4 to 3 remain 1 calm.",
"8 , 2 L=9 BoJ 6 , 2 urges 7 financial 4 markets 5 to 9 remain 1 calm.",
"8 , 3 L=10 BoJ 1 , 2 calls 6 for 7 calm 5 after 8 Daiwa 10 , 4 closure.",
"9 , 3 L=11 BoJ 1 , 2 calls 6 for 7 calm 5 after 8 Daiwa 11 , 4 Bank 3 closure.",
"10 , 9 L=12 BoJ 2 , 3 calls 5 for 6 calm 1 after 11 Daiwa 8 , 7 Bank 9 closure 12 order.",
"10 , 4 L=13 BoJ 6 , 13 urges 8 markets 7 to 9 remain 11 calm 4 after 10 Daiwa 5 , 2 Bank 1 closure.",
"12 , 3 L=14 BoJ 3 , 4 calls 7 for 8 calm 2 after 14 Daiwa 13 , 6 Bank 5 's 10 , 9 US 11 closure.",
"12 , 1 L=15 BoJ 10 , 3 calls 4 for 5 calm 2 after 15 US 8 order 13 for 14 Daiwa 9 , 6 Bank 7 to 11 close.",
"12 , 1 L=16 BoJ 3 , 5 calls 4 for 7 calm 2 after 16 US 13 order 12 on 14 Daiwa 8 , 6 's 11 , 10 US 9 operations.",
"15 , 1 Table 3: The target summary length L is adjusted to produce alternative summaries that have diverse content.",
"a source text and a summary, we replace a portion of their tokens by the [ MASK ] token, and the model is trained to reconstruct the original data from the corrupted text.",
"It differs from autoregressive models in that the context of each position can consist of tokens from both left and righta source word can attend to other source words and a summary word can attend to source words and summary words seen thus far of varying positions hence capturing a bidirectional context.",
"The training procedure is thus analogous to that of permutation-based language modeling (Yang et al., 2019).",
"Our training schedule begins with masking out 10% of source tokens and linearly decreases it to 0% throughout training steps.",
"Masking out a portion of source tokens helps the model learn contextualized representations given bidirectional context.",
"On the target side, the schedule begins with masking out 90% of summary tokens and linearly decreases it to 60%.",
"It allows the model to learn to predict missing summary tokens and copy source tokens to the summary.",
"When a token is chosen, it is replaced with the [ MASK ] token 80% of the time, a random token of the vocabulary 10% of the time, and remains unchanged otherwise.",
"In Table 3, we present example summaries produced by our new confidence-driven generator for a source input.",
"The summaries have varying lengths and levels of details.",
"Our generator learns to add or remove content, and even paraphrase to produce a summary of a given length.",
"We adjust the target summary length ( L ) to produce diverse summaries.",
"Moreover, there exists more than one admissible summaries that capture the important information of the source text, while being grammatical and faithful to the original.",
"It is important to note that, to decode the best summary of length L , our generator requires a position-aware beam search algorithm to explore the space of candidate summaries, which is described next.",
"A position-aware beam of size K not only contains the K -best candidate summaries having the highest log-likelihood at any time step, but it also records the positions of summary tokens seen thus far for each candidate summary.",
"The tokens of candidate summaries can be decoded in any order and occur in different positions, marking an important distinction between position-aware and traditional beam search (Meister et al., 2020).",
"The method is realized by associating each candidate summary with a binary matrix M { 0 , 1 } L |V| , which records what positions have been filled by which summary tokens and what positions remain available.",
"mary, score (cid:48) is its data log-likelihood and M (cid:48) is a binary mask (Line 9).",
"Our generator predicts the token probabilities PL |V| for all positions, conditioned on the source text and the summary tokens seen thus far.",
"The binary mask M (cid:48) indicates positions that remain available (Line 1112).",
"We obtain the topK tokens that have the highest probability scores across all positions, record their summary hypotheses and likelihood scores.",
"These positions are then marked as taken (Line 1418).",
"The decoding process continues until all of the L positions are filled by summary tokens.",
"This makes our method different from traditional beam search, the latter terminates when an end-of-sequence symbol [SEP ] is generated for the summary.",
"Particularly, our method is advantageous as it exerts precise control over the summary length.",
"The model learns to decide what content to be included in the summary given the limited space available, yielding summaries with varying levels of details.",
"We present two selectors to respectively assess the overall quality of the summary and predict the optimal summary length.",
"Our selectors assume the role of a responsible agent that, when provided with a source text and multiple alternative summaries, can effectively recognize the admissible ones.",
"It has the potential to incorporate human-in-the-loop in future to teach the model to select best summaries.",
"Our goal is to build a selector to discern the difference between high and low-quality summaries.",
"In an ideal scenario, we have human annotators to vet each source text/summary pair, the annotated data are used to train the selector.",
"The process, however, is both expensive and time-consuming.",
"Inspired by Kryscinski et al. (2020), we automatically construct a large number of minimally different pairs, where a positive instance comprises of the source text and its ground-truth summary, and a negative instance includes the source text and a corrupted summary.",
"We experiment with various means to generate corrupted summaries from a ground-truth summary.",
"The corruptions should resemble common mistakes made by neural abstractive summarizers, including generating factually incorrect details, failing to convey the main points of the source text, and being ungrammatical.",
"The corruption types experimented in this paper are illustrated in Table 4.",
"Distinguishing our work from that of Kryscinski et al. (2020) are",
"(i) Search and Replace , we swap the ground-truth summary with a similar summary in the training set that have 4 common bigrams to form a negative instance.",
"(ii) Swap Segments splits a ground-truth summary into two parts of similar lengths, then swaps them to produce an ungrammatical summary.",
"(iii) Incomplete Summary replaces a ground-truth summary by one of its sentence constituents, yielding a corrupted summary that fails to convey the main ideas.",
"These corruptions are designed to emulate system summaries that are too short to capture the main concepts, or contain hallucinated content that is not found in the source text.",
"We next build a binary classifier to predict if a summary is admissible given the source text.",
"To distill information from the source text and the summary, we encode them into hidden vectors using RoBERTa (Liu et al., 2019).",
"These are denoted by h x and h y , respectively.",
"We create a vector for the pair, h = h x h y | h x h y | ( h x h y ) , consisting of a concatenation of the two hidden vectors, their absolute difference | h x h y | and their element-wise product ( h x h y ) .",
"is a concatenation of vectors.",
"The output vector h is expected to capture the gist of the source text and the summary, and a similar approach is being used for natural language inference (Chen et al., 2018).",
"The vector h is fed to a feed-forward layer to predict whether the summary is admissible given the source text.",
"We have chosen to design the selector as a classifier rather than a ranking model because there can exist multiple, equally valid summaries for any source input.",
"The classifier allows us to identify admissible summaries that are not only true-to-original but has the best overall quality.",
"Finding a suitable length for the summary is one of the most important open problems in automatic summarization (Shapira et al., 2018; Sun et al., 2019).",
"A summary should be shorter than the original, but long enough to include the most important information.",
"Length normalization seeks to rescale the log-likelihood score of a summary, denoted by S ( x , y ) = log p ( y | x ) , by its length | y | , with an exponent p (Eq.",
"(3)).",
"It is used by some neural abstractive summarizers (See et al., 2017; Lewis et al., 2020).",
"However, the method does not consider the density of information in the source text and it may still generate ultra-short summaries.",
"Instead, we attempt to estimate the appropriate length of the summary given a source text, denoted by L pred , and reward a system summary if it stays close to the estimated length (Huang et al., 2017).",
"Concretely, we assign a per-word reward to the summary, represented by r min( | y | , L pred ) (Eq.",
"(4)).",
"A system summary continues to be rewarded until it System R-1 R-2 R-L lvt2k-1sent (Nallapati et al., 2016) 32.67 15.59 30.64 SEASS (Zhou et al., 2017) 36.15 17.54 33.63 DRGD (Li et al., 2017) 36.27 17.57 33.62 Pointer-Gen (See et al., 2017) 34.19 16.92 31.81 R3Sum (Cao et al., 2018) 37.04 19.03 34.46 EntailGen (Guo et al., 2018) 35.98 17.76 33.63 BiSET (Wang et al., 2019) 38.45 19.53 36.04 MASS (Song et al., 2019) 38.73 19.71 35.96 UniLM (Dong et al., 2019a) 38.90 20.05 36.00 PEGASUS (Zhang et al., 2020a) 39.12 19.86 36.24 Ours ( Average ) 35.51 16.33 32.75 Ours ( Best Quality ) 36.71 17.27 33.63 Ours ( Best Summary Length ) 39.27 20.40 36.76 Table 5: Results on the Gigaword test set evaluated by ROUGE (Lin, 2004).",
"Beyond that, increasing the length of the summary does not lead to additional rewards.",
"We obtain the predicted length L pred using a baseline abstractive summarizer, which takes the source text as input and greedily decodes a summary in a left-to-right manner until an end-of-sequence symbol is predicted; L pred is the length of the decoding sequence.",
"r is a coefficient to scale the reward and it is tuned on the validation data.",
"Finally, the reward-augmented log-likelihood S rwd ( x , y ) is used as a scoring function to rank all summary hypotheses of varying lengths.",
"Datasets We perform extensive experiments on Gigaword (Parker, 2011) and Newsroom (Grusky et al., 2018) datasets.",
"The goal is to generate an abstractive summary from a lengthy source sentence.",
"For each article, we pair its first sentence with the title to form a summarization instance.",
"Both datasets contain large collections of news articles.",
"Gigaword (19952010) contains 3,810,674 / 10,000 / 1,951 instances, respectively, in the train, validation and test splits.",
"Newsroom (19982017) contains 199,341 / 21,530 / 21,377 instances, respectively.",
"We conduct experiments on both datasets to demonstrate the generality of our two-staged strategy.",
"Our method generates a diverse set of summaries from a source sentence in stage one, then score and select admissible summaries in stage two.",
"The system summaries are evaluated using both automatic metrics (ROUGE; Lin, 2004) and human evaluation of information coverage, grammaticality 2 Our experiments are performed on the original Gigaword dataset (Parker, 2011) without anonymization.",
"The data provided by Rush et al. (2015) replaced all digit characters with # and replaced word types seen less than 5 times with UNK.",
"and faithfulness to the original text.",
"We introduce a new dataset where a source sentence is associated with multiple summaries, and admissible ones are labelled by human annotators (5.1).",
"The dataset will serve as a useful testbed for future summarization research, where multiple reference summaries is key to improve the reliability of evaluation results (Louis and Nenkova, 2013).",
"This paper focuses on generating abstractive summaries from single source sentences.",
"However, we expect the insights gained from this study to inform the design of future summarizers of different kinds.",
"Experimental Setup Our generator is initialized with RoBERTaBASE (Liu et al., 2019) due to its high performance on generation-related tasks.",
"We use Byte Pair Encoding (Sennrich et al., 2016) with a vocabulary of 50,265 tokens.",
"The model contains 12 Transformer blocks (Vaswani et al., 2017), with a hidden size of 768 and 12 attention heads, for a total of 110M parameters.",
"We fine-tune the model on the train split of Gigaword and Newsroom, respectively, before applying it to the test sets.",
"The model is fine-tuned for 20 epochs.",
"Each epoch contains 24k / 1.5k batches and our batch size is 128.",
"The model uses 10k / 1k warm-up steps, respectively, for Gigaword and Newsroom.",
"We use the AdamW (Loshchilov and Hutter, 2017) optimizer with an initial learning rate of 1e-4.",
"The momentum parameters are set to 0.9 and 0.999.",
"On a deep learning workstation equipped with 2x Titan RTX GPUs, our model takes 64 and 5.5 hours to fine-tune on Gigaword and Newsroom.",
"At test time, our beam size is K =20.",
"The model produces summaries ranging from L = 7 to 16 tokens for a given source sentence.",
"Our selector for best overall quality is trained using 1.8M instances automatically constructed from the train split of Gigaword.",
"The set is balanced with an equal number of positive and negative instances.",
"226k instances are created with the type of Search and Replace, and 400k instances are created using each of the four remaining corruption types.",
"The reward coefficient r is set to 2.0 across all experiments.",
"Automatic Evaluation In Table 6, we present results on Gigaword and Newsroom test sets evaluated by ROUGE (Lin, 2004).",
"We report R-1, R-2 and R-L F 1 -scores that respectively measure the overlap of unigrams, bigrams, and longest common subsequences between system and reference summaries.",
"For each summarization instance, our generator produces multiple alternative summaries, ranging from L =7 to 16 tokens.",
"E.g., Daiwa Bank. corresponds to four tokens, Dai', wa', Bank' plus an ending period.",
"Our BEST-QUALITY and BESTLENGTH selectors each identifies a single best summary from the set of alternative summaries for each summarization instance.",
"We observe that the BEST-LENGTH selector has achieved the highest scores.",
"It performs better than using any single target length for all summaries.",
"Among summaries of different lengths, the highest R-2 F 1 -scores are obtained when the target summary length is set to 11 and 12 tokens, respectively, for Gigaword and Newsroom.",
"This is close to the median length of reference summaries, which are 12 and 13 tokens for these datasets.",
"Our findings show that, the target summary length can make a non-negligible impact on automatic evaluation results.",
"It is best for system summaries to be long enough to include the most important information to achieve satisfying results.",
"In Table 5, we report results on the Gigaword test split that contains 1,951 instances.",
"Our approach is compared against strong neural abstractive systems, including PEGASUS (Zhang et al., 2020a), UniLM (Dong et al., 2019a) and MASS (Song et al., 2019).",
"These systems draw on large-scale unsupervised pretraining to improve the quality of summaries, yielding some of the best reported results.",
"In comparison, our BEST-LENGTH selector either Candidate Summary Contains the main idea?",
"Is true-to-original?",
"Is grammatical?",
"surpasses or performs comparably to these systems.",
"The summaries selected by it achieve the highest R-2 F 1 -score of 20.4%.",
"We further choose the summary that yields the highest score for each instance, creating an oracle set of summaries, which yield a R-2 F 1 -score of 33.4%.",
"The results indicate that, with better summary selectors, there is a great potential that we can further boost summarization performance.",
"In Figure 2, we investigate the effectiveness of our position-aware beam search (3.1).",
"The beam size K is set to { 1 , 5 , 10 , 15 , 20 } .",
"We report the average R-2 F 1 -score across summaries of all lengths.",
"Results show that our position-aware beam search is effective at decoding summaries and works robustly across a range of beam sizes.",
"A larger beam ( K =20) tends to give better results.",
"Human Evaluation We are interested in a holistic evaluation of the multiple alternative summaries produced by the generator.",
"To accomplish this, we develop a new dataset containing 500 summarization instances randomly sampled from the Gigaword test set.",
"Our generator produces 7 alternative summaries for each instance, which have varying lengths that range from L = 7 to 13 tokens.",
"We recruit human evaluators to judge the quality of each summary given its source text.",
"3 3 Our annotated dataset is available on Github at https: //github.com/ucfnlp/varying-length-summ Content Truthful Grammatical Overall Average 80.7 82.6 96.5 74.2 Best Length 82.8 86.0 97.4 77.8 Best Quality 93.0 90.8 97.0 88.2 Table 8: Results of human assessment.",
"Our annotation interface is presented in Table 7. A human annotator is instructed to read over all summaries before seeing the source text.",
"It allows him/her to effectively recognize any hallucinated content that is not found in the source text.",
"The annotator is asked to answer three yes-no questions.",
"They include",
"(a) has the summary successfully convey the main points of the source text?",
"(b) is the summary truthful to the meaning of the original?",
"(c) is the summary grammatical?",
"A native speaker creates gold-standard annotations for multiple instances, they are shared with all annotators to provide guidance.",
"Our annotators are recruited using Appen ( appen.com ).",
"It is a crowdsourcing platform similar to Amazon Mechanical Turk ( mturk.com ), but provides great quality control mechanisms to ensure high-quality work.",
"We recruit 5 annotators to judge the quality of each summary.",
"A summary is deemed admissible under a criterion if the majority answer is yes.",
"We observe that, 74.2% of summaries produced by our generator are admissible under all three criteria.",
"The results suggest that our generator is able to produce multiple, equally valid summaries for a given source text.",
"We additionally examine the percentage of admissible summaries under each criterion, results are shown in Table 8. Grammaticality has the best performance (96.5%), followed by truthfulness (82.6%) and content coverage (80.7%).",
"There appears to be room for improvement for the latter two aspects.",
"Moreover, the summaries chosen by our BEST-QUALITY selector demonstrate a high admissible rate93%, 90.8% and 97%respectively for the three criteria, suggesting the effectiveness of the selector.",
"Further, we observe a discrepancy between ROUGE and human judgments (Fabbri et al., 2020) as summaries yielding highest ROUGE scores are not always deemed admissible by human evaluators.",
"We hope this dataset provides a testbed for future summarizers to be judged on their ability to produce multiple summaries per instance rather than a single summary.",
"In Table 3, we show example system summaries and the order in which summary tokens are produced.",
"E.g., {2,5} indicate the two tokens Bo-J (Bank of Japan) are generated the 2nd and 5th place in the summary.",
"We find that our generator can effectively decide what content should be included in the summary given the limited space available, yielding summaries with varying levels of details.",
"Important spans such as calls for calm tend to be generated first, less vital ones later.",
"Our findings corroborate the hypothesis that a masked language model may enable generation in a flexible word order (Liao et al., 2020).",
"Further, we observe that the order in which tokens are generated is related to their dependencies (call for), which supports the findings of Clark et al. (2019).",
"We investigate a new approach to neural abstractive summarization that focuses on producing multiple summary hypotheses with varying lengths and levels of details.",
"Our selectors are designed to identify summaries that have the optimal length and the best overall quality.",
"The approach obtains state-of-the-art results on summarization benchmarks and opens up a potential new avenue for customizing summary selectors to suit users' needs.",
"Future work includes extending this research to long documents.",
"Our confidence-driven generator and the selectors could potentially be extended to operate on spans of text (Joshi et al., 2020) rather than individual tokens, thus allowing for efficient generation of multiple summary hypotheses and identification of admissible summaries and/or summary segments.",
"We are grateful to the reviewers for their insightful comments, which have helped us improve the paper.",
"This research was supported in part by the National Science Foundation grant IIS-1909603."
]
| [
"objective",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"objective",
"method",
"method",
"result",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"method",
"other",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other"
]
|
[
"Researchers in NLP often frame and discuss research results in ways that serve to deemphasize the field's successes, often in response to the field's widespread hype.",
"Though well-meaning, this has yielded many misleading or false claims about the limits of our best technology.",
"This is a problem, and it may be more serious than it looks: It harms our credibility in ways that can make it harder to mitigate present-day harms, like those involving biased systems for content moderation or resume screening.",
"It also limits our ability to prepare for the potentially enormous impacts of more distant future advances.",
"This paper urges researchers to be careful about these claims and suggests some research directions and communication strategies that will make it easier to avoid or rebut them.",
"Over the last few years, natural language processing has seen a wave of surprising negative results overturning previously-reported success stories about what our models can do, and showing that widely-used models are surprisingly brittle (Jia and Liang, 2017; Niven and Kao, 2019; McCoy et al., 2019).",
"This shows that many of our standard practices for evaluation and reporting can lead to unrealistically positive initial claims about what we can do.",
"The resulting hype and overclaiming, whether intentional or not, are a problem.",
"They can encourage the reckless deployment of NLP systems in high-stakes settings where they can do significant harm.",
"They also threaten the health and credibility of NLP as a research field, and thereby threaten our ability to influence applied stakeholders or attract funding.",
"Fortunately, these results have led to a surge of research and writing that proposes more thorough and cautious practices for the evaluation of model ability (Ribeiro et al., 2020; Gardner et al., 2020; Figure 1: Hype is a problem.",
"Kiela et al., 2021; Bowman and Dahl, 2021).",
"While we have only a limited ability to control the public narrative taking place through industry PR and the media, there's reason to be hopeful that we researchers are getting much better at avoiding the worst forms of overconfidence about our systems.",
"Less fortunately, this pattern of disappointment seems to have led to many instances of pessimism about model performance that are ungrounded from real empirical results.",
"This leaves room for the research community's consensus about our capabilities to fall short of our actual capabilities.",
"I call this issue underclaiming , for lack of a better term, 1 and argue that it is more dangerous than it might seem.",
"It risks our credibility and thereby limits our ability to influence stakeholders in cases where our current systems are doing real harm.",
"It also limits our ability to accurately forecast and plan for the impacts that may result from the deployment of more capable systems in the future.",
"If we can truly reach near-human-level performance on many of the core problems of NLP, we should expect enormous impacts which will be potentially catastrophic if not planned for.",
"In this paper, I lay out case studies demonstrating four types of underclaiming, focusing especially on writing and citation practices.",
"I then argue that 1 While overclaiming generally refers to overstating the effectiveness of one's own methods or ideas, the phenomenon that I call underclaiming often involves downplaying the effectiveness of preexisting methods or ideas.",
"it is a problem.",
"I close by sketching some ways of reducing the prevalence of this kind of underclaiming, including straightforward best practices in writing and evaluation, a proposed rule of thumb for writing and reviewing, improvements to tooling for analysis and benchmarking, and research directions in model performance forecasting and test set design.",
"This paper addresses the phenomenon of scholarly claims that imply state-of-the-art systems are significantly less capable than they actually are.",
"This takes on several forms, including misleading presentations of valid negative results from weak or dated baseline models, misleading claims about the limits of what is conceptually possible with machine learning, and misleading reporting of results on adversarially collected data.",
"Despite many surprises and setbacks, NLP research seems to have made genuine progress on many problems over the last few years.",
"In light of this, discussions about the limitations of systems from past years don't straightforwardly apply to present systems.",
"The first two cases that I present involve failures to contextualize claims about the failures of weaker past systems: Adversarial Examples for SQuAD Jia and Liang (2017) published one of the first demonstrations of serious brittleness in neural-network-based systems for NLU, showing that a simple algorithm could automatically augment examples from the Model Year SQuAD AS AOS ReasoNet Ensemble 2017 81 39 50 BERT-Base 2018 87 64 72 XLNet-Base 2019 89 69 77 Table 1: F1 results on the original SQuAD development set and the two Jia and Liang adversarial evaluation sets.",
"SQuAD benchmark (Rajpurkar et al., 2016) in a way that fool many state-of-the-art systems, but not humans.",
"This work prompted a wave of much-needed analysis and a corresponding lowering of expectations about the effectiveness of neural network methods.",
"However, the results in Jia and Liang predate the development of modern pretraining methods in NLP (Peters et al., 2018; Radford et al., 2018; Devlin et al., 2019), and the best systems studied in this work have more than twice the error rate of the current state of the art.",
"While I am not aware of any results from current state-of-the-art systems on this data, results from 2019 systems suggest that we are making substantial progress (Table 1).",
"We have no reason to expect, then, that the failures documented in this work are quantitatively or qualitatively similar to the failures of current systems.",
"However, papers that cite these results often present them with no discussion of the model under study, yielding misleading implications.",
"For example, the award-winning work of Linzen (2020) cites the Jia and Liang result to justify this claim: [F]or current deep learning systems: when tested on cases sampled from a distribution that differs from the one they were trained on, their behavior is unpredictable and inconsistent with that of humans The chief concern in this context is the claim that this failure applies to current deep learning systems in general, and the corresponding unjustified implication that these failures are a fundamental or defining feature of neural network language models.",
"Looking only to highly-cited works from the last two years that cite Jia and Liang, similar state-7485 ments can be found in Xu et al. (2020), Zhang et al. (2020), and others.",
"The Long Shadow of BERT While the case of Jia and Liang is especially striking since it deals with models that predate pretraining entirely, a similar effect is much more common in a subtler form: Most analysis papers that identify limitations of a system come out well after the system description paper that claims the initial (typically positive) results.",
"BERT, first released in fall 2018, has been a major locus for this kind of analysis work, and continues to be long after its release.",
"Looking to a random sample of ten papers from the NAACL 2021 analysis track that study pretrained models, 2 none of them analyze models that have come out since summer 2019, and five only study BERT, representing a median lag of nearly three years from the release of a model to the publication of the relevant analysis.",
"3 This analysis work is often valuable and these long timelines can be justifiable: Good analysis work takes time, and researchers doing analysis work often have an incentive to focus on older models to ensure that they can reproduce previously observed effects.",
"Even so, this three-year lag makes it easy to seriously misjudge our progress.",
"In particular, this trend has consequences for the conclusions that one would draw from a broad review of the recent literature on some problem: A review of that literature will contrast the successes of the best current systems against the weaknesses of the best systems from an earlier period .",
"In many cases, these weaknesses will be so severe as to challenge the credibility of the successes if they are not properly recognized as belonging to different model generations.",
"The BERT-only results, though, represent a clear missed opportunity: There exist newer models like RoBERTa and DeBERTa (Liu et al., 2019; He et al., 2020) which follow nearly identical APIs and architectures to BERT, such that it should generally be possible to reuse any BERT-oriented analysis method on these newer models without modifica-tion.",
"In many cases, these newer models are differ-2 Papers studying only BERT: White et al. (2021); Slo-bodkin et al. (2021); Bian et al. (2021); Cao et al. (2021); Pezeshkpour et al. (2021).",
"Papers studying other models predating fall 2019: Wallace et al. (2021); Hall Maudslay and Cotterell (2021); Hollenstein et al. (2021); Bitton et al. (2021); Du et al. (2021) 3 A similar analysis of the late-2021 EMNLP conference, conducted after peer review for the present paper, shows a slightly better median lag of two years.",
"ent enough in their performance that we should expect analyzing them to yield very different conclusions: For example, BERT performs slightly worse than chance on the few-shot Winograd Schema commonsense reasoning test set in SuperGLUE (Levesque et al., 2011; Wang et al., 2019), while DeBERTa reaches a near-perfect 96% accuracy.",
"How much better would our understanding of current technology be if a few of these works had additionally reported results with DeBERTa?",
"The influential work of Bender and Koller (2020) is centered on the claim that:",
"[T]he language modeling task, because it only uses form as training data, cannot in principle lead to learning of meaning.",
"The proof of this claim is straightforward and convincing under some (but not all) mainstream defini-tions of the word meaning in the context of NLP: If meaning deals with the relationship between language and some external nonlinguistic reality, then a system that can only ever interact with the world through language cannot access meaning.",
"This argument does not, on its own, make any prediction about the behavior of these models on tasks that take place entirely through the medium of language.",
"Under this definition, a translation system is acting without reference to meaning even if it has a rich, structured internal model of the world, and even it interprets sentences with reference to that model when translating: As long as that model of the world is developed solely using language, no meaning is involved.",
"4 In addition, this argument does not justify any strong prediction about the behavior of models which are trained primarily, but not exclusively , on a language modeling objective, as with models that are fine-tuned to produce non-textual outputs like labels, or models which are trained in a multimodal language-and-vision regime.",
"While this core claim is sound and important, public discussion of the paper has often repeated the claim in ways that imply stronger conclusions about model behavior.",
"Utama et al. (2020), for example, write 4 See Merrill et al. (2021) for some limits on how closely such a model can correspond to the real world and Bommasani et al. (2021, 2.6.3) for further discussion of the implications of Bender and Koller's arguments for NLP.",
"Researchers have recently studied more closely the success of large fine-tuned LMs in many NLU tasks and found that models are simply better in leveraging biased patterns instead of capturing a better notion of language understanding for the intended task (Bender and Koller, 2020).",
", misleadingly suggesting that this result deals with the outward performance of specific language models on tasks.",
"In another vein, Jang and Lukasiewicz (2021) make the straightforward claim that Bender and Koller (2020) show that it is impossible to learn the meaning of language by only leveraging the form of sentences.",
"but they then use that claim to motivate a new regularization technique for language models, which does nothing to change the fact that they are trained on form alone.",
"In this context, it is hard to avoid the incorrect inference that Bender and Koller show a specific and contingent problem with recent language modelswhich could be mitigated by better regularization.",
"Similar claims can be found in many other citing works (Utama et al., 2020; van Noord et al., 2020; Hovy and Yang, 2021; Sayers et al., 2021; Peti-Stantic et al., 2021; Jang and Lukasiewicz, 2021).",
"While Bender and Koller raise important points for discussion, these strong implications in citing works are misleading and potentially harmful.",
"Adversarially collected test sets (Bartolo et al., 2020; Nie et al., 2020; Kiela et al., 2021)or test sets composed of examples that some target system gets wronghave recently become a popular tool in the evaluation of NLP systems.",
"Datasets of this kind are crowdsourced in a setting where an example-writer can interact with a model (or ensemble) in real time and is asked to come up with examples on which the model fails.",
"Writers are generally incentivized to find these failure cases, and the test section(s) of the resulting dataset will generally consist exclusively of such cases.",
"This process produces difficult test sets and it can be a useful tool in understanding the limits of existing training sets and models (Williams et al., 2020).",
"However, the constraint that a specified system must fail on the test examples makes it difficult to infer much from absolute measures of test-set performance: As long as a model makes any errors at all on any possible inputs, then we expect it to be possible to construct an adversarial test set against the model, and we expect the model to achieve zero test accuracy on that test set.",
"We can further infer that any models that are sufficiently similar to the adversary should also perform very poorly on this test set, regardless of their ability.",
"Neither of these observations would tell us anything non-trivial about the actual abilities of the models.",
"What's more, in many NLU data collection efforts, a large share of annotator disagreements represent subjective judgments rather than clear-cut errors (Pavlick and Kwiatkowski, 2019).",
"This means that even a perfectly careful and perfectly well-qualified human annotator should be expected to disagree with the majority judgment on some examples, and will thereby be coded as having made errors.",
"It is, therefore, possible to create an adversarial test set for which a careful human annotator would achieve 0% accuracy.",
"Absolute performance numbers on adversarially-collected test sets are meaningless as measures of model capabilities.",
"Adversarially-collected test sets are often used in standard experimental paradigms, and these caveats about the interpretation of results are not always clear when numbers are presented.",
"Sampling papers that cite Nie et al. (2020), for example, it is easy to find references that do not mention the adversarial design of the data and that therefore make claims that are hard to justify: 5 Talmor et al. (2020) use the results from Nie et al. to claim that LMs do not take into account the presence of negation in sentences, and Hidey et al. (2020) use them to justify the claim that examples for numerical reasoning and lexical inference have been shown to be difficult.",
"Bender et al. (2021) misleadingly describe a form of adversarial data collection 6 as a method for the careful manipulation of the test data to remove spurious cues the systems are lever-aging, and cite results on such data to argue that no actual language understanding is taking place 5 I focus here about claims about the absolute performance level of models. Whether adversarially collected test sets are appropriate for comparing the relative effectiveness of models is a largely orthogonal issue (Bowman and Dahl, 2021; Kaushik et al., 2021; Phang et al., 2021). 6 AFLite (Bras et al., 2020) uses ensembles of weak models to filter data. This avoids the most direct 0% accuracy concerns, but it can still provide arbitrarily large distortions to absolute performance in a way that is disconnected from any information about the skill or task that a dataset is meant to test. 7487 in LM-driven approaches.",
"Liu et al. (2020) similarly use absolute results on the adversary models to back up the trivial but easily-misread claim that BERT-style models may still suffer catastrophic failures in adversarial scenarios. 3 A Word on Hype The previous section has laid out some ways in which the mainstream NLP research community makes unjustifiable claims about the limitations of state-of-the-art methods.",
"These claims do not make the opposite phenomenon, hype , any less real or any less harmful.",
"While hype is likely most severe in industry PR and in the media, 7 it is nonetheless still prevalent in the research literature.",
"In one especially clear example, a prominent paper claiming of human parity in machine translation performance (Hassan et al., 2018) severely overstates what has been accomplished relative to commonsense intuitions about what a human-level translation system would do (Toral et al., 2018; Lubli et al., 2018; Zhang and Toral, 2019; Graham et al., 2020).",
"I do not aim to argue that overclaiming or hype is acceptable or safe.",
"Combating hype should be fully compatible with the goals laid out in this paper, and broad-based efforts to improve our practices in evaluation, analysis, writing, and forecasting should help reduce both underclaiming and hype.",
"Research papers are generally most useful when they're true and informative.",
"A research field that allows misleading claims to go unchallenged is likely to waste its time solving problems that it doesn't actually have, and is likely to lose credibility with serious funders, reporters, and industry stakeholders.",
"This is the most obvious reason that we should be concerned about underclaiming, but it is not the whole story.",
"This loss of insight and credibility can seriously challenge our ability to anticipate, understand, and manage the impacts of deploying NLP systems.",
"This is especially true of impacts that are contingent on NLP technologies actually working well , which we should expect will become more substantial as time goes on.",
"The deployment of modern NLP systems has had significant positive and negative impacts on the",
"world.",
"Researchers in NLP have an ethical obligation to inform (and if necessary, pressure) stakeholders about how to avoid or mitigate the negative impacts while realizing the positive ones.",
"Most prominently, typical applied NLP models show serious biases with respect to legally protected attributes like race and gender (Bolukbasi et al., 2016; Rudinger et al., 2018; Parrish et al., 2021).",
"We have no reliable mechanisms to mitigate these biases and no reason to believe that they will be satisfactorily resolved with larger scale.",
"Worse, it is not clear that even superhuman levels of fairness on some measures would be satisfactory: Fairness norms can conflict with one another, and in some cases, a machine decision-maker will be given more trust and deference than a human decision-maker would in the same situation (see, e.g., Rudin et al., 2020; Fazelpour and Lipton, 2020).",
"We thus are standing on shaky moral grounds when we deploy present systems in high-impact settings, but they are being widely deployed anyway (e.g. Dastin, 2018; Nayak, 2019; Dansby et al., 2020).",
"Beyond bias, similar present-day concerns can be seen around issues involving minority languages and dialects, deceptive design, and the concentration of power (Joshi et al., 2020; Bender et al., 2021; Kenton et al., 2021, 3.3).",
"Persuading the operators of deployed systems to take these issues seriously, and to mitigate harms or scale back deployments when necessary, will be difficult.",
"Intuitively, researchers concerned about these harms may find it appealing to emphasize the limitations of models in the hope that this will discourage the deployment of harmful systems.",
"This kind of strategic underclaiming can easily backfire: Models are often both useful and harmful, especially when the operator of the system is not the one being harmed.",
"If the operator of some deployed system sees firsthand that a system is effective for their purposes, they have little reason to trust researchers who argue that that same system does not understand language , or who argue something similarly broad and negative.",
"They will then be unlikely to listen to those researchers' further claims that such a system is harmful, even if those further claims are accurate.",
"We can reasonably expect NLP systems to improve over the coming decades.",
"Even if intellectual progress from research were to slow, the dropping 7488 price of compute should allow us to continue to reap the benefits of larger-scale training (Kaplan et al., 2020; Brown et al., 2020).",
"This improvement in capabilities is likely to amplify both the harms and benefits of language technology.",
"We have good reason to expect that this further progress in NLP, over many years or decades, will lead to upheavals in areas like education, medicine, law, and the service sector more broadly, as well as making mass surveillance and misinformation campaigns far more effective and opening up additional new use cases that will be hard for us to foresee (Brundage et al., 2018; Tamkin et al., 2021; Bommasani et al., 2021).",
"One can reasonably expect that the positive and negative impacts of these upheavals will far exceed the impacts that our technologies have produced to date.",
"In turn, NLP researchers who want to ensure that their career has a net-positive impact on the world should be concerned with these possibilities.",
"How does this relate to underclaiming?",
"It will be difficult to do the necessary technical, social, and governance work to prepare for these advances if we do not have a clear picture of our current capabilities, and it will be difficult to convince outside stakeholders to act appropriately to mitigate these risks if we don't acknowledge that we have made, and are making, real progress toward effective language technology.",
"Looking somewhat further into the future, a substantial community of philosophers, economists, and general ML researchers are concerned that highly-capable AI systemsof the kind that could plausibly be developed through existing ML research paradigmsare extremely dangerous by default (Bostrom, 2012; Critch and Krueger, 2020; Christian, 2020; Ord, 2020; Russell and Norvig, 2020).",
"Expert forecasts suggest that this could take place within a few decades (Grace et al., 2018).",
"If these hypotheses hold, and if we are poorly prepared for these developments, the worst-case outcomes could be catastrophic, even threatening the existence of human civilization on some views.",
"Investments in research into these potential catastrophic risks from advanced machine learning have become substantial: Funding from one foundation alone has totaled over $200M USD.",
"8 Concerns about risks from AI have also been the stated motivation for a significant fraction of the work from 8 https://www.openphilanthropy.org/ giving/grants DeepMind and OpenAI, which both have access to even greater amounts of funding.",
"The British Prime Minister Boris Johnson recently made a speech calling for further investment on the floor of the UN General Assembly (Nations, 2019).",
"Spurred on in particular by the shift in emergent capabilities from GPT-2 to GPT-3, the attention of these AI risk researchers has also been increasingly centered on language models and similar self-supervised multimodal models (Irving et al., 2018; Stiennon et al., 2020; Hendrycks et al., 2020; Kenton et al., 2021; Wu et al., 2021; Bommasani et al., 2021, 4.9).",
"Despite the scale of this research, and its recent shift of focus toward language models, there has been little interaction between the research communities working on long-term AI risk and on NLP.",
"The facts that AI risk research is growing in influence and that it is increasingly focused on language models put NLP in an exceptionally strange and troubling situation as a field.",
"To the extent that these concerns are valid, they represent an urgent call for reprioritization within NLP research to favor safety-relevant areas like interpretability, control, and evaluation over scaling, and to push for better oversight and regulation of large-scale research (Dafoe, 2018): Even a small risk of a globally significant catastrophe warrants a dramatic response.",
"On the other hand, to the extent that these concerns are unfounded or are built on misunderstandings about the possible trajectories of ML research, it would be quite valuable to correct this misunderstanding.",
"Correcting the record could redirect these resources and, more significantly, reduce the risk that popular or regulatory pressure will snuff out the positive potential of NLP technologies.",
"Because these more speculative concerns around advanced artificial intelligence are rarely discussed in the NLP literature, I will here offer a brief overview of that work.",
"Recent writing tends to focus on four clusters of hypotheses: Unaccountable Organizations Highly-capable AI is likely to lead to highly-profitable applications, making the institutions that first develop it quite powerful.",
"It is also likely to be able to displace human labor in technical fields to a large extent, increasing the relative value of capital over labor, and making it easier for the leaders of these organiza-7489 tions to take unpopular actions unilaterally.",
"In the longer term, highly-capable AI may also contribute to the effectiveness of persuasion campaigns, further insulating these organizations from outside pressure.",
"These forces could conspire to make the companies or governments that first produce highly-capable AI almost entirely unaccountable, and allowing their decisions to play a major role in the trajectory of humanity as a whole (Ord, 2020).",
"Alignment and Robustness Failures Even if a system is deployed by an actor with good intentions and substantial oversight, good outcomes are not guaranteed.",
"As AI systems become more capable, they become capable of effectingdirectly or indirectlysignificant force on the outside world.",
"In these cases, it becomes crucial that they behave in ways that we would endorse, even when they are pushed into unfamiliar new situations.",
"This requires both that the systems be optimized for the right objectives and that the systems actually internalize and generalize those objectives correctly.",
"Specifying and using safe objectives, such that aggressively optimizing them does not produce catastrophic outcomes, is difficult (Critch and Krueger, 2020).",
"Human preferences are complex, making the problem of specifying an objective that rules out unintended bad behavior non-trivial.",
"Goodhart's law 9 means that many objectives that serve as good proxies for what we want in in familiar situations can break down in new situations.",
"Further, training large models with high precision is difficult.",
"A small flaw in a highly-capable system's learned understanding of its objective can cause catastrophic failures, even if the true intended objective would have been safe (Hubinger et al., 2019).",
"Instrumentally-Convergent Subgoals The instrumental convergence hypothesis holds that systems that are optimizing for benign objectives, once they become sufficiently capable, have a predictable reason to take on dangerous subgoals like accumulating large amounts of computational, economic, or political powerto maximize the odds that their primary objectives are achieved (Bostrom, 2003; Omohundro, 2008; Bostrom, 2012).",
"10 Even with merely near-human-like lev-9 in the formulation of Strathern (1997): When a measure becomes a target, it ceases to be a good measure. 10 This is exemplified by the thought experiment of the paperclip maximizer (Figure 3), which points out that a machine tasked with manufacturing as many paperclips as possible, Figure 3: Downplaying the capabilities of current ML systems makes it less likely that we'll be well prepared for the impacts that come from developing highly-capable future sytsems.",
"els of performance, the ability of computational models to be copied and accelerated gives them considerable leeway to act in un-human-like ways.",
"Systems that interact with humans only through text, or systems whose goals are circumscribed to a well-defined task like question answering, are not exempt from this concern (Armstrong et al., 2012).",
"Risks Will Be Difficult to Spot Human-level capabilities are likely to emerge first from large machine learning models that, like modern neural networks, are not directly interpretable.",
"This means that it may be difficult to spot ways in which a model is unsafe or to forecast ways in which its behavior might change in novel settings (Critch and Krueger, 2020).",
"Further, we should expect highly-capable AI systems to be useful in the short term, giving potential users a strong incentive to deploy them as soon as they are affordable, even if their safety is not guaranteed.",
"This means that it is not enough that it simply be possible for us to develop safe systems, it is additionally necessary that it be nearly as easy and nearly as affordable as developing unsafe systems (Irving et al., 2018).",
"So What?",
"None of these arguments is conclusive in its current form, but as far as I am aware, all have resisted straightforward attempts at falsifica-tion.",
"All four are potentially applicable to neural if sufficiently capable, should be expected to turn nearly all matter on earth into paperclips.",
"While this vision of a single system acting alone on such a trivial objective is unrealistic, it demonstrates the key hypothesis that almost any reasonable-sounding goal starts to conflict with basic human needs if a sufficiently capable system pursues it single-mindedly.",
"network-based models and to models which operate primarily through language.",
"While the nascent field of AI alignment has proposed some mechanisms by which we might mitigate these risks, work in this area is still largely exploratory, with no clear research agenda in place to ensure that powerful models will be safe (Hadfield-Menell et al., 2016; Irving et al., 2018; Critch and Krueger, 2020; Kenton et al., 2021; Askell et al., 2021).",
"If these arguments hold, significant further work is needed to avoid catastrophe.",
"This will be difficult to achieve without a clear accounting of the abilities and limitations of current and plausible near-future systems.",
"In particular, we will need enough foresight to be able to see substantial progress of this kind coming well in advance, to avoid the complacency that comes with the perception that worrying about impacts from powerful AI is like worrying about overpopulation on mars (Garling, 2015, quoting Andrew Ng).",
"The core issue in this paper is one of sloppy communication about results.",
"The most straightforward step that we can take to remedy underclaiming is to simply use the same practices that we already use to avoid overclaiming: The peer-review process already polices overclaiming to a significant extent, and most researchers have learned to be careful about overclaiming in their writing.",
"We should apply high standards of evidence to our own empirical claims and those of others, both in peer-reviewed venues and in more informal scientific communication, even when those claims are negative and cloaked in a frame of individual or field-level modesty.",
"Beyond this, there are specific best practices or research directions that can help make these mistakes harder to make: A Rule of Thumb In light of the issues with negative results on older models discussed in Section 2.1, it could be productive to introduce a new heuristic when reviewing or evaluating papers that discuss model failures.",
"11 In the spirit of the Bender Rule (Bender, 2019), I propose: 11 While a corresponding rule could be helpful in the context of results describing the success of a machine learning system on some evaluation, the asymmetry here is intentional: Successes are likely to be deliberately replicated from one generation of models to the next, while the opposite is true of failures.",
"When describing the failure of a machine learning model on some empirical evaluation, make it clear i.",
"what kind of model has failed, ii.",
"whether the model is significantly less capable than the current state of the art in the domain, and iii.",
"whether the evaluation was deliberately set up to trick that model or another model like it.",
"Better Evaluation The pervasiveness of underclaiming can likely be attributed in part to the ineffectiveness of current evaluation practices in many areas of NLP.",
"When impressive numbers on widely-used benchmarks are usually followed by disappointment, suggesting that good evaluation numbers don't translate to effective systems, it is rational to treat new encouraging results with extreme skepticism.",
"Better benchmarks and evaluation practices could help mitigate this by providing a firmer ground on which to make positive claims about system capacities.",
"12 In practice, research into more effective crowdsourcing and benchmark design and research into better statistical reporting and publication norms (Dodge et al., 2019; Card et al., 2020; Rogers and Augenstein, 2020; van Miltenburg et al., 2021) seem especially high-impact under this lens.",
"Better Analysis We can help address the time-lag issue discussed in Section 2.1 by building tooling to make it easier to adapt existing analysis techniques to new models seamlessly.",
"Leaderboards that integrate conventional benchmarking with analysis can be especially helpful by making this largely automatic (Wang et al., 2018; Dua et al., 2019; Gehrmann et al., 2021; Ma et al., 2021).",
"More broadly, careful analysis work, targeted at broadly understanding the capacities of capable models, will be valuable in helping to forecast and mitigate the worst risks from future systems (Elhage et al., 2021; Ganguli et al., 2022).",
"Better Forecasting Scaling laws results in NLP (Hestness et al., 2017; Kaplan et al., 2020; Brown et al., 2020; Zhang et al., 2021) offer the promise that we can predict the performance of future larger-scale machine learning models on at least some 12 Though Raji et al. (2021) point out ways in which better benchmarking alone is unlikely to be fully satisfactory.",
"metrics.",
"This line of work is still nascent, and successes to date have largely focused on loss values rather than more interpretable measures of capability.",
"Further developing these methods, as well as others that allow us to better forecast near future progress, should be helpful.",
"Better forecasting will provide a useful way to sanity-check future claims (DellaVigna et al., 2019) and will help improve the responsiveness of model analysis by enabling us to prepare analysis methods and datasets that anticipate future capabilities.",
"While much of this paper discusses the state of the NLP literature, a few related works warrant emphasis as starting points for further reading:",
"Bender and Koller (2020), Bender et al. (2021), and Raji et al. (2021) discuss the role of hype in driving bad outcomes from the development of language technology.",
"Jin et al. (2021) and Rogers (2021) offer broader discussion of how to ensure that the net impact of near-future NLP deployments on the world is positive.",
"Morris et al. (2020) and Hauser et al. (2021) highlight overly strong negative claims in papers analyzing models' robustness to synonym substitution.",
"Looking to the longer term, Bommasani et al. (2021, 4.9) provides an introduction to the AI risk and AI alignment literature from a perspective that emphasizes NLP and language.",
"Welty et al. (2019), Linzen (2020), Ribeiro et al. (2020), Raji et al. (2021), Bowman and Dahl (2021), and De-hghani et al. (2021), among many others, discuss the challenges involved in designing evaluations that yield trustworthy and accurate depictions of the capabilities of ML models.",
"Like many research fields that have a tight connection to technological practice, NLP has long struggled to avoid inflated expectations about the capabilities of state-of-the-art tools.",
"This remains a serious issue.",
"However, this paper argues that our attempts to avoid hype often overshoot: Instead of merely correcting overly optimistic claims about our capabilities, we replace them with overly pessimistic claims.",
"Making misleading claims is generally a bad sign for the health and credibility of a scientific field, and the stakes are high: NLP technologies are implicated in a range of serious real-world harms, and plausible future elaborations of these technologies are potentially much more dangerous still.",
"Our ability to mitigate existing harms will depend on our ability to make reliably credible claims about the limitations of our systems.",
"Our ability to mitigate future harms will depend on our ability to accurately anticipate, recognize, agree upon, and report upon emerging capabilities.",
"Both of these goals are seriously hampered by claims that current technologies are less capable than they in fact are.",
"Better evaluation, better tooling for model analysis, and better mechanisms for technical forecasting should all contribute to making these pessimistic claims easier to avoid or debunk.",
"However, this problem is ultimately one of scientific communication, and to solve it fully, we will need to use the tools and norms of science to better police false or misleading claims.",
"The stakes are high.",
"The ideas and arguments in this work were developed through conversation (live or on Twitter) with more researchers than I can name, including Emiel van Miltenburg, Deb Raji, Paul Christiano, Geoffrey Irving, Rohin Shah, Jacob Steinhardt, Catherine Olsson, Nick Beckstead, Alex Tamkin, Daniel Dewey, Alex Ray, and Robin Jia, as well as audiences at UT Austin, Georgia Tech, UPenn, University College London, CMU, Bar Ilan University, Tel Aviv University, Technion, Hebrew University, Unbabel, Instituto Superior Tcnico, UChicago, and TTI-C, and contributors to the AI Alignment Forum.",
"Anna Rogers, Owain Evans, Alex Wang, Jacob Steinhardt, Jason Phang, Jared Kaplan, Alex Cristia, Tal Linzen, Jonathan Uesato, and four anonymous ARR reviewers provided feedback on drafts.",
"Any errors or dubious rhetoric are my own.",
"This project has benefited from financial support by Eric and Wendy Schmidt (made by recommendation of the Schmidt Futures program) and Apple.",
"This material is based upon work supported by the National Science Foundation under Grant Nos. 1922658 and 2046556.",
"Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation."
]
| [
"abstain",
"abstain",
"method",
"method",
"abstain",
"result",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other"
]
|
[
"The named concepts and compositional operators present in natural language provide a rich source of information about the abstractions humans use to navigate the world.",
"Can this linguistic background knowledge improve the generality and efficiency of learned classifiers and control policies?",
"This paper aims to show that using the space of natural language strings as a parameter space is an effective way to capture natural task structure.",
"In a pretraining phase, we learn a language interpretation model that transforms inputs (e.g. images) into outputs (e.g. labels) given natural language descriptions.",
"To learn a new concept (e.g. a classifier), we search directly in the space of descriptions to minimize the interpreter's loss on training examples.",
"Crucially, our models do not require language data to learn these concepts: language is used only in pretraining to impose structure on subsequent learning.",
"Results on image classification, text editing, and reinforcement learning show that, in all settings, models with a linguistic parameterization outperform those without.",
"1 1 Introduction The structure of natural language reflects the structure of the world.",
"For example, the fact that it is easy for humans to communicate the concept left of the circle but comparatively difficult to communicate mean saturation of the first five pixels in the third column reveals something about the abstractions we find useful for interpreting and navigating our environment (Gopnik and Meltzoff, 1987).",
"In machine learning, efficient automatic discovery of reusable abstract structure remains a major challenge.",
"This paper investigates whether 1 Code and data are available at https://github.com/ jacobandreas/l3 .",
"background knowledge from language can provide a useful scaffold for acquiring it.",
"We specifically propose to use language as a latent parameter space for few-shot learning problems of all kinds, including classification, transduction and policy search.",
"We aim to show that this linguistic parameterization produces models that are both more accurate and more interpretable than direct approaches to few-shot learning.",
"Like many recent frameworks for multitask-and meta-learning, our approach consists of three phases: a pretraining phase, a concept-learning phase, and an evaluation phase.",
"Here, the product of pretraining is a language interpretation model that maps from descriptions to predictors (e.g. image classifiers or reinforcement learners).",
"Our thesis is that language learning is a powerful, general-purpose kind of pretraining, even for tasks that do not directly involve language.",
"New concepts are learned by searching directly in the space of natural language strings to mini-2166 ?",
"mize the loss incurred by the language interpretation model (Figure 1).",
"Especially on tasks that require the learner to model high-level compositional structure shared by training examples, natural language hypotheses serve a threefold purpose: they make it easier to discover these compositional concepts, harder to overfit to few examples, and easier for humans to understand inferred patterns.",
"Our approach can be implemented using a standard kit of neural components, and is simple and general.",
"In a variety of settings, we find that the structure imposed by a natural-language parameterization is helpful for efficient learning and exploration.",
"The approach outperforms both multitaskand meta-learning approaches that map directly from training examples to outputs by way of a real-valued parameterization, as well as approaches that make use of natural language annotations as an additional supervisory signal rather than an explicit latent parameter.",
"The natural language concept descriptions inferred by our approach often agree with human annotations when they are correct, and provide an interpretable debugging signal when incorrect.",
"In short, by equipping models with the ability to think out loud when learning, they become both more comprehensible and more accurate.",
"Suppose we wish to solve an image classification problem like the one shown in Figure 2bc, mapping from images x to binary labels y .",
"One straightforward way to do this is to solve a learning problem of the following form: arg min HX ( x,y ) L ( f ( x ; ) , y ) , (1) where L is a loss function and f is a richly-parameterized class of models (e.g. convolutional networks) indexed by (e.g. weight matrices) that map from images to labels.",
"Given a new image x 0 , f ( x 0 ; ) can be used to predict its label.",
"In the present work, we are particularly interested in few-shot learning problems where the number of ( x, y ) pairs is smallon the order of five or ten examples.",
"Under these conditions, directly solving Equation 1 is a risky proposition any model class powerful enough to capture the true relation between inputs and outputs is also likely to overfit.",
"For few-shot learning to be successful, extra structure must be supplied to the learner.",
"Existing approaches obtain this structure by either carefully structuring the hypothesis space or providing the learner with alternative training data.",
"The approach we present in this paper combines elements of both, so we begin with a review of existing work.",
"(Inductive) program synthesis approaches (e.g. Gulwani, 2011) reduce the effective size of the hypothesis class H by moving the optimization problem out of the continuous space of weight vectors and into a discrete space of formal program descriptors (e.g. regular expressions or Pro-log queries).",
"Domain-specific structure like version space algebras (Lau et al., 2003) or type systems (Kitzelmann and Schmid, 2006) can be brought to bear on the search problem, and the bias inherent in the syntax of the formal language provides a strong prior.",
"But while program synthesis techniques are powerful, they are also limited in their application: a human designer must hand-engineer the computational primitives necessary to compactly describe every learnable hypothesis.",
"While reasonable for some applications (like string editing), this is challenging or impossible for others (like computer vision).",
"An alternative class of multitask learning approaches (Caruana, 1998) import the relevant structure from other learning problems rather than defining it manually (Figure 2a, top).",
"Since we may not know a priori what set of learning problems we ultimately wish to evaluate on, it is useful to think of learning as taking places in three phases: 2167 1. a pretraining (or meta-training) phase that makes use of various different datasets i with examples { ( x ( i ) 1 , y ( i ) 1 ) , . . . , ( x ( i ) n , y ( i ) n ) } (Figure 2a) 2. a concept-learning phase in which the pretrained model is adapted to fit data { ( x ( c ) 1 , y ( c ) 1 ) , . . . , ( x ( c ) n , y ( c ) n ) } for a specific new task (Figure 2b) 3. an evaluation phase in which the learned concept is applied to a new input x ( e ) to predict y ( e ) (Figure 2c) In these approaches, learning operates over two collections of parameters: shared parameters and task-specific parameters . In pretraining, multitask approaches find: arg min R a , ( i ) R b X i,j L (cid:0) f ( x ( i ) j ; , ( i ) ) , y ( i ) j (cid:1) . (2) At concept learning time, they solve for: arg min ( c ) R b X j L (cid:0) f ( x ( c ) j ; , ( c ) ) , y ( c ) j (cid:1) (3) on the new dataset, then make predictions for new inputs using f ( x ( e ) ; , ( c ) ) . Closely related meta-learning approaches (e.g. Schmidhuber, 1987; Santoro et al., 2016; Vinyals et al., 2016) make use of the same data, but collapse the inner optimization over ( c ) and prediction of y ( e ) into a single learned model. 3 Learning with Language In this work, we are interested in developing a learning method that enjoys the benefits of both approaches. In particular, we seek an intermediate language of task representations that, like in program synthesis, is both expressive and compact, but like in multitask approaches is learnable directly from training data without domain engineering. We propose to use natural language as this intermediate representation. We call our approach learning with latent language (L 3 ). Natural language shares many structural advantages with the formal languages used in synthesis approaches: it is discrete, has a rich set of compositional operators, and comes equipped with a natural description length prior. But it also has a considerably more flexible semantics. And crucially, plentiful annotated data exists for learning this semantics: we cannot hand-write a computer program to recognize a small dog , but we can learn how to do it from image captions. More basically, the set of primitive operators available in language provides a strong prior about the kinds of abstractions that are useful for natural learning problems. Concretely, we replace the pretraining phase above with a language-learning phase. We assume that at language-learning time we have access to natural-language descriptions w ( i ) (Fig-ure 2a, bottom). We use these w as parameters , in place of the task-specific parameters that is, we learn a language interpretation model f ( x ; , w ) that uses shared parameters to turn a description w into a function from inputs to outputs. For the example in Figure 2, f might be an image rating model (Socher et al., 2014) that outputs a scalar judgment y of how well an image x matches a caption w . Because these natural language parameters are observed at language-learning time, we need only learn the real-valued shared parameters used for their interpretation (e.g. the weights of a neural network that implements the image rating model): arg min R a X i,j L (cid:0) f ( x ( i ) j ; , w ( i ) ) , y ( i ) j (cid:1) . 
(4) At concept-learning time, conversely, we solve only the part of the optimization problem over natural language strings: arg min w ( c ) X j L (cid:0) f ( x ( c ) j ; , w ( c ) ) , y ( c ) j (cid:1) . (5) This last step presents something of a challenge. When solving the corresponding optimization problem, synthesis techniques can exploit the algebraic structure of the formal language, while end-to-end learning approaches take advantage of differentiability. Here we can't do eitherthe language of strings is discrete, and any structure in the interpretation function is wrapped up inside the black box of f .",
"Inspired by related techniques aimed at making synthesis more efficient (Devlin et al., 2017), we use learning to help us develop an effective optimization procedure for natural language parameters.",
"In particular, we simply use the language-learning datasets, consisting of pairs ( x ( i ) j , y ( i ) j ) and descriptions w i , to fit a reverse proposal model, estimating: arg max P i log q ( w i | x ( i ) 1 , y ( i ) 1 , . . . , x ( i ) n , y ( i ) n ; ) 2168 true a white shape is left of a yellow semicircle true true true true Figure 3: The few-shot image classification task.",
"where q provides a (suitably normalized) approximation to the distribution of descriptions given task data.",
"In the running example, this proposal distribution is essentially an image captioning model (Donahue et al., 2015).",
"By sampling from q , we expect to obtain candidate descriptions that are likely to obtain small loss.",
"But our ultimate inference criterion is still the true model f : at evaluation time we perform the minimization in Equation 5 by drawing a fixed number of samples, selecting the hypothesis w ( c ) that obtains the lowest loss, and using f ( x ( e ) ; , w ( c ) ) to make predictions.",
"What we have described so far is a generic procedure for equipping collections of related learning problems with a natural language hypothesis space.",
"In Sections 4 and 5, we describe how this procedure can be turned into a concrete algorithm for supervised classification and sequence prediction.",
"In Section 6, we describe how to extend these techniques to reinforcement learning.",
"We begin by investigating whether natural language can be used to support high-dimensional few-shot classification.",
"Our focus is on visual reasoning tasks like the one shown in Figure 3. In these problems, the learner is presented with four images, all positive examples of some visual concept like a blue shape near a yellow triangle , and must decide whether a fifth, held-out image matches the same concept.",
"These kinds of reasoning problems have been well-studied in visual question answering settings (Johnson et al., 2017; Suhr et al., 2017).",
"Our version of the problem, where the input and output feature no text data, but an explanation must be inferred, is similar to the visual reasoning problems proposed by Raven (1936) and Bongard (1968).",
"To apply the recipe in Section 2, we need to specify an implementation of the interpretation model f and the proposal model q .",
"We begin by computing representations of input images x .",
"We start with a pre-trained 16-layer VGGNet (Si-monyan and Zisserman, 2014).",
"Because spatial information is important for these tasks, we extract a feature representation from the final convolutional layer of the network.",
"This initial featurization is passed through two fully-connected layers to form a final image representation, as follows: x VGG-16 FC ReLU FC rep( ) x We define interpretation and proposal models: 2 f ( x ; w ) = (cid:0) rnn-encode ( w ) > rep ( x ) (cid:1) q ( w | { x j } ) = rnn-decode (cid:0) w | 1 n P j rep ( x j ) (cid:1) The interpretation model f outputs the probability that x is assigned a positive class label, and is trained to maximize log-likelihood.",
"Because only positive examples are provided in each language learning set, the proposal model q can be defined in terms of inputs alone.",
"Details regarding training hyperparameters, RNN implementations, etc. may be found in Appendix A. Our evaluation aims to answer two questions.",
"First, does the addition of language to the learning process provide any benefit over ordinary multitask or meta-learning?",
"Second, is it specifically better to use language as a hypothesis space for concept learning rather than just an additional signal for pretraining?",
"We use several baselines to answer these questions: 1. Multitask : a multitask baseline in which the definition of f above is replaced by ( > i rep ( x )) for task-specific parameters i that are optimized during both pretraining and concept-learning.",
"2. Meta : a meta-learning baseline in which f is defined by ([ 1 n P j rep ( x j )] > rep ( x )) .",
"3 2 Suppressing shared parameters and for clarity.",
"3 Many state-of-the-art approaches to meta-learning for classification (e.g. Snell et al., 2017) are not well-defined for possibly-overlapping evaluation classes with only positive examples provideded.",
"Here we have attempted to provide a robust implementation that is as close as possible to the other systems under evaluation.",
"3. Meta+Joint : as in Meta , but the pretraining objective includes an additional term for predicting q (discarded for concept learning).",
"We report results on a dataset derived from the ShapeWorld corpus of Kuhnle and Copestake (2017).",
"In this dataset the held-out image matches the target concept 50% of the time.",
"In the validation and test folds, half of learning problems feature a concept that also appears in the language learning set (but with different exemplar images), while the other half feature both new images and a new concept.",
"Images contain two or three dis-tractor shapes unrelated to the objects that define the target concept.",
"Captions in this dataset were generated from DMRS representations using an HPS grammar (Copestake et al., 2016).",
"(Our remaining experiments use human annotators.)",
"The dataset contains a total of 9000 pretraining tasks and 1000 of each validation and test tasks.",
"More dataset statistics are provided in Appendix B. Results are shown in Table 1. It can be seen that L 3 provides consistent improvements over the baselines, and that these improvements are present both when identifying new instances of previously-learned concepts and when discovering new ones.",
"Some example model predictions are shown in Figure 4.",
"The model often succeeds in making correct predictions, even though its inferred descriptions rarely match the ground truth.",
"Sometimes this is because of inherent ambiguity in the description language (Figure 4a), and sometimes because the model is able to rule out candidates on the basis of partial captions alone (Fig-ure 4b, where it is sufficient to recognize that the Model Val (old) Val (new) Val Test Random 50 50 50 50 Multitask 64 49 57 59 Meta 63 62 62 64 Meta+Joint 63 69 66 64 L 3 (ours) 70 72 71 70 L 3 (oracle) 77 80 79 78 Table 1: Evaluation on image classification.",
"target concept involves a circle ).",
"More examples are provided in Appendix C. 5 Programming by Demonstration Next we explore whether the same technique can be applied to tasks that involve more than binary similarity judgments.",
"We focus on structured prediction: specifically a family of string processing tasks.",
"In these tasks, the model is presented with examples of five strings transformed according to some rule; it must then apply an appropriate transformation to a sixth (Figure 5).",
"Learning proceeds as in the previous section, with: rep ( x, y ) = rnn-encode ([ x, y ]) f ( y | x ; w ) = rnn-decode (cid:0) y | [ rnn-encode ( x ) , rnn-encode ( w )] (cid:1) q ( w | { ( x j , y j ) } ) = rnn-decode (cid:0) w | 1 n P j rep ( x j , y j ) (cid:1) Baselines are analogous to those for classification.",
"While string editing tasks of the kind shown in Figure 5 are popular in both the programming by demonstration literature (Singh and Gulwani, 2012) and the semantic parsing literature (Kush-man and Barzilay, 2013), we are unaware of any datasets that support both learning paradigms at the same time.",
"We have thus created a new dataset of string editing tasks by (1) sampling random regular transducers, (2) applying these transducers to collections of dictionary words, and (3) showing the collected examples to Mechanical Turk users a blue cross is above a pentagon a cyan pentagon is to the right of a magenta shape false true",
"(c) examples true description true label pred.",
"label a square is above a red cross a red cross is below a square true true a circle is above a yellow circle a cyan circle is to the left of a rectangle false false Figure 4: Example predictions for image classification.",
"The model achieves high accuracy even though predicted descriptions rarely match the ground truth.",
"High-level structure like the presence of certain shapes or spatial relations is consistently recovered.",
"and asking them to provide a natural language explanation with their best guess about the underlying rule.",
"The dataset thus features both multi-example learning problems, as well as structured and unstructured annotations for each target concept.",
"There are 3000 tasks for language learning and 500 tasks for each of validation and testing (Appendix B).",
"Annotations are included in the code release for this paper.",
"Results are shown in Table 2. In these experiments, all models that use descriptions have been trained on the natural language supplied by human annotators.",
"While we did find that the Meta+Joint model converges considerably faster than all the others, its final performance is somewhat lower than the baseline Meta model.",
"As before, L 3 outperforms alternative approaches for learning directly from examples with or without descriptions.",
"Because all of the transduction rules in this dataset were generated from known formal descriptors, these tasks provide an opportunity to perform additional analysis comparing natural language to more structured forms of annotation (since we have access to ground-truth regular expressions) and more conventional synthesis-based methods (since we have access to a ground-truth regular expression execution engine).",
"We additionally investigate the effect of the number of warding curved uranium pedaled drum warying curved uranium peyaled drum replace d before a vowel with y s/d([aeiou])/y\\1/g chided chiyed Figure 5: Example string editing task.",
"plummest bereaving eddied plummesti bereavinti eddieti replace the last letter of the word with t i mistrials Figure 6: Example predictions for string editing.",
"samples drawn from the proposal model.",
"These results are shown in Table 3. A few interesting facts stand out.",
"Under the ordinary evaluation condition (with no ground-truth annotations provided), language-learning with natural language data is actually better than language-learning with regular expressions.",
"This might be because the extra diversity helps the model determine the relevant axes of variation and avoid overfitting to individual strings.",
"Allowing the model to do its own inference is also better than providing ground-truth natural language descriptions, suggesting that it is actually better at generalizing from the relevant concepts than our human annotators (who occasionally write things like I have no idea for the inferred rule).",
"Unsurprisingly, with ground truth REs (which unlike the human data are always correct) we can do better than any of the models that require inference.",
"Coupling our inference procedure with an oracle RE evaluator, we essentially recover the synthesis-based approach of Devlin et al. (2017).",
"Our find-ings are consistent with theirs: when an exact execution engine is available, there is no reason not to use it.",
"But we can get almost 90% of the way there Annotations Samples Oracle 1 100 Ann.",
"The previous two sections examined supervised settings where the learning signal comes from few examples but is readily accessible.",
"In this section, we move to a set of reinforcement learning problems, where the learning signal is instead sparse and time-consuming to obtain.",
"We evaluate on a collection of 2-D treasure hunting tasks.",
"These tasks require the agent to discover a rule that determines the location of buried treasure in a large collection of environments of the kind shown in Figure 7.",
"To recover the treasure, the agent must navigate (while avoiding water) to its goal location, then perform a DIG action.",
"At this point the episode ends; if the treasure is located in the agent's current position, it receives a reward, otherwise it does not.",
"In every task, the treasure has consistently been buried at a fixed position relative to some landmark (in Figure 7 a heart).",
"Both the offset and the identity of the target landmark are unknown to the agent, and the location of the landmark varies across maps.",
"Indeed, there is nothing about the agent's observations or action space to suggest that landmarks and offsets are even the relevant axes of variation across tasks: only the language reveals this structure.",
"The interaction between language and learning in these tasks is rather different from the supervised settings.",
"In the supervised case, language serves mostly as a guard against overfitting, and Figure 7: Example treasure hunting task: the agent is placed in a random environment and must collect a reward that has been hidden at a consistent offset with respect to some landmark.",
"can be generated conditioned on a set of preprovided concept-learning observations.",
"Here, agents are free to interact with the environment as much as they need, but receive observations only during interaction.",
"Thus our goal here will be to build agents that can adapt quickly to new environments, rather than requiring them to immediately perform well on held-out data.",
"Why should we expect L 3 to help in this setting?",
"In reinforcement learning, we typically encourage our models to explore by injecting randomness into either the agent's action space or its underlying parameterization.",
"But most random policies exhibit nonsensical behaviors; as a result, it is in-efficient both to sample in the space of network weights and to perform policy optimization from a random starting point.",
"Our hope is that when parameters are chosen from within a structured family, a stochastic search in this structured space will only ever consider behaviors corresponding to a reasonable final policy, and in this way discover good behavior faster than ordinary RL.",
"Here the interpretation model f describes a policy that chooses actions conditioned on the current environment state and a linguistic parameterization.",
"As the agent initially has no observations at all, we simply design the proposal model to generate unconditional samples from a prior over descriptions.",
"Taking x to be an agent's current observation of the environment state, we define a state representation network and models: x FC tanh FC rep( ) x tanh f ( a | x ; w ) rnn-encode ( w ) > W a rep ( x ) q ( w ) = rnn-decode ( w ) This parameterization assumes a discrete action space, and assigns to each action a probability proportional to a bilinear function of the encoded description and world state.",
"f is an instruction following model of a kind well-studied in natural language processing (Branavan et al., 2009); the proposal model allows it to generate its own instructions without external direction.",
"To learn, we sample a fixed number of descriptions w from q .",
"For each description, we sample multiple rollouts of the policy it induces to obtain an estimate of its average reward.",
"Finally, we take the highest-scoring description and fine-tune its induced policy.",
"The multitask model we compare to replaces these descriptions with trainable task embeddings.",
"4 The learner is trained from task-specific expert policies using DAgger (Ross et al., 2011) during the language-learning phase, and adapts to individual environments using vanilla policy gradient (Williams, 1992) during the concept learning phase.",
"The environment implementation and linguistic annotations are in this case adapted from a natural language navigation dataset originally introduced by Janner et al. (2017).",
"In our version of the problem (Figure 7), the agent begins each episode in a random position on a randomly-chosen map and must attempt to obtain the treasure.",
"Relational concepts describing target locations are reused between language learning and concept-learning phases, but the environments themselves are distinct.",
"For language learning the agent has access to 250 tasks, and is evaluated on an additional 50.",
"Averaged learning curves for held-out tasks are shown in Figure 8.",
"As expected, reward for the L 3 model remains low during the initial exploration period, but once a description is chosen the score 4 In RL, the contribution of L 3 is orthogonal to that of meta-learningone could use a technique like RL 2 (Duan et al., 2016) to generate candidate descriptions more efficiently, or MAML (Finn et al., 2017) rather than zero-shot reward as the training criterion for the interpretation model.",
"improves rapidly.",
"Immediately L 3 achieves better reward than the multitask baseline, though it is not perfect; this suggests that the interpretation model is somewhat overfit to the pretraining environments.",
"After fine-tuning even better results are rapidly obtained.",
"Example rollouts are visualized in Appendix E. These results show that the model has used the structure provided by language to learn a better representation space for policies one that facilitates sampling from a distribution over interesting and meaningful behaviors.",
"This is the first approach we are aware of to frame a general learning problem as optimization over a space of natural language strings.",
"However, many closely related ideas have been explored in the literature.",
"String-valued latent variables are widely used in language processing tasks ranging from morphological analysis (Dreyer and Eisner, 2009) to sentence compression (Miao and Blunsom, 2016).",
"Natural language annotations have been used in conjunction with training examples to guide the discovery of logical descriptions of concepts (Ling et al., 2017; Srivastava et al., 2017), and used as an auxiliary loss for training (Frome et al., 2013), analogously to the Meta+Joint baseline in this paper.",
"Structured language-like annotations have been used to improve learning of generalizable structured policies (Oh et al., 2017; Andreas et al., 2017; Denil et al., 2017).",
"Finally, natural language instructions available at concept-learning time (rather than language-learning time) have been used to provide side information to reinforcement learners about high-level strategy (Branavan et al., 2011), environments (Narasimhan et al., 2017) and exploration (Harrison et al., 2017).",
"We have presented an approach for learning in a space parameterized by natural language.",
"Using simple models for representation and search in this space, we demonstrated that our approach outperforms standard baselines on classification, structured prediction and reinforcement learning tasks.",
"We believe that these results suggest the following general conclusions: Language encourages compositional generalization .",
"Standard deep learning architectures are good at recognizing new instances of familiar 2173 concepts, but not always at generalizing to new ones.",
"By forcing decisions to pass through a linguistic bottleneck in which the underlying compositional structure of concepts is explicitly expressed, stronger generalization becomes possible.",
"Language simplifies structured exploration .",
"Natural language scaffolding provides dramatic advantages in problems like reinforcement learning that require exploration: models with latent linguistic parameterizations can limit exploration to a class of behaviors that are likely a priori to be goal-directed and interpretable.",
"And generally, language can help learning .",
"In multitask settings, it can even improve learning on tasks for which no language data is available at training or test time.",
"While some of these advantages are also provided by techniques built on top of formal languages, natural language is at once more expressive and easier to obtain than formal supervision.",
"We believe this work hints at broader opportunities for using naturally-occurring language data to improve machine learning for tasks of all kinds.",
"JA is supported by a Facebook graduate fellowship."
]
| [
"abstain",
"abstain",
"objective",
"method",
"objective",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"other",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"method",
"abstain",
"objective",
"other",
"other",
"abstain",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"other",
"method",
"other",
"other",
"method",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other"
]
|
[
"Word embedding-based similarity measures are currently among the top-performing methods on unsupervised semantic textual similarity (STS) tasks.",
"Recent work has increasingly adopted a statistical view on these embeddings, with some of the top approaches being essentially various correlations (which include the famous cosine similarity).",
"Another excellent candidate for a similarity measure is mutual information (MI), which can capture arbitrary dependencies between the variables and has a simple and intuitive expression.",
"Unfortunately, its use in the context of dense word embeddings has so far been avoided due to difficulties with estimating MI for continuous data.",
"In this work we go through a vast literature on estimating MI in such cases and single out the most promising methods, yielding a simple and elegant similarity measure for word embeddings.",
"We show that mutual information is a viable alternative to correlations, gives an excellent signal that correlates well with human judgements of similarity and rivals existing state-of-the-art unsupervised methods.",
"Neural text embeddings learned from unlabeled data are a key component of modern approaches to semantic textual similarity (STS).",
"Despite the impressive performance of large pretrained models (Kiros et al., 2015; Conneau et al., 2017; Subra-manian et al., 2018; Cer et al., 2018; Peters et al., 2018; Radford, 2018; Devlin et al., 2018; Dai et al., 2019; Yang et al., 2019a) on a a plethora of hard NLP tasks, deep models do not currently offer a clear advantage over much simpler static word embeddings (Bengio et al., 2003; Mikolov et al., 2013; Pennington et al., 2014; Bojanowski et al., 2017; Joulin et al., 2017) on standard unsupervised STS benchmarks (Hill et al., 2016; Arora et al., 2017; Wieting et al., 2016; Wieting and Gimpel, 2018; Zhelezniak et al., 2019b,a,c).",
"Instead, the main sources of improvement here have come from training on supervised paraphrastic corpora (Wieting et al., 2015, 2016; Wieting and Gimpel, 2018), designing better composition functions (Mitchell and Lapata, 2008; De Boom et al., 2016; Arora et al., 2017; Zhao and Mao, 2017; Ruckle et al., 2018; Zhelezniak et al., 2019b,c; Yang et al., 2019b) and exploring novel similarity measures between word embeddings, in particular those inspired by optimal transport (Kusner et al., 2015; Huang et al., 2016), soft and fuzzy sets (Jimenez et al., 2010, 2015; Zhelezniak et al., 2019b), and statistics (Lev et al., 2015; Nikolentzos et al., 2017; Torki, 2018; Zhelezniak et al., 2019a,c).",
"Recently, Zhelezniak et al. (2019a,c) advocated for a new statistical perspective on word embeddings where each word embedding itself is viewed as a sample of (e.g. 300) observations from some scalar random variable.",
"They conducted a statistical analysis of several popular pretrained word embeddings and their compositions and established that the ubiquitous cosine similarity is practically equivalent to Pearson correlation.",
"They also demonstrated significant gains in performance when one instead uses non-parametric rank correlation coefficients (Spearman's , Kendall's ) and cross-covariance operators between reproducing kernel Hilbert spaces (Hilbert-Schmidt inde-pendence criterion (HSIC) (Gretton et al., 2005), Centered Kernel Alignment (CKA)) (Cortes et al., 2012; Kornblith et al., 2019).",
"One prominent alternative to those correlation-based approaches is mutual information (MI), which is of great importance in information theory and statistics.",
"In some sense, mutual information is an excellent candidate for a similarity measure between word embeddings as it can capture arbitrary dependencies between the variables and has a simple and intuitive expression.",
"Unfortunately, its use in the context of continuous dense word representations has so far been avoided due to the difficulties in estimating MI for continuous random variables (joint and marginal densities are not known in practice).",
"In this work we make the first steps towards the adoption of MI as a measure of semantic similarity between dense word embeddings.",
"We begin our discussion with how to apply MI for this purpose in principle.",
"Next we carefully summarise the vast literature on estimation of MI for continuous random variables and identify approaches most suitable for our use case.",
"Our chief goal here is to identify the estimators that yield elegant, almost closed-form expressions for the resulting similarity measure as opposed to complicated estimation procedures.",
"Finally, we show that such estimators of mutual information give an excellent signal that correlates very well with human judgements and comfortably rivals existing state-of-the-art unsupervised STS approaches.",
"Suppose we are given a word embedding matrix W RN D , where N is the vocabulary size and D is the embedding dimension (commonly D = 300 ).",
"Ultimately, the matrix W is simply a table of some numbers and just like any dataset, it is subject to a statistical analysis.",
"There are essentially two ways we can proceed: we can either choose to view W as N observations from D random variables or we can instead consider WT and view it as D observations from N random variables.",
"The first approach allows us to study global' properties of the word embedding space (e.g. via PCA, clustering, etc.) and defines global' similarity structures, such as Mahalanobis distance, Fisher kernel (Lev et al., 2015), etc.",
"In the second approach we study the distribution P ( W 1 , W 2 , . . . , WN ) , where a word embedding w i is a sample of D ( = 300 ) observations from some scalar random variable W i corresponding to the word w i (Zhelezniak et al., 2019a,c).",
"The local' similarity between two words w i and w j is then encoded in the dependencies between the corresponding random variables W i , W j .",
"Since the distribution P ( W i , W j ) is unknown, we estimate these dependencies based on the sample w i , w j .",
"Certain dependencies can be captured by Pearson, Spearman and Kendall correlation coefficients between word embeddings (cid:98) ( w i , w j ) , where the choice of the coefficient depends on the statistics of each word embedding model (Zhelezniak et al., 2019a).",
"Conveniently, correlations can also be used to measure semantic similarity between two sets of words (e.g. phrases and sentences) if one considers the correlations between random vectors X = ( X 1 , X 2 , . . . , X l x ) and Y = ( Y 1 , Y 2 , . . . , Y l y ) , where scalar random variables X i correspond to the words in the first sentence and Y j to the words in the second sentence.",
"This, for example, can be done by first pooling (e.g. meanor max-pooling) random vectors into scalar variables X pool and Y pool and then estimating univariate correlations corr ( X pool , Y pool ) as before.",
"Alternatively, we can measure correlations between random vectors directly using norms of cross-covariance ma-trices/operators (e.g. the Hilbert-Schmidt inde-pendence criterion (Gretton et al., 2005)).",
"Both approaches are known to give excellent results on standard STS benchmarks (Zhelezniak et al., 2019c).",
"A viable alternative to correlations is mutual information (MI), which can detect any kind of dependence between random variables, but which has so far not been explored for this problem.",
"We operate within the previous setting where we consider two sentences x = x 1 x 2 . . . x l x and y = y 1 y 2 . . . y l y .",
"Our goal now is to estimate the mutual information I ( X ; Y ) between the corresponding random vectors X = ( X 1 , X 2 , . . . , X l x ) and Y = ( Y 1 , Y 2 , . . . , Y l y ) I ( X ; Y ) = (cid:90)(cid:90) p XY ( x, y ) log p XY ( x, y ) p X ( x ) p Y ( y ) d x d y, (1) where p XY ( x, y ) is the joint density of X and Y and p X ( x ) = (cid:82) Y p XY ( x, y ) d y and p Y ( y ) = (cid:82) X p XY ( x, y ) d x are the marginal densities.",
"Unfortunately, these theoretical quantities are not available to us and we must somehow estimate (cid:98) I ( X ; Y ) directly from the word embeddings (cid:98) X = ( x (1) , x (2) , . . . , x ( l x ) ) and (cid:98) Y = ( y (1) , y (2) , . . . , y ( l y ) ) .",
"Luckily, there is a vast literature on how to estimate mutual information between continuous random variables based on the sample.",
"The first class of methods partitions the supports X , Y into a finite number of bins of equal or unequal (adaptive) size and estimates (cid:98) I ( X ; Y ) based on discrete counts in each bin (Moddemei-jer, 1989; Fraser and Swinney, 1986; Darbellay and Vajda, 1999; Reshef et al., 2011; Ince et al., 2016).",
"While such methods are easy to understand conceptually, they might suffer from the curse of dimensionality (especially when sentences are long) and in some sense violate our desire for an elegant closed-form similarity measure.",
"The next class of methods constructs kernel density estimates (KDE) and then numerically integrates such approximate densities to obtain MI (Moon et al., 1995; Steuer et al., 2002).",
"These methods might require a careful choice of kernels and the bandwidth parameters and also violate our simplicity requirement.",
"The third class of methods that has recently gained popularity in the deep learning community is based on neural-network-based estimation of various bounds on mutual information (e.g. by training a critic to estimate the density ratio in (1)) (Suzuki et al., 2008; Alemi et al., 2017; Belghazi et al., 2018; Hjelm et al., 2019; Poole et al., 2019).",
"Such estimators are usually differentiable and scale well to high dimensions and large sample sizes (Belghazi et al., 2018).",
"However, in our case the sample size (e.g. 300) and dimensionality are not too large (at least for short phrases and sentences), and thus training a separate neural network for a simple similarity computation is hardly justified.",
"This leaves us with the last class of methods that estimates mutual information from the k -nearest neighbour statistics (Kraskov et al., 2004; Ver Steeg and Galstyan, 2013; Ver Steeg, 2014; Ross, 2014; Gao et al., 2015; Gao et al., 2018).",
"These approaches are not without problems (Gao et al., 2015) and inherit the weaknesses of k NN in large dimensions but are very simple to implement.",
"In particular, we focus on the Kraskov StogbauerGrassberger (KSG) estimator (Kraskov et al., 2004) which admits a particularly elegant expression for the resulting similarity measure.",
"It can be verified that the mutual information is given by I ( X ; Y ) = H ( X ) + H ( Y ) H ( X , Y ) , i.e. the difference between the sum of marginal entropies and the joint entropy.",
"Thus, in order to estimate MI, it is sufficient to be able to estimate various entropies in the above equation.",
"In their seminal work, Kozachenko and Leonenko (1987) show how to estimate such differential entropies based on the nearest neighbour statistics.",
"Concretely, these methods approximate the log-density Algorithm 1 KraskovStogbauerGrassberger (KSG) Similarity Measure Require: Word embeddings for the first sentence X R l x D Require: Word embeddings for the second sentence Y R l y D Require: The number of nearest neighbours k < D (default k = 3 ) Ensure: Similarity measure KSG Z STACK ROWS ( X , Y ) || z i z j || Z max( || x i x j || X , || y i y j || Y ) i, j = 1 , . . . , D # set cardinality for z d , d = 1 , . . . , D do (cid:15) [ d ] || z d z d k || , z d k = k -NN of z d n x [ d ] # { x d (cid:48) : || x d x d (cid:48) || X < (cid:15) [ d ] } n y [ d ] # { y d (cid:48) : || y d y d (cid:48) || Y < (cid:15) [ d ] } d (cid:48) { 1 , . . . D } \\ { d } end for ( x ) digamma function S (cid:80) Dd =1 ( ( n x [ d ] + 1) + ( n y [ d ] + 1)) KSG ( D ) + ( k ) S at a point by a uniform density in a e.g. Euclidean or Chebyshev norm ball containing its k -nearest neighbours.",
"Kraskov et al. (2004) modify this idea to construct their famous KSG estimator of mutual information given by KSG ( X ; Y ) = ( D ) + ( k ) D (cid:88) d =1 ( ( n x [ d ] + 1) + ( n y [ d ] + 1)) , (2) where D is the embedding dimension, k is the number of nearest neighbours, ( x ) = (cid:48) ( x ) / ( x ) is the digamma function and n x [ d ] , n y [ d ] are certain nearest neighbour statistics.",
"These statistics are obtained by counting the number of neighbours that fall within less than (cid:15) [ d ] from x d and y d in the marginal spaces X and Y respectively, where (cid:15) [ d ] is the distance from z d = ( x d , y d ) to its k nearest neighbour in the joint space ( X , Y ) .",
"We illustrate how the estimator can be applied to measure similarity between sets of word embeddings in Algorithm 1 and refer the reader to Kraskov et al. (2004) for its full derivation and justification as well as an alternative version.",
"We now explore the empirical performance of the KSG similarity measure on a standard suite of Semantic Textual Similarity (STS) benchmarks (Agirre et al., 2012, 2013, 2014, 2015, 2016) and report Spearman correlation between the system and human scores.",
"Our focus here is on fastText vectors (Bojanowski et al., 2017) trained on Common Crawl (600B tokens), as previous literature suggests that among unsupervised vectors fastText yields the best performance for all tasks and similarity measures (Conneau et al., 2017; Perone et al., 2018; Zhelezniak et al., 2019a,b,c).",
"We defer evaluations and significance analysis on all 24 STS subtasks for other word vectors (word2vec and GloVe) to the Appendix.",
"Our evaluations are run in the SentEval toolkit (Conneau and Kiela, 2018) and our code is available on GitHub 1 .",
"Note that we do not report results on the STS13 SMT subtask as it is no longer publicly available.",
"The number of nearest neighbours for KSG that is known to work well in practice on a variety of datasets is k = 3 (Kraskov et al., 2004; Khan et al., 2007).",
"This value seems to strike a good balance between the bias and variance of the estimator.",
"We also run experiments for k = 10 to show that KSG is not very sensitive to this hy-perparameter, at least in our setting.",
"As an interesting addition, we also run KSG ( k = 10 ) for max-pooled scalar random variables (Max-Pool+KSG 10).",
"We compare KSG to the following approaches from the literature: Universal Sentence Encoder (Transformer) (Cer et al., 2018), BERT (penultimate layer, mean-pooling) (Devlin et al., 2018), Word Mover's Distance (WMD) (Kus-ner et al., 2015), soft cardinality (Jimenez et al., 2010, 2015) with cosine similarity and the softness parameter p = 1 , DynaMax-Jaccard (Zhelez-niak et al., 2019b), mean-pooling with cosine similarity (MeanPool+COS) and Smooth Inverse Frequency (SIF) + PCA (Arora et al., 2017).",
"Next we compare KSG with the following top-performing correlations: max-pooling with Spearman correlation (MaxPool+SPR), Centered Kernel Alignment (Gaussian kernel with median estimation for 2 ) and distance correlation (Zhelezniak et al., 2019c).",
"The evaluation results are given in Table 1.",
"In summary, we can see that similarity measures based on mutual information (KSG) perform on par with top correlation-based measures and other leading methods from the literature.",
"Moreover, KSG between pooled variables (MaxPool) is faster and performs only slightly worse than multivariate KSG.",
"In this work we explored how to apply mutual information (MI) as a semantic similarity measure for continuous dense word embeddings.",
"We have summarised the vast literature on estimating MI for continuous random variables from the sample and singled out a simple and elegant KSG estimator which is based on elementary nearest-neighbour statistics.",
"We showed empirically that this estimator and mutual information in general can be an excellent candidate for a similarity measure between dense word embeddings.",
"We would like to thank Adam Bozson and the four anonymous reviewers for their useful feedback and suggestions."
]
| [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"objective",
"result",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"abstain",
"result",
"other"
]
|
[
"In this paper, we propose a new adversarial augmentation method for Neural Machine Translation (NMT).",
"The main idea is to minimize the vicinal risk over virtual sentences sampled from two vicinity distributions, of which the crucial one is a novel vicinity distribution for adversarial sentences that describes a smooth interpolated embedding space centered around observed training sentence pairs.",
"We then discuss our approach, AdvAug , to train NMT models using the embeddings of virtual sentences in sequence-to-sequence learning.",
"Experiments on Chinese-English, English-French, and English-German translation benchmarks show that AdvAug achieves significant improvements over the Transformer (up to 4.9 BLEU points), and substantially outperforms other data augmentation techniques ( e.g. back-translation) without using extra corpora.",
"Recent work in neural machine translation (Bah-danau et al., 2015; Gehring et al., 2017; Vaswani et al., 2017) has led to dramatic improvements in both research and commercial systems (Wu et al., 2016).",
"However, a key weakness of contemporary systems is that performance can drop dramatically when they are exposed to input perturbations (Belinkov and Bisk, 2018; Cheng et al., 2019), even when these perturbations are not strong enough to alter the meaning of the input sentence.",
"Consider a Chinese sentence, zhejia feiji meiyou zhuangshang zhujia huo yiyuan, shizai shi qiji.",
"If we change the word huo ( ) to its syn-onym ji ( ), the Transformer model will generate contradictory results of It was indeed a miracle that the plane did not touch down at home or hospital. versus It was a miracle that the plane landed at home and hospital.",
"Such perturbations can readily be found in many public benchmarks and real-world applications.",
"This lack of stability not only lowers translation quality but also inhibits applications in more sensitive scenarios.",
"At the root of this problem are two interrelated issues: first, machine translation training sets are insufficiently diverse, and second, NMT architectures are powerful enough to overfit and, in extreme cases, memorize the observed training examples, without learning to generalize to unseen perturbed examples.",
"One potential solution is data augmentation which introduces noise to make the NMT model training more robust.",
"In general, two types of noise can be distinguished: (1) continuous noise which is modeled as a real-valued vector applied to word embeddings (Miyato et al., 2016, 2017; Cheng et al., 2018; Sato et al., 2019), and (2) discrete noise which adds, deletes, and/or replaces characters or words in the observed sentences (Belinkov and Bisk, 2018; Sperber et al., 2017; Ebrahimi et al., 2018; Michel et al., 2019; Cheng et al., 2019; Karpukhin et al., 2019).",
"In both cases, the challenge is to ensure that the noisy examples are still semantically valid translation pairs.",
"In the case of continuous noise, it only ensures that the noise vector lies within an L 2 -norm ball but does not guarantee to maintain semantics.",
"While constructing semantics-preserving continuous noise in a high-dimensional space proves to be non-trivial, state-of-the-art NMT models are currently based on adversarial examples of discrete noise.",
"For instance, Cheng et al. (2019) generate adversarial sentences using discrete word replacements in both the source and target, guided by the NMT loss.",
"This approach achieves significant improvements over the Transformer on several standard NMT benchmarks.",
"Despite this promising result, we find that the generated adversarial sentences are unnatural, and, as we will show, suboptimal for learning robust NMT models.",
"In this paper, we propose AdvAug , a new adversarial augmentation technique for sequence-to-sequence learning.",
"We introduce a novel vicinity distribution to describe the space of adversarial examples centered around each training example.",
"Unlike prior work (Cheng et al., 2019), we first generate adversarial sentences in the discrete data space and then sample virtual adversarial sentences from the vicinity distribution according to their interpolated embeddings.",
"Our intuition is that the introduced vicinity distribution may increase the sample diversity for adversarial sentences.",
"Our idea is partially inspired by mixup (Zhang et al., 2018), a technique for data augmentation in computer vision, and we also use a similar vicinity distribution as in mixup to augment the authentic training data.",
"Our AdvAug approach finally trains on the embeddings sampled from the above two vicinity distributions.",
"As a result, we augment the training using virtual sentences in the feature space as opposed to in the data space.",
"The novelty of our paper is the new vicinity distribution for adversarial examples and the augmentation algorithm for sequence-to-sequence learning.",
"Extensive experimental results on three translation benchmarks (NIST Chinese-English, IWSLT English-French, and WMT English-German) show that our approach achieves significant improvements of up to 4 .",
"9 BLEU points over the Transformer (Vaswani et al., 2017), outperforming the former state-of-the-art in adversarial learning (Cheng et al., 2019) by up to 3 .",
"3 BLEU points.",
"When compared with widely-used data augmentation methods (Sennrich et al., 2016a; Edunov et al., 2018), we find that our approach yields better performance even without using extra corpora.",
"We conduct ablation studies to gain further insights into which parts of our approach matter most.",
"In summary, our contributions are as follows:",
"1. We propose to sample adversarial examples from a new vicinity distribution and utilize their embeddings, instead of their data points, to augment the model training.",
"2. We design an effective augmentation algorithm for learning sequence-to-sequence NMT models via mini-batches.",
"3. Our approach achieves significant improvements over the Transformer and prior state-of-the-art models on three translation benchmarks.",
"Neural Machine Translation.",
"Generally, NMT (Bahdanau et al., 2015; Gehring et al., 2017; Vaswani et al., 2017) models the translation probability P ( y | x ; ) based on the encoder-decoder paradigm where x is a source-language sentence, y is a target-language sentence, and is a set of model parameters.",
"The decoder in the NMT model acts as a conditional language model that operates on a shifted copy of y , i.e., (cid:104) sos (cid:105) , y 0 , ..., y | y | 1 where (cid:104) sos (cid:105) is a start symbol of a sentence and representations of x learned by the encoder.",
"For clarity, we use e ( x ) R d | x | to denote the feature vectors (or word embeddings) of the sentence x where d is dimension size.",
"where f ( e ( x ) , e ( y ); ) is a sequence of model predictions f j ( e ( x ) , e ( y ); ) = P ( y | y <j , x ; ) at position j , and y is a sequence of one-hot label vectors for y (with label smoothing in the Trans-former).",
"(cid:96) is the cross entropy loss.",
"The expectation of the loss function is summed over the empirical distribution P ( x , y ) of the training corpus: P ( x , y ) = 1 | S | (cid:88) ( x (cid:48) , y (cid:48) ) S ( x = x (cid:48) , y = y (cid:48) ) , (2) where denotes the Dirac delta function.",
"Generating Adversarial Examples for NMT.",
"To improve NMT's robustness to small perturbations in the input sentences, Cheng et al. (2019) incorporate adversarial examples into the NMT model training.",
"These adversarial sentences x (cid:48) are generated by applying small perturbations that are jointly learned together with the NMT model: x = argmax x : R ( x , x ) (cid:15) (cid:96) ( f ( e ( x ) , e ( y ); ) , y ) , (3) where R ( x , x ) captures the degree of semantic similarity and (cid:15) is an upper bound on the semantic distance between the adversarial example and the original example.",
"Ideally, the adversarial sentences convey only barely perceptible differences to the original input sentence yet result in dramatic distortions of the model output.",
"Cheng et al. (2019) propose the AdvGen algorithm, which greedily replaces words with their top k most probable alternatives, using the gradients of their word embeddings.",
"Adversarial examples are designed to both attack and defend the NMT model.",
"On the encoder side, an adversarial sentence x is constructed from the original input x to attack the NMT model.",
"To defend against adversarial perturbations in the source input x , they use the AdvGen algorithm to find an adversarial target input y from the decoder input y .",
"For notational convenience, let denote this algorithm, the adversarial example s is stochastically induced by as s ( s ; x , y , ) where is the set of parameters used in including the NMT model parameters .",
"For a detailed definition of , we refer to (Cheng et al., 2019).",
"Hence, the set of adversarial examples originating from ( x , y ) S , namely A ( x , y ) , can be written as: A ( x , y ) = { ( x , y ) | x ( x ; x , y , src ) , y ( y ; x , y , tgt ) } , (4) where src and tgt are separate parameters for generating x and y , respectively.",
"Finally, the robustness loss L robust is computed on A ( x , y ) with the loss (cid:96) ( f ( e ( x ) , e ( y ); ) , y ) , and is used together with L clean to train the NMT model.",
"Data Mixup.",
"In image classification, the mixup data augmentation technique involves training on linear interpolations of randomly sampled pairs of examples (Zhang et al., 2018).",
"Given a pair of images ( x (cid:48) , y (cid:48) ) and ( x (cid:48)(cid:48) , y (cid:48)(cid:48) ) , where x (cid:48) denotes the RGB pixels of the input image and y (cid:48) is its one-hot label, mixup minimizes the sample loss from a vicinity distribution (Chapelle et al., 2001) P v ( x , y ) defined in the RGB-pixel (label) space: x = x (cid:48) + (1 ) x (cid:48)(cid:48) , (5) y = y (cid:48) + (1 ) y (cid:48)(cid:48) .",
"is drawn from a Beta distribution Beta ( , ) controlled by the hyperparameter .",
"When 0 , ( x , y ) is close to any one of the images ( x (cid:48) , y (cid:48) ) and ( x (cid:48)(cid:48) , y (cid:48)(cid:48) ) .",
"Conversely, ( x , y ) approaches the middle interpolation point between them when + .",
"The neural networks g parameterized by can be trained over the mixed images ( x , y ) with the loss function L mixup ( ) = (cid:96) ( g ( x ; ) , y ) .",
"In practice, the image pair is randomly sampled from the same mini-batch.",
"In our approach AdvAug , the goal is to reinforce the model over virtual data points surrounding the observed examples in the training set.",
"We approximate the density of P ( x , y ) in the vicinities of the generated adversarial examples and observed training examples.",
"To be specific, we design two vicinity distributions (Chapelle et al., 2001) to estimate the joint distribution of P ( x , y ) : P adv for the (dynamically generated) adversarial examples and P aut for the (observed) authentic examples in S .",
"Given the training set S , we have: P adv ( x , y ) = 1 |S| (cid:88) ( x , y ) S adv ( x , y | A ( x , y ) ) , (7) P aut ( x , y ) = 1 |S| (cid:88) ( x , y ) S aut ( x , y | x , y ) , (8) where A ( x , y ) is the set of adversarial examples originated from ( x , y ) defined in Eq.",
"(4).",
"We will discuss adv and aut in detail which define the probability functions, but first we give some high-level descriptions: P adv is a new vicinity distribution for virtual adversarial sentences of the same origin.",
"It captures the intuition that the convex combination of adversarial sentences should have the same translation.",
"It is the most important factor for improving the translation quality in our experiments.",
"P aut is a distribution to improve the NMT's robustness by mixing up observed sentences of different origins.",
"This distribution is similar to mixup , but it is defined over linear interpolations of the sequence of word embeddings of the source and target sentences.",
"Although P aut by itself yields marginal improvements, we find it is complementary to P adv .",
"We train the NMT model on two vicinity distributions P adv and P aut .",
"Figure 1 illustrates examples sampled from them.",
"As shown, a solid circle stands for an observed training example ( i.e. a sentence-pair) in S and a solid triangle denotes an adversarial example in A ( x , y ) .",
"For P adv , we construct virtual adversarial examples (dashed triangles) to amend the sample diversity by interpolating the word embeddings of solid triangles.",
"Likewise, we interpolate the word embeddings of solid circles to model P aut for the (observed) authentic examples.",
"This results in the dashed circles in Figure",
"1. Unlike prior works on vicinal risk minimization (Chapelle et al., 2001; Zhang et al., 2018), we do not directly observe the virtual sentences in P adv or P aut .",
"This also distinguishes us from Cheng et al. (2019), who generate actual adversarial sentences in the discrete word space.",
"In the remainder of this section, we will discuss the definition of P adv and P aut and how to optimize the translation loss over virtual sentences via mini-batch training.",
"To compute adv , we employ similar as in (Cheng et al., 2019) to generate an adversarial example set A ( x , y ) from each instance ( x , y ) S (see Eq.",
"(4)).",
"Let ( x (cid:48) , y (cid:48) ) and ( x (cid:48)(cid:48) , y (cid:48)(cid:48) ) be two examples randomly sampled from A ( x , y ) .",
"We align the two sequences by padding tokens to the end of the shorter sentence.",
"Note that this operation aims for a general case (particularly for P aut ) although the lengths of y (cid:48) and y (cid:48)(cid:48) in A ( x , y ) are same.",
"To obtain e ( x ) = [ e ( x 1 ) , . . . , e ( x | x | )] , we apply the convex combination m ( x (cid:48) , x (cid:48)(cid:48) ) over the aligned word embeddings, which is: e ( x i )= e ( x (cid:48) i ) + (1 ) e ( x (cid:48)(cid:48) i ) , i [1 , | x | ] , (9) where Beta ( , ) .",
"We use m ( , ) for the interpolation.",
"Similarly, e ( y ) can also be obtained with m ( y (cid:48) , y (cid:48)(cid:48) ) .",
"All adversarial examples in A ( x , y ) are supposed to be translated into the same target sentence, and the convex combination still lies in space of the adversarial search ball defined in Eq.",
"(3).",
"As a result, all virtual sentence pairs ( x , y ) A ( x , y ) of the same origin can be fed into NMT models as source and target inputs which share the same soft target label for ( x , y ) .",
"adv in P adv can be calculated from: adv ( x , y | A ( x , y ) ) = 1 | A ( x , y ) | 2 (cid:88) ( x (cid:48) , y (cid:48) ) A ( x , y ) (cid:88) ( x (cid:48)(cid:48) , y (cid:48)(cid:48) ) A ( x , y ) E [ ( e ( x ) = m ( x (cid:48) , x (cid:48)(cid:48) ) , e ( y ) = m ( y (cid:48) , y (cid:48)(cid:48) )] .",
"(10)",
"where is a sequence of output distributions (de-noted as a sequence of label vectors, e.g. y ) as the soft target for the sentence y .",
"We employ two useful techniques in computing the loss L adv in Eq.",
"(11).",
"First, we minimize the KL-divergence between the model predictions at the word level: | y | (cid:88) j =1 DKL ( f j ( e ( x ) , e ( y ); ) || f j ( e ( x ) , e ( y ); )) , (12) where means a fixed copy of the current parameter set and no gradients are back-propagated through it.",
"Removing constant values from Eq.",
"(12) yields an equivalent solution of: (cid:96) ( f ( e ( x ) , e ( y ); ) , ) = (cid:96) ( f ( e ( x ) , e ( y ); ) , f ( e ( x ) , e ( y ); )) .",
"Eq.",
"(13) indicates that f ( e ( x ) , e ( y ); ) can be used as the soft target in Eq.",
"(11) for virtual adversarial example ( x , y ) .",
"KL-divergence enforces the model on virtual adversarial examples to indirectly learn from the soft target of the observed examples over large vocabularies.",
"This justifies the use of in Eq.",
"(11) and turns out to be more effective than directly learning from the ground-truth label.",
"Besides, Eq.",
"(11) needs to enumerate numerous pairs of adversarial examples in A ( x , y ) while in practice we only sample a pair at a time inside each mini-batch for training efficiency.",
"We hence employ curriculum learning to do the importance sampling.",
"To do so, we re-normalize the translation loss and employ a curriculum from (Jiang et al., 2018) to encourage the model to focus on the difficult training examples.",
"Formally, for a mini-batch of the training losses L = { (cid:96) i } mi =1 , we re-weigh the batch loss using: L = 1 (cid:80) mi =1 I ( (cid:96) i > ) m (cid:88) i =1 I ( (cid:96) i > ) (cid:96) i , (14) where I ( ) is an indicator function and is set by a moving average tracking the p -th percentile of the example losses of every mini-batch.",
"In our experiments, we set the p -th percentile to be 100 (1 r t ) for the training iteration t , and gradually anneal r t using r t = 0 .",
"5 t/ , where is the hyperparameter.",
"We define the aut in the vicinity distribution P aut for authentic examples as follows:",
"aut ( x , y | x , y ) = 1 |S| (cid:88) ( x (cid:48) , y (cid:48) ) S E [ ( e ( x ) = m ( x , x (cid:48) ) , e ( y ) = m ( y , y (cid:48) ) , = m ( , (cid:48) ))] .",
"(15)",
"The translation loss on authentic data is integrated over all examples of the vicinity distribution P aut : L aut ( ) = EP aut ( x , y ) [ (cid:96) ( f ( e ( x ) , e ( y ); ) , )] .",
"(16)",
"In our experiments, we select the value of in Eq.",
"(15) twice for every ( x , y ) : (1) a constant 1 .",
"0 and (2) a sample from the Beta distribution.",
"The former is equivalent to sampling from the empirical distribution P whereas the latter is similar to applying mixup in the embedding space of the sequence model.",
"In other words, L aut ( ) equals the sum of two translation losses, L clean ( ) computed on the original training examples when is 1 .",
"0 and L mixup ( ) computed on virtual examples when is sampled from a Beta distribution.",
"Accordingly, when is 1 .",
"0 we set to be the interpolation of the sequences of one-hot label vectors for y and y (cid:48) , i.e. = y and (cid:48) = y (cid:48) .",
"Otherwise is the interpolation of model output vectors of ( x , y ) and ( x (cid:48) , y (cid:48) ) , that is, = f ( e ( x ) , e ( y ); ) and (cid:48) = f ( e ( x (cid:48) ) , e ( y (cid:48) ); ) .",
"Finally, the training objective in our AdvAug is a combination of the two losses:",
"Here, we omit two bidirectional language model losses for simplicity, which are used to recommend word candidates to maintain semantic similarities (Cheng et al., 2019).",
"In practice, we need to compute the loss via mini-batch training.",
"For the P aut , we follow the pair sampling inside each mini-batch in mixup .",
"It can avoid padding too much tokens because sentences of similar lengths are grouped within a mini-batch (Vaswani et al., 2017).",
"For the P adv , we sample a pair of examples from A ( x , y ) for each ( x , y ) and cover the distribution over multiple training epochs.",
"The entire procedure to calculate the translation losses, L adv ( ) and L aut ( ) , is presented in Algorithm",
"1. In a nutshell, for each batch of training examples, we firstly sample virtual examples from P adv and P aut by interpolating the embeddings of the adversarial or authentic training examples.",
"Then we calculate the translation loss using their interpolated embeddings.",
"We verify our approach on translation tasks for three language pairs: Chinese-English, English-French, and English-German.",
"The performance is evaluated with the 4-gram BLEU score (Papineni et al., 2002) calculated by the multi-bleu.perl script.",
"We report case-sensitive tokenized BLEU scores for English-French and English-German, and case-insensitive tokenized BLEU scores for Chinese-English.",
"Note that all reported BLEU scores in our approach are from a single model rather than averaging multiple models (Vaswani et al., 2017).",
"For the Chinese-English translation task, the training set is the LDC corpus consisting of 1.2M sentence pairs.",
"The NIST 2006 dataset is used as the validation set, and NIST 02, 03, 04, 05, 08 are used as the test sets.",
"We apply byte-pair encoding (BPE) (Sennrich et al., 2016b) with 60K merge operations to build two vocabularies comprising 46K Chinese sub-words and 30K English sub-words.",
"We use the IWSLT 2016 corpus for English-French translation.",
"The training corpus with 0.23M sentence pairs is preprocessed with the BPE script with 20K joint operations.",
"The validation set is test2012 and the test sets are test2013 and test2014.",
"For English-German translation, we use the WMT14 corpus consisting of 4.5M sentence pairs.",
"The validation set is newstest2013 whereas the test set is newstest2014.",
"We build a shared vocabulary of 32K sub-words using the BPE script.",
"We implement our approach on top of the Transformer (Vaswani et al., 2017).",
"The size of the hidden unit is 512 and the other hyperparameters are set following their default settings.",
"There are three important hyperparameters in our approach, in the Beta distribution and the word replacement ratio of src src , and tgt tgt detailed in Eq.",
"(4).",
"Note that src and tgt are not new hyperparameters but inherited from (Cheng et al., 2019).",
"We tune these hyperameters on the validation set via a grid search, i.e. { 0 .",
"2 , 0 .",
"4 , 4 , 8 , 32 } , src { 0 .",
"10 , 0 .",
"15 , 0 .",
"25 } and tgt { 0 .",
"10 , 0 .",
"15 , 0 .",
"30 , 0 .",
"5 } .",
"For the mixup loss L mixup , is fixed to 0 .",
"2 .",
"For the loss L aut and L adv , the optimal value of is 8 .",
"0 .",
"The optimal values of ( src , tgt ) are found to be (0 . 25 , 0 . 50) , (0 . 15 , 0 . 30) and (0 . 15 , 0 . 15) for Chinese-English, English-French and English-German, respectively, while it is set to (0 . 10 , 0 . 10) only for back-translated sentence pairs.",
"in Eq.",
"(14) is set to 250K, 100K, 1M for Chinese-English, English-French and English-German.",
"Unlike Cheng et al. (2019), we remove the learning of target language models to speed up the training.",
"For each training batch, we introduce a batch of augmented adversarial examples and a batch of augmented authentic examples, which costs twice the vanilla training.",
"For constructing adversarial examples, we solely compute the gradients for word embeddings which takes little time.",
"After summing up the time of all steps, our total training time is about 3.3 times the vanilla training.",
"Chinese-English Translation.",
"Table 1 shows results on the Chinese-English translation task, in comparison with the following six baseline methods.",
"For a fair comparison, we implement all these Method Loss Config.",
"1. The seminal Transformer model for NMT (Vaswani et al., 2017).",
"2. Following Miyato et al. (2017), we use adversarial learning to add continuous gradient-based perturbations to source word embeddings and extend it to the Transformer model.",
"3. Sato et al. (2019) leverage Miyato et al. (2017)'s idea into NMT by incorporating gradient-based perturbations to both source and target word embeddings and optimize the model with adversarial training.",
"4. Cheng et al. (2019) generate discrete adversarial examples guided by the gradients of word embeddings.",
"Adversarial examples are used to both attack and defend the NMT model.",
"5. Sennrich et al. (2016a) translate monolingual corpora using an inverse NMT model and then augment the training data with them.",
"6. Based on Sennrich et al. (2016a), Edunov et al. (2018) propose three improved methods to generate back-translated data, which are sampling , top10 and beam+noise .",
"Among those, we choose beam+noise as our baseline method, which can be regarded as an approach to incorporating noise into data.",
"We first verify the importance of different translation losses in our approach.",
"We find that both L aut and L adv are useful in improving the Transformer model.",
"L adv is more important and yields a significant improvement when combined with the standard empirical loss L clean (cf.",
"Eq.",
"(1)).",
"These results validate the effectiveness of augmenting with virtual adversarial examples.",
"When we use both L aut and L adv to train the model, we obtain the best performance (up to 4.92 BLEU points on MT05).",
"We also compare with the mixup loss.",
"However, L mixup is only slightly better than the standard empirical loss L clean .",
"Compared with the baseline methods without using extra corpora, our approach shows significant improvements over the state-of-the-art models.",
"In particular, the superiority of L clean + L adv over both Cheng et al. (2019) and Sato et al. (2019) verifies that we propose a more effective method to address adversarial examples in NMT.",
"We also directly incorporate two adversarial examples to NMT models without interpolating their embeddings, but we do not observe any further gain over Cheng et al. (2019).",
"This substantiates the superior performance of our approach on the standard data sets.",
"To compare with the approaches using extra monolingual corpora, we sample 1.25M English sentences from the Xinhua portion of the GIGA-WORD corpus and list our performance in the last row of Table",
"1. When the back-translated corpus is incorporated, our approach yields further improvements, suggesting our approach complements the back-translation approaches.",
"Translation.",
"Table 2 shows the comparison with the Transformer model (Vaswani et al., 2017), Sato et al. (2019) and Cheng et al. (2019) on English-French and English-German translation tasks.",
"Our approach consistently outperforms all three baseline methods, yielding significant 3 .",
"34 and 2 .",
"27 BLEU point gains over the Transformer on the English-French and English-German translation tasks, respectively.",
"We also conduct similar ablation studies on the translation loss.",
"We still find that the combination of L adv abd L aut performs the best, which is consistent with the findings in the Chinese-English translation task.",
"The substantial gains on these two translation tasks suggest the potential applicability of our approach to more language pairs.",
"The hyperparameter controls the shape of the Beta distribution over interpolation weights.",
"We study its effect on the validation set in Table",
"4. Notable differences occur when < 1 and > 1 , this is because the Beta distribution show two different shapes with = 1 as a critical point.",
"As we see, both L aut and L adv prefer a large and perform better when = 8 .",
"Recall that when is large, m behaves similarly to a simple average function.",
"In L mixup , = 4 performs slightly better, and a large = 32 will fail the model training.",
"Although the result with = 4 appears to be slightly better, it consumes more iterations to train the model to reach the convergence, i.e. , 90 K for = 4 vs. 20 K for = 0 .",
"2 .",
"These indicate the differences between the proposed vicinity distributions and the one used in mixup .",
"To test robustness on noisy inputs, we follow Cheng et al. (2019) to construct a noisy data set by randomly replacing a word in each sentence of the standard validation set with a relevant alternative.",
"The relevance between words is measured by the similarity of word embeddings.",
"100 noisy sentences are generated for each of them and then re-scored to pick the best one with a bidirectional language model.",
"Table 5 shows the results on artificial noisy inputs with different noise levels.",
"Our approach shows higher robustness over all baseline methods across all noise levels.",
"Figure 2 shows the evolution of BLEU scores during training.",
"For L clean , the BLEU score reaches its peak at about 20K iterations, and then the model starts overfitting.",
"In comparison, all of the training losses proposed in this paper are capable of resisting overfitting: in fact, even after 100K iterations, no significant regression is observed (not shown in this figure).",
"At the same iteration, our results are consistently higher than both the empirical risk ( L clean ) and mixup ( L mixup ).",
"As shown in Table 3, the baseline yields an incorrect translation possibly because the word dan-shi( ) seldom occurs in this context in our training data.",
"In contrast, our model incorporates embeddings of virtual sentences that contain dan-shi( ) or its synonym dan( ).",
"This encourages our model to learn to push their embeddings closer during training, and make our model more robust to small perturbations in real sentences.",
"Data Augmentation.",
"Data augmentation is an effective method to improve machine translation performance.",
"Existing methods in NMT may be divided into two categories, based upon extra corpora (Sennrich et al., 2016a; Cheng et al., 2016; Zhang and Zong, 2016; Edunov et al., 2018) or original parallel corpora (Fadaee et al., 2017; Wang et al., 2018; Cheng et al., 2019).",
"Recently, mixup (Zhang et al., 2018) has become a popular data augmentation technique for semi-supervised learning (Berth-elot et al., 2019) and overcoming real-world noisy data (Jiang et al., 2019).",
"Unlike prior works, we introduce a new method to augment the representations of the adversarial examples in sequence-to-sequence training of the NMT model.",
"Even without extra monolingual corpora, our approach substantially outperforms the widely-used back-translation methods (Sennrich et al., 2016a; Edunov et al., 2018).",
"Furthermore, we can obtain even better performance by including additional monolingual corpora.",
"Robust Neural Machine Translation.",
"It is well known that neural networks are sensitive to noisy inputs (Szegedy et al., 2014; Goodfellow et al., 2014), and neural machine translation is no exception.",
"Thus improving the robustness of NMT models has become a popular research topic (e.g., Belinkov and Bisk, 2018; Sperber et al., 2017; Ebrahimi et al., 2018; Cheng et al., 2018, 2019; Karpukhin et al., 2019; Li et al., 2019).",
"Many of these studies focus on augmenting the training data to improve robustness, especially with adversarial examples (Ebrahimi et al., 2018; Cheng et al., 2019; Karpukhin et al., 2019; Michel et al., 2019).",
"Others also tried to deal with this issue by finding better input representations (Durrani et al., 2019), adding adversarial regularization (Sato et al., 2019) and so on.",
"In contrast to those studies, we propose the vicinity distribution defined in a smooth space by interpolating discrete adversarial examples.",
"Experimental results show substantial improvements on both clean and noisy inputs.",
"We have presented an approach to augment the training data of NMT models by introducing a new vicinity distribution defined over the interpolated embeddings of adversarial examples.",
"To further improve the translation quality, we also incorporate an existing vicinity distribution, similar to mixup for observed examples in the training set.",
"We design an augmentation algorithm over the virtual sentences sampled from both of the vicinity distributions in sequence-to-sequence NMT model training.",
"Experimental results on Chinese-English, English-French and English-German translation tasks demonstrate the capability of our approach to improving both translation performance and robustness."
]
| [
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"result",
"abstain",
"abstain",
"result",
"method",
"objective",
"objective",
"method",
"result",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"objective",
"result",
"method",
"objective"
]
|
[
"Swarnadeep Saha Prateek UNC Chapel",
"Abstract We focus on a type of linguistic formal reasoning where the goal is to reason over explicit knowledge in the form of natural language facts and rules (Clark et al., 2020).",
"A recent work, named PROVER (Saha et al., 2020), performs such reasoning by answering a question and also generating a proof graph that explains the answer.",
"However, compositional reasoning is not always unique and there may be multiple ways of reaching the correct answer.",
"Thus, in our work, we address a new and challenging problem of generating multiple proof graphs for reasoning over natural language rule-bases.",
"Each proof provides a different rationale for the answer, thereby improving the interpretability of such reasoning systems.",
"In order to jointly learn from all proof graphs and exploit the correlations between multiple proofs for a question, we pose this task as a set generation problem over structured output spaces where each proof is represented as a directed graph.",
"We propose two variants of a proof-set generation model, MULTIPROVER .",
"Our first model, Multilabel MULTIPROVER , generates a set of proofs via multi-label classification and implicit conditioning between the proofs; while the second model, Iterative -MULTIPROVER , generates proofs iteratively by explicitly conditioning on the previously generated proofs.",
"Experiments on multiple synthetic, zero-shot, and human-paraphrased datasets reveal that both MULTIPROVER models significantly outperform PROVER on datasets containing multiple gold proofs.",
"Iterative -MULTIPROVER obtains state-of-the-art proof F1 in zero-shot scenarios where all examples have single correct proofs.",
"It also generalizes better to questions requiring higher depths of reasoning where multiple proofs are more frequent.",
"Formal reasoning over explicit multi-sentence knowledge (Newell and Simon, 1956) has often",
"proved to be challenging (Musen and Van Der Lei, 1988), owing to the difficulty in creating logical forms from such sentences, thereby restricting the application of semantic parsers (Zettlemoyer and Collins, 2005; Berant et al., 2013; Berant and Liang, 2014).",
"Thus, in a recent work, Clark et al. (2020) bypass the creation of intermediate logical forms and show that transformers (Vaswani et al., 2017) can act as soft theorem provers\" by answering questions over natural language (English) rule-bases, consisting of facts and rules. In order to reliably interpret these predicted answers, Saha et al. (2020) propose PROVER , a transformer-based model that generates the corresponding proof graph, thus emulating formal reasoning closely. Consider the two example rule-bases with two questions and corresponding proofs in Figure 1, where a proof is a directed graph consisting of the relevant facts and rules from the corresponding rule-base. PROVER shows good single-proof generation accuracy but is designed and trained in a way to generate only a single proof for each question. This is not ideal because formal proofs are not always unique and there may be multiple correct ways of arriving at the answer. For example, Q 1 and Q 2 in Figure 1 have three and four correct proofs respectively. Hence, in order to enhance the human-interpretability of linguistic formal reasoning systems, it is desirable to develop methods that can generate multiple proofs, each providing a different rationale for the predicted answer. Such interpretable methods, while possessing the flexibility of operating over natural language, can also aid in verifying claims when constructing proofs from scratch is tedious or infeasible. We find that PROVER (Saha et al., 2020), when trained on all proofs as independent training examples (Eq. 2) and extended to generate topp proofs during inference (Eq. 3), fails drastically, achieving a low proof precision of 34%. The subsequent proofs are often incorrect because it is not Facts: F1: Bob is big. F2: Bob is blue. F3: Bob is furry. F4: Bob is young. F5: Dave is red. F6: Fiona is white. F7: Harry is big. F8: Harry is red. F9: Harry is round. F10: Harry is white. Rules: R1: White, round things are furry. R2: All blue, young things are big. R3: If something is white and young, then it is blue. R4: If Dave is round then Dave is white. R5: If something is blue and white then it is round. R6: If Harry is big and Harry is white then Harry is red. R7: All furry, red things are young. R8: Red things are round. R9: If something is blue then it is red. Q1: Harry is furry. [Answer : T ] Rules: R1: Round things are nice. R2: Nice things are young. R3: If something is big and not young then it is not white. R4: If something is young and smart then it is round. R5: All big things are young. R6: If Bob is not white then Bob is big. R7: Young, nice things are quiet. R8: If something is not big then it is nice. R9: All white things are not quiet. Facts: F1: Anne is round. F2: Bob is smart. F3: Fionna is nice. F4: Fiona is round. F5: Harry is nice. F6: Harry is quiet. F7: Harry is smart. Q2: Anne is not quiet. [Answer : F ] NAF NAF NAFNAF Figure 1: Diagram showing two rule-bases with rules, facts, questions, answers and all possible proofs. The first question has three correct proofs while the second question has four correct proofs. MULTIPROVER answers both questions correctly and also generates all the corresponding proofs accurately for each question. trained jointly with all proofs and hence, is unable to exploit the inter-proof correlations and also does not learn the correct number of proofs for a question. Thus, we propose MULTIPROVER , a transformer-based model that can generate a set of proof graphs with appropriate cardinality for a given question. 
Since multiple proofs can be generated in any arbitrary order, we pose this task as a set generation problem over graphs and train MULTIPROVER jointly with a permutation-invariant Hungarian Loss (Zhang et al., 2019a,b) over all proofs. A proof graph is generated through a node module, which selects the relevant facts and rules as part of the proof, and an edge module, which determines the edges between the chosen nodes. Similar to PROVER, we first enforce multiple structural constraints during training and inference to ensure that a generated proof is valid. Next, in order to generate a set of proofs jointly, we propose our first model, Multilabel-MULTIPROVER, a multi-label classification framework which performs implicit conditioning among the proofs and predicts p binary labels for each node and edge, denoting its presence or absence in each of the p proofs that we want to generate. It is efficient in terms of the number of parameters and training time and also achieves a better proof F1 than PROVER. However, the lack of explicit conditioning between the proofs is not ideal, because a question with multiple proofs often has certain common sub-graphs across the proofs. E.g., all three proofs for Q 1 in Figure 1 have the sub-graph {F10 → R1} in common. Thus, in order to exploit these correlations, which Multilabel-MULTIPROVER cannot capture explicitly, we further propose an improved variant of MULTIPROVER, named Iterative-MULTIPROVER, which generates an appropriate number of proofs by stacking multiple node and edge encoders, each of which generates one proof at each time step by conditioning on the previously generated proofs. This enables the model to better learn the correlations between multiple proofs for a given question. To capture the set-based nature of the task, we train MULTIPROVER using a permutation-invariant Hungarian Loss (Sec. 3.5), which solves an assignment problem between a set of predicted and gold proofs. Empirical evaluation on synthetic and human-paraphrased QA rule-bases (Clark et al., 2020) shows that both of our MULTIPROVER models achieve a significantly higher proof F1 compared to PROVER while retaining the QA accuracy. Further, on a challenging hand-authored zero-shot dataset, where all examples have single gold proofs, Iterative-MULTIPROVER achieves state-of-the-art proof F1. It also generalizes better to questions requiring higher depths of reasoning, where multiple proofs are more frequent. Overall, our contributions are: (1) We address a new and challenging problem of generating a set of multiple logical proof graphs for reasoning over natural language rule-bases by proposing two set-based joint models, Multilabel-MULTIPROVER and Iterative-MULTIPROVER. (2) Iterative-MULTIPROVER's joint training and explicit conditioning help it to better learn the relative importance of rules and facts for a particular question and to uncover common subgraphs across multiple proofs; thus, compared to Multilabel-MULTIPROVER and PROVER, it is able to transfer well in zero-shot settings because it learns to assign a soft prior over the rule-base. (3) Iterative-MULTIPROVER's conditional generation also enables it to generalize better to questions requiring higher depths of reasoning, where the presence of multiple proofs is frequent. (Our code and models are publicly available at https://github.com/swarnaHub/multiPRover.) 
2 Related Work The task of rule reasoning (Clark et al., 2020) is related to other recently proposed tasks on QA (Weston et al., 2015; Yang et al., 2018; Lin et al., 2019; Tafjord et al., 2019; Richardson et al., 2020) and NLI (MacCartney and Manning, 2014). However, most of these tasks require implicit reasoning rules as opposed to explicit ones, and the focus is either on broad language understanding or on single rule application. Below we discuss MULTIPROVER's relation to multiple areas of NLP and ML. Structured Explanations: There is useful previous work on developing interpretable and explainable models (Doshi-Velez and Kim, 2017; Rudin, 2019; Hase and Bansal, 2020; Jacovi and Goldberg, 2020) for NLP. Explanations in NLP take three major forms: (1) extractive rationales or highlights (Zaidan et al., 2007; Lei et al., 2016; Yu et al., 2019; DeYoung et al., 2020), where a subset of the input text explains a prediction; (2) free-form or natural language explanations (Camburu et al., 2018; Rajani et al., 2019; Zhang et al., 2020; Kumar and Talukdar, 2020) that are not constrained to the input; and (3) structured explanations that range from semi-structured text (Ye et al., 2020) to chains of facts (Khot et al., 2020; Jhamtani and Clark, 2020; Gontier et al., 2020) to explanation graphs (based on edges between chains of facts) (Jansen et al., 2018; Jansen and Ustalov, 2019; Xie et al., 2020). Generating Multiple Outputs: Generating a set of proofs can be viewed as a task of generating multiple structured outputs (Prasad et al., 2014). Multiple prior studies focus on generating diverse unstructured texts (Gimpel et al., 2013; Dai et al., 2017; Xu et al., 2018; Raffel et al., 2020), which broadly span two categories: (1) using improved decoding techniques like beam search with inter-sibling ranking penalty (Li et al., 2016), iterative beam search (Kulikov et al., 2018), diverse beam search (Vijayakumar et al., 2018), and sentence codes (Shu et al., 2019); (2) varying the hidden representations or using multiple decoders (Dai et al., 2017; Jain et al., 2017; Shen et al., 2019). Our baseline, PROVER-top-p, which extends PROVER to generate top-p proofs during inference, falls in the first category, while MULTIPROVER falls in the second, where the multiple node and edge encoders vary the node and edge representations for generating multiple proofs. Machine Learning over Sets: Set-based ML models (Zaheer et al., 2017; Lee et al., 2018; Zhang et al., 2019a; Kosiorek et al., 2020) have a wide range of applications, including generating multiple image captions (Vinyals et al., 2015), generating diverse translations (Cho et al., 2014; Bahdanau et al., 2015), and enumerating rules in a logical inference system (Gao et al., 2019). Set problems are challenging because the number of valid solutions for a set of size n is n!, which grows faster than exponentially in n, and ignoring the set structure produces sub-optimal solutions (Zhang et al., 2019a). Thus, we use a set-based Hungarian Loss (Zhang et al., 2019a,b) to capture the permutation-invariant nature of generating a set of proofs. 3 Method 3.1 Task Description and Notations The input to our task is a tuple of the form $(C, Q)$, where $C$ is a rule-base context and $Q$ is the question. We want to predict a binary answer $A \in \{True, False\}$ for the question and generate a set of proof graphs $\mathcal{P} = \{P_1, \ldots, P_p\}$, each of which provides a diverse rationale for the answer (see Figure 1). 
The context C consists of a set of facts and rules, denoted by F and R respectively. Facts $F = \{F_1, \ldots, F_f\}$ are unambiguous statements, while rules $R = \{R_1, \ldots, R_r\}$ are logical statements, which can be used in conjunction with the facts to arrive at a logical conclusion. Each proof $P_i = (V_i, E_i)$ is a directed graph, with a set of nodes $V_i \subseteq \mathcal{N}$ and a set of edges $E_i \subseteq V_i \times V_i$, where $\mathcal{N} = F \cup R \cup \{NAF\}$ and $k = |\mathcal{N}|$. If a statement (e.g., Anne is big) cannot be deduced from the context, then Negation as Failure (NAF) contains the negation of that statement (e.g., Anne is not big), which is considered true under a closed-world assumption.",
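To make the notation concrete, one possible encoding of an example with its set of proof graphs is sketched below; this is a hypothetical representation, not the authors' data format:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Proof:
    """A proof graph: nodes drawn from fact/rule ids plus 'NAF',
    edges as directed pairs over those nodes."""
    nodes: frozenset  # e.g., frozenset({"F10", "R1"})
    edges: frozenset  # e.g., frozenset({("F10", "R1")})

# A question is labeled with a binary answer and a *set* of proofs:
example = {
    "answer": True,
    "proofs": {Proof(frozenset({"F10", "R1"}), frozenset({("F10", "R1")}))},
}
```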
"See appendix for more details of the syntax of proof graphs.",
"PROVER (Saha et al., 2020) builds on top of RoBERTa (Liu et al., 2019) and consists of a question answering (QA) module, a node module and an edge module where the node and edge modules are used to predict a single proof graph.",
"The input to RoBERTa is the concatenation of the facts, rules and the question.",
"The QA module takes in the representation of the [ CLS ] token and predicts a binary label for the question.",
"The node module computes the node embeddings N R k d p : Figure 2: Plot showing the percentage of samples with p > 1 proofs for different training datasets, DU0-DU5.",
"consisting of the representations of each fact, rule and NAF where d is the embedding dimension.",
"The i th row n i of N denotes the embedding of node i .",
"A node classifier takes in these embeddings to output the node probabilities np i R k for each fact, rule and NAF being present in the proof.",
"The edge module computes the edge embeddings E R k 2 3 d for every edge ( i, j ) through the function ( i, j ) = [ n i ; n j ; ( n i n j )] where ; is the concatenation operation and outputs probabilities ep i,j R k 2 of each edge being present in the proof.",
"PROVER is trained using the joint cross-entropy loss over the QA, node and edge modules.",
"The authors pose inference as a Integer Linear Program (ILP).",
"Given a set of nodes and the edge probabilities from the trained model, the following global score over the edge probabilities is maximized, subject to multiple structural constraints S that ensure the validity of a proof graph (like checking for graph connectivity).",
"Extending PROVER to Generate Proof-Sets: Since Saha et al. (2020) focus on generating one proof per question, they also train their model with one gold proof per question.",
"For multiple proof generation, an obvious extension is to treat each proof for a question as a separate training example.",
"Formally, for each sample l , given a context C l , a question Q l , an answer A l and a set of gold proofs P li , where i { 1 , . . . , p l } , the extended training dataset can be defined as: D = L (cid:91) l =1 (cid:110)(cid:16) Q l , C l , A l , P li (cid:17) p l i =1 (cid:111) l (2) Once PROVER is trained with this dataset, during inference, we generate topp proofs by first selecting the topp node sets according to Eqn.",
"3 and then choosing the corresponding edge sets usE dge E m bedd i ng M odu l e RoBERTa F 1 F f R 1 R r CLS Q Tokens Tokens Embeddings N ode E m bedd i ng M odu l e Node & Edge Embeddings H unga r i an P r oo f Lo ss Cross Entropy Loss FinalLoss QAC l a ss i fi e r M u l t i Labe l N ode C l a ss i fi e r M u l t i Labe l E dge C l a ss i fi e r QA Lo ss Base PRover Figure 3: Multilabel -MULTIPROVER .",
"ing the optimization function in Eqn.",
"1.",
"The topp solutions of Eqn.",
"3 are v 1 , . . . , v p which indicate a node's presence or absence in the proofs.",
"Although simple, this approach has two major issues.",
"First, the lack of coupling between the proofs can potentially confuse the model as there are multiple possible proofs for the same (question, context) pair.",
"Second, inference is inflexible and always generates a fixed number of proofs for every example, thus leading to the generation of many incorrect proofs (Section 5.1).",
"As shown in Figure 1, certain questions can have multiple possible proofs.",
"Figure 2 demonstrates this phenomenon statistically the datasets we experiment with (Clark et al., 2020) contain up to 13% of the samples with > 1 correct proof.",
"Thus, in the light of PROVER 's limitations, we propose two novel architectures of a proof-set generation model, MULTIPROVER .",
"As described in the previous section, a desired property for generating a set of proofs is to have the proofs conditioned on each other as opposed to treating them independently.",
"Thus, we propose Multilabel -MULTIPROVER (see Figure 3), which poses the problem of generating a set of proofs as a multi-label classification task over all the nodes and edges corresponding to the set of p proofs.",
"Each training example is a tuple (cid:0) Q l , C l , A l , {P li } p l i =1 (cid:1) , BasePRover F 1 F f R 1 R r CLS QH unga r i an P r oo f Lo ss QA Lo ss E dge C l a ss i fi e r N ode C l a ss i fi e r T r an s f o r m e r N ode E n c ode r T r an s f o r m e r E dge E n c ode r QAC l a ss i fi e r E dge C l a ss i fi e r N ode C l a ss i fi e r T r an s f o r m e r N ode E n c ode r E dge C l a ss i fi e r N ode C l a ss i fi e r T r an s f o r m e r N ode E n c ode r Second Proof Layer p-1 Proof Layer Node & Edge Embeddings T r an s f o r m e r E dge E n c ode r T r an s f o r m e r E dge E n c ode r N ode C l a ss i fi e r E dge C l a ss i fi e r Cross Entropy Loss FinalLoss Tokens Figure 4: Iterative -MULTIPROVER .",
"consisting of a set of gold proofs {P l i } p l i =1 per example.",
"It consists of a QA module, a node module and an edge module.",
"Following PROVER (Section 3.2), we obtain the node representations N R k d by mean-pooling over the constituent RoBERTa representations.",
"These are then passed through a multilabel node classifier , which consists of two linear layers and produces the probabilities np i R p of a node being present in the p proofs.",
"The node embeddings n i and n j for a pair of nodes are transformed by the function ( i, j ) , described in Section 3.2, to output the edge embeddings E R k 2 3 d .",
"We also have a multi-label edge classifier , which takes in the edge embeddings to generate the probabilities ep i,j R p of an edge ( i, j ) being present in the p proofs.",
"Lastly, a question answering module predicts a binary answer for the question.",
"Following PROVER , during training, we mask certain impossible edges like fact to fact, rule to fact and non-nodes.",
"Given the outputs from the three modules, we train our model jointly over all proofs using a set-based Hungarian Loss.",
"This model is advantageous because there is implicit conditioning between the proofs as all the proofs are generated in parallel from the same node embeddings and edge embeddings.",
"Thus, it has no additional time or memory overhead while also generating proof-sets better than PROVER (Section 5.1).",
"However, it suffers from two major drawbacks.",
"First, since the proofs are generated in parallel, the model is trained by padding empty proof graphs.",
"Hence for higher values of p , the model has to learn more empty proofs, which makes the Figure 5: Plot showing the percentage of samples in DU5 with at least one common node, common edge or both between the proofs for varying number of proofs.",
"learning problem harder.",
"Second, the proofs are not explicitly conditioned on each other.",
"This motivates us to propose Iterative -MULTIPROVER .",
"As a motivating example for why explicit conditioning among proofs is necessary, consider the proofs for Q 1 in Figure 1 where the sub-graph { F 10 R 1 } is common across all the proofs.",
"F 10 and R 1 are essential for answering the question and hence conditioning on the previously generated proofs will help the model adjust the relevance of nodes and edges in the subsequent proofs.",
"Quantitatively, we find that about 75% of the samples with 4 proofs have at least one node and one edge common across all the proofs (see Figure 5).",
"Thus, we propose Iterative -MULTIPROVER (see Figure 4), which broadly consists of a base PROVER architecture, as in Figure 3 and an additional p node and edge encoders for generating a maximum of p proofs.",
"The proofs are generated iteratively until an empty graph is generated to denote the end.",
"Base PROVER architecture computes the first level of node embeddings N 1 R k d and edge embeddings E 1 R k 2 d .",
"These are passed respectively through a node and edge classifier to generate the node probabilities np 1 R k and edge probabilities ep 1 R k 2 , corresponding to the first proof.",
"In the next iteration, two transformer encoders generate the node and edge embeddings corresponding to the second proof.",
"Specifically, we condition the generation of the next node embeddings N 2 on the previous node ( N 1 ) and edge ( E 1 ) embeddings simultaneously.",
"Conditioning on both is crucial because N 1 captures the relevance of nodes for the first proof, while E 1 contains information about the strength of the connections between these nodes.",
"We condition E 2 only on E 1 , because the edge embeddings corresponding to the nodes predicted by N 1 are already updated in E 1 .",
"Formally, T 1 = W (1) E 1 W (2) , W (1) R k k 2 , W (2) R 3 d d N (cid:48) = [ N 1 ; T 1 ] W (3) , W (3) R 2 d d N 2 = Transformer ( N (cid:48) ); E 2 = Transformer ( E 1 ) These next set of embeddings, when passed through the respective node and edge classifiers, predict the node probabilities np 2 R k and edge probabilities ep 2 R k 2 , denoting the likelihood of their presence in the second proof.",
"We repeat this process of stacking up node and edge encoders for generating a maximum of p proofs.",
"Given the node and edge probabilities corresponding to each proof and a QA probability from the QA module, we train Iterative -MULTIPROVER jointly with all proofs using the Hungarian Loss, described below.",
"Unlike words in text generation, proofs can be generated in any arbitrary order.",
"Consequently, computing cross-entropy loss between the i th predicted proof and the i th gold proof, i { 1 , ..., p } will be sub-optimal.",
"Thus, we use a permutation-invariant Hungarian Loss (Zhang et al., 2019a,b) which finds the most optimal assignment between the predicted proofs and the gold proofs such that the overall loss is minimized.",
"Formally, the Hungarian loss LH and total loss L are denoted as follows: LH = min p (cid:88) i =1 CE ( np i , y ( i ) n ) + CE ( ep i , y ( i ) e ) L = LQA + LH where CE ( ., . ) is the cross entropy loss, np i and ep i are the respective node and edge probabilities for the i th predicted proof while y ( i ) n { 0 , 1 } k and y ( i ) e { 0 , 1 } k 2 are the respective true node and edge labels for the gold proof ( i ) , where is the most optimal permutation.",
"The Hungarian Loss is implemented by first summing the node and edge cross-entropy loss matrices L n R p p and L e R p p respectively, each entry ( i, j ) of which corresponds to the proof loss between the i th predicted proof and j th gold proof (see Figures 3 and 4).",
"Then we find the best assignment between the gold and predicted proofs through the Hungarian algorithm (Kuhn and Yaw, 1955).",
"Our final loss sums the Hungarian proof loss and the QA loss.",
"Following PROVER , we generate valid proofs during inference using an ILP, subject to multiple",
"global constraints (see Saha et al. (2020)).",
"For each predicted proof, the predicted nodes and edge probabilities from MULTIPROVER , we obtain the corresponding predicted edges using Eqn.",
"1.",
"Datasets: The five synthetic datasets DU0-DU5 consist of 100k questions with their own train, validation and test splits (70/10/20) and reasoning depths up to D = 0 , 1 , 2 , 3 , 5 .",
"Each example in these datasets is annotated with all possible proofs.",
"The second dataset is a Birds-Electricity dataset, consisting of 5k hand-authored samples aimed at evaluating the zero-shot performance of the models.",
"Unlike the previous datasets, all examples in this dataset have a unique gold proof.",
"Third, ParaRules is a human-paraphrased dataset, consisting of 40k examples with all possible proofs, where the facts and rules are paraphrased by crowd-workers.",
"Further details of the datasets and model's hyperparameters can be found in the appendix.",
"Evaluation Metrics: Following PROVER , QA evaluation is done through accuracy.",
"For proofs, we compute the following metrics: (1) Node Precision, Recall, F1 (2) Edge Precision, Recall, F1 , (3) Proof Precision, Recall, F1 , and (4) Full Accuracy (FA) .",
"For each sample, given a set of gold proofs and predicted proofs, node precision is computed as the fraction of predicted proofs where the predicted node set matches exactly with a gold proof's node set.",
"Similarly, node recall for each sample is computed as the fraction of gold proofs where the corresponding node sets match exactly.",
"The overall node precision, recall and F1 are the respective sample-wise precision, recall and F1 scores averaged over all the samples.",
"Edge metrics are computed similarly but with respect to the edges only and the proof metrics consider both nodes and edges in conjunction.",
"Our final metric, full accuracy evaluates a sample as a whole and is given by the fraction of samples where the answer and all corresponding proofs are exactly correct.",
"(1) PROVER , as introduced in Saha et al. (2020), trained with one proof per example and also generates a single proof, (2) PROVER -all, trained with all proofs as separate examples and generates a single proof per example, (3) PROVER -topp , an extension of PROVER -all, generating topp proofs for all examples, (4) PROVER -topp -classifier, an improvement over the vanilla topp model, where we first predict the number of proofs by training a RoBERTa classifier with concatenated question and context and then generate those many top proof graphs, and (5) PROVER -topp -threshold, another improved model over vanilla topp , where we use the optimization score from Equation 3 to predict the number of proofs to generate, i.e., we stop generating proofs when the score difference between two consecutive proofs exceeds a certain threshold (tuned on the validation set).",
"All models are trained on the DU5 train set and tested on the corresponding test set.",
"Based on Figure 2 which shows that 98% of the dataset contains samples with 3 proofs, we set max-proofs, p = 3 .",
"87% of the examples in the dataset have a single gold proof, thereby making PROVER a strong baseline.",
"We observe that PROVER -all has a slightly lower proof F1 than PROVER , because the model likely gets confused with multiple possible proofs for the same context and question.",
"PROVER -topp 's huge drop in precision is unsurprising because the subsequent non-empty proofs are always incorrect, causing full accuracy to drop to 0%.",
"When we perform careful inference over PROVER either by predicting the number of proofs or by thresholding and do not generate a fixed p number of proofs for all examples, we observe a boost in precision over the vanilla topp model, with very little drop in recall.",
"However, PROVER continues to be a stronger baseline than all the topp variants because of a lot of single-proof examples in the dataset.",
"Both MULTIPROVER models improve significantly on the state-of-the-art proof F1, while retaining a near perfect QA accuracy.",
"IT-MULTIPROVER is a significantly stronger model because of its explicit conditioning mechanism and obtains up to a statistically significant 2 ( p < 0 . 001 ) 4% improvement on proof F1 and full accuracy.",
"While our model is expected to improve the proof recall compared to PROVER and PROVER -all because of the generation of multiple proofs, the improvement in precision is particularly important as it shows that the subsequently generated proofs by IT-MULTIPROVER are mostly correct.",
"Similarly, its improvement in proof recall compared to PROVER topp also shows the strength of the model considering that PROVER -topp generates the maximum number of proofs for every sample.",
"Overall, IT-MULTIPROVER outperforms all other models in all metrics.",
"In summary, careful inference strategies over a single-proof generation model like PROVER are largely ineffective for generating multiple proofs and an effective proof-set generation model needs to exploit and learn the inter-proof correlations during the training phase itself.",
"Our experiments on the ParaRules dataset demonstrate similar findings, details of which and the effect of varying p for MULTIPROVER is in the appendix.",
"Iterative -MULTIPROVER performs equally well on the subset of questions where the context has negations , achieving a high proof F1 of 90 .",
"8 .",
"As part of error analysis, we find that 58% of Iterative MULTIPROVER 's wrongly predicted proofs have more nodes and edges than those in the gold proof, suggesting that our model tends to overestimate the essential rules and facts and their inter-connections.",
"In the following subsections, we analyze MULTIPROVER 's generalization capabilities in three dif-2 We use bootstrap test (Efron and Tibshirani, 1994) for calculating the statistical significance score.",
"The Birds-Electricity test-only dataset evaluates the zero-shot performance.",
"It contains examples with single gold proofs; hence, if a multiple-proof generation model like MULTIPROVER transfers well to it, this indicates strong generalization capabilities because along with generating correct proofs, it also needs to infer the correct number of proofs.",
"With that motivation, in Table 2, we compare PROVER and PROVER -all, both trained on DU5 to generate a single proof, with our MULTIPROVER models, also trained on DU5 and find that IT-MULTIPROVER obtains state-of-the-art result on all proof-related metrics, while retaining the QA performance.",
"Note that IT-MULTIPROVER has two important design choices which explains its good performance on out-of-domain transfer (1) it trains on all proofs jointly, (2) explicit proof conditioning.",
"Both of these, when combined, enable it to learn the correlations between the proofs to identify the degree of relevance of facts and rules, ranging from essential to sometimes useful to irrelevant, for a given question.",
"Thus, on out-of-domain test data, it assigns soft prior relevance scores to the context which helps it to better learn the significantly smaller space of correct proofs and be more accurate even for a single-proof dataset.",
"The DU5 dataset consists of questions requiring reasoning up to a maximum depth of 5.",
"Thus, we test the generalization capabilities of the MULTIPROVER models on higher depth questions.",
"Specifically, in Table 3, we compare the DU5-trained models of PROVER -all, ML-MULTIPROVER and IT-MULTIPROVER on the subset of DU5 test examples with varying depths of reasoning ( d ).",
"Each row also shows the percentage of examples with multiple gold proofs (MP) which, unsurprisingly, increases as the depth increases.",
"We observe that much of IT-MULTIPROVER 's improvement compared to ML-MULTIPROVER comes at higher depths where the presence of multiple proofs is a more frequent phenomenon.",
"At depth-5, where 23% of the examples have > 1 correct proof, IT-MULTIPROVER obtains a 6% improvement over ML-MULTIPROVER .",
"This shows that joint training with all proofs and explicit conditioning between them leads to better generalization at higher depths.",
"Collecting proofs for supervised training is expensive in most real-world scenarios.",
"Hence, on top of the zero-shot and depth generalization results presented so far, we ask if our MULTIPROVER models can learn from less training data.",
"Table 4 shows that these models obtain near perfect QA accuracy with only 40% of the training data (30k exam-ples).",
"However, proof generation proves to be challenging and only improves with sufficient training Facts: F1: Bob is quiet.",
"data.",
"Another interesting observation is that while both MULTIPROVER models perform comparably with less training data, IT-MULTIPROVER starts to outperform ML-MULTIPROVER upon training with more examples.",
"IT-MULTIPROVER consists of more trainable parameters because of its multiple node and edge encoders, which get learned better with more data.",
"See appendix for runtime and parameter space of these models.",
"We find that an ideal (skyline) single-proof generation model's proof recall for the DU5 dataset is upper-bounded by 92% as it contains about 87% of single-proof examples.",
"This is computed by considering exactly 1 correct proof per question.",
"Hence, we ask how well our MULTIPROVER models compare with this ideal performance (Figure 7).",
"Our results are encouraging, not only because IT-MULTIPROVER generates more correct proofs than all other models but also because it almost matches the performance of the skyline single-proof generation model.",
"The PROVER model is 9 .",
"2% worse as compared to the skyline single-proof generation model while IT-MULTIPROVER reduces this gap to 3% .",
"Given the dataset mostly contains single-proof examples, the skyline is a strong upper-bound on proof generation performance and IT-MULTIPROVER significantly reduces the gap.",
"See appendix for ablations of IT-MULTIPROVER , including the effect of Hungarian Loss.",
"Fig. 6 shows the sets of proofs correctly generated by Iterative -MULTIPROVER for two randomly chosen questions.",
"For Q 1 , it generates all the possible proofs by identifying the common subgraph 92% 9.2% 3% Figure 7: Comparison of proof recall for all models with that of the skyline single-proof generation model.",
"F 6 R 9 .",
"Q 2 is interesting, because",
"(i) the single-node proof F 2 is significantly different from the other proofs in both structure and size, and",
"(ii) the two larger proofs have two distinct common subgraphs.",
"Here, PROVER performs simple lookup in the rule-base to generate the proof F 2 , thereby limiting our understanding of its reasoning capabilities.",
"However, MULTIPROVER , through its ability to also generate the larger and more complex proofs enhances the transparency and verification of its reasoning abilities, and hence is a crucial step towards bridging the gap between neural and symbolic approaches.",
"We proposed Multilabel -MULTIPROVER and Iterative -MULTIPROVER , two variants of a proof-set generation model where the former performs implicit conditioning between the proofs to generate them in parallel while the latter generates a proof-set through explicit conditioning on the previously generated proofs.",
"Both models obtain strong proof F1 improvements on synthetic and human-paraphrased datasets and Iterative -MULTIPROVER also obtains state-of-the-art proof F1 on a zero-shot dataset with single proofs.",
"MULTIPROVER 's modeling is fairly generic and similar methods can be used in generating a set of structured explanations for other NLP tasks like multi-hop QA.",
"Despite the overwhelming success of pre-trained language models for various NLP tasks, a common criticism is their lack of interpretability.",
"Generating structured proofs from such models allows us to explain their reasoning capabilities and also bridges the gap between neural and symbolic systems.",
"In this work, we take a step closer towards improving the interpretability of rule-based reasoning by generating a set of multiple proofs, each providing a diverse rationale for the reasoning process.",
"We experiment with a wide variety of rule-bases ranging from synthetic to hand-authored to human-paraphrased rule-bases.",
"Our results show good generalization performance of our models across three different aspects (1) zero-shot settings, (2) questions requiring higher depths of reasoning, and (3) availability of less training data.",
"We hope our models and findings will inspire future work on generating multiple structured explanations for different compositional reasoning tasks in NLP.",
"We thank the reviewers and Peter Hase for their helpful feedback.",
"This work was supported by DARPA MCS Grant N66001-19-2-4031, NSF-CAREER Award 1846185, DARPA YFA17-D17AP00022, ONR Grant N00014-18-1-2871, Mi-crosoft Investigator Fellowship, and Munroe & Rebecca Cobey Fellowship.",
"The views in this article are those of the authors and not the funding agency."
]
| [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"other",
"other",
"other"
]
|
[
"Pretrained multilingual models enable zero-shot learning even for unseen languages, and that performance can be further improved via adaptation prior to finetuning.",
"However, it is unclear how the number of pretraining languages influences a model's zero-shot learning for languages unseen during pretraining.",
"To fill this gap, we ask the following research questions: (1) How does the number of pretraining languages influence zero-shot performance on unseen target languages?",
"(2) Does the answer to that question change with model adaptation?",
"(3) Do the findings for our first question change if the languages used for pretraining are all related?",
"Our experiments on pretraining with related languages indicate that choosing a diverse set of languages is crucial.",
"Without model adaptation, surprisingly, increasing the number of pretraining languages yields better results up to adding related languages, after which performance plateaus.",
"In contrast, with model adaptation via continued pretraining, pretraining on a larger number of languages often gives further improvement, suggesting that model adaptation is crucial to exploit additional pretraining languages.",
"1 1 Introduction Pretrained multilingual language models (Devlin et al., 2019; Conneau et al., 2020) are now a standard approach for cross-lingual transfer in natural language processing (NLP).",
"However, there are multiple, potentially related issues on pretraining multilingual models.",
"Conneau et al. (2020) find the curse of multilinguality: for a fixed model size, zero-shot performance on target languages seen during pretraining increases with additional pretraining languages only until a certain point, after This work was done while the first author was a student at University of Colorado Boulder.",
"which performance decreases.",
"Wang et al. (2020b) also report negative interference, where monolingual models achieve better results than multilingual models, both on subsets of highand low-resource languages.",
"However, those findings are limited to target languages seen during pretraining.",
"Current multilingual models cover only a small subset of the world's languages.",
"Furthermore, due to data sparsity, monolingual pretrained models are not likely to obtain good results for many low-resource languages.",
"In those cases, multilingual models can zero-shot learn for unseen languages with an above-chance performance, which can be further improved via model adaptation with target-language text (Wang et al., 2020a), even for limited amounts (Ebrahimi and Kann, 2021).",
"However, it is poorly understood how the number of pretraining languages influences performance in those cases.",
"Does the curse of multilinguality or negative interference also impact performance on unseen target languages?",
"And, if we want a model to be applicable to as many unseen languages as possible, how many languages should it be trained on?",
"Specifically, we ask the following research questions: (1) How does pretraining on an increasing number of languages impact zero-shot performance on unseen target languages?",
"(2) Does the effect of the number of pretraining languages change with model adaptation to target languages?",
"(3) Does the answer to the first research question change if the pretraining languages are all related to each other?",
"We pretrain a variety of monolingual and multilingual models, which we then finetune on English and apply to three zero-shot cross-lingual downstream tasks in unseen target languages: part-of-speech ( POS ) tagging, named entity recognition ( NER ), and natural language inference ( NLI ).",
"Experimental results suggest that choosing a diverse set of pretraining languages is crucial for effective transfer.",
"Without model adaptation, increasing the number of pretraining languages im-1500 proves accuracy on unrelated unseen target languages at first and plateaus thereafter.",
"Last, with model adaptation, additional pretraining languages beyond English generally help.",
"We are aware of the intense computational cost of pretraining and its environmental impact (Strubell et al., 2019).",
"Thus, our experiments in Section 4 are on a relatively small scale with a fixed computational budget for each model and on relatively simple NLP tasks ( POS tagging, NER , and NLI ), but validate our most central findings in Section 5 on large publicly available pretrained models.",
"Pretrained multilingual models are a straightforward cross-lingual transfer approach: a model pretrained on multiple languages is then fine-tuned on target-task data in the source language.",
"Subsequently, the model is applied to target-task data in the target language.",
"Most commonly, the target language is part of the model's pretraining data.",
"However, cross-lingual transfer is possible even if this is not the case, though performance tends to be lower.",
"This paper extends prior work exploring the cross-lingual transfer abilities of pretrained models for seen target languages depending on the number of pretraining languages to unseen target languages.",
"We now transfer via pretrained multilingual models and introduce the models and methods vetted in our experiments.",
"Pretrained Language Models Contextual representations such as ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019) are not just useful for monolingual representations.",
"Multilingual BERT (Devlin et al., 2019, m BERT ), XLM (Lample and Conneau, 2019), and XLM-RoBERTa (Con-neau et al., 2020, XLM-R ) have surprisingly high cross-lingual transfer performance compared to the previous best practice: static cross-lingual word embeddings (Pires et al., 2019; Wu and Dredze, 2019).",
"Multilingual models are also practical why have hundreds of separate models for each language when you could do better with just one?",
"Furthermore, Wu and Dredze (2020) report that models pretrained on 100+ languages are better than bilingual or monolingual language models in zero-shot cross-lingual transfer.",
"Model Adaptation to Unseen Languages Adapting pretrained multilingual models such as m BERT and XLM-R to unseen languages is one way to use such models beyond the languages covered during pretraining time.",
"Several methods for adapting pretrained multilingual language models to unseen languages have been proposed, including continuing masked language model ( MLM ) training (Chau et al., 2020; Mller et al., 2020), optionally adding Adapter modules (Pfeiffer et al., 2020), or extending the vocabulary of the pretrained models (Artetxe et al., 2020; Wang et al., 2020a).",
"However, such adaptation methods assume the existence of sufficient monolingual corpora in the target languages.",
"Some spoken languages, dialects, or extinct languages lack monolingual corpora to conduct model adaptation, which motivates us to look into languages unseen during pretraining.",
"We leave investigation on the effect of target language-specific processing, e.g., transliteration into Latin scripts (Muller et al., 2021), for future work.",
"A single pretrained model that can be applied to any language, including those unseen during pretraining, is both more efficient and more practical than pretraining one model per language.",
"Moreover, it is the only practical option for unknown target languages or for languages without enough resources for pretraining.",
"Thus, models that can be applied or at least easily adapted to unseen languages are an important research focus.",
"This work addresses the following research questions ( RQ ), using English as the source language for finetuning.",
"RQ1 : How does the number of pretraining languages influence zero-shot cross-lingual transfer of simple NLP tasks on unseen target languages?",
"We first explore how many languages a model should be pretrained on if the target language is unknown at test time or has too limited monolingual resources for model adaptation.",
"On one hand, we hypothesize that increasing the number of pretraining languages will improve performance, as the model sees a more diverse set of scripts and linguistic phenomena.",
"Also, the more pretraining languages, the better chance of having a related language to the target language.",
"However, multilingual training can cause interference: other languages could distract from English, the finetuning source language, and thus, lower performance.",
"This question is concerned with settings in which we have enough monolingual data to adapt a pretrained model to the target language.",
"Like our hypothesis for RQ 1, we expect that having seen more pretraining languages should make adaptation to unseen target languages easier.",
"However, another possibility is that adapting the model makes any languages other than the finetuning source language unnecessary; performance stays the same or decreases when adding more pretraining languages.",
"RQ3 : Do the answers to RQ1 change if all pretraining languages are related to each other?",
"We use a diverse set of pretraining languages when exploring RQ 1, since we expect that to be maximally beneficial.",
"However, the results might change depending on the exact languages.",
"Thus, as a case study, we repeat all experiments using a set of closely related languages.",
"On the one hand, we hypothesize that benefits due to adding more pretraining languages (if any) will be smaller with related languages, as we reduce the diversity of linguistic phenomena in the pretraining data.",
"However, on the other hand, if English is all we use during fine-tuning, performance might increase with related languages, as this will approximate training on more English data more closely.",
"Pretraining Corpora All our models are pretrained on the C o NLL 2017 Wikipedia dump (Gin-ter et al., 2017).",
"To use equal amounts of data for all pretraining languages, we downsample all Wikipedia datasets to an equal number of sequences.",
"We standardize to the smallest corpus, Hindi.",
"The resulting pretraining corpus size is around 200MB per language.",
"2 We hold out 1K sequences with around 512 tokens per sequence after preprocessing as a development set to track the models' performance during pretraining.",
"Corpora for Model Adaptation For model adaptation (RQ2), we select unseen target languages contained in both XNLI (Conneau et al., 2018b) and Universal Dependencies 2.5 (Nivre et al., 2019): Farsi ( FA ), Hebrew ( HE ), French ( FR ), Vietnamese ( VI ), Tamil ( TA ), and Bulgarian ( BG ).",
"Model adaptation is typically done for low-resource languages not seen during pretraining 2 Micheli et al. (2020) show that corpora of at least 100MB are reasonable for pretraining.",
"because monolingual corpora are too small (Wang et al., 2020a).",
"Therefore, we use the Johns Hopkins University Bible corpus by McCarthy et al. (2020) following Ebrahimi and Kann (2021).",
"3 Tasks We evaluate our pretrained models on the following downstream tasks from the XTREME dataset (Hu et al., 2020): POS tagging and NLI .",
"For the former, we select 29 languages from Universal Dependencies v2.5 (Nivre et al., 2019).",
"For the latter, we use all fifteen languages in XNLI (Con-neau et al., 2018b).",
"We follow the default train, validation, and test split in XTREME .",
"Models and Hyperparameters Following Conneau et al. (2020)'s XLM-R Base model, we train transformers (Vaswani et al., 2017) with 12 layers, 768 units, 12 attention heads, and a maximum of 512 tokens per sequence.",
"To accommodate all 3 In cases where multiple versions of the Bible are available in the target language, we select the largest one.",
"languages and facilitate comparability between all pretraining setups, we use XLM-R 's vocabulary and the SentencePiece (Kudo and Richardson, 2018) tokenizer by Conneau et al. (2020).",
"We use masked language modeling ( MLM ) as our pretraining objective and, like Devlin et al. (2019), mask 15% of the tokens.",
"We pretrain all models for 150K steps, using Adam W (Loshchilov and Hutter, 2019) with a learning rate of 1 10 4 and a batch size of two on either NVIDIA RTX2080Ti or GTX1080Ti 12GB, on which it approximately took four days to train each model.",
"When pretraining, we preprocess sentences together to generate sequences of approximately 512 tokens.",
"For continued pretraining, we use a learning rate of 2 10 5 and train for forty epochs, otherwise following the setup for pretraining.",
"For finetuning, we use a learning rate of 2 10 5 and train for an additional ten epochs for both POS tagging and NER , and an additional five epochs for NLI , following Hu et al. (2020).",
"Languages Table 1 shows the languages used in our experiments.",
"English is part of the pretraining data of all models.",
"It is also the finetuning source language for all tasks, following Hu et al. (2020).",
"We use two different sets of pretraining languages: Diverse (Div) and Related (Rel) (Table 2).",
"We mainly focus on pretraining on up to five languages, except for POS tagging where the trend is not clear and we further experiment on up to ten.",
"For POS tagging and NER , we regard seventeen of the twenty-nine languages available in XTREME as unseen , while the remaining twelve languages are pretraining languages for at least one model.",
"For NLI , six languages are seen and the rest are unseen .",
"The order in which we add pretraining languages follows the size of their original C o NLL 2017 Wikipedia dumps, with larger sizes being added first.",
"POS Tagging Figure 1 shows the POS tagging accuracy averaged over the 17 languages unseen during pretraining.",
"On average, models pretrained on multiple languages have higher accuracy on unseen languages than the model pretrained exclusively on English, showing that the model benefits from a more diverse set of pretraining data.",
"However, the average accuracy only increases up to six languages.",
"This indicates that our initial hypothesis \"the more languages the better\" might not be true.",
"Figure 2 provides a more detailed picture, showing the accuracy for different numbers of pretraining languages for all seen and unseen target languages.",
"As expected, accuracy jumps when a language itself is added as a pretraining language.",
"Furthermore, accuracy rises if a pretraining language from the same language family as a target language is added: for example, the accuracy of Marathi goes up by 9 .",
"3% after adding Hindi during pretraining, and the accuracy of Bulgarian increases by 31 .",
"2% after adding Russian.",
"This shows that related languages are indeed beneficial for transfer learning.",
"Also, (partially) sharing the same script with a pretraining language (e.g., ES and ET , AR and FA ) helps with zero-shot cross-lingual transfer even for languages which are not from the same 1503 IE: Germanic 0.0 0.5 1.0 af de en nl IE: Slavic ru bg SinoTibetan zh AfroAsiatic ar he IE: IndoAryan 0.0 0.5 1.0 hi mr ur IE: Romance es fr it pt IE: Greek el Uralic et fi hu Austronesian 0.0 0.5 1.0 id Turkic tr Basque eu Japonic ja AustroAsiatic en + r u + z h + a r + h i + e s + e l + f i + i d + t r 0.0 0.5 1.0 vi Koreanic en + r u + z h + a r + h i + e s + e l + f i + i d + t r ko Dravidian en + r u + z h + a r + h i + e s + e l + f i + i d + t r ta te IE: Iranian en + r u + z h + a r + h i + e s + e l + f i + i d + t r fa Figure 2: POS tagging accuracy using models pretrained on a diverse set of languages ( EN , RU , ZH , AR , HI , ES , EL , FI , ID , TR ) grouped by families of target languages, with Indo-European ( IE ) languages further divided into subgroups following XTREME .",
"family.",
"These results are consistent with the outcome of Mller et al. (2020) and partially support the hypothesis by Pires et al. (2019) that shared scripts are effective on unseen languages.",
"But how important are the scripts compared to other features?",
"To quantify the importance of it, we conduct a linear regression analysis on the POS tagging result.",
"Table 3 shows the linear regression analysis results using typological features among target and pretraining languages.",
"For the script and family features, we follow Xu et al. (2019) and encoded them into binary values set to one if a language with the same script or from the same family is included as one of the pretraining languages.",
"For syntax and phonology features, we derive those vectors from the URIEL database using lang2vec (Littell et al., 2017) following Lauscher et al. (2020).",
"We take the maximum cosine similarity between the target language and any of the pretraining languages.",
"Table 3 further confirms that having a pretraining language which shares the same script contributes the most to positive cross-lingual transfer.",
"We sadly cannot give a definitive optimal number of pretraining languages.",
"One consistent find-Features Coef.",
"ing is that, for the large majority of languages, using only English yields the worst results for unseen languages.",
"However, adding pretraining languages does not necessarily improve accuracy (Figure 1).",
"This indicates that, while we want more than one pretraining language, using a smaller number than the 100 commonly used pretraining languages is likely sufficient unless we expect them to be closely related to one of the potential target languages.",
"NER Our NER results show a similar trend.",
"Therefore, we only report the average performance in the main part of this paper (Figure 3), and full 1504 en Div-2(+ru) Div-3(+zh) Div-4(+ar) Div-5(+hi) Div-6(+es) Div-7(+el) Div-8(+fi) Div-9(+id)Div-10(+tr) Pretraining Languages 0 0.1 0.2 NERF 1 S c o r e o n U n s ee n L a n g u a g e s Figure 3: NER F1 score after pretraining on a diverse set of up to 10 languages and finetuning on English.",
"details are available in Appendix A. For NER , transfer to unseen languages is more limited, likely due to the small subset of tokens which are labeled as entities when compared to POS tags.",
"NLI Our NLI results in Figure 4 show a similar trend: accuracy on unseen languages plateaus at a relatively small number of pretraining languages.",
"Specifically, Div-4 has the highest accuracy for 8 target languages, while Div-5 is best only for two target languages.",
"Accuracy again increases with related languages, such as an improvement of 3 .",
"7% accuracy for Bulgarian after adding Russian as a pretraining language.",
"Full results are available in Appendix B. 4.2 Findings for RQ2 POS Tagging Figure 5a shows the POS tagging results for six languages after adaptation of the pretrained models via continued pretraining.",
"As expected, accuracy is overall higher than in Figure 2.",
"Importantly, there are accuracy gains in Farsi when adding Turkish ( +9 . 8% ) and in Hebrew when adding Greek ( +7 . 7% ), which are not observed before adapting models.",
"We further investigate it in Section 5.",
"NERNER results in Figure 5b show similarities between POS tagging (e.g., improvement on Bulgarian after adding Russian).",
"However, there is limited improvement on Farsi after adding Arabic despite partially shared scripts between the two languages.",
"This indicates that the effect of adding related pretraining languages is partially task-dependent.",
"NLI For NLI , accuracy increases slightly after adding a second pretraining language.",
"Results for two to five pretraining languages are similar for all target languages and, for Greek and Turkish, still similar to the English-only model.",
"This indicates that, similar to our findings for POS tagging, a few pretraining languages could be sufficient for model adaptation.",
"Full results are available in Appendix B. Finally, our NLI results are low overall.",
"This is likely due to the size of the pretraining corpus being one of the top correlated features for NLI (Lauscher 1505 IE: Germanic 0.0 0.5 1.0 af de en nl IE: Slavic ru bg SinoTibetan zh AfroAsiatic ar he IE: IndoAryan 0.0 0.5 1.0 hi mr ur IE: Romance es fr it pt IE: Greek el Uralic et fi hu Austronesian 0.0 0.5 1.0 id Turkic tr Basque eu Japonic ja AustroAsiatic en + de + sv + n l + da 0.0 0.5 1.0 vi Koreanic en + de + sv + n l + da ko Dravidian en + de + sv + n l + da ta te IE: Iranian en + de + sv + n l + da fa Figure 6: POS tagging accuracy using related pretraining languages ( EN , DE , SV , NL , DA ) grouped by families of target languages, with Indo-European ( IE ) languages further divided into subgroups following the XTREME dataset.",
"et al., 2020), unlike for POS tagging (Hu et al., 2020).",
"POS Tagging In contrast to RQ 1, POS tagging accuracy changes for most languages are limited when increasing the number of pretraining languages (Figure 6).",
"The unseen languages on which we observe gains belong to the Germanic, Romance, and Uralic language families, which are relatively (as compared to the other language families) close to English.",
"The accuracy on languages from other language families changes by < 10% , which is smaller than the change for a diverse set of pretraining languages.",
"This indicates that the models pretrained on similar languages struggle to transfer to unrelated languages.",
"NER F1 scores of EN , Rel-2, Rel-3, Rel-4, and Rel-5 are .218, .219, .227, .236, and .237 respectively.",
"Compared to Div-X, pretraining on related languages also improves up to adding five languages.",
"However, these models bring a smaller improvement, similar to POS tagging.",
"NLI Figure 7 shows a similar trend for NLI : when adding related pretraining languages, accuracy on languages far from English either does not change much or decreases.",
"In fact, for nine out of thirteen unseen target languages, Rel-5 is the worst.",
"Our main takeaways from the last section are: ( RQ 1) without model adaptation, increasing the number of pretraining languages does not improve accuracy on unrelated unseen target languages; ( RQ 2) model adaptation largely helps exploiting models pretrained on more languages; and ( RQ 3)",
"when using more than one pretraining language, diversity is important.",
"However, there are limitations in the experimental settings in Section 4.",
"We assume the following: (1) relatively small pretraining corpora; (2) the target languages are included when building the model's vocabulary; (3) fixed computational resources; and (4) only up to ten pretraining languages.",
"We now explore if our findings for RQ 1 and RQ 2 hold without such limitations.",
"For this, we use two publicly available pretrained XLM models (Lample and Conneau, 2019), which have been pretrained on full size Wikipedia in 17 ( XLM -17) and 100 ( XLM -100) languages, and XLM-R base model trained on a larger Common Crawl corpus (Con-neau et al., 2020) in 100 languages.",
"We conduct a case study on low-resource languages unseen for all models, including unseen vocabularies: Maltese ( MT ), Wolof ( WO ), Yoruba ( YO ), Erzya ( MYV ), and Northern Sami ( SME ).",
"All pretraining languages used in Div-X are included in XLM -17 except for Finnish, and all 17 pretraining languages for XLM 17 are a subset of the pretraining languages for XLM -100.",
"We report the averages with standard deviations from three random seeds.",
"RQ1 For models without adaptation, accuracy does not improve for increasing numbers of source languages (Figure 8a).",
"Indeed, the accuracy on both XLM -17 and XLM -100 are on par even though the former uses 17 pretraining languages and the latter uses 100.",
"One exception is Northern Sami (Uralic language with Latin script) due to XLM 17 not seeing any Uralic languages, but XLM -100 does during pretraining.",
"When further comparing Div-10 and XLM -17, increase in accuracy by additional pretraining languages is limited.",
"Erzya remains constant from five to 100 languages (ex-cept for XLM-R ), even when increasing the pretraining corpus size from downsampled (Div-X) to full Wikipedia ( XLM -17 and XLM -100).",
"RQ2 For the models with adaptation (Figure 8b), there is a significant gap between XLM -17 and XLM 100.",
"This confirms our findings in the last section: more pretraining languages is beneficial if the pretrained models are adapted to the target languages.",
"Thus, a possible explanation is that one or more of XLM -100's pretraining languages is similar to our target languages and such languages can only be exploited through continued pretraining (e.g., Ukrainian included in XLM -100 but not in Div-X).",
"Therefore, having the model see more languages during pretraining is better when the models can be adapted to each target language.",
"Static Cross-lingual Word Embeddings Static cross-lingual word embeddings (Mikolov et al., 2013; Conneau et al., 2018a) embed and align words from multiple languages for downstream NLP tasks (Lample et al., 2018; Gu et al., 2018), including a massive one trained on 50+ languages (Ammar et al., 2016).",
"Static cross-lingual embedding methods can be classified into two groups: supervised and unsupervised.",
"Supervised methods use bilingual lexica as the cross-lingual supervision signal.",
"On the other hand, pretrained multilingual language models and unsupervised 1507 cross-lingual embeddings are similar because they do not use a bilingual lexicon.",
"Lin et al. (2019) explore the selection of transfer language using both data-independent (e.g., typological) features, and data-dependent features (e.g., lexical overlap).",
"Their work is on static supervised cross-lingual word embeddings, whereas this paper explores pretrained language models.",
"Analysis of Pretrained Multilingual Models on Seen Languages Starting from Pires et al. (2019), analysis of the cross-lingual transferability of pretrained multilingual language models has been a topic of interest.",
"Pires et al. (2019) hypothesize that cross-lingual transfer occurs due to shared tokens across languages, but Artetxe et al. (2020) show that cross-lingual transfer can be successful even among languages without shared scripts.",
"Other work investigates the relationship between zero-shot cross-lingual learning and typological features (Lauscher et al., 2020), encoding language-specific features (Libovick et al., 2020), and m BERT 's multilinguality (Dufter and Schtze, 2020).",
"However, the majority of analyses have either been limited to large public models (e.g., m BERT , XLM-R ), to up to two pretraining languages (K et al., 2020; Wu and Dredze, 2020), or to target languages seen during pretraining.",
"One exception is the concurrent work by de Vries et al. (2022) on analyzing the choice of language for the task-specific training data on unseen languages.",
"Here, we analyze the ability of models to benefit from an increasing number of pretraining languages.",
"This paper explores the effect which pretraining on different numbers of languages has on unseen target languages after finetuning on English.",
"We find: (1) if not adapting the pretrained multilingual language models to target languages, a set of diverse pretraining languages which covers the script and family of unseen target languages (e.g., 17 languages used for XLM -17) is likely sufficient; and (2) if adapting the pretrained multilingual language model to target languages, then one should pretrain on as many languages as possible up to at least 100.",
"Future directions include analyzing the effect of multilingual pretraining from different perspectives such as different pretraining tasks and architectures, e.g., mT5 (Xue et al., 2021), and more complex tasks beyond classification or sequence tagging.",
"We sincerely thank the reviewers for their constructive and detailed feedback.",
"We also thank the members of University of Colorado Boulder's NALA group, especially Abteen Ebrahimi for providing the code and Stphane Aroca-Ouellette for giving feedback on an early draft.",
"Boyd-Graber is supported by ODNI, IARPA, via the BETTER Program contract 2019-19051600005.",
"The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government.",
"The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein."
]
| [
"abstain",
"abstain",
"method",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"method",
"objective",
"result",
"abstain",
"other",
"other",
"other",
"other",
"other"
]
|
[
"Paraphrasing natural language sentences is a multifaceted process: it might involve replacing individual words or short phrases, local rearrangement of content, or high-level restructuring like topicalization or passivization.",
"Past approaches struggle to cover this space of paraphrase possibilities in an interpretable manner.",
"Our work, inspired by pre-ordering literature in machine translation, uses syntactic transformations to softly reorder the source sentence and guide our neural paraphrasing model.",
"First, given an input sentence, we derive a set of feasible syntactic rearrangements using an encoder-decoder model.",
"This model operates over a partially lexical, partially syntactic view of the sentence and can reorder big chunks.",
"Next, we use each proposed rearrangement to produce a sequence of position embeddings, which encourages our final encoder-decoder paraphrase model to attend to the source words in a particular order.",
"Our evaluation, both automatic and human, shows that the proposed system retains the quality of the baseline approaches while giving a substantial increase in the diversity of the generated paraphrases.",
"1 1 Introduction Paraphrase generation (McKeown, 1983; Barzilay and Lee, 2003) has seen a recent surge of interest, both with large-scale dataset collection and curation (Lan et al., 2017; Wieting and Gimpel, 2018) and with modeling advances such as deep generative models (Gupta et al., 2018; Li et al., 2019).",
"Paraphrasing models have proven to be especially useful if they expose control mechanisms that can be manipulated to produce diverse paraphrases (Iyyer et al., 2018; Chen et al., 2019b; Park et al., 2019), which allows these models to be employed for data augmentation (Yu et al., 2018) and 1 Data and code are available at https://github.",
"adversarial example generation (Iyyer et al., 2018).",
"However, prior methods involving syntactic control mechanisms do not effectively cover the space of paraphrase possibilities.",
"Using syntactic templates covering the top of the parse tree (Iyyer et al., 2018) is inflexible, and using fully-specified exemplar sentences (Chen et al., 2019b) poses the problem of how to effectively retrieve such sentences.",
"For a particular input sentence, it is challenging to use these past approaches to enumerate the set of reorderings that make sense for that sentence.",
"In this paper, we propose a two-stage approach to address these limitations, outlined in Figure 1. First, we use an encoder-decoder model (SOW , for Source Order reWriting) to apply transduction operations over various abstracted versions of the input sentence.",
"These transductions yield possible reorderings of the words and constituents, which can be combined to obtain multiple feasible rearrangements of the input sentence.",
"Each rearrangement specifies an order that we should visit words of the source sentence; note that such orderings could encourage a model to passivize (visit the object before the subject), topicalize, or reorder clauses.",
"These orderings are encoded for our encoder-decoder paraphrase model (REAP , for REarrangement Aware Paraphrasing) by way of position embeddings, which are added to the source sentence encoding to specify the desired order of generation (see Figure 2).",
"This overall workflow is inspired by the pre-ordering literature in machine translation (Xia and McCord, 2004; Collins et al., 2005); however, our setting explicitly requires entertaining a diverse set of possible orderings corresponding to different paraphrasing phenomena.",
"We train and evaluate our approach on the large-scale English paraphrase dataset PARA NMT-50M (Wieting and Gimpel, 2018).",
"Results show that our approach generates considerably more diverse paraphrases while retaining the quality exhibited by strong baseline models.",
"We further demonstrate that the proposed syntax-based transduction procedure generates a feasible set of rearrangements for the input sentence.",
"Finally, we show that position embeddings provide a simple yet effective way to encode reordering information, and that the generated paraphrases exhibit high compliance with the desired reordering input.",
"Given an input sentence x = { x 1 , x 2 , . . . , x n } , our goal is to generate a set of structurally distinct paraphrases Y = { y 1 , y 2 , . . . , y k } .",
"We achieve this by first producing k diverse reorderings for the input sentence, R = { r 1 , r 2 , . . . , r k } , that guide the generation order of each corresponding y .",
"Each reordering is represented as a permutation of the source sentence indices.",
"Our method centers around a sequence-to-sequence model which can generate a paraphrase roughly respecting a particular ordering of the input tokens.",
"Formally, this is a model P ( y | x , r ) .",
"First, we assume access to the set of target reorderings R and describe this rearrangement aware paraphrasing model (REAP ) in Section 2.2.",
"Then, in Section 2.3, we outline our reordering approach, including the source order rewriting (SOW ) model, which produces the set of reorderings appropriate for a given input sentence x during inference ( x R ) .",
"The models discussed in this work build on a standard sequence-to-sequence transformer model (Vaswani et al., 2017) that uses stacked layers of self-attention to both encode the input tokens x and decode the corresponding target sequence y .",
"This model is pictured in the gray block of Figure 2. Throughout this work, we use byte pair encoding Encoder Decoder + + + + Input tokens x Original Order Token embeddings Encoder Output EM Target Order r New Encoder Output E 4 3 1 2 1 2 3 4 Output tokens y Clippers won the game BOS The The game Figure 2: Rearrangement aware paraphrasing (REAP ) model.",
"(BPE) (Sennrich et al., 2016) to tokenize our input and output sentences.",
"These models are trained in the standard way, maximizing the log likelihood of the target sequence using teacher forcing.",
"Additionally, in order to ensure that the decoder does not attend to the same input tokens repeatedly at each step of the decoding process, we include a coverage loss term, as proposed in See et al. (2017).",
"Note that since the architecture of the transformer model is non-recurrent, it adds position embeddings to the input word embeddings in order to indicate the correct sequence of the words in both x and y (see Figure 2).",
"In this work, we propose using an additional set of position embeddings to indicate the desired order of words during generation, described next.",
"Let r = { r 1 , r 2 , . . . , r n } indicate the target reordering corresponding to the input tokens x .",
"We want the model to approximately attend to tokens in this specified order when generating the final output paraphrase.",
"For instance, in the example in Figure 1, the reordering specifies that when producing the paraphrase, the model should generate content related to the game before content related to Clippers in the output.",
"In this case, based on the rearrangement being applied, the model will most likely use passivization in its generation, although this is not strictly enforced.",
"The architecture for our model P ( y | x , r ) is outlined in Figure 2. Consider an encoder-decoder architecture with a stack of M layers in the encoder R EORDER(S0): Recursively reorder constituents to get final ordering S0 SBAR PRP VP IN S PRP VP1 If it continues to rain I will carry an umbrella MD VP2 VB NPSOW Input SOW Output If S I will VP 4 5 1 2 3 SBAR I will carry NP 1 3 4 5 2 SELECTSEGMENTPAIRS : Choose constituents to abstract REORDERPHRASE : Use seq2seq model to reorder phrase SBAR I will carry NP If S I will VP I will VP if S SBAR NP I carry Source reordering Final paraphrases I will carry an umbrella if rain continues.",
"and N layers in the decoder.",
"We make the target reordering r accessible to this transformer model through an additional set of positional embeddings P E r .",
"We use the sinusoidal function to construct these following Vaswani et al. (2017).",
"Let EM = encoder M ( x ) be the output of the M th (last) layer of the encoder.",
"The special-purpose position embeddings are added to the output of this layer (see Figure 2): E = EM + P E r .",
"Note that these are separate from standard position embeddings added at the input layer; such embeddings are also used in our model to encode the original order of the source sentence.",
"The transformer decoder model attends over E while computing attention and the presence of the position embeddings should encourage the generation to obey the desired ordering r , while still conforming to the decoder language model.",
"Our experiments in Section 4.3 show that this position embedding method is able to successfully guide the generation of paraphrases, conditioning on both the input sentence semantics as well as the desired ordering.",
"We now outline our approach for generating these desired reorderings r .",
"We do this by predicting phrasal rearrangements with the SOW model at various levels of syntactic abstraction of the sentence.",
"We combine multiple such phrase-level rearrangements to obtain a set R of sentence-level rearrangements.",
"This is done using a top-down approach, starting at the root node of the parse tree.",
"The overall recursive procedure is outlined in Algorithm 1. One step of the recursive algorithm has three Algorithm 1 REORDER ( t ) Input: Sub-tree t of the input parse tree Output: Topk list of reorderings for t 's yield T = SELECTSEGMENTPAIRS ( t ) // Step 1 R = INITIALIZEBEAM ( size = k ) for ( A, B ) in T do z = REORDERPHRASE ( t, A, B ) // Step 2 RA (1 , . . . , k ) = REORDER ( t A ) // k orderings RB (1 , . . . , k ) = REORDER ( t B ) // k orderings for r a , r b in RA RB do r = COMBINE ( z, r a , r b ) // Step 3 score ( r ) = score ( z )+ score ( r a )+ score ( r b ) R .",
"major steps: Figure 3 shows the overall workflow for one iteration (here, the root node of the sentence is selected for illustration).",
"First, we select sub-phrase pairs of the input phrase that respect parse-tree boundaries, where each pair consists of non-overlapping phrases (Step 1).",
"Since the aim is to learn generic syntax-governed rearrangements, we abstract out the two sub-phrases, and replace them with non-terminal symbols, retaining only the constituent tag information.",
"For example, we show three phrase pairs in Figure 3 that can be abstracted away to yield the reduced forms of the sentences.",
"We then use a seq2seq model to obtain rearrangements for each abstracted phrase (Step 2).",
"Finally, this top-level rearrangement is combined with recursively-constructed phrase rearrangements within the abstracted phrases to obtain sentence-level rearrangements (Step 3).",
"We begin by selecting phrase tuples that form the input to our seq2seq model.",
"A phrase tuple ( t, A, B ) consists of a sub-tree t with the constituents A and B abstracted out (replaced by their syntactic cat-egories).",
"For instance, in Figure 3, the S 0 , S, and VP 2 nodes circled in red form a phrase tuple.",
"Multiple distinct combinations of A and B are possible.",
"2 Step 2: REORDERPHRASE Next, we obtain rearrangements for each phrase tuple ( t, A, B ) .",
"We first form an input consisting of the yield of t with A and B abstracted out; e.g. If S I will VP , shown in red in Figure 3. We use a sequence-to-sequence model (the SOW model) that takes this string as input and produces a corresponding output sequence.",
"We then perform word-level alignment between the input and generated output sequences (using cosine similarity between GloVe embeddings) to obtain the rearrangement that must be applied to the input sequence.",
"3 The log probability of the output sequence serves as a score for this rearrangement.",
"SOW model The SOW model is a sequence-to-sequence model P ( y (cid:48) | x (cid:48) , o ) , following the transformer framework in Section 2.1.",
"4 Both x (cid:48) and y (cid:48) are encoded using the word pieces vocabulary; additionally, embeddings corresponding to the POS tags and constituent labels (for non-terminals) are added to the input embeddings.",
"For instance, in Figure 3, If S I will VP and I will VP if S is an example of an ( x (cid:48) , y (cid:48) ) , pair.",
"While not formally required, Algorithm 1 ensures that there are always exactly two non-terminal labels in these sequences.",
"o is a variable that takes values MONOTONE or FLIP .",
"This encodes a preference to keep the two abstracted nodes in the same order or to flip them in the output.",
"5 o is encoded in the model with additional positional encodings of the form { . . . 0 , 0 , 1 , 0 , . . . 2 , 0 . . . } for monotone and 2 In order to limit the number of such pairs, we employ a threshold on the fraction of non-abstracted words remaining in the phrase, outlined in more detail in the Appendix.",
"{ . . . 0 , 0 , 2 , 0 , . . . 1 , 0 . . . } for flipped, wherein the non-zero positions correspond to the positions of the abstracted non-terminals in the phrase.",
"These positional embeddings for the SOWMODEL are handled analogously to the r embeddings for the REAP model.",
"During inference, we use both the monotone rearrangement and flip rearrangement to generate two reorderings, one of each type, for each phrase tuple.",
"The previous step gives a rearrangement for the subtree t .",
"To obtain a sentence-level rearrangement from this, we first recursively apply the REORDER algorithm on subtrees t A and t B which returns the top-k rearrangements of each subtree.",
"We iterate over each rearrangement pair ( r a , r b ) , applying these reorderings to the abstracted phrases A and B .",
"This is illustrated on the left side of Figure 3. The sentence-level representations, thus obtained, are scored by taking a mean over all the phrase-level rearrangements involved.",
"We train and evaluate our model on the PARA NMT-50M paraphrase dataset (Wieting and Gimpel, 2018) constructed by backtranslating the Czech sentences of the CzEng (Bojar et al., 2016) corpus.",
"We filter this dataset to remove shorter sentences (less than 8 tokens), low quality paraphrase pairs (quantified by a translation score included with the dataset) and examples that exhibit low reordering (quantified by a reordering score based on the position of each word in the source and its aligned word in the target sentence).",
"This leaves us with over 350k paired paraphrase pairs.",
"To train our REAP model (outlined in Section 2.2), we take existing paraphrase pairs ( x , y ) and derive pseudo-ground truth rearrangements r of the source sentence tokens based on their alignment with the target sentence.",
"To obtain these rearrangements, we first get contextual embeddings (Devlin et al., 2019) for all tokens in the source and target sentences.",
"We follow the strategy outlined in Lerner and Petrov (2013) and perform reorderings as we traverse down the dependency tree.",
"Starting at the root node of the source sentence, we determine the order between the head and its children (independent of other decisions) based on the order If it continues to rain I will carry an umbrella I will carry an umbrella if rain continues Figure 4: Paraphrase sentence pair and its aligned tuples A B, C and A (cid:48) B (cid:48) , C (cid:48) .",
"of the corresponding aligned words in the target sentence.",
"We continue this traversal recursively to get the sentence level-rearrangement.",
"This mirrors the rearrangement strategy from Section 2.3, which operates over constituency parse tree instead of the dependency parse.",
"The PARA NMT-50M dataset contains sentence-level paraphrase pairs.",
"However, in order to train our SOW model (outlined in section 2.3), we need to see phrase-level paraphrases with syntactic abstractions in them.",
"We extract these from the PARA NMT-50M dataset using the following procedure, shown in Figure 4. We follow Zhang et al. (2020) and compute a phrase alignment score between all pairs of constituents in a sentence and its paraphrase.",
"6 From this set of phrase alignment scores, we compute a partial one-to-one mapping between phrases (colored shapes in Figure 4); that is, not all phrases get aligned, but the subset that do are aligned one-to-one.",
"Finally, we extract aligned chunks similar to rule alignment in syntactic translation (Galley et al., 2004): when aligned phrases A and A (cid:48) subsume aligned phrase pairs ( B, C ) and ( B (cid:48) , C (cid:48) ) respectively, we can extract the aligned tuple ( t A , B, C ) and ( t A (cid:48) , B (cid:48) , C (cid:48) ) .",
"The phrases ( B, C ) and ( B (cid:48) , C (cid:48) ) are abstracted out to construct training data for the phrase-level transducer, including supervision of whether o = MONOTONE or FLIP .",
"Using the above alignment strategy, we were able to obtain over 1 million aligned phrase pairs.",
"Setup As our main goal is to evaluate our model's ability to generate diverse paraphrases, we",
"6 The score is computed using a weighted mean of the contextual similarity between individual words in the phrases, where the weights are determined by the corpus-level inverse-document frequency of the words.",
"Details in the Appendix.",
"obtain a set of paraphrases and compare these to sets of paraphrases produced by other methods.",
"To obtain 10 paraphrases, we first compute a set of 10 distinct reorderings r 1 , . . . , r 10 with the SOW method from Section 2.3 and then use the REAP to generate a 1-best paraphrase for each.",
"We use top-k decoding to generate the final set of paraphrases corresponding to the reorderings.",
"Our evaluation is done over 10k examples from PARA NMT-50M.",
"Baselines We compare our model against the Syntactically Controlled Paraphrase Network ( SCPN ) model proposed in prior work (Iyyer et al., 2018).",
"It produces 10 distinct paraphrase outputs conditioned on a pre-enumerated list of syntactic templates.",
"This approach has been shown to outperform other paraphrase approaches that condition on interpretable intermediate structures (Chen et al., 2019b).",
"Additionally, we report results on the following baseline models:",
"i) A copy-input model that outputs the input sentence exactly.",
"ii) A vanilla seq2seq model that uses the same transformer encoder-decoder architecture from Section 2.1 but does not condition on any target rearrangement.",
"We use topk sampling (Fan et al., 2018) to generate 10 paraphrases from this model.",
"diverse-decoding model that uses the above transformer seq2seq model with diverse decoding (Kumar et al., 2019) during generation.",
"Here, the induced diversity is uncontrolled and aimed at maximizing metrics such as distinct n-grams and edit distance between the generated sentences.",
"iv) A LSTM version of our model where the REAP model uses LSTMs with attention (Bahdanau et al., 2014) and copy (See et al., 2017) instead of transformers.",
"We still use the transformer-based phrase transducer to obtain the source sentence reorderings, and still use positional encodings in the LSTM attention.",
"Similar to Cho et al. (2019), we report two types of metrics: 1. Quality : Given k generated paraphrases Y = { y 1 , y 2 . . . y k } for each input sentence in the test set, we select y best that achieves the best (oracle) sentence-level score with the ground truth paraphrase y .",
"The corpus level evaluation is performed using pairs ( y best , y ) .",
"2. Diversity : We calculate BLEU or WER be-7 Prior work (Wang et al., 2019; Li et al., 2019) has shown that such a transformer-based model provides a strong baseline and outperforms previous LSTM-based (Hasan et al., 2016) and VAE-based (Gupta et al., 2018) approaches.",
"In addition to these metrics, we use the paraphrase similarity model proposed by Wieting et al. (2017) to compute a paraphrase score for generated outputs with respect to the input.",
"Similar to Iyyer et al. (2018), we use this score to filter out low quality paraphrases.",
"We report on the rejection rate according to this criterion for all models.",
"Note that our diversity metric is computed after filtering as it is easy to get high diversity by including nonsensical paraphrase candidates that differ semantically.",
"Table 1 outlines the performance of the different models.",
"The results show that our proposed model substantially outperforms the SCPN model across all quality metrics.",
"8 Furthermore, our LSTM model also beats the performance of the SCPN model, demonstrating that the gain in quality cannot completely be attributed to the use of transformers.",
"The quality of our full model (with rearrangements) is also comparable to the quality of the vanilla seq2seq model (without rear-rangements).",
"This demonstrates that the inclusion of rearrangements from the syntax-based neural transducer do not hurt quality, while leading to a substantially improved diversity performance.",
"The SCPN model has a high rejection score of 40 .",
"6% .",
"This demonstrates that out of the 10 templates used to generate paraphrases for each sentence, on average 4 were not appropriate for the given sentence, and therefore get rejected.",
"On the other hand, for our model, only 15 .",
"9% of the generated paraphrases get rejected, implying that the rearrangements produced were generally meaningful.",
"This is comparable to the 12 .",
"7% rejection rate 8 The difference in performance between our proposed model and baseline models is statistically significant according to a paired bootstrap test.",
"exhibited by the vanilla seq2seq model that does not condition on any syntax or rearrangement, and is therefore never obliged to conform to an inappropriate structure.",
"Finally, our model exhibits a much higher diversity within the generated paraphrases compared to the transformer seq2seq baseline.",
"As expected, the SCPN model produces slightly more diverse paraphrases as it explicitly conditions the generations on templates with very different top level structures.",
"However, this is often at the cost of semantic equivalence, as demonstrated by both quantitative and human evaluation (next section).",
"A similar trend was observed with the diverse-decoding scheme.",
"Although it leads to more diverse generations, there is a substantial decrease in quality compared to SOW-REAP and the seq2seq model.",
"Moreover, the paraphrases have a higher rejection rate (21.3%), suggesting that diverse decoding is more likely to produce nonsensical paraphrases.",
"A similar phenomenon is also reported by Iyyer et al. (2018), wherein diverse-decoding resulted in paraphrases with different semantics than the input.",
"Syntactic Exemplars In addition to SCPN, we compare our proposed model against the controllable generation method of Chen et al. (2019b).",
"Their model uses an exemplar sentence as a syntactic guide during generation; the generated paraphrase is trained to incorporate the semantics of the input sentence while emulating the syntactic structure of the exemplar (see Appendix D for exam-ples).",
"However, their proposed approach depends on the availability of such exemplars at test time; they manually constructed these for their test set ( 800 examples).",
"Since we do not have such example sentences available for our test data, we report results of our model's performance on their test data.",
"Note that Chen et al. (2019b) carefully curated the exemplar to be syntactically similar to the actual target paraphrase.",
"Therefore, for fair comparison, we report results using the ground truth ordering (that similarly leverages the target sentence to obtain a source reordering), followed by the REAP model.",
"This model (ground truth order + REAP ) achieves a 1-best BLEU score of 20.9, outperforming both the prior works: Chen et al. (2019b) (13.6 BLEU) and SCPN (17.8 BLEU with template, 19.2 BLEU with full parse).",
"Furthermore, our full SOWREAP model gets an oracle-BLEU (across 10 sentences) score of 23.8.",
"These results show that our proposed formulation outperforms other controllable baselines, while being more flexible.",
"Table 2 provides examples of paraphrase outputs produced by our approach and SCPN.",
"The examples show that our model exhibits syntactic diversity while producing reasonable paraphrases of the input sentence.",
"On the other hand, SCPN tends to generate non-paraphrases in order to conform to a given template, which contributes to increased diversity but at the cost of semantic equivalence.",
"In Table 3, we show the corresponding sequence of rules that apply to an input sentence, and the final generated output according to that input rearrangement.",
"Note that for our model, on average, 1 .",
"8 phrase-level reorderings were combined to produce sentence-level reorderings (we restrict to a maximum of 3 ).",
"More examples along with the input rule sequence (for our model) and syntactic templates (for SCPN) are provided in the Appendix.",
"evalu-Input Sentence : if at any time in the preparation of this",
"ate the quality of the generated paraphrases.",
"We randomly sampled 100 sentences from the development set.",
"For each of these sentences, we obtained 3 generated paraphrases from each of the following models:",
"i) SCPN,",
"ii) vanilla sequence-to-sequence and",
"iii) our proposed SOW-REAP model.",
"We follow earlier work (Kok and Brockett, 2010; Iyyer et al., 2018) and obtain quality annotations on a 3 point scale: 0 denotes not a paraphrase, 1 denotes that the input sentence and the generated sentence are paraphrases, but the generated sentence might contain grammatical errors, 2 indicates that the input and the candidate are paraphrases.",
"To emulate the human evaluation design in Iyyer et al. (2018), we sample paraphrases after filtering using the criterion outlined in the previous section and obtain three judgements per sentence and its 9 paraphrase candidates.",
"Table 4 outlines the results from the human evaluation.",
"As we can see, the results indicate Model 2 1 0 SCPN (Iyyer et al., 2018) 35.9 24.8 39.3 Transformer seq2seq 45.1 20.6 34.3 SOW-REAP 44.5 22.6 32.9 Table 4: Human annotated quality across different models.",
"that the quality of the paraphrases generated from our model is substantially better than the SCPN model.",
"9 Furthermore, similar to quantitative evaluation, the human evaluation also demonstrates that the performance of this model is similar to that of the vanilla sequence-to-sequence model, indicating that the inclusion of target rearrangements do not hurt performance.",
"Next, we intrinsically evaluate the performance of our SOW model (Section 2.3).",
"Specifically, given a budget of 10 reorderings, we want to understand how close our SOW model comes to covering the target ordering.",
"We do this by evaluating the REAP model in terms of oracle perplexity (of the ground truth paraphrase) and oracle BLEU over these 10 orderings.",
"We evaluate our proposed approach against 3 systems:",
"a) Monotone reordering { 1 , 2 , . . . , n } .",
"b) Random permutation, by randomly permuting the children of each node as we traverse down the constituency parse tree.",
"c) Ground Truth , using the pseudo-ground truth rearrangement (outlined in Section 3) between the source and ground-truth target sentence.",
"This serves as an upper bound for the reorderings' performance, as obtained by the recursive phrase-level transducer.",
"9 The difference of our model performance with SCPN is statistically significant, while that with baseline seq2seq is not according to a paired bootstrap test.",
"Table 5 outlines the results for 10 generated paraphrases from each rearrangement strategy.",
"Our proposed approach outperforms the baseline monotone and random reordering strategies.",
"Furthermore, the SOW model's oracle perplexity is close to that of the ground truth reordering's perplexity, showing that the proposed approach is capable of generating a diverse set of rearrangements such that one of them often comes close to the target rearrangement.",
"The comparatively high performance of the ground truth reorderings demonstrates that the positional embeddings are effective at guiding the REAP model's generation.",
"Finally, we evaluate whether the generated paraphrases follow the target reordering r .",
"Note that we do not expect or want our REAP model to be absolutely compliant with this input reordering since the model should be able to correct for the mistakes make by the SOW model and still generate valid paraphrases.",
"Therefore, we perform reordering compliance experiments on only the monotone reordering and the pseudo-ground truth reorderings ( r , construction outlined in Section 3), since these certainly correspond to valid paraphrases.",
"For sentences in the test set, we generate paraphrases using monotone reordering and pseudo-ground truth reordering as inputs to REAP .",
"We get the 1-best paraphrase and compute the degree of rearrangement 10 between the input sentence and 10 Quantified by Kendall's Tau rank correlation between original source order and targeted/generated order.",
"Higher the generated sentence.",
"In Figure 5, we plot this as a function of the target degree of rearrangement, i.e., the rearrangement between the input sentence x and the ground truth sentence y .",
"The dotted line denotes the ideal performance of the model in terms of agreement with the perfect reordering r .",
"The plot shows that the REAP model performs as desired; the monotone generation results in high Kendall's Tau between input and output.",
"Conditioning on the pseudo-ground truth reorderings ( r ) produces rearrangements that exhibit the same amount of reordering as the ideal rearrangement.",
"Paraphrase Generation Compared to prior seq2seq approaches for paraphrasing (Hasan et al., 2016; Gupta et al., 2018; Li et al., 2018), our model is able to achieve much stronger controllability with an interpretable control mechanism.",
"Like these approaches, we can leverage a wide variety of resources to train on, including backtranslation (Pavlick et al., 2015; Wieting and Gimpel, 2018; Hu et al., 2019) or other curated data sources (Fader et al., 2013; Lan et al., 2017).",
"Controlled Generation Recent work on controlled generation aims at controlling attributes such as sentiment (Shen et al., 2017), gender or political slant (Prabhumoye et al., 2018), topic (Wang et al., 2017), etc.",
"However, these methods cannot achieve fine-grained control over a property like syntax.",
"Prior work on diverse paraphrase generation can be divided into three groups: diverse decoding, latent variable modeling, and syntax-based.",
"The first group uses heuristics such as Hamming distance or distinct n -grams to preserve diverse options during beam search decoding (Vijayaku-mar et al., 2018; Kumar et al., 2019).",
"The second group includes approaches that use uninterpretable latent variables to separate syntax and semantics (Chen et al., 2019a), perturb latent representations to enforce diversity (Gupta et al., 2018; Park et al., 2019) or condition on latent codes used to represent different re-writing patterns (Xu et al., 2018; An and Liu, 2019).",
"Qian et al. (2019) uses distinct generators to output diverse paraphrases.",
"These methods achieve some diversity, but do not control generation in an interpretable manner.",
"Finally, methods that use explicit syntactic structures (Iyyer et al., 2018; Chen et al., 2019b) may try to force a Kendall's Tau indicates lower rearrangement and vice-versa.",
"sentence to conform to unsuitable syntax.",
"Phrase-level approaches (Li et al., 2019) are inherently less flexible than our approach.",
"Machine Translation Our work is inspired by pre-ordering literature in machine translation.",
"These systems either use hand-crafted rules designed for specific languages (Collins et al., 2005; Wang et al., 2007) or automatically learn rewriting patterns based on syntax (Xia and McCord, 2004; Dyer and Resnik, 2010; Genzel, 2010; Khalilov and Simaan, 2011; Lerner and Petrov, 2013).",
"There also exist approaches that do not rely on syntactic parsers, but induce hierarchical representations to leverage for pre-ordering (Tromble and Eisner, 2009; DeNero and Uszkoreit, 2011).",
"In the context of translation, there is often a canonical reordering that should be applied to align better with the target language; for instance, head-final languages like Japanese exhibit highly regular syntax-governed reorderings compared to English.",
"However, in diverse paraphrase generation, there doesn't exist a single canonical reordering, making our problem quite different.",
"In concurrent work, Chen et al. (2020) similarly use an additional set of position embeddings to guide the order of generated words for machine translation.",
"This demonstrates that the REAP technique is effective for other tasks also.",
"However, they do not tackle the problem of generating plausible reorderings and therefore their technique is less flexible than our full SOW-REAP model.",
"In this work, we propose a two-step framework for paraphrase generation: construction of diverse syntactic guides in the form of target reorderings followed by actual paraphrase generation that respects these reorderings.",
"Our experiments show that this approach can be used to produce paraphrases that achieve a better quality-diversity trade-off compared to previous methods and strong baselines.",
"This work was partially supported by NSF Grant IIS-1814522, a gift from Arm, and an equipment grant from NVIDIA.",
"The authors acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing HPC resources used to conduct this research.",
"Thanks as well to the anonymous reviewers for their helpful comments."
]
| [
"abstain",
"abstain",
"method",
"objective",
"abstain",
"objective",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"result",
"abstain",
"method",
"result",
"objective",
"result",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"other",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"other",
"other",
"abstain",
"other",
"other",
"method",
"objective",
"result",
"other",
"other",
"other"
]
|
[
"In domain adaptation for neural machine translation, translation performance can benefit from separating features into domain-specific features and common features.",
"In this paper, we propose a method to explicitly model the two kinds of information in the encoder-decoder framework so as to exploit out-of-domain data in in-domain training.",
"In our method, we maintain a private encoder and a private decoder for each domain which are used to model domain-specific information.",
"In the meantime, we introduce a common encoder and a common decoder shared by all the domains which can only have domain-independent information flow through.",
"Besides, we add a discriminator to the shared encoder and employ adversarial training for the whole model to reinforce the performance of information separation and machine translation simultaneously.",
"Experiment results show that our method can outperform competitive baselines greatly on multiple data sets.",
"Neural machine translation (NMT) (Kalchbrenner and Blunsom, 2013; Cho et al., 2014; Sutskever et al., 2014; Bahdanau et al., 2014; Gehring et al., 2017) has made great progress and drawn much attention recently.",
"Most NMT models are based on the encoder-decoder architecture, where all the sentence pairs share the same set of parameters for the encoder and decoder which makes NMT models have a tendency towards overfitting to frequent observations (e.g., words, word co-occurrences, translation patterns), but overlooking special cases that are not frequently observed.",
"However, in practical applications, NMT models usually need to perform translation for some specific domain with only a small quantity of in-*Corresponding Author domain training data but a large amount of out-of-domain data.",
"Simply combining in-domain training data with out-of-domain data will lead to overfitting to the out-of-domain data.",
"Therefore, some domain adaptation technique should be adopted to improve in-domain translation.",
"Fortunately, out-of-domain data still embodies common knowledge shared between domains.",
"And incorporating the common knowledge from out-of-domain data can help in-domain translation.",
"Britz et al. (2017) have done this kind of attempts and managed to improve in-domain translation.",
"The common architecture of this method is to share a single encoder and decoder among all the domains and add a discriminator to the encoder to distinguish the domains of the input sentences.",
"The training is based on adversarial learning between the discriminator and the translation , ensuring the encoder can learn common knowledge across domains that can help to generate target translation.",
"Zeng et al. (2018) extend this line of work by introducing a private encoder to learn some domain specific knowledge.",
"They have proven that domain specific knowledge is a complement to domain invariant knowledge and indispensable for domain adaptation.",
"Intuitively, besides the encoder, the knowledge inferred by the decoder can also be divided into domain specific and domain invariant and further improvement will be achieved by employing private decoders.",
"In this paper, in order to produce in-domain translation with not only common knowledge but in-domain knowledge, we employ a common encoder and decoder among all the domains and also a private encoder and decoder for each domain separately.",
"The differences between our method and the above methods are in two points: first, we employ multiple private encoders rather where all the domains only have one private encoder; second, we also introduce multiple private decoders contrast to no private decoder.",
"This architecture is based on the consideration that out-of-domain data is far more than in-domain data and only using one private encoder and/or decoder has the risk of overfitting.",
"Under the framework of our method, the translation of each domain is predicted on the output of both the common decoder and its private decoder.",
"In this way, the in-domain private decoder has direct influence to the generation of in-domain translation and the out-of-domain decoder is used to help train the common encoder and decoder better which can also help in-domain translation.",
"We conducted experiments on English Chinese and English German domain adaptation tasks for machine translation under the framework of RNNSearch (Bahdanau et al., 2014) and Transformer (Vaswani et al., 2017) and get consistently significant improvements over several strong baselines.",
"The task of domain adaptation for NMT is to translate a text in-domain for which only a small number of parallel sentences is available.",
"The main idea of the work for domain adaptation is to introduce external information to help in-domain translation which may include in-domain monolingual data, meta information or out-of-domain parallel data.",
"To exploit in-domain monolingual data, Glehre et al. (2015) train a RNNLM on the target side monolingual data first and then use it in decoding.",
"Domhan and Hieber (2017) further extend this work by training the RNNLM part and translation part jointly.",
"Sennrich et al. (2015a) propose to conduct back translation for the monolingual target data so as to generate the corresponding parallel data.",
"Zhang and Zong (2016) employs the self-learning algorithm to generate the synthetic large-scale parallel data for NMT training.",
"To introduce meta information, Chen et al. (2016) use the topic or category information of the input text to assistant the decoder and Kobus et al. (2017) extend the generic NMT models, which are trained on a diverse set of data to, specific domains with the specialized terminology and style.",
"To make use of out-of-domain parallel data, Luong and Manning (2015) first train an NMT model with a large amount of out-of-domain data, then fine tune the model with in-domain data.",
"Wang et al. (2017a) select sentence pairs from the out-of-domain data set according to their similarity to the in-domain data and then add them to the in-domain training data.",
"Chu et al. (2017) construct the training data set for the NMT model by combining out-of-domain data with the over-sampled in-domain data.",
"Wang et al. (2017b) combine the in-domain and out-of-domain data together as the training data but apply instance weighting to get a weight for each sentence pair in the out-of-domain data which is used in the parameter updating during back propagation.",
"Britz et al. (2017) employ a common encoder to encode the sentences from both the in-domain and out-of-domain data and meanwhile add a discriminator to the encoder to make sure that only domain-invariant information is transferred to the decoder.",
"They focus on the situation that the quantity of the out-of-domain data is almost the same as the in-domain data while our method can handle more generic situations and there is no specific demand for the ratio of the quantity between the in-domain and out-of-domain data.",
"Besides, our method employs a private encoder-decoder for each domain which can hold the domain-specific features.",
"In addition to the common encoder, Zeng et al. (2018) further introduce a domain-specific encoder to each domain together with a domain-specific classifier to ensure the features extracted by the domain-specific encoder is proper.",
"Compared to our method, they focus on the encoder and do not distinguish the information in the decoder.",
"Adversarial Networks have achieved great success in some areas (Ganin et al., 2016; Goodfellow et al., 2014).",
"Inspired by these work, we also employ a domain discriminator to extract some domain invariant features which has already shown its effectiveness in some related NLP tasks.",
"Chen et al. (2017) use a classifier to exploit the shared information between different Chinese word segment criteria.",
"Gui et al. (2017) tries to learn common features of the out-domain data and in-domain data through adversarial discriminator for the part-of-speech tagging problem.",
"Kim et al. (2017) train a cross-lingual model with language-adversarial training to generate the general information across different languages for the POS tagging problem.",
"All these work try to utilize a discriminator to distinguish invariant features across the divergence.",
"Our method can be applied to both the RNN-based NMT model (Bahdanau et al., 2014) and self-attention-based NMT model (Vaswani et al., 2017).",
"In this paper, we will introduce our method under the RNN-based framework and the application to the self-attention-based framework can be implemented in a similar way.",
"Before introducing our method, we will first briefly describe the RNN-based NMT model with attention shown in Figure 1. The encoder uses two GRUs to go through source words bidirectionally to get two hidden states h i and h i for the source word x i , which are then concatenated to produce the final hidden states for x i as follows h i = [ h i ; h i ] (1) The attention layer aims to extract the source information which is most related to the generation of each target word.",
"First it evaluates the correlation between the previous decoder hidden state s j 1 and each source hidden state h i by e ij = v T tanh ( W s j 1 + U h i ) , (2) next calculates ij which is the correlation degree to each target hidden state h i , and then gets the attention c j .",
"The formulation is as follows ij = exp( e ij ) (cid:80) l s i (cid:48) =1 exp( e i (cid:48) j ); c j = l s (cid:88) i =1 ij h i (3) The decoder also employs a GRU to get the hidden state s j for the target word y j as s j = g ( y j 1 , s j 1 , c j ) .",
"Then the probability of the target word y j is defined as follows p ( y j | s j , y j 1 , c j ) exp( y T j W o t j ) (5) where t j is computed by t j = U o s j 1 + V o E y j 1 + C o c j (6) 4 The Proposed Method Assume that we have two kinds of training data: out-of-domain and in-domain, and we want to get the translation for the in-domain input.",
"The out-of-domain and in-domain data can be represented as out = { ( x k , y k ) } N out k =1 D out ; in = { ( x k , y k ) } N in k =1 D in (7) The main idea of our method is to extract domain invariant information from the out-of-domain data to improve in-domain translation.",
"To this end, we employ a common encoder and a common decoder shared by both of the domains, and a private encoder and a private decoder for each domain.",
"The main architecture given in Figure 2. The working scenario of our method is as follows.",
"When a sentence comes, it is inputted into the shared encoder and the private encoder of the corresponding domain simultaneously.",
"Then the output of the shared encoder is fed into the shared decoder and the output of the private encoder into its corresponding private decoder.",
"Finally, the shared decoder and the private decoder collaborate together to generate the current target word with a gate to decide the contribution ratio.",
"In addition, our method also introduce a discriminator to distinguish the domain of the input sentence based on the output of the shared encoder.",
"When the discriminator cannot predict the domain of the input sentence, we can think the knowledge encoded in the shared encoder is domain invariant.",
"This is achieved with a gradient reversal layer (GRL) so that the gradients are reversed during back-propagation.",
"In this way, the adversarial training is performed between the translation and the discriminator.",
"Our model has a shared encoder, an in-domain private encoder and an out-of-domain private encoder, where the shared encoder accepts input from the two domains.",
"Given a sentence of domain p ( p { in , out } ), the shared encoder and the private encoder of domain p will roll the sentence as the encoder shown in Section 3 and the outputs of the shared encoder and the private encoder for word x j are represented as h c j and h p j respectively.",
"The Attention Layer As the output of the shared encoder is only fed to the shared decoder and the output of the private encoder of domain p only flows to the private decoder of domain p , we only need to calculate the attention of the shared decoder over the shared encoder and the attention of the private decoder of domain p over the private encoder of domain p .",
"We calculate these two attentions as in Section 3 and denote them as c c j and c p j for the shared decoder and the private decoder, respectively.",
"The Decoder We also maintain a shared decoder, an in-domain private decoder and an out-of-domain private decoder For a sentence of domain p ( p { in , out } ), the shared decoder and the private decoder of domain p act in the same way as shown in Equation 4 and Equation 6 and then produce the hidden states s c j and t c j for the shared decoder, and s p j and t p j for the private decoder.",
"To predict the target word y j , t c j and t p j are weighted added to get t j as z j = ( W z t c j + U z t p j ); t j = z j t c j + (1 z j ) t p j (8) Where ( ) is the sigmoid function and W z and U z are shared by in-domain and out-of-domain.",
"Finally the probability of the target word y j is computed with P ( y j | . . . ) exp( y T j W o t j ); (9) 4.2 The Domain Discriminator The domain discriminator acts as a classifier to determine the knowledge encoded in the shared encoder is from in-domain or from out-of-domain.",
"When a well trained discriminator can't classify the domain properly, we can think the knowledge in the shared encoder is domain invariant (Ganin et al., 2016).",
"As CNN has shown its effectiveness in some related classification tasks (Zhang et al., 2015; Yu et al., 2017), we construct our discriminator with CNN.",
"First, the input to the CNN is the representation of the whole source sentence which is got by concatenating the sequence of hidden states generated by the shared encoder as 1: I = h 1 h 2 h I (10) where I is the length of the source sentence and h 1 , ..., h I is the hidden state of the corresponding source word.",
"stands for the concatenation operation of the hidden states, and we can get the final source sentence representation 1: I RI m where m is the dimension of the hidden state.",
"We then employ a kernel w R l m to apply a convolutional operation to produce a new feature map: f = ( w 1: I + b f ) (11) where is the ReLU activation function, stands for the convolutional operation of the kernel and b is the bias term.",
"A number of different kinds of kernels with different windows sizes are used in our work to extract different features at different scales.",
"Next, we apply a max-over-time pooling operation over the feature maps to get a new feature map.",
"To further improve the performance of the discriminator, following the work (Yu et al., 2017), we also add the highway architecture (Sri-vastava et al., 2015; Zhang et al., 2018) behind the pooled feature maps where we use a gate to control the information flow between the two layers.",
"Finally, the combined feature map is fed into a fully connected network with a sigmoid activation function to make the final predictions: p ( d ) exp( W d f + b d ); (12) where d is the domain label of in-domain or out-of-domain.",
"Our final loss considers the translation loss and the domain prediction loss.",
"For the translation loss, we employ cross entropy to maximize the translation probability of the ground truth, so we have this loss as follows and the training objective is to minimize the loss.",
"where N in and N out are the number of training sentences for in-domain and out-of-domain data respectively, J k is the length of the k -th ground truth sentence, and p ( y kj ) is the predicted probability of the j -th word for the k -th ground truth sentence.",
"Note that we have three different encoders and three different decoders in total, including the shared encoder and decoder, the in-domain private encoder and decoder, and the out-of-domain private encoder and decoder, and all of them have their own parameters.",
"For the domain prediction loss, we also use cross-entropy to minimize the following loss LD = N in + N out (cid:88) k =1 log p ( d k ) (14) where d k is the ground truth domain label of the k -th input sequence.",
"Then the final loss is defined as L = LMT + LD (15) where is a hyper-parameter to balance the effects of the two parts of loss.",
"We gradually tried from 0.1 to 2.5 and set it to 1.5 in our final experiments.",
"Borrowing ideas from Ganin et al. (2016), we introduce a special gradient reversal layer (GRL) between the shared encoder and the domain discriminator.",
"During forward propagation, the GRL has no influence to the model, while during back-propagation training, it multiplies a certain negative constant to the gradients back propagated from the discriminator to the shared encoder.",
"In this way, an adversarial learning is applied between the translation part and the discriminator.",
"At the beginning of the training, we just use the LMT to train the translation part on the combined data, including the shared encoder-decoder and the in-domain and out-of-domain private encoder-decoder.",
"Then we use LD to only train the domain discriminator until the precision of the discriminator reach 90% while the parameters of the shared encoder keep unupdated.",
"Finally, we train the whole model with the complete loss L with all the parameters updated.",
"In the training process, the sentences in each batch is sampled from in-domain and out-of-domain data at the same rate.",
"During testing, we just use the shared encoder-decoder and the private in-domain encoder-decoder to perform in-domain translation.",
"We evaluated our method on the English Chinese (En-Zh), English German (En-De) and German English (De-En) domain adaptation translation task.",
"English Chinese For this task, out-of-domain data is from the LDC corpus 1 that contains 1.25M sentence pairs.",
"The LDC data is mainly related to the News domain.",
"We chose the parallel sentences with the domain label Laws from the UM-Corpus (Tian et al., 2014) as our in-domain data.",
"We chose 109K, 1K and 1K sentences from the UM-Corpus randomly as our training, development and test data.",
"We tokenized and lowercased the English sentences with Moses 2 scripts.",
"For the Chinese data, we performed word segmentation using Stanford Segmenter 3 .",
"English German For this task, the training data is from the Europarl corpus distributed for the shared domain adaptation task of WMT 2007 (Callison-Burch et al., 2007) where the out-of-domain data is mainly related to the News domain, containing about 1.25M sentence pairs, and in-domain data is mainly related to the News Commentary domain which is more informal compared to the news corpus, containing about 59.1K sentences.",
"We also used the development set of the domain adaptation shared task.",
"Finally, we tested our method on the NC test set of WMT 1 https://www.ldc.upenn.edu/ 2 http://www.statmt.org/moses/ 3 https://nlp.stanford.edu/ 2006 and WMT 2007.",
"We tokenized and lowercased the corpora.",
"German English For this task, out-of-domain corpus is from the WMT 2015 en-de translation task which are mainly News texts (Bojar et al., 2015) containing about 4.2M sentence pairs.",
"For the in-domain corpus, we used the parallel training data from the IWSLT 2015 which is mainly from the the TED talks containing about 190K sentences.",
"In addition, dev2012 and test2013/2014/2015 of IWSLT 2015 were selected as the development and test data, respectively.",
"We tokenized and truecased the corpora.",
"Besides, 16K, 16K and 32K merging operations were performed to learn byte-pair encod-ing(BPE) (Sennrich et al., 2015b) on both sides of the parallel training data and sentences longer than 50, 50 and 80 tokens were removed from the training data, respectively.",
"We implemented the baseline and our model by PyTorch framework 4 .",
"For the En-Zh and En-De translation task, batch size was set to 80 and vocabulary size was set to 25k which covers all the words in the training set.",
"The source and target embedding sizes were both set to 256 and the size of the hidden units in the shared encoder-decoder RNNs was also set to 256.",
"During experiments, we found that the shared encoder-decoder played a major role in the model and the size of the private encoder-decoder didn't influence the results too much.",
"Thus we just set the size of the private encoder-decoder one-quarter of the shared encoder-decoder considering the training and decoding speed.",
"For the De-En translation task, batch size was set to 40 and vocabulary size was set to 35K in the experiment.",
"The source and target embedding sizes were both set to 620 and the size of the hidden units in the shared encoder and decoder RNNs was set to 1000.",
"As mentioned before, the size of the private encoder-decoder was just one-quarter of the shared encoder-decoder.",
"All the parameters were initialized by using uniform distribution over [ 0 . 1 , 0 . 1] .",
"The adadelta algorithm was employed to train the model.",
"We reshuffled the training set between epochs.",
"Besides, the beam size was set to 10.",
"with the following models, namely: In : This model was trained only with the in-domain data.",
"Out + In : This model was trained with both of the in-domain and out-of-domain data.",
"Sampler (Chu et al., 2017) : This method over-sampled the in-domain data and concatenated it with the out-of-domain data.",
"Fine Tune (Luong and Manning, 2015) : This model was trained first on the out-of-domain data and then fine-tuned using the in-domain data.",
"Domain Control (DC) (Kobus et al., 2017) : This method extend word embedding with an arbitrary number of cells to encode domain information.",
"Discriminative Mixing (DM) (Britz et al., 2017) : This method adds a discriminator on top of the encoder which is trained to predict the correct class label of the input data.",
"The discriminator is optimized jointly with the translation part.",
"Target Token Mixing (TTM) (Britz et al., 2017) : This method append a domain token to the target sequence.",
"Adversarial Discriminative Mix-ing(ADM) (Britz et al., 2017) : This method is similar with our model which also add a discriminator to extract common features across domains.",
"The biggest difference is that we add private parts to preserve the domain specific features.",
"Besides we also applied a different training strategy as the section 5 describes so that our method can handle more generic situations.",
"Noting that our model has a private encoder-decoder which brings extra parameters, we just slightly extend the hidden size of the contrast model to make sure that the total parameter number of the contrast model is equal to the number of our model's translation part.",
"The En-Zh Experiments Results are measured using char based 5-gram BLEU score (Papineni et al., 2002) by the multi-bleu.pl script.",
"The main results are shown in Table 1. On both of the development set and test set, our model significantly outperforms the baseline models and other contrast models.",
"Furthermore, we got the following conclusions: First, the baseline model 'In' surpass the 'Out + In' model which shows that the NMT model tends to fit out-of-domain features if we directly include",
"However, we also found that the model will over fit so soon if we only use the in-domain data so it is necessary to make use of the out-of-domain data to improve the translation performance.",
"Second, we found that when the in-domain data is much less than the out-of-domain data, some contrast methods for domain adaptation, such as DC, DM TTM and ADM, didn't perform well.",
"They were worse than the baseline model 'in' and only slightly better than 'out + in'.",
"These methods En-De test06 test07 average In 23.36 25.00 24.18 Out + In 20.69 22.43 21.56 Sampler 26.83 29.01 27.92 Fine Tune 27.02 29.19 28.11 our method 27.97* 30.67** 29.32 Table 2: Results of the WMT 07 en-de translation experiments.",
"all try to take domain information into translation in their own ways which actually brings improvement compared with the 'out + in' model.",
"However, as the out-of-domain data is much more than the in-domain data, the model will still tends to fit out-of-domain data and ignore the in-domain information which will degrade the final performance.",
"Therefore, it is necessary to handle the in-domain data separately in some way.",
"The 'Sampler' and 'Fine Tune' perform better because they receive much more information from the in-domain data compared with other methods, but they don't make use of the domain information when translating.",
"Last, our model achieves the best performance among all the contrast models.",
"The shared encoder extract the domain invariant features of the two domains with the help of the discriminator so that the shared part will be well trained using all the in-domain and out-of-domain data.",
"At the meantime, we also consider the domain specific features and the private encoder-decoder can re-De-En test13 test14 test15 In 25.83 21.97 24.64 Out + In 26.45 23.21 25.85 Sampler 29.70 25.71 28.29 Fine Tune 30.48 26.55 28.62 Sennrich et al. (2015a) 28.20 24.40 26.70 Wang et al. (2017b) 28.58 24.12 our method 30.99 26.94 29.30* Table 3: Results of the IWSLT 15 en-de experiments.",
"ceive enough information from the in-domain data to prevent the whole model from overfitting the out-of-domain features.",
"The En-De Experiments and De-En Experiments results are shown in the Table 2 and Table 3. Results are measured using word based 4-gram BLEU score (Papineni et al., 2002) by the multi-bleu.pl script.",
"In these two experiments, we only compared our method with the baseline model and the competitive contrast methods 'Sampler' (Kobus et al., 2017) and 'Fine Tune' (Lu-ong and Manning, 2015).",
"Similar to the previous experiment results, our method still achieves the best performance compared to all contrast models, which demonstrates again that our model is effective and general to different language pairs and different domains.",
"We made some some detailed analysis to empirically show the effectiveness of our model based on En-Zh translation task.",
"In order to further understand the impact of the components of the proposed model, we performed some further studies by training multiple versions of our model by removing some components: The first model removed the domain discriminator but preserved the private part.",
"The second one removed the private encoder-decoder but kept the domain discriminator.",
"The last one just removed both of those two parts.",
"Results are shown in the Table 4. As expected, the best performance is obtained with the simultaneous use of all the tested elements.",
"When we DCN Private dev test average 36.55 34.84 35.70 35.73 34.09 34.91 35.67 34.22 34.94 35.13 33.36 34.25 Table 4: Results of the ablation study.",
"removed the private encoder-decoder, the result shows that the score was reduced by 0.79, which indicates that our private part can preserve some useful domain specific information which is abandoned by the shared encoder.",
"When we removed the discriminator, the result was reduced by 0.76.",
"This result supports our idea that modeling common features from out-of-domain data can benefit in-domain translation.",
"When we removed both of the two components, we got the lowest score.",
"The total result shows that every component of our model plays an important role in our model.",
"To verify whether the discriminator have learned the domain invariant knowledge, we did the following experiments using model without discriminator and our full model with the domain discriminator of the former subsection.",
"We sampled 3000 sentences randomly from the out-of-domain and 1000 sentences from the in-domain En-Zh parallel sentences as the test data.",
"Then they were fed into the shared encoders of the two models to get the reshaped feature maps as the Equation 11 describes.",
"Next, we used the t-Distributed Stochastic Neighbor Embedding(t-SNE) 5 technique to do a dimensionality reduction to the hidden state.",
"Results are shown in the Figure 3. We also calculate the average value of the coordinates of each domain's hidden state.",
"The results are shown in Table 5 5 http://lvdmaaten.github.io/tsne/ En-Zh test1 test2 test3 Out + In 22.31 18.82 17.59 Sampler 21.60 18.64 16.93 Fine Tune 13.18 11.94 11.55 our method 22.61 19.36 17.78 Table 6: Results of the out-of-domain translation task.",
"From the figure, we can find that here is an obvious separation in the results of the model without discriminator and the numerical analysis also support this point, which indicates that the shared encoder without the help of discriminator will treat the data from different domains differently.",
"All the domain shared features and domain specific features are just mixed together.",
"On the contrary, the output of the shared encoder of our full model is well distributed.",
"This proves that the discriminator can help the shared encoder to extract domain invariant features which then help to improve the translation performance in in-domain.",
"Despite the fact that the purpose of our work is to improve the in-domain translation performance, the domain invariant features extracted from the training data are also beneficial to the out-of-domain translation performance.",
"To prove this, we use the NIST 03 04 and 05 test sets which are mainly related to the News domain as our out-of-domain test set.",
"Noting that the origin set was designed for the Zh-En translation task and each sentence has four English references, we just chose the first reference as the source side sentence for our En-Zh translation task.",
"The results are shown in the Table 6 We can conclude from the results that the \"Fine Tune\" method suffered a catastrophic forgetting caused by parameter shift during the training process.",
"On the contrary, our method can achieve a mild improvement on the out-of-domain compared to the baseline system.",
"Transformer (Vaswani et al., 2017) is an efficient NMT architecture.",
"To test the generality of our method, we also conducted relevant experiments based on the transformer model.",
"We did the ex-En-Zh dev test average In 32.61 30.33 31.47 Sampler 35.84 33.68 34.76 Fine Tune 36.01 34.03 35.02 our method 37.26** 35.39** 36.33 Table 7: Results of the En-Zh experiments based on the transformer model.",
"periment based on the Fairseq code 6 .",
"The implementation on this translation framework is similar with the way on the RNN based models.",
"The encoder and decoder of our final model consist 3 sublayers.",
"The number of the multi-head attention was set to 4 and the embedding dim was set to 256.",
"We also compared with the 'Sampler' and 'Fine Tune' method based on transformer.",
"The results are shown in 7.",
"According to the table, our method still outperforms than other models, which can prove that our method has a good generality across different translation architecture.",
"In this paper, we present a method to make use of out-of-domain data to help in-domain translation.",
"The key idea is to divide the knowledge into domain invariant and domain specific.",
"The realization way is to employ a shared encoder-decoder to process domain invariant knowledge and a private encoder-decoder for each domain to process knowledge of the corresponding domain.",
"In addition, a discriminator is added to the shared encoder and adversarial learning is applied to make sure the shared encoder can learn domain invariant knowledge.",
"We conducted experiments on multiple data sets and get consistent significant improvements.",
"We also verified via experiments that the shared encoder, the domain specific private encoder-decoder and the discriminator all make contribution to the performance improvements.",
"We thank the three anonymous reviewers for their comments, Jinchao Zhang, Wen Zhang for suggestions.",
"This work was supported by the National Natural Science Foundation of China (NSFC) under the project NO.61876174, NO.61662077 and NO.61472428.",
"6 https://fairseq.readthedocs.io/en/latest/index.html References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben-gio."
]
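To make the attention equations (Eqs. 2-4) above concrete, here is a minimal PyTorch sketch of the additive attention the paper describes. This is our own illustrative code, not the authors' implementation; the class name and tensor shapes are assumptions.

```python
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    """Additive attention (Eqs. 2-3): scores e_ij, weights alpha_ij, context c_j."""
    def __init__(self, dec_size: int, enc_size: int, attn_size: int):
        super().__init__()
        self.w = nn.Linear(dec_size, attn_size, bias=False)  # W in Eq. 2
        self.u = nn.Linear(enc_size, attn_size, bias=False)  # U in Eq. 2
        self.v = nn.Linear(attn_size, 1, bias=False)         # v in Eq. 2

    def forward(self, s_prev: torch.Tensor, enc_states: torch.Tensor):
        # s_prev: (batch, dec_size); enc_states: (batch, src_len, enc_size)
        e = self.v(torch.tanh(self.w(s_prev).unsqueeze(1) + self.u(enc_states)))
        alpha = torch.softmax(e.squeeze(-1), dim=1)               # alpha_ij over source
        c = torch.bmm(alpha.unsqueeze(1), enc_states).squeeze(1)  # context c_j
        return c, alpha
```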
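The gate of Eq. 8, which mixes the shared and private decoder states before prediction, can be sketched as follows. This is a minimal sketch under our own naming; since the paper's private encoder-decoder is a quarter of the shared size, we assume the private state has already been projected to the shared hidden size.

```python
import torch
import torch.nn as nn

class GatedCombination(nn.Module):
    """Eq. 8: z = sigmoid(W_z t_c + U_z t_p); t = z * t_c + (1 - z) * t_p."""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.w_z = nn.Linear(hidden_size, hidden_size, bias=False)  # W_z
        self.u_z = nn.Linear(hidden_size, hidden_size, bias=False)  # U_z

    def forward(self, t_common: torch.Tensor, t_private: torch.Tensor):
        # Both inputs: (batch, hidden_size); t_private assumed pre-projected.
        z = torch.sigmoid(self.w_z(t_common) + self.u_z(t_private))
        return z * t_common + (1.0 - z) * t_private
```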
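A minimal sketch of the CNN domain discriminator of Eqs. 10-12, with max-over-time pooling and a highway layer on the pooled features. The window sizes and filter count are illustrative assumptions; the text above does not fix them for this model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DomainDiscriminator(nn.Module):
    """CNN over shared-encoder states; max-over-time pooling; highway; sigmoid."""
    def __init__(self, hidden_size: int, num_filters: int = 100,
                 window_sizes=(2, 3, 4)):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(hidden_size, num_filters, w) for w in window_sizes)
        feat = num_filters * len(window_sizes)
        self.gate = nn.Linear(feat, feat)   # highway transform gate
        self.proj = nn.Linear(feat, feat)   # highway nonlinearity
        self.out = nn.Linear(feat, 1)

    def forward(self, enc_states: torch.Tensor):
        # enc_states: (batch, src_len, hidden_size), the Phi_{1:I} of Eq. 10.
        x = enc_states.transpose(1, 2)      # (batch, hidden_size, src_len)
        pooled = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]
        f = torch.cat(pooled, dim=1)        # max-over-time pooled features
        t = torch.sigmoid(self.gate(f))     # highway gate
        f = t * F.relu(self.proj(f)) + (1.0 - t) * f
        return torch.sigmoid(self.out(f))   # p(d = in-domain), Eq. 12
```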
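The gradient reversal layer is straightforward to implement as a custom autograd function: identity in the forward pass, gradient negation (scaled by a constant) in the backward pass. A minimal PyTorch sketch; `lambd` plays the role of the negative constant mentioned in the text.

```python
import torch

class GradReversal(torch.autograd.Function):
    """Identity forward; multiply incoming gradients by -lambd in backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reversed (and scaled) gradient flows back to the shared encoder.
        return -ctx.lambd * grad_output, None

def grad_reverse(x: torch.Tensor, lambd: float = 1.0) -> torch.Tensor:
    return GradReversal.apply(x, lambd)
```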
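Finally, a hedged sketch of the combined objective of Eq. 15: the translation cross-entropy plus the weighted domain-classification loss. Function and argument names and the padding index are our assumptions; in the staged schedule described above, stage one uses only the first term, stage two only the second, and stage three the weighted sum.

```python
import torch
import torch.nn.functional as F

def combined_loss(mt_logits: torch.Tensor, target_ids: torch.Tensor,
                  domain_probs: torch.Tensor, domain_labels: torch.Tensor,
                  lambda_d: float = 1.5, pad_id: int = 0) -> torch.Tensor:
    """L = L_MT + lambda * L_D (Eq. 15)."""
    # L_MT (Eq. 13): negative log-likelihood of the reference tokens.
    l_mt = F.cross_entropy(mt_logits.view(-1, mt_logits.size(-1)),
                           target_ids.view(-1), ignore_index=pad_id)
    # L_D (Eq. 14): cross entropy on the discriminator's sigmoid outputs.
    l_d = F.binary_cross_entropy(domain_probs.squeeze(-1), domain_labels.float())
    return l_mt + lambda_d * l_d
```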
| [
"abstain",
"objective",
"method",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"method",
"abstain",
"result",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"other",
"method",
"other",
"abstain",
"other",
"other",
"other",
"other",
"method",
"method",
"other",
"abstain",
"other",
"other",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"other",
"other",
"other"
]
|
[
"Although parsing to Abstract Meaning Representation (AMR) has become very popular and AMR has been shown effective on many sentence-level tasks, little work has studied how to generate AMRs that can represent multi-sentence information.",
"We introduce the first end-to-end AMR coreference resolution model in order to build multi-sentence AMRs.",
"Compared with the previous pipeline and rule-based approaches, our model alleviates error propagation and it is more robust for both in-domain and out-domain situations.",
"Besides, the document-level AMRs obtained by our model can significantly improve over the AMRs generated by a rule-based method (Liu et al., 2015) on text summarization.",
"Abstract Meaning Representation (AMR) (Ba-narescu et al., 2013) is a semantic formalism for natural language understanding.",
"It represents a sentence as a rooted, directed and acyclic graph, where nodes (e.g., Bill in Figure 1) represents concepts and edges (e.g., :arg0 ) are the semantic relations.",
"Encompassing knowledge of named entities, semantic roles and coreference structures, AMR has been proven effective for downstream tasks, including information extraction (Rao et al., 2017), text summarization (Liu et al., 2015; Hardy and Vlachos, 2018; Liao et al., 2018), paraphrase detection (Issa Alaa Aldine et al., 2018), event detection (Li et al., 2015), machine translation (Song et al., 2019b) and dialogue understanding (Bonial et al., 2020).",
"Existing work on AMR mainly focuses on individual sentences (Lyu and Titov, 2018; Naseem et al., 2019; Ge et al., 2019; Zhang et al., 2019; Cai and Lam, 2020a; Zhou et al., 2020).",
"On the other hand, with the advance of neural networks in NLP, tasks involving multiple sentences with leave-11 person name name city Bill Paris Sentence1: Bill left for Paris.",
"Sentence2: He arrived at noon.",
"cross-sentence reasoning (e.g., text summarization, reading comprehension and dialogue response generation) have received increasing research attention.",
"Given the effectiveness of AMR on sentence-level tasks (Pan et al., 2015; Rao et al., 2017; Issa Alaa Aldine et al., 2018; Song et al., 2019b), it is important to extend sentence-level AMRs into the multi-sentence level.",
"To this end, a prerequisite step is AMR coreference resolution, which aims to find the AMR components referring to the same entity.",
"Figure 1 shows the AMR graphs of two consecutive sentences in a document.",
"An AMR coreference resolution model need to identify two coreference cases: he refers to Bill in the first graph, and arrive-01 omits an argument :arg3 that refers to Paris .",
"Relatively little research has been done on AMR coreference resolution.",
"Initial attempts (Liu et al., 2015) merge the nodes that have the same surface string.",
"To minimize noise, only named entities and date entities are considered, and they do not consider merging non-identical nodes (e.g., Bill and he in Figure 1) that are also frequent in real-life situation.",
"Subsequent work considers more co-reference cases by either manually annotating AMR coreference information (O'Gorman et al., 2018) or taking a pipeline system (Anikina et al., 2020) consisting of a textual coreference resolution model (Lee et al., 2018) and an AMR-to-text aligner (Flanigan et al., 2014).",
"Yet there is little research on automatically resolving coreference ambiguities directly on AMR, making use of AMR graph-structural features.",
"In this work, we formulate AMR coreference resolution as a missing-link prediction problem over AMR graphs, where the input consists of multiple sentence-level AMRs, and the goal is to recover the missing coreference links connecting the AMR nodes that represent to the same entity.",
"There are two types of links.",
"The first type corresponds to the standard situation, where the edge connects two entity nodes (e.g., Bill and he in Figure 1) that refer to the same entity.",
"The second type is the implicit role coreference , where one node (e.g., Paris in Figure 1) is a dropped argument ( :arg3 ) of other predicate node ( arrive-01 ).",
"We propose an AMR coreference resolution model by extending an end-to-end text-based coreference resolution model (Lee et al., 2017).",
"In particular, we use a graph neural network to represent input AMRs for inducing expressive features.",
"To enable cross-sentence information exchange, we make connections between sentence-level AMRs by linking their root nodes.",
"Besides, we introduce a concept identification module to distinguish functional graph nodes (non-concept nodes, e.g., person in Figure 1), entity nodes (e.g., Bill ), verbal nodes with implicit role (e.g., arrive-01 ) and other regular nodes (e.g., leave-11 ) to help improve the performance.",
"The final antecedent prediction is conducted between the selected nodes and all their possible antecedent candidates, following previous work on textual coreference resolution (Lee et al., 2017).",
"Experiments on the MS-AMR benchmark 1 (O'Gorman et al., 2018) show that our model outperforms competitive baselines by a large margin.",
"To verify the effectiveness and generalization of our proposed model, we annotate an out-of-domain test set over the gold AMR Little Prince 3.0 data following the guidelines of O'Gorman et al. (2018), and the corresponding results show that our model is consistently more robust than the baselines in domain-transfer scenarios.",
"Finally, results on docu-1 It consists gold coreference links on gold AMRs.",
"ment abstractive summarization show that our document AMRs lead to much better summary quality compared to the document AMRs by Liu et al. (2015).",
"This further verifies the practical value of our approach.",
"Our code and data is available at https://github.com/Sean-Blank/AMRcoref 2 Model Formally, an input instance of AMR coreference resolution consists of multiple sentence-level AMRs G 1 , G 2 , ..., G n , where each G i can be written as G i = (cid:104) V i , E i (cid:105) with V i and E i representing the corresponding nodes and edges for G i .",
"We consider a document-level AMR graph G = [ G 1 , G 2 , ..., G n ; e 1 , e 2 , ..., e m ] , where each e i is a coreference link connecting two nodes from different sentence-level AMRs.",
"The task of AMR coreference resolution aims to recover e 1 , ..., e m , which are missing from the inputs.",
"Figure 2 shows the architecture of our model, which consists of a graph encoder ( 2.1), a concept identifier ( 2.2), and an antecedent prediction module ( 2.3).",
"Given sentence-level AMRs G 1 , ..., G n as the input, randomly initialized word embeddings are adopted to represent each node v k as a dense vector e k .",
"To alleviate data sparsity and to obtain better node representation, character embeddings e chark are computed by using a character-level CNN.",
"We concatenate both e k and e chark embeddings for each concept before using a linear projection to form the initial representation: x k = W node ([ e k ; e chark ]) + b node , (1) where W node and b node are model parameters.",
"To enable global information exchange across different sentence-level AMRs, we construct a draft document-level graph by connecting the root nodes of each AMR subgraph as shown in Figure",
"2. This is important because AMR coreference resolution involves cross-sentence reasoning.",
"We then adopt Graph Recurrent Network (GRN, Song et al., 2018; Zhang et al., 2018; Beck et al., 2018) to obtain rich document-level node representations.",
"GRN is one type of graph neural network that iteratively updates its node representations with the message passing framework (Scarselli et al., 2009).",
"Compared with alternatives such as Graph Convolutional Network (GCN, Kipf and Welling 2017; FFNN & SOFTMAX leave-11 person name Bill arrive-01 he date-entity 11 21 31 41 12 22 32 :name :op1 :arg1 :arg0 :time :CONNECT Input Representation GRN Encoder Concept Identification Antecedent Prediction + = he SOFTMAX (leave-11, he) (Bill, he) (dummy , he) (arrive-01, he) P r e d i c t e d li nk : ( B ill , h e ) : : : : : : : : : dropped Figure 2: Model framework for end-to-end AMR coreference resolution.",
"Bastings et al. 2017) and Graph Attention Network (GAT, Veli ckovi c et al. 2018), GRN has been shown to give competitive results.",
"Message passing In the message passing framework, a node v k receives information from its directly connected neighbor nodes at each layer l .",
"We use a hidden state vector h lk to represent each node, and the initial state h 0 k is defined as a vector of zeros.",
"In the first step at each message passing layer, the concept representation of each neighbor of v k is combined with the corresponding edge representation to make a message x k,j .",
"This is because edges contain semantic information that are important for learning global representation and subsequent reasoning.",
"Formally, a neighbor v j of node v k can be represented as x k,j = W node ([ e j ; e charj ; e labelk,j ]) + b node , (2) where e labelk,j denotes the label embedding of the edge from node v k and to v j .",
"Next, representations of neighboring nodes from the incoming and outgoing directions are aggregated: x ink = (cid:88) i N in ( k ) x li,k x outk = (cid:88) j N out ( k ) x lk,j x lk = [ x ink , x outk ] , (3) where N in ( k ) and N out ( k ) denote the set of incoming and outgoing neighbors of v k , respectively.",
"Similarly, the hidden states from incoming and outgoing neighbors are also summed up: m ink = (cid:88) i N in ( k ) h l 1 i m outk = (cid:88) j N out ( k ) h l 1 j m lk = [ m ink , m outk ] , (4) where h l 1 j denotes the hidden state vector for node v j at the previous ( l 1) layer.",
"Finally, the message passing from layer l 1 to l is conducted following the gated operations of LSTM (Hochreiter and Schmidhuber, 1997): i lk = ( W mi m lk + W xi x lk + b i ) o lk = ( W mo m lk + W xo x lk + b o ) f lk = ( W mf m lk + W xf x lk + b f ) u lk = ( W mu m lk + W xu x lk + b u ) c lk = f lk (cid:12) c l 1 k + i lk (cid:12) u lk h lk = o lk (cid:12) tanh ( c lk ) , (5) where i lk , o lk and f lk are a set of input, output and forget gates to control information flow from different sources, u lk represents the input messages, c lk is the cell vector to record memory, and c 0 k is also initialized as a vector of zeros.",
"W mz , W xz and b z ( z { i, o, f, u } ) are model parameters.",
"We adopt L GRN layers in total, where L is determined by a development experiment.",
"The output h Lk at layer L is adopted as the representation of each node v k for subsequent procedures.",
"Concept identification aims to distinguish the AMR nodes in regard to its concept type.",
"We consider 6 concept types T = { func, ent, ver 0 , ver 1 , ver 2 , reg } , which denotes the functional nodes, entity concepts, verbal concepts ver x with implicit arguments (i.e., : arg x x { 0 , 1 , 2 } 2 ) and other regular nodes (e.g., leave-11 ), respectively.",
"This module is comparable to the mention detection procedure in textual coreference resolution (Lee et al., 2017).",
"Formally, a concept representation h Lk from the top GRN layer is concatenated with a learnable type embedding e typek ( t ) of type t for each concept v k , and the corresponding type score s ktype ( t ) is computed using a feed-forward network: s ktype ( t ) = FFNN type ( W type [ h Lk ; e typek ( t )]) , (6) where W type is a mapping matrix.",
"e typek ( t ) represents a concept-type embedding and is randomly initialized.",
"A probability distribution P ( t | v k ) over all concept types T for each concept v k is calculated as follows using a softmax layer: P ( t | v k ) = e s ktype ( t ) (cid:80) t (cid:48) T e s ktype ( t (cid:48) ) .",
"Finally, we predicate the type t k for each concept",
"and use it to filter the input nodes.",
"In particular, functional concepts are dropped directly and the other concepts (i.e., ent, ver 0 , ver 1 , ver 2 , reg) are selected as candidate nodes for antecedent prediction.",
"Given a selected node v k by the concept identifier, the goal is to predict its antecedent y k from all possible candidate nodes Y k = { (cid:15), y , ..., y k 1 } , where a dummy antecedent (cid:15) is adopted for the nodes that are not coreferent with any previous concepts.",
"= min (1 , k ) , where represents the maximum antecedents considered as candidates.",
"As mentioned by previous work on textual coreference resolution (Lee et al., 2017), considering too many candidates can hurt the final performance.",
"We conduct development experiments to decide the best .",
"The finally predicted coreference links implicitly determine the coreference clusters.",
"2 We do not model other : arg x to avoid long tail issue.",
"Type information in 2.2 can help to guide the antecedent prediction and ensure global type consistency.",
"We combine the node hidden vector and its type representation as the final concept state: h mk = [ h Lk ; e typek ( t )] , (9) where e typek ( t ) denotes the learned embedding of the concept type of node v k .",
"Similar with Lee et al. (2017), the goal of the antecedent prediction module is to learn a distribution Q ( y k ) over the antecedents for each node v k : Q ( y k ) = e s ( k,y k ) (cid:80) y (cid:48) Y ( k ) e s ( k,y (cid:48) ) (10) where s ( k, a ) computes a coreference link score for each concept pair ( v k , v a ): s ( k, a ) = s m ( k ) + s m ( a ) + s an ( k, a ) .",
"Here a < k , and s m ( k ) means whether concept v k is a mention involved in a coreference link.",
"It is calculated by using a feed-forward network: s m ( k ) = FFNN m ( h mk ) .",
"s an ( k, a ) indicates whether mention v a is an antecedent of v k and measures the semantic similarity between v k and v a , computed with rich features using a feed-forward network:",
"where denotes element-wise multiplication of each mention pair ( v k , v a ), and a feature vector ( k, a ) represents the normalized distance between two mentions and the speaker information if available.",
"Following Lee et al. (2017), we also normalize the distance values by grouping them into the following buckets [1, 2, 3, 4, 5-7, 8-15, 16-31, 32-63, 64+].",
"All features (speaker, distance, concept type) are randomly initialized 32-dimensional embeddings jointly learned with the model.",
"Our objective function takes two parts: L type ( ) (i.e., the concept-type identification loss), and L antecedent (i.e., the antecedent prediction loss)",
"where is the weight coefficient (we empirically set = 0 . 1 in this paper).",
"Concept Identification Loss.",
"L type measures whether our model can accurately identify meaningful concepts and learn the correct type representations.",
"Specifically, given the concept set V = { v 1 , ...v N } , the concept identifier is trained to minimize an average cross-entropy loss: L type ( ) = 1 NN (cid:88) k =1 log P ( t k | v k ) , (15) where are the set of model parameters, P ( t k | v k ) denotes the output probability of predicted type t k for each node v k as in Eq.",
"Since the antecedents are latent, the antecedent loss is a marginal log-likelihood of all correct antecedents implied by gold clustering: L antecedent ( ) = N (cid:89) k =1 (cid:88) y Y k GOLD ( k ) log Q ( y ) (16) where GOLD ( k ) = (cid:15) if mention v k does not belong to any gold cluster.",
"Q ( y ) is calculated using Eq.",
"10.",
"7.",
"Antecedent Prediction Loss.",
"Given a training AMR document with gold coreference clusters GOLD ( k ) | Nk =1 and antecedent candidates Y k = { (cid:15), y , ..., y k 1 } for mention v k , L antecedent measures whether mentions are linked to their correct antecedent.",
"We conduct experiments on the MS-AMR dataset 3 (O'Gorman et al., 2018), which is annotated over a previous gold AMR corpus (LDC2017T10).",
"It has 293 annotated documents in total with an average of 27.4 AMRs per document, covering roughly 10% of the total AMR corpus.",
"We split a dev data with the same size as the test set from the training set.",
"Following the annotation guidelines of MS-AMR, we manually annotate the AMR coreference 3 The MS-AMR dataset considers 3 types of coreference links: regular, implicit and part-whole.",
"resolution information over the development and test data of the Little Prince ( LP ) AMR corpus 4 and use it as an out-of-domain test set.",
"For this dataset, we consider each chapter as a document.",
"The data statistics are shown in Table",
"1. 3.1 Setup Evaluation Metrics We use the standard evaluation metrics for coreference resolution evaluation, computed using the official CoNLL-2012 evaluation toolkit.",
"Three measures include: MUC (Vilain et al., 1995), B 3 (Bagga and Baldwin, 1998) and CEAF 4 (Luo, 2005).",
"Following previous studies (Lee et al., 2018), the primary metric AVG-F is the unweighted average of the above three F-scores.",
"Pipeline (Anikina et al., 2020): it uses an off-the-shelf coreference system (Lee et al., 2018) with SpanBERT (Joshi et al., 2020) embeddings, and an AMR-to-text aligner (Flanigan et al., 2014).",
"The former generates coreference from text, and the later projects this information from text to AMRs.",
"Models We study two versions of our model with or without BERT features.",
"AMRcoref-bert : it denotes our model in 2 except that the word embeddings ( e k in Eq. 1) are concatenated with BERT outputs.",
"Specifically, we use a cased BERT-base model with fixed parameters to encode a sentence, taking an AMR-to-text aligner (Flanigan et al., 2014) to project BERT outputs to the corresponding AMR nodes.",
"Hyperparameters We set the dimension of concept embeddings to 256.",
"Characters in the character CNN ( 2.1) are represented as learned embeddings with 32 units and the convolution window sizes include 2, 3, and 4 characters, each consisting of 100 filters.",
"We use Adam (Kingma and Ba, 2015) with a learning rate of 0.005 for optimization.",
"4 https://amr.isi.edu/download/amr-bank-struct-v3.0.txt.",
"GRN Encoder Layers The number of recurrent layers L in GRN defines the amount of message interactions.",
"Large message passing layers may lead to over-smoothing problems, while small layers may result in weak graph representation (Qin et al., 2020; Zhang et al., 2018).",
"Figure 3 shows development experiments of the AMRcoref-base model in this aspect.",
"We observe large improvements when increasing the layers from 1 to 3, but further increase from 3 to 7 does not lead to further improvements.",
"Therefore, we choose 3 layers for our final model.",
"Antecedent Candidates How many antecedents are considered as candidates (denoted as in Section 2.3) for making each coreference decision is another important hyperparameter in a coreference resolution model (Lee et al., 2017).",
"Intuitively, allowing more antecedents gives a higher upper bound, but that also introduces a larger search space.",
"Table 3 shows the statistics of the distance between each mention and its gold antecedent and the devset performance of AMRcoref-base model that uses this distance as the search space.",
"The performance of AMRcoref-base improves when increasing the search space, and the best performance was observed when 250 antecedents are considered as the search space.",
"We choose = 250 in subsequent experiments.",
"Table 2 shows the final in-domain results on the MS-AMR test set and out-domain results on the annotated Little Prince ( LP ) data, where we compare our model ( AMRcoref-base and AMRcoref-bert ) with the Rule-based and Pipeline baselines.",
"In-domain Results The Rule-based method performs the worst, because it only links the identical entity and suffers from low recall.",
"The Pipeline model performs better than the Rule-based model due to better coverage, but it can suffer from error propagation in both textual coreference and inaccurate AMR aligner.",
"In addition, it does not make use of AMR structure features, which is less sparse compared to text cues.",
"Our proposed AMRcoref-base model outperforms the two baselines by a huge margin, gaining at least 9.3% and 13.2% average F1 scores, respectively.",
"This verifies the effectiveness of the end-to-end framework.",
"Out-domain Results On the cross-domain LP data, our model largely outperforms both Rule-based method and the Pipeline model.",
"Compared with the in-domain setting, there is minor drop on the out-of-domain dataset (4.1% and 2.3% F1 score for AMRcoref-base and AMRcoref-bert re-spectively).",
"Neither the performances of Rule-based nor Pipeline change much on this dataset, which is because these systems are not trained on a certain domain.",
"This also reflects the quality of our LP annotations, because of the consistent performance changes of both AMRcoref-base and AMRcoref-bert when switching from MS-AMR to LP.",
"We analyze the effects of mention type, textual embedding and various extra features in this section.",
"Concept Identification As shown in the first group of Table 4, we conduct an ablation study on the concept identification module, which has been shown crucial on the textual coreference resolution (Lee et al., 2017).",
"Removing the concept identifier from the AMRcoref-base model results in a large performance degradation of up to 19.9%, indicating that concept type information of the AMR node can positively guide the prediction of coreference links.",
"On the other hand, when the concept identifier outputs are replaced with gold mentions, the results can be further improved by 19.1%.",
"This indicates that better performances can be expected if concept identification can be further improved.",
"Injecting BERT knowledge As shown in the second group of Table 4, we study the influence of rich features from BERT in our model, which has been proven effective on text-based coreference resolution.",
"Two alternatives of using BERT are studied, concatenate (i.e. AMRcoref-bert ) denotes concatenating the AMR node embeddings with the corresponding textual BERT embedding, and graph means that we construct an AMR-token graph that connects AMR nodes and the corresponding tokens.",
"We find that the AMRcoref-base model can be improved by a similar margin using both approaches.",
"This is consistent with existing observations from other structured prediction tasks, such as constituent parsing (Kitaev et al., 2019) and dependency parsing (Li et al., 2019).",
"Due to the limited scale of our training data, we expect the gain to be less with more training data.",
"Features Ablation As shown by the last group in Table 4, we investigate the impacts of each component in our proposed model on the development set of MS-AMR.",
"We have the following observations.",
"First , consistent with findings of Lee et al. (2017), Figure 4: Testing results of AMRcoref-base regarding different ratios of training data used.",
"the distance between a pair of AMR concepts is an important feature.",
"The final model performance drops by 2.1% when removing the distance feature (Eq. 13).",
"Second , the speaker indicator features (Eq. 13) contribute to our model by a 1.9% improvement.",
"Intuitively, speaker information is helpful for pronoun coreference resolution in dialogues.",
"For example, my package in one sentence may represent identical entity with your package in the next utterance.",
"Third , the character CNN provides morphological information and a way to back off for out-of-vocabulary tokens.",
"For AMR node representations, we see a modest contribution of 1.2% F1 score.",
"Finally , we exploit the necessity of cross-sentence AMR connections.",
"Compared to encoding each AMR graph individually, global information exchange across sentences can help to achieve a significant performance improvement.",
"Data Hunger Similar to other results, it is important to study how much data is necessary to obtain a strong performance (at least be better than the baseline).",
"Figure 4 shows the performances when training the AMRcoref-base model on different portions of data.",
"As the number of training samples increases, the performance of our model continuously improves.",
"This shows that our model has room for further improvement with more training data.",
"Moreover, our model even outperforms the Pipeline baseline when trained on only 20% data.",
"This confirms the robustness of our end-to-end framework.",
"Effect of Document Length Figure 5 shows the performance on different MS-AMR document lengths (i.e., the number of AMR graphs in the document).",
"We can see that both our model and the Pipeline model show performance decrease 38.5 37.8 37.5 38.6 45.6 42.8 40.9 38.7 59.6 57.9 49.5 38.2 10 20 30 40 50 60 70 0-10 10-20 20-30 30up A v e r a g e F 1 Document Length Performance on document length Rule-based Pipeline AMRcoref-base Figure 5: Testing results regarding document length.",
"when increasing input document length.",
"This is likely because a longer document usually involves more complex coreference situations and brings more challenge for the encoder.",
"Insufficient information interaction for distant nodes further leads to weaker inference performance.",
"As expected, the Rule-based approach (Liu et al., 2015) is not significantly affected, but its result is still pretty low.",
"When the document contains more than 30 sentences, the AMRcoref-base model slightly under-performs both the Rule-based method and the Pipeline baseline.",
"One reason is that only a few training instances have a long document length, so we expect that the performance of our model can be further improved given more long documents.",
"Table 5 compares the summarization performances using the document-level AMRs generated by various methods on the LDC2015E86 benchmark (Knight et al., 2014).",
"Following Liu et al. (2015), Rouge scores (R-1/2/L Lin 2004) are used as the metrics.",
"To consume each document AMR and the corresponding text, we take a popular dual-to-sequence model (D2S, Song et al. 2019b), which extends the standard sequence-to-sequence framework with an additional graph encoder and a dual attention mechanism for extracting both text and graph contexts during decoding.",
"For previous work, summarization using AMR was first explored by Liu et al. (2015).",
"They first use a rule-based method to build document AMRs and then take a statistic model to generate summaries.",
"Dohare et al. (2017) improves this approach by selecting important sentences before building a document AMR.",
"The D2S-Rule-based can be considered as a fair comparison with Liu et al. (2015) on the same summerization platform.",
"The overall performance of the D2S models outperform the previous approaches, indicating that our experiments are conducted on a stronger baseline.",
"Though Pipeline is better than Rule-based on AMR coreference resolution, D2S-Pipeline is comparable with D2S-Rule-based on the downstream summerization task.",
"This shows that the error propagation issue of Pipeline can introduce further negative effects to a downstream application.",
"On the other hand, both D2S-AMRcoref-base and D2S-AMRcoref-bert show much better results than the baselines across all Rouge metrics.",
"This demonstrates that the improvements made by our end-to-end model is solid and can transfer to a downstream application.",
"D2S-AMRcoref-bert achieves the best performance, which is consistent with the above experiments.",
"Multi-sentence AMR Although some previous work (Szubert et al., 2020; Van Noord and Bos, 2017) explore the coreference phenomena of AMR, they mainly focus on the situation within a sentence.",
"On the other hand, previous work on multi-sentence AMR primarily focuses on data annotation.",
"Song et al. (2019a) annotate dropped pronouns over Chinese AMR but only deals with implicit roles in specific constructions.",
"Gerber and Chai (2012) provide implicit role annotations, but the resources were limited to a small inventory of 5-10 predicate types rather than all implicit arguments.",
"O'Gorman et al. (2018) annotated the MS-AMR dataset by simultaneously considering coreference, implicit role coreference and bridging relations.",
"We consider coreference resolution as the prerequisite for creating multi-sentence AMRs, proposing the first end-to-end model for this task.",
"Coreference Resolution Coreference resolution is a fundamental problem in natural language processing.",
"Neural network models have shown promising results over the years.",
"Recent work (Lee et al., 2017, 2018; Kantor and Globerson, 2019) tackled the problem end-to-end by jointly detecting mentions and predicting coreference.",
"Lee et al. (2018) build a complete end-to-end system with the span-ranking architecture and higher-order inference technique.",
"While previous work considers only text-level coreference, we investigate AMR co-reference resolution.",
"AMR Representation using GNN To encode AMR graphs, many variants of GNNs such as GRNs (Song et al., 2018; Beck et al., 2018), GCNs (Zhou et al., 2020; Zhang et al., 2020) and GATs (Damonte and Cohen, 2019; Cai and Lam, 2020b; Wang et al., 2020) have been introduced.",
"We choose a classic GRN model following Song et al. (2018) to represent our document-level AMR graph and leave the exploiting on a more efficient GNN structure for future work.",
"We investigated a novel end-to-end multi-sentence AMR coreference resolution model using a graph neural network.",
"Compared with previous rule-based and pipeline methods, our model better captures multi-sentence semantic information.",
"Results on MS-AMR (in-domain) and LP (out-of-domain) datasets show the superiority and robustness of our model.",
"In addition, experiments on the downstream text summarization task further demonstrate the effectiveness of the document-level AMRs produced by our model.",
"In future work, we plan to resolve both the cross-AMR coreference links and the sentence-level ones together with our model.",
"Linfeng Song is the corresponding author.",
"We would like to thank the anonymous reviewers for their insightful comments.",
"We gratefully acknowledge funding from the National Natural Science Foundation of China (NSFC No.61976180).",
"It also receives supported by Tencent AI Lab Rhino-Bird Focused Research Program."
]
| [
"abstain",
"objective",
"method",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"other",
"method",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"objective",
"other",
"method",
"objective",
"method",
"result",
"objective",
"abstain",
"other",
"other",
"other",
"other"
]
|
[
"There is a growing body of work that proposes methods for mitigating bias in machine learning systems.",
"These methods typically rely on access to protected attributes such as race, gender, or age.",
"However, this raises two significant challenges: (1) protected attributes may not be available or it may not be legal to use them, and (2) it is often desirable to simultaneously consider multiple protected attributes, as well as their intersections.",
"In the context of mitigating bias in occupation classification, we propose a method for discouraging correlation between the predicted probability of an individual's true occupation and a word embedding of their name.",
"This method leverages the societal biases that are encoded in word embeddings, eliminating the need for access to protected attributes.",
"Crucially, it only requires access to individuals' names at training time and not at deployment time.",
"We evaluate two variations of our proposed method using a large-scale dataset of online biographies.",
"We find that both variations simultaneously reduce race and gender biases, with almost no reduction in the classifier's overall true positive rate.",
"In recent years, the performance of machine learning systems has improved substantially, leading to the widespread use of machine learning What's in a name? That which we call a rose by any",
"in many domains, including high-stakes domains such as healthcare, employment, and criminal jus-tice (Chalfin et al., 2016; Miotto et al., 2017; Chouldechova, 2017).",
"This increased prevalence has led many people to ask the question, accurate, but for whom? (Chouldechova and G'Sell, 2017).",
"When the performance of a machine learning system differs substantially for different groups of people, a number of concerns arise (Baro-cas and Selbst, 2016; Kim, 2016).",
"First and foremost, there is a risk that the deployment of such a method may harm already marginalized groups and widen existing inequalities.",
"Recent work highlights this concern in the context of online recruiting and automated hiring (De-Arteaga et al., 2019).",
"When predicting an individual's occupation from their online biography, the authors show that if occupation-specific gender gaps in true positive rates are correlated with existing gender imbalances in those occupations, then those imbalances will be compounded over time a phenomenon sometimes referred to as the leaky pipeline.",
"Second, the correlations that lead to performance differences between groups are often irrelevant.",
"For example, while an occupation classifier should predict a higher probability of software engineer if an individual's biography mentions coding experience, there is no good reason for it to predict a lower probability of software engineer if the biography also mentions softball.",
"Prompted by such concerns about bias in machine learning systems, there is a growing body of work on fairness in machine learning.",
"Some of the foundational papers in this area highlighted the limitations of trying to mitigate bias using methods that are unaware of protected attributes such as race, gender, or age (e.g., Dwork et al., 2012).",
"As a result, subsequent work has primarily focused on introducing fairness constraints, defined in terms of protected attributes, that reduce incentives to rely on undesirable correlations (e.g., Hardt et al., 2016; Zhang et al., 2018).",
"This approach is particularly useful if similar performance can be achieved by slightly different meansi.e., fairness constraints may aid in model selection if there are many near-optima.",
"In practice, though, any approach that relies on protected attributes may stand at odds with antidiscrimination law, which limits the use of protected attributes in domains such as employment and education, even for the purpose of mitigating bias.",
"And, in other domains, protected attributes are often not available (Holstein et al., 2019).",
"Moreover, even when they are, it is usually desirable to simultaneously consider multiple protected attributes, as well as their intersections.",
"For example, Buolamwini (2017) showed that commercial gender classifiers have higher error rates for women with darker skin tones than for either women or people with darker skin tones overall.",
"We propose a method for reducing bias in machine learning classifiers without relying on protected attributes.",
"In the context of occupation classification, this method discourages a classifier from learning a correlation between the predicted probability of an individual's occupation and a word embedding of their name.",
"Intuitively, the probability of an individual's occupation should not depend on their namenor on any protected attributes that may be inferred from it.",
"We present two variations of our methodi.e., two loss functions that enforce this constraintand show that they simultaneously reduce both race and gender biases with little reduction in classifier accuracy.",
"Although we are motivated by the need to mitigate bias in online recruiting and automated hiring, our method can be applied in any domain where individuals' names are available at training time.",
"Instead of relying on protected attributes, our method leverages the societal biases that are encoded in word embeddings (Bolukbasi et al., 2016; Caliskan et al., 2017).",
"In particular, we build on the work of Swinger et al. (2019), which showed that word embeddings of names typically reflect the societal biases that are associated with those names, including race, gender, and age biases, as well encoding information about other factors that influence naming practices such as nationality and religion.",
"By using word embeddings of names as a tool for mitigating bias, our method is conceptually simple and empirically powerful.",
"Much like the proxy fairness approach of Gupta et al. (2018), it is applicable when protected attributes are not available; however, it additionally eliminates the need to specify which biases are to be mitigated, and allows simultaneous mitigation of multiple biases, including those that relate to group intersections.",
"Moreover, our method only requires access to proxy information (i.e., names) at training time and not at deployment time, which avoids disparate treatment concerns and extends fairness gains to individuals with ambiguous names.",
"For example, a method that explicitly or implicitly infers protected attributes from names at deployment time may fail to correctly infer that an individual named Alex is female and, in turn, fail to mitigate gender bias for her.",
"Methodologically, our work is also similar to that of Zafar et al. (2017), which promotes fairness by requiring that the covariance between a protected attribute and a data point's distance from a classifier's decision boundary is smaller than some constant.",
"However, unlike our method, it requires access to protected attributes, and does not facilitate simultaneous mitigation of multiple biases.",
"Our method discourages an occupation classifier from learning a correlation between the predicted probability of an individual's occupation and a word embedding of their name.",
"In this section, we present two variations of our methodi.e., two penalties that can be added to an arbitrary loss function and used when training any classifier.",
"We assume that each data point corresponds to an individual, with a label indicating that individual's occupation.",
"We also assume access to the names of the individuals represented in the training set.",
"The first variation, which we call Cluster Constrained Loss (CluCL), uses k -means to cluster word embeddings of the names in the training set.",
"Then, for each pair of clusters, it minimizes between-cluster disparities in the predicted probabilities of the true labels for the data points that correspond to the names in the clusters.",
"In contrast, the second variation minimizes the covariance between the predicted probability of an individual's occupation and a word embedding of their name.",
"Because this variation minimizes the covariance directly, we call it Covariance Constrained Loss (CoCL).",
"The most salient difference between these variations is that CluCL only minimizes disparities between the latent groups captured by the clusters.",
"For example, if the clusters correspond only to gender, then CluCL is only capable of mitigating gender bias.",
"However, given a sufficiently large number of clusters, CluCL is able to simultaneously mitigate multiple biases, including those that relate to group intersections.",
"For both variations, individual's names are not used as input to the classifier itself; they appear only in the loss function used when training the classifier.",
"The resulting trained classifier can therefore be deployed without access to individuals' names.",
"We define x i = { x 1 i , . . . , x Mi } to be a data point, y i to be its corresponding (true) label, and n fi and n li to be the first and last name of the corresponding individual.",
"The classification task is then to (correctly) predict the label for each data point: p i = H ( x i ) (1) y i = arg max 1 j | C | p i [ j ] , (2) where H ( ) is the classifier, C is the set of possible classes, p i R | C | is the output of the classifier for data point x i e.g., p i [ j ] is the predicted probability of x i belonging to class j and y i is the predicted label for x i .",
"We define p y i to be the predicted probability of y i i.e., the true label for x i .",
"The conventional way to train such a classifier is to minimize some loss function L , such as the cross-entropy loss function.",
"Our method simply adds an additional penalty to this loss function: L total = L + LCL , (3) where LCL is either L CluCL or L CoCL (defined in Sections 2.2 and 2.3, respectively), and is a hyperparameter that determines the strength of the penalty.",
"This loss function is only used during training, and plays no role in the resulting trained classifier.",
"Moreover, it can be used in any standard setup for training a classifiere.g., training a deep neural network using mini-batches and the Adam optimization algorithm (Kingma and Ba, 2014).",
"This variation represents each first name n fi and last name n li as a pair of low-dimensional vectors using a set of pretrained word embeddings E .",
"These are then combined to form a single vector: n ei = 1 2 (cid:16) E [ n fi ] + E [ n li ] (cid:17) .",
"Using k -means (Arthur and Vassilvitskii, 2007), CluCL then clusters the resulting embeddings into k clusters, yielding a cluster assignment k i for each name (and corresponding data point).",
"Next, for each class c C , CluCL computes the following average pairwise difference between clusters: l c = 1 k ( k 1) k (cid:88) u,v =1 1 N c,u (cid:88) i : y i = c, k i = u p yi 1 N c,v (cid:88) i : y i = c, k i = v p yi 2 , (5) where u and v are clusters and N c,u is the number of data points in cluster u for which y i = c .",
"CluCL considers each class individually because different classes will likely have different numbers of training data points and different disparities.",
"Finally, CluCL computes the average of l 1 , . . . l | C | to yield L CluCL = 1 | C | (cid:88) c C l c .",
"This variation minimizes the covariance between the predicted probability of a data point's label and the corresponding individual's name.",
"Like CluCL, CoCL represents each name as a single vector n ei and considers each class individually: l c = E i : y i = c (cid:2)(cid:0) p yi cp (cid:1) ( n ei cn ) (cid:3) , (7) where cp = E i : y i = c [ p yi ] and cn = E i : y i = c [ n ei ] .",
"Finally, CoCL computes the following average: L CoCL = 1 | C | (cid:88) c C (cid:107) l c (cid:107) , where (cid:107) (cid:107) is the (cid:96) 2 norm.",
"One of our method's strengths is its ability to simultaneously mitigate multiple biases without access to protected attributes; however, this strength also poses a challenge for evaluation.",
"We are unable to quantify this ability without access to these attributes.",
"To facilitate evaluation, we focus on race and gender biases only because race and gender attributes are more readily available than attributes corresponding to other biases.",
"We further conceptualize both race and gender to be binary (white/non-white and male/female) but note that these conceptualizations are unrealistic, reductive simplifications that fail to capture many aspects of race and gender, and erase anyone who does not fit within their assumptions.",
"We emphasize that we use race and gender attributes only for evaluationthey do not play a role in our method.",
"We use two datasets to evaluate our method: the adult income dataset from the UCI Machine Learning Repository (Dheeru and Karra Taniski-dou, 2017), where the task is to predict whether an individual earns more than $50k per year (i.e., whether their occupation is high status), and a dataset of online biographies (De-Arteaga et al., 2019), where the task is to predict an individual's",
"occupation from the text of their online biography.",
"Each data point in the Adult dataset consists of a set of binary, categorical, and continuous attributes, including race and gender.",
"We preprocess these attributes to more easily allow us to understand the classifier's decisions.",
"Specifically, we normalize continuous attributes to be in the range [0 , 1] and we convert categorical attributes into binary indicator variables.",
"Because the data points do not have names associated with them, we generate synthetic first names using the race and gender attributes.",
"First, we use the dataset of Tzioumis (2018) to identify white and non-white names.",
"For each name, if the proportion of white people with that name is higher than 0.5, we deem the name to be white; otherwise, we deem it to be non-white. 1 Next, we use Social Security Administration data about baby names (2018) to identify male and female names.",
"For each name, if the proportion of boys 1 For 90% of the names, the proportion of white people with that name is greater than 0.7 or less than 0.3, so there is a clear distinction between white and non-white names.",
"with that name is higher than 0.5, we deem the name to be male; otherwise, we deem it to be female. 2 We then take the intersection of these two sets of names to yield a single set of names that is partitioned into four non-overlapping categories by (binary) race and gender.",
"Finally, we generate a synthetic first name for each data point by sampling a name from the relevant category.",
"Each data point in the Bios dataset consists of the text of an individual's biography, written in the third person.",
"We represent each biography as a vector of length V , where V is the size of the vocabulary.",
"Each element corresponds to a single word type and is equal to 1 if the biography contains that type (and 0 otherwise).",
"We limit the size of the vocabulary by discarding the 10% most common word types, as well as any word types that occur fewer than twenty times.",
"Unlike the Adult dataset, each data point has a name associated with it.",
"And, because biographies are typically written in the third person and because pronouns are gendered in English, we can extract (likely) self-identified gender.",
"We infer race for each data point by sampling from a Bernoulli distribution with probability equal to the average of the probability that an individual with that first name is white (from the dataset of Tzioumis (2018), using a threshold of 0.5, as described above) and the probability that an individual with that last name is white (from the dataset of Comenetz (2016), also using a threshold of 0.5).",
"3 Finally, like De-Arteaga et al. (2019), we consider two versions of the Bios dataset: one where first names and pronouns are available to the classifier and one where they are scrubbed.",
"Throughout our evaluation, we use the fastText word embeddings, pretrained on Common Crawl data (Bojanowski et al., 2016), to represent names.",
"Our method can be used with any classifier, including deep neural networks such as recurrent neural networks and convolutional neural networks.",
"However, because the focus of this paper is mitigating bias, not maximizing classifier 2 For 98% of the names, the proportion of boys with that name is greater than 0.7 or less than 0.3, so there is an even clearer distinction between male and female names.",
"3 We note that, in general, an individual's race or gender should be directly reported by the individual in question; inferring race or gender can be both inaccurate and reductive.",
"h i = W h x i + b h p i = softmax ( h i )",
"where W h R | C | M and b h R | C | are the weights.",
"This structure allows us to examine individual elements of the matrix W h in order to understand the classifier's decisions for any dataset.",
"Both the Adult dataset and the Bios dataset have a strong class imbalance.",
"We therefore use a weighted cross-entropy loss as L , with weights set to the values proposed by King and Zeng (2001).",
"To quantify race bias and gender bias, we follow the approach proposed by De-Arteaga et al. (2019) and compute the true positive rate (TPR) race gap and the TPR gender gapi.e., the differences in the TPRs between races and between genders, respectivelyfor each occupation.",
"The TPR race gap for occupation c is defined as follows: TPR r,c = P (cid:104) Y = c | R = r, Y = c (cid:105) (8) Gap r,c = TPR r,c TPR r,c , (9) where r and r are binary races, Y and Y are random variables representing the predicted and true occupations for an individual, and R is a random variable representing that individual's race.",
"Similarly, the TPR gender gap for occupation c is TPR g,c = P (cid:104) Y = c | G = g, Y = c (cid:105) (10) Gap g,c = TPR g,c TPR g,c , (11) where g and g are binary genders and G is a random variable representing an individual's gender.",
"To obtain a single score that quantifies race bias, thus facilitating comparisons, we calculate the root mean square of the per-occupation TPR race gaps: Gap RMS r = (cid:115) 1 | C | (cid:88) c C Gap 2 r,c .",
"We obtain a single score that quantifies gender bias similarly.",
"The motivation for using the root mean square instead of an average is that larger values have a larger effect and we are more interested in mitigating larger biases.",
"Finally, to facilitate worst-case analyses, we calculate the maximum TPR race gap and the maximum TPR gender gap.",
"We again emphasize that race and gender attributes are used only for evaluating our method.",
"We first demonstrate that word embeddings of names encode information about race and gender.",
"We then present the main results of our evaluation, before examining individual elements of the matrix W h in order to better understand our method.",
"We cluster the names associated with the data points in the Bios dataset, represented as word embeddings, to verify that such embeddings indeed capture information about race and gender.",
"We perform k -means clustering (using the k -means++ algorithm) with k = 12 clusters, and then plot the number of data points in each cluster that correspond to each (inferred) race and gender.",
"Figures 1a and 1b depict these numbers, respectively.",
"Clusters 1, 2, 4, 7, 8, and 12 contain mostly white names, while clusters 3, 5, and 9 contain mostly non-white names.",
"Similarly, clusters 4 and 8 contain mostly female names, while cluster 2 contains mostly male names.",
"The other clusters are more balanced by race and gender.",
"Manual inspection of the clusters reveals that cluster 9 contains mostly Asian names, while cluster 8 indeed contains mostly female names.",
"The names in cluster 2 are mostly white and male, while the names in cluster 4 are mostly white and female.",
"This suggests that the clusters are capturing at least some intersections.",
"Together these results demonstrate that word embeddings of names do indeed encode at least some information about race and gender, even when first and last names are combined into a single embedding vector.",
"For a longer discussion of the societal biases reflected in word embeddings of names, we recommend the work of Swinger et al. (2019).",
"The results of our evaluation using the Adult dataset are shown in Table 1.",
"The task is to predict whether an individual earns more than $50k per year (i.e., whether their occupation is high status).",
"Because the dataset has a strong class imbalance, we report the balanced TPRi.e., we compute the per-class TPR and then average over the classes.",
"We experiment with different values of the hyperparameter .",
"When = 0 , the method is equivalent to using the conventional weighted cross-entropy loss function.",
"Larger values of in-crease the strength of the penalty, but may lead to 1 2 3 4 5 6 7 8 9 10 11 12 Cluster 0 10000 20000 30000 40000 50000 60000 N u m b e r o f s a m p l e s White Not white Unknown",
"a less accurate classifier.",
"Using = 0 leads to significant gender bias: the maximum TPR gender gap is 0.303.",
"This means that the TPR is 30% higher for men than for women.",
"We emphasize that this does not mean that the classifier is more likely to predict that a man earns more than $50k per year, but means that the classifier is more likely to correctly predict that a man earns more than $50k per year.",
"Both variations of our method significantly reduce race and gender biases.",
"With CluCL, the root mean square TPR race gap is reduced from 0.12 to 0.085, while the root mean square TPR gender gap is reduced from 0.299 to 0.25.",
"These reductions in bias result in less than one percent decrease in the balanced TPR (79.5% is decreased to 79.3%).",
"With CoCL, the race and gender biases are further reduced: the root mean square TPR race gap is reduced to 0.08, while the root mean square TPR gender gap is reduced to 0.163, with 0.5% decrease in the balanced TPR.",
"We emphasize that although our proposed method significantly reduces race and gender biases, neither variation can completely eliminate them.",
"In order to understand how different values of hyperparameter influence the reduction in race and gender biases, we perform additional experiments using CoCL where we vary from 0 to 10.",
"Figure 2 depicts these results.",
"Larger values of indeed reduce race and gender biases; however, to achieve a root mean square TPR gender gap of zero means reducing the balanced TPR to 50%, which is unacceptably low.",
"That said, there are a wide range of values of that significantly reduce race and gender biases, while maintaining an acceptable balanced TPR.",
"For example, = 6 results in a root mean square TPR race gap of 0.038 and a root mean square TPR gender gap of 0.046, with only a 7.3% decrease in the balanced TPR.",
"The results of our evaluation using the original and scrubbed (i.e., names and pronouns are scrubbed) versions of the Bios dataset are shown in Tables 2 and 3, respectively.",
"The task is to predict an individual's occupation from the text of their online biography.",
"Because the dataset has a strong class imbalance, we again report the balanced TPR.",
"CluCL and CoCL reduce race and gender biases for both versions of the dataset.",
"For the original version, CluCL reduces the root mean square TPR gender gap from 0.173 to 0.165 and the maximum TPR gender gap by 2.5%.",
"Race bias is also reduced, though to a lesser extent.",
"These reductions reduce the balanced TPR by 0.7%.",
"For the scrubbed version, the reductions in race and gender biases are even smaller, likely because most of the information about race and gender has been removed by scrubbing names and pronouns.",
"We hypothesize that these smaller reductions in race and gender biases, compared to the Adult dataset, are because the Adult dataset has fewer attributes and classes than the Bios dataset, and contains explicit race and gender information, making the task of reducing biases much simpler.",
"We also note that each biography in the Bios dataset is represented as a vector of length V , where V is over 11,000.",
"This means that the corresponding classifier has a very large number of weights, and there is a strong overfitting effect.",
"Because this overfitting effect increases with , we suspect it explains why CluCL has a larger root mean square TPR gender gap when = 2 than when = 1 .",
"Indeed, the root mean square TPR gender gap for the training set is 0.05 when = 2 .",
"Using dropout and (cid:96) 2 weight regularization lessened this effect, but did not eliminate it entirely.",
"Our method mitigates bias by making training-time adjustments to the classifier's weights that minimize the correlation between the predicted probability of an individual's occupation and a word embedding of their name.",
"Because of our choice of classifier (a single-layer neural network, as described in Section 3.2), we can examine individual elements of the matrix W h to understand the effect of our method on the classifier's decisions.",
"Figure 3a depicts the values of several weights for the conventional weighted cross-entropy loss function (i.e., = 0 ) and for CoCL with = 2 for the Adult dataset.",
"When = 0 , the attributes sex Female and sex Male have large negative and positive weights, respectively.",
"This means that the classifier is more likely to predict that a man earns more than $50k per year.",
"With CoCL, these weights are much closer to zero.",
"Similarly, the weights for the race attributes are also closer to zero.",
"We note that the weight for age race_Black race_Whitesex_Female sex_Male 0.3 0.2 0.1 0.0 0.1 0.2 S c o r e s Regular CoCL, R=2 =2",
"the attribute age is also reduced, suggesting that CoCL may have mitigated some form of age bias.",
"Figure 3b depicts the values of several weights specific to the occupation surgeon for the conventional weighted cross-entropy loss function (i.e., = 0 ) and for CoCL with = 2 for the original version of the Bios dataset.",
"When = 0 , the attributes she and her have large negative weights, while the attribute he has a positive weight.",
"This means that the classifier is less likely to predict that a biography that contains the words she or her belongs to a surgeon.",
"With CoCL, these magnitudes of these weights are reduced, though these reductions are not as significant as the reductions shown for the Adult dataset.",
"In this paper, we propose a method for reducing bias in machine learning classifiers without relying on protected attributes.",
"In contrast to previous work, our method eliminates the need to specify which biases are to be mitigated, and allows simultaneous mitigation of multiple biases, including those that relate to group intersections.",
"Our method leverages the societal biases that are encoded in word embeddings of names.",
"Specifically, it discourages an occupation classifier from learning a correlation between the predicted probability of an individual's occupation and a word embedding of their name.",
"We present two variations of our method, and evaluate them using a large-scale dataset of online biographies.",
"We find that both variations simultaneously reduce race and gender biases, with almost no reduction in the classifier's overall true positive rate.",
"Our method is conceptually simple and empirically powerful, and can be used with any classifier, including deep neural networks.",
"Finally, although we focus on English, we expect our method will work well for other languages, but leave this direction for future work."
]
| [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"objective",
"method",
"result",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"abstain",
"method",
"result",
"method",
"method"
]
|
[
"Prompting has recently been shown as a promising approach for applying pre-trained language models to perform downstream tasks.",
"We present Multi-Stage Prompting, a simple and automatic approach for leveraging pre-trained language models to translation tasks.",
"To better mitigate the discrepancy between pre-training and translation, MSP divides the translation process via pre-trained language models into multiple separate stages: the encoding stage, the re-encoding stage, and the decoding stage.",
"During each stage, we independently apply different continuous prompts for allowing pre-trained language models better shift to translation tasks.",
"We conduct extensive experiments on three translation tasks.",
"Experiments show that our method can significantly improve the translation performance of pre-trained language models.",
"1 1 Introduction Prompting (Brown et al., 2020; Lester et al., 2021), which refers to the approach of generating task-specific outputs from language models (LMs) by conditioning on extra information (known as prompts ), has emerged as a new way of using LMs to perform natural language processing (NLP) tasks (Gao et al., 2020; Liu et al., 2021).",
"While being efficient in parameters (Lester et al., 2021), prompting can enable mixed-task inference, which is not possible for other related approaches like finetuning or adapter-based tuning (Li and Liang, 2021; Lester et al., 2021).",
"Prompting also opens the possibility of using a single pre-trained LM to perform all NLP tasks (Liu et al., 2021).",
"and Knowles, 2017).",
"While neural machine translation (NMT) (Sutskever et al., 2014; Bahdanau et al., 2015; Vaswani et al., 2017) is the current de facto approach for machine translation, using pre-trained LMs as translators via prompting is appealing in several aspects.",
"For example, for the method described in this paper, supporting a new translation direction with a pre-trained LM occupies disk spaces below 20M, which is much smaller than training a separate neural machine translation model, where the model size is typically larger than 60M per language pair for the Transformer architecture.",
"2 Furthermore, the pre-trained LM also retains the ability to perform other downstream tasks, which is an important characteristic that has not been validated available on neural machine translation models.",
"However, it is challenging to leverage pre-trained LMs to translation tasks via prompting.",
"First, finding an appropriate prompt for a translation task is not trivial and requires specific designs (Brown et al., 2020; Gao et al., 2020; Li and Liang, 2021; Lester et al., 2021).",
"Second, the prompting method with a single prompt may be sub-optimal for steering pre-trained LMs to translation tasks, as there is a clear discrepancy between the objectives of translation and pre-training.",
"Translation imposes strict semantic equivalence and language space constraint, in which a source sentence must translate to a semantically equivalent sentence in the target language space.",
"As the objective of pretraining is usually to reconstruct parts of the input sentence (Radford et al., 2018; Devlin et al., 2019), the generation of a pre-trained LM conditioned on a source sentence will likely be in the source language space with non-equivalent semantics.",
"Therefore, using a single prompt to guide the LM for mitigating both the semantic and language gap is likely to be sub-optimal.",
"Third, prevalent 2 Assume using the transformer-base setting with a vocabulary size of 32K.",
"generative LMs such as GPTs use a decoder-only architecture (Radford et al., 2018), which is unidirectional and may be sub-optimal for encoding source sentences (Devlin et al., 2019).",
"While re-cent works in prompting like prefix-tuning (Li and Liang, 2021) or prompt tuning (Lester et al., 2021) alleviate the first challenge by introducing differentiable continuous prompts, the last two challenges remain to be addressed.",
"In this paper, we present Multi-Stage Prompting (MSP) for addressing the challenges of steering pre-trained language models to translation tasks.",
"MSP encapsulates the idea of breaking translation tasks into simpler consecutive stages, allowing the pre-trained LM to learn smoother transitions to translation tasks by providing different prompts at different stages.",
"For GPT-style pre-trained LMs, we design a three-stage prompting scheme for modeling the translation process, which consists of an encoding stage , a re-encoding stage , and a decoding stage .",
"Specifically, the pre-trained LM focuses on learning source representations at the encoding stage and learns refined bidirectional representations by re-encoding source sentences at the re-encoding stage.",
"Therefore, the LM can produce better translations with refined source representations at the decoding stage.",
"Following prefix-tuning (Li and Liang, 2021) and prompt tuning (Lester et al., 2021), we use independent trainable continuous prompts at different stages, which are learned through back-propagation.",
"The difference between basic (single-stage) prompting and multi-stage prompting is illustrated in Figure",
"1. We demonstrate the effectiveness of our method with a multilingual GPT (mGPT) model on Romanian-English, English-German, and English-Chinese translation tasks.",
"Experiments verify that compared with prompt tuning or prefix-tuning, MSP can significantly improve the translation performance of pre-trained LMs.",
"Our method improves the translation performance of pre-trained language models via prompt tuning and prefix-tuning by 18.6 and 4.1 BLEU points on average over the three translation tasks, respectively, suggesting that MSP is a more effective prompting method for translation tasks.",
"Prompting is an approach of using an LM to perform downstream tasks by adding extra information for the LM to condition during its generation (Lester et al., 2021).",
"This extra information, also known as a prompt , plays an important role in prompting methods and is often prepended to LM's input for better control of its generation.",
"Depending on the form of prompts, prompting methods can be divided into two categories: using textual prompts or using continuous prompts.",
"Textual prompts are typically composed of natural language tokens.",
"As a representative approach of textual prompts, Brown et al. (2020) use manually designed prompts to steer GPT-3's generation.",
"A typical prompt used in GPT-3 consists of a task description and a few task-specific examples.",
"Gao et al. (2020) and Shin et al. (2020) propose different automatic methods to generate textual prompts.",
"Textual prompts are typically understandable by humans.",
"However, Shin et al. (2020) indicate that automatically generated textual prompts may lack interpretability.",
"Continuous prompts, which consist of a sequence of continuous vectors, have gained increasing popularity recently.",
"For example, in (Li and Liang, 2021), the continuous prompts consist of a sequence of key-value pairs (also called prefixes).",
"Lester et al. (2021) propose a simplified version of continuous prompts, which consists of virtual 6132 tokens that are only added to the embedding layer.",
"Compared with textual prompts, using continuous prompts is generally more powerful but less interpretable (Lester et al., 2021).",
"In this paper, we use GPT (Radford et al., 2018, 2019; Brown et al., 2020) as the backbone LM for machine translation tasks.",
"GPTs are a series of causal language models based on the Transformer architecture (Vaswani et al., 2017).",
"To be more suitable for translation tasks that involve multiple languages, we introduce a multilingual GPT (mGPT) model instead of using a standard GPT-2 model.",
"3 The main difference between mGPT and GPT-2 is the training data.",
"mGPT is trained on the mC4 dataset (Xue et al., 2021), which is a multilingual dataset covering over 101 languages.",
"For further details about mGPT, please refer to Appendix A.1.",
"Let z = [ z 1 , . . . , z n ] be a sequence of tokens, mGPT uses an autoregressive Transformer network to model the conditional probability P ( z t | z <t ) , where t [1 , n ] and z <t = [ z 1 , . . . , z t 1 ] .",
"We use f LM ( z , H ; ) to denote the Transformer network, where z is a word embedding, H is a sequence of past activations, and denotes the parameters of the Transformer network.",
"Initially, the inputs to the Transformer network are z 1 and H 0 , where H 0 is an empty sequence.",
"The Transformer network produces two outputs: the final output g 1 R d and the activation h 1 R 2 N d , 4 where d denotes the hidden size of the Transformer network and N is the number of layers of the Transformer network.",
"For subsequent inputs z t and H t 1 , where H t 1 = [ h 1 , . . . , h t 1 ] , the computation is formally described as g t , h t = f LM ( e z t , H t 1 ) , (1) where e z t denotes the word embedding of z t .",
"To make the notation simpler, we use the following equation to denote the repeated application of f LM over a sequence z i : j = [ z i , . . . , z j ] given past activations A : G i : j , H i : j = f LM ( Z i : j , A ) , (2) where Z i : j = [ e z i , . . . , e z j ] , G i : j = [ g i , . . . , g j ] , and H i : j = [ h i , . . . , h j ] .",
"3 We release our checkpoint at https://huggingface.",
"co/THUMT/mGPT .",
"We propose multi-stage prompting (MSP), a simple and lightweight method for steering pre-trained LMs to translation tasks.",
"We first describe the concept of deep continuous prompts in Section 3.1.",
"Then we detail the stages and training objective in Section 3.2 and Section 3.3, respectively.",
"Finally, we describe the reparameterization of deep continuous prompts in Section 3.4.",
"We adopt continuous prompts (Li and Liang, 2021; Lester et al., 2021) instead of using textual prompts in our method.",
"Using continuous prompts allows learning through differentiable methods like back-propagation (Lester et al., 2021).",
"To be specific, we use deep continuous prompts which are in the same form as in (Li and Liang, 2021).",
"Formally, a prompt P is a sequence of L continuous vectors [ p 1 , . . . , p L ] .",
"Each vector p i (1 i L ) is a concatenation of key-value pairs in all N Transformer layers, which directly affect the computation of every attention layer.",
"Therefore, the dimension of p i is 2 N d .",
"We give an illustration of conditioning on a deep continuous prompt in Figure",
"2. 3.2 Stages To effectively mitigate the semantic and language gap between the pre-training and translation, we 6133 p ( e ) 1 p ( e ) 2 mGPT x 1 x 2 x 3 x 4 x 5 h ( e ) 1 h ( e ) 2 h ( e ) 3 h ( e ) 4 h ( e ) 5 mGPT x 1 x 2 x 3 x 4 x 5 p ( r ) 1 p ( r ) 2 h ( r ) 1 h ( r ) 2 h ( r ) 3 h ( r ) 4 h ( r ) 5 p ( d ) 1 p ( d ) 2 mGPT y 0 y 1 y 2 y 3 y 4 y 1 y 2 y 3 y 4 </S> The Encoding Stage The Re-Encoding Stage The Decoding Stage Figure 3: Detailed computations involved in the multi-stage prompting for machine translation tasks.",
"propose multi-stage prompting which divides the procedure of using pre-trained LMs as translators into three separate stages: the encoding, the re-encoding, and the decoding stages.",
"Given different prompts at different stages, the pre-trained LM is expected to behave differently during each stage and is more capable of generating translations.",
"Given a source sentence x = [ x 1 , . . . , x S ] and a target sentence y = [ y 1 , . . . , y T ] , the details of the three stages are described as follows: The Encoding Stage.",
"At the encoding stage, the pre-trained LM encodes the source sentence x into a sequence of activations H 1: S e by using an encoding stage prompt P e .",
"This procedure is the same as basic prompting.",
"Formally, it can be described as follows: G 1: S e , H 1: S e = f LM ( X 1: S , P e ) .",
"The Re-encoding Stage.",
"At the re-encoding stage, the pre-trained LM produces fine-grained representations of the source sentence by re-encoding x given past activations H 1: S e and a re-encoding stage prompt P r , which allows each representation to condition on all words in x .",
"This procedure can be described as G 1: S r , H 1: S r = f LM ( X 1: S , (cid:74) P r ; H 1: S e (cid:75) ) , (5) where (cid:74) P r ; H 1: S e (cid:75) denotes the concatenation of two sequences P r and H 1: S e .",
"It is also possible to employ more than one re-encoding stage, allowing the pre-trained LM to obtain further refined representations of the source sentence.",
"The Decoding Stage.",
"Finally, we obtain the hidden vectors G 1: T d for predicting the probability of the target sentence y at the decoding stage, given the refined source representations H 1: S r and a decoding stage prompt P d : G 1: T d , H 1: T d = f LM ( Y 1: T , (cid:74) P d ; H 1: S r (cid:75) ) .",
"More precisely, the reparameterization of the three prompts are as follows: P e = max( e , 1 . 0) e , (8) P r = max( r , 1 . 0) r , (9) P d = max( d , 1 . 0) d , (10) 6134 where e R 2 N d , r R 2 N d , and d R 2 N d .",
"(6) Figure 3 gives a detailed illustration of MSP.",
"By dividing the translation process into multiple stages and applying different prompts, we expect the pre-trained LM model can generate better translations.",
"We use the cross-entropy loss for learning prompts.",
"Given G 1: T d = [ g ( d ) 1 , . . . , g ( d ) T ] in Eq.",
"(6), the training objective is formally described as follows: L = 1 TT (cid:88) t =1 log P ( y t | y <t , x ) = 1 TT (cid:88) t =1 log exp ( e T z t g ( d ) t ) (cid:80) | V | i =1 exp ( e T z i g ( d ) t ) .",
"(7) Note that the parameters of the pre-trained LM are fixed during training.",
"Li and Liang (2021) suggest that using a neural network to reparameterize continuous prompts is more robust to different choices of hyperparameters.",
"In contrast to their approach which uses an MLP network to reparameterize continuous prompts, we introduce a much simpler scaled reparameterization method, in which a continuous prompt is reparam-eterized as a product of a learnable scalar and an embedding.",
"e , r , and d are initialized to 1.0 at the beginning of training.",
"Therefore, the set of trainable parameters in our method is = { e , r , d , e , r , d } , which contains much less tunable parameters than an MLP network.",
"Scaled reparameterization enables directly adjusting the value of prompts by a tunable scaling factor, leading to a much faster convergence without loss of performance.",
"Further analysis is presented in Section 4.7.",
"Datasets We conduct experiments on Romanian-English (Ro-En), English-German (En-De), and English-Chinese (En-Zh) translation tasks to verify our proposed method.",
"For the Ro-En translation task, we used the WMT16 Romanian-English dataset, which consists of 0.6M bilingual sentence pairs and 2M back-translated sentence pairs.",
"5 We used newsdev2016 as the development set and new-stest2016 as the test set.",
"For the En-De translation task, we used the WMT14 English-German dataset, which consists of 4.5M sentence pairs.",
"The development set is newstest2013 and the test set is newstest2014 .",
"For the En-Zh translation task, we used the WMT20 English-Chinese dataset as the training corpus, which consists of 28M sentence pairs.",
"The development set is newstest2019 and the test set is newstest2020 .",
"The details of preprocessing and postprocessing are given in Appendix A.2.",
"Metric.",
"We used case-sensitive BLEU (Pap-ineni et al., 2002) as the evaluation metric.",
"The BLEU score is calculated using the SACREBLEU toolkit (Post, 2018).",
"6 Baselines.",
"We used the mGPT model as the backbone LM in all our experiments, which contains 560M parameters.",
"We compare our method with the following prompting methods: 7 Prompt tuning (Lester et al., 2021).",
"5 http://data.statmt.org/rsennrich/wmt16_ backtranslations/ro-en 6 Signature: nrefs:1|case:mixed|eff:no|tok:{13a,zh}| smooth:exp|version:2.0.0 7 In our preliminary experiments, we also experimented with the few-shot approach as described in (Brown et al., 2020).",
"However, we found mGPT often failed to generate meaningful translations.",
"Prefix-tuning (Li and Liang, 2021).",
"A prompting method that uses deep continuous prompts, which prepend virtual tokens to all key-value pairs in attention layers of pre-trained LMs.",
"We use an MLP network to reparameterize a continuous prompt during training as suggested in (Li and Liang, 2021).",
"Implementations.",
"All our models are trained on a machine with 8 RTX 3090Ti GPUs.",
"For all prompting methods, we set the prompt length to 128.",
"For the training, we use the Glorot uniform initilalizer (Glorot and Bengio, 2010) to initialize tunable parameters unless otherwise noted.",
"We use Adam (Kingma and Ba, 2015) ( 1 = 0.9, 2 = 0.98 and = 1 10 9 ) as the optimizer with a batch size of roughly 32K tokens.",
"We use the same learning rate schedule as described in (Vaswani et al., 2017).",
"The number of warmup steps is set to 4K.",
"We set the maximum learning rate to 0.02 for prompt tuning and MSP, and 7e-4 for prefix-tuning.",
"8 We train prompts for a total of 80K steps for prompt tuning and prefix-tuning, and 40K steps for MSP.",
"For the inference, we use the beam search algorithm to obtain translation from the mGPT model, and the beam size is set to 4.",
"The length penalty is determined by the results evaluated on the development set.",
"We set the length penalty to 1.0 for the En-Zh translation task and 0.0 for other translation tasks.",
"We implement our models on top of the THUMT (Tan et al., 2020) toolkit and the Transformers library (Wolf et al., 2020).",
"Table 1 shows the results for the Ro-En, En-De, and En-Zh translation tasks.",
"As the most parameter-efficient among the three prompting methods, prompt tuning introduces only 131K parameters during training for each translation task.",
"However, it only achieves 9.4 BLEU points on average over the three translation tasks.",
"Lester et al. (2021) indicate that language model capacity is a key ingredient for prompt tuning to succeed.",
"As mGPT is a pre-trained LM with only 560M parameters, the results coincide with the conclusion of Lester et al. (2021).",
"Prefix-tuning, which uses deep continuous prompts, achieves an average of 23.9 BLEU points over the three translation tasks.",
"The results indicate that using deep continuous prompts is beneficial 8 We found using a large learning rate for prefix-tuning would result in unstable training.",
"for steering mGPT to translation tasks.",
"However, introducing deep continuous prompts inevitably requires more free parameters.",
"The MLP network used in prefix-tuning introduces about 26M parameters for each translation task during training in our experiments.",
"Finally, MSP achieves 28.0 BLEU points on average over the three translation directions and outperforms prompt tuning and prefix-tuning by 18.6 and 4.1 BLEU points, respectively.",
"MSP introduces 19M parameters for each translation task during training, which is more than prompt tuning but less than prefix-tuning.",
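A back-of-the-envelope check makes these counts concrete. The sketch below assumes 24 layers and a hidden size of 1024; these dimensions are inferred to be consistent with the reported figures, not stated in this section.

```python
# Rough accounting of the trainable-parameter counts; the model dimensions
# (24 layers, hidden size 1024) are assumptions consistent with the figures
# reported above, not values given in this section.
layers, hidden, plen = 24, 1024, 128

prompt_tuning = plen * hidden             # shallow prompt: 131,072 ~ 131K
msp = 3 * plen * 2 * layers * hidden      # 3 deep stage prompts (K and V): ~18.9M
print(f"{prompt_tuning:,} {msp:,}")
```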
"MSP explicitly divides the translation process using mGPT into separate stages, which are not present in prompt tuning and prefix-tuning.",
"The results suggest that MSP is more effective in instructing pre-trained LMs to perform translation than prompt tuning and prefix-tuning.",
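A conceptual sketch of this staged pipeline follows; all names (lm, prompts, decode) are illustrative placeholders, not the authors' API.

```python
# Conceptual sketch of multi-stage prompting: the same frozen LM is run in
# three stages, each steered by its own continuous prompt.
def msp_translate(lm, prompts, src_ids, decode):
    h_enc = lm(prompt=prompts["encode"], inputs=src_ids)       # stage 1: encode source
    h_re = lm(prompt=prompts["re_encode"], inputs=h_enc)       # stage 2: refine encoding
    return decode(lm, prompt=prompts["decode"], context=h_re)  # stage 3: generate target
```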
"Table 2 gives the results of mT5-XXL (Zhang et al., 2021), CPM-2 (Zhang et al., 2021), Ernie 3.0 (Sun et al., 2021a), and mGPT on the WMT20 En-Zh translation task.",
"Except for mGPT, other LMs are based on the encoder-decoder architecture.",
"Despite using a much smaller pre-trained LM, with about 5% of the parameters of mT5-XXL, CPM-2, and Ernie 3.0, MSP achieves the best performance on the En-Zh translation task.",
"Therefore, we show that MSP is an efficient and effective approach to steering pre-trained LMs to translation tasks.",
"We compare our method with the state-of-the-art Transformer NMT model (Vaswani et al., 2017) 9 on the TedTalks dataset (Blackwood et al., 2018) and the WMT14 English-German dataset.",
"The TedTalks dataset is an English-centric multilingual corpus covering 59 languages with around 3K to 200K sentence pairs per language pair.",
"For the sake of simplicity, we only report results for 5 selected languages that contain more than 150K sentence pairs.",
"The Transformer model, however, is trained on all available parallel sentences covering all 59 languages, making it a strong multilingual NMT baseline.",
"For mGPT with MSP, we individually train the model on each language pair following the same procedure described in this paper.",
"The results for the X→En and En→X directions are shown in Table 3.",
"Although mGPT with MSP is independently trained on each language pair, the model still outperforms the strong multilingual NMT baseline by 3.4 and 3.9 BLEU points in the X→En and En→X directions, respectively.",
"The results demonstrate that using pre-trained LMs as translators with an appropriate prompting method has the potential to surpass a strong Transformer NMT model.",
"Table 4 shows the comparison between the Transformer and our mGPT model with MSP on the En-De translation task.",
"(Footnote: we used the transformer-big setting for the Transformer baseline.)",
"While there is still a noticeable performance gap between Transformer and mGPT with MSP, using mGPT as a translator with MSP is much more parameter-efficient than training a separate NMT model.",
"Supporting En-De translation with mGPT introduces only 19M parameters with the MSP method.",
"In comparison, the model size of the Transformer model for En-De translation is 450M.",
"While the mGPT model can perform other downstream tasks when provided with different prompts, such abilities are absent from the task-specific Transformer NMT model.",
"Besides being efficient in disk space, learning prompts for the En-De translation task is also faster than training a separate NMT model.",
"It takes 21 hours to train prompts for MSP, versus 72 hours to train a Transformer model.",
"Figure 4 shows the effect of prompt length for prefix-tuning and MSP.",
"We omit the comparison to prompt tuning because of its inferior performance.",
"We found that using longer prompts generally leads to better performance for both prefix-tuning and MSP, but with diminishing returns.",
"This finding is consistent with previous studies (Li and Liang, 2021; Lester et al., 2021).",
"Furthermore, MSP consistently outperforms prefix-tuning when using the same prompt length.",
"Even MSP with a prompt length of 64 performs better than prefix-tuning with a prompt length of 256 (19.0 vs. 18.2).",
"The results further confirm that MSP is a better prompting method than prefix-tuning for steering pre-trained LMs to translation tasks.",
"(Figure 4: Comparison between MSP and prefix-tuning on the WMT14 En-De translation task with different prompt lengths; BLEU at lengths 64/128/192/256 is 19.0/21.2/22.2/22.4 for MSP and 14.8/17.5/18.2/18.2 for prefix-tuning.)",
"Regarding inference time, we found that longer prompts do not significantly affect the decoding speed on GPUs, as the computation of the attention layers is highly parallel; this is also consistent with the findings of Li and Liang (2021).",
"Table 5 shows the comparison of using different stage settings on the WMT14 En-De and the WMT20 En-Zh translation tasks.",
"For single-stage prompting, we also adopt scaled reparameterization instead of MLP reparameterization for a fair comparison.",
"On the WMT14 En-De translation task, using single-stage prompting achieves 17.9 BLEU points.",
"By comparison, explicitly separating the encoding and decoding stages improves the translation performance over single-stage prompting by 2.3 BLEU points, which indicates the importance of differentiating stages.",
"Adding a re-encoding stage further improves the translation performance by 1.0 BLEU point, suggesting that the re-encoding stage is effective.",
"Adding a second re-encoding stage further improves the translation performance by 0.6 BLEU points.",
"Although adding stages introduces more trainable parameters, it should be noted that sharing a single prompt for the encoding/re-encoding/decoding stages also improves over the single-stage prompting by 1.9 BLEU points.",
"The results suggest that most improvements are attributed to the explicit separation of stages rather than increased parameters.",
"Adding more stages generally slows the training speed.",
"However, we do not observe a notable drop in inference speed, as the re-encoding stages are computed once, in parallel, during inference.",
"On the En-Zh translation task, the results are consistent with the results on the En-De translation task.",
"Therefore, we conclude that using more stages helps improve the translation quality.",
"Figure 5 shows the comparison between MSP using scaled reparameterization and without using reparameterization.",
"Using scaled reparameterization converges faster than without using reparameterization.",
"These two methods achieve nearly the same translation performance once training has converged.",
"(Table 6: Language distribution of free generations from mGPT conditioned on prompts learned by different prompting methods on the WMT20 En-Zh dataset. Without a prompt: en 16%, ru 10%; prefix-tuning: zh 80%, ja 12%; MSP encoding stage: en 51%, la 14%; MSP re-encoding stage: en 24%, la 17%; MSP decoding stage: zh 91%, ja 9%.)",
"As a result, using scaled reparameterization can make the convergence much faster and reduce the total training time.",
"Knowledge.",
"As continuous prompts are learned using bilingual sentence pairs, an interesting question arises: Is the translation knowledge stored in the continuous prompts or the pre-trained LM?",
"To answer this question, we discard the prompts and feed the mGPT model a concatenation of a parallel sentence pair as an input, and calculate the cosine similarities between the source and target hidden activations on each mGPT layer.",
"We found that although the prompts are not given, the nearest pairs of tokens between the source and target language frequently turn out to coincide with bilingual alignments.",
"This finding reveals to some extent that the translation knowledge mainly resides in the pre-trained LM instead of the learned continuous prompts, while the prompts play a role in guiding the model to perform translation during generation.",
"Examples are given in Appendix A.3.",
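A minimal sketch of this probe, assuming hidden states for the concatenated sentence pair have already been extracted from one mGPT layer:

```python
# Sketch of the alignment probe: for each source token, find the most similar
# target token by cosine similarity of hidden states from one layer.
import torch.nn.functional as F

def nearest_target_tokens(src_h, tgt_h):
    # src_h: [S, d] and tgt_h: [T, d] activations for a concatenated pair
    sims = F.cosine_similarity(src_h.unsqueeze(1), tgt_h.unsqueeze(0), dim=-1)
    return sims.argmax(dim=1)  # index of the closest target token per source token
```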
"Bottleneck.",
"We study the bottleneck of the current prompting method.",
"We train a separate Transformer encoder and an adapter network that directly maps a source sentence into a deep continuous prompt, leaving the mGPT model serving only as a decoder.",
"This model introduces 378M tunable parameters and achieves 25.9 BLEU points on the WMT14 En-De translation task.",
"Compared with 21.2 BLEU points by MSP, the result shows that there is still room to advance the translation performance of pre-trained LM by improving the prompting method, such as using dynamic prompts (Liu et al., 2021) for each input sentence.",
"However, as translation knowledge may come from the pre-trained LM, the translation performance may be bottlenecked by the capability of the backbone LM.",
"Interpretability.",
"We did not find our learned prompts to be interpretable, which agrees with the findings of Shin et al. (2020) and Lester et al. (2021).",
"However, we do observe prompts of different stages changing the behavior of mGPT significantly.",
"Specifically, we sample 100 examples generated from mGPT by providing prompts of different stages learned on the English-Chinese translation task and identify the language ids of generated texts using the langid toolkit.",
"The top-2 identified language distributions of each generation are shown in Table 6.",
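A sketch of this analysis with the langid toolkit (the helper function is illustrative):

```python
# Sketch of the language-distribution analysis using the langid toolkit.
from collections import Counter
import langid

def top2_language_distribution(generations):
    counts = Counter(langid.classify(text)[0] for text in generations)
    total = sum(counts.values())
    return {lang: f"{100 * n / total:.0f}%" for lang, n in counts.most_common(2)}
```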
"Without providing prompts, mGPT generates a random sentence from a random language.",
"Given continuous prompts learned by prefix-tuning, mGPT mostly generates text related to Chinese.",
"For MSP, it is noticeable that there is a transition from English to Chinese.",
"mGPT generates English-related text given the encoding stage prompt.",
"The distribution of languages becomes smoother when providing the prompt at the re-encoding stage.",
"Finally, mGPT generates Chinese texts dominantly given the decoding stage prompt.",
"The results coincide with our intuition that MSP helps the pre-trained LM to learn smoother transitions to the translation task.",
"Prompting.",
"Brown et al. (2020) propose to use a task description and a few examples to adapt the GPT-3 model to downstream tasks, which is referred to as in-context learning.",
"Their prompts are manually designed.",
"Gao et al. (2020) present LM-BFF for automatic prompt generation.",
"They use the T5 model (Raffel et al., 2020) to generate templates for prompting pre-trained LMs.",
"Li and Liang (2021) propose prefix-tuning, which uses continuous vectors as prompts.",
"These prompts are trained using task-specific data and optimized through back-propagation.",
"Lester et al. (2021) propose prompt tuning, which is similar to prefix-tuning but with fewer trainable parameters.",
"Our method is also based on prompting.",
"We use continuous prompts for steering PLMs to translation tasks.",
"Unlike Li and Liang (2021) and Lester et al. (2021) who present general frameworks, our method is focused on improving the translation performance of pre-trained LMs.",
"Using Pre-trained Models as Translators.",
"Stickland et al. (2021) investigate using BART and mBART models for machine translation tasks; their approach relies on adapter networks and fine-tuning parts of pre-trained LMs.",
"Guo et al. (2020) build a non-autoregressive NMT model by using a source BERT model as the encoder and a target BERT as the decoder with adapter layers.",
"Sun et al. (2021b) propose grafting a source BERT model and a target GPT model for translation tasks.",
"Bapna and Firat (2019) propose using small adapter layers to adapt a base NMT model to new translation tasks.",
"All these methods are adapter-based, injecting additional tunable modules into the pre-trained models.",
"As a result, the pre-trained models lose the ability to perform mixed-task inference.",
"Our approach is based on prompting, which only uses prompts for steering the pre-trained LMs to translation tasks.",
"Zhang et al. (2021) investigate using prompt tuning for steering CPM-2 model to the WMT20 English-Chinese translation task.",
"Furthermore, their approach applies to encoder-decoder pre-trained LMs, while ours applies to decoder-only pre-trained LMs.",
"We have presented multi-stage prompting, a method for making pre-trained language models better translators.",
"Experiments show that with multi-stage prompting, pre-trained LMs can generate better translations, showing the potential of using pre-trained LMs for translation tasks.",
"This work was supported by the National Key R&D Program of China (No. 2018YFB1005103), the National Natural Science Foundation of China (No. 62006138, No. 61925601), Institute Guo Qiang at Tsinghua University, and Huawei Noah's Ark Lab.",
"We thank Kehai Chen for the discussion of this work and all anonymous reviewers for their valuable comments and suggestions on this work."
]
| [
"abstain",
"method",
"abstain",
"method",
"method",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"method",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"method",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"method",
"abstain",
"other",
"other"
]
|
[
"How do typological properties such as word order and morphological case marking affect the ability of neural sequence models to acquire the syntax of a language?",
"Cross-linguistic comparisons of RNNs' syntactic performance (e.g., on subject-verb agreement prediction) are complicated by the fact that any two languages differ in multiple typological properties, as well as by differences in training corpus.",
"We propose a paradigm that addresses these issues: we create synthetic versions of English, which differ from English in one or more typological parameters, and generate corpora for those languages based on a parsed English corpus.",
"We report a series of experiments in which RNNs were trained to predict agreement features for verbs in each of those synthetic languages.",
"Among other findings, (1) performance was higher in subject-verb-object order (as in English) than in subject-object-verb order (as in Japanese), suggesting that RNNs have a recency bias; (2) predicting agreement with both subject and object (polypersonal agreement) improves over predicting each separately, suggesting that underlying syntactic knowledge transfers across the two tasks; and (3) overt morphological case makes agreement prediction significantly easier, regardless of word order.",
"The strong performance of recurrent neural networks (RNNs) in applied natural language processing tasks has motivated an array of studies that have investigated their ability to acquire natural language syntax without syntactic annotations; these studies have identified both strengths (Linzen et al., 2016; Giulianelli et al., 2018; Gulordava et al., 2018; Kuncoro et al., 2018; van Schijndel and Linzen, 2018; Wilcox et al., 2018) and limitations (Chowdhury and Zamparelli, 2018; Marvin and Linzen, 2018; Wilcox et al., 2018).",
"Most of the work so far has focused on English, a language with a specific word order and relatively poor morphology.",
"Do the typological properties of a language affect the ability of RNNs to learn its syntactic regularities?",
"Recent studies suggest that they might.",
"Gulordava et al. (2018) evaluated language models on agreement prediction in English, Russian, Italian and Hebrew, and found worse performance on English than the other languages.",
"In the other direction, a study on agreement prediction in Basque showed substantially worse average-case performance than reported for English (Ravfogel et al., 2018).",
"Existing cross-linguistic comparisons are difficult to interpret, however.",
"Models were inevitably trained on a different corpus for each language.",
"The constructions tested can differ across languages (Gulordava et al., 2018).",
"Perhaps most importantly, any two natural languages differ in a number of typological dimensions, such as morphological richness, word order, or explicit case marking.",
"This paper proposes a controlled experimental paradigm for studying the interaction of the inductive bias of a neural architecture with particular typological properties.",
"Given a parsed corpus for a particular natural language (English, in our experiments), we generate corpora for synthetic languages that differ from the original language in one of more typological parameters (Chomsky, 1981), following Wang and Eisner (2016).",
"In a synthetic version of English with a subject-object-verb order, for example, sentence (1-a) would be transformed into (1-b):",
"(1-a) they say the broker took them out for lunch frequently.",
"We then train a model to predict the agreement features of the verb; in the present paper, we focus on predicting the plurality of the subject and the object (that is, whether they are singular or plural).",
"The subject plurality prediction problem for (1-b), for example, can be formulated as follows: (2) The man the apples ⟨verb⟩ ⟨singular/plural subject?⟩",
"We illustrate the potential of this approach in a series of case studies.",
"We first experiment with polypersonal agreement, in which the verb agrees with both the subject and the object (Section 3).",
"We then manipulate the order of the subject, the object and the verb (Section 4), and experiment with overt morphological case (Section 5).",
"For a preview of our synthetic languages, see Figure 1.",
"Setup: Synthetic Language Generation. We used an expert-annotated corpus, to avoid potential confounds between the typological parameters we manipulated and possible parse errors in an automatically parsed corpus.",
"As our starting point, we took the English Penn Treebank (Marcus et al., 1993), converted to the Universal Dependencies scheme (Nivre et al. 2016) using the Stanford converter (Schuster and Manning, 2016).",
"We then manipulated the tree representations of the sentences in the corpus to generate parametrically modified English corpora, varying in case systems, agreement patterns, and order of core elements.",
"For each parametric version of English, we recorded the verb-argument relations within each sentence, and created a labeled dataset.",
"We exposed our models to sentences from which one of the verbs was omitted, and trained them to predict the plurality of the arguments of the unseen verb.",
"The following paragraph describes the process of collecting verb-argument relations; a detailed discussion of the parametric generation process for agreement marking, word order and case marking is given in the corresponding sections.",
"We have made our synthetic language generation code publicly available.",
"Argument Collection. We created a labeled agreement prediction dataset by first collecting verb-argument relations from the parsed corpus.",
"We collected nouns, proper nouns, pronouns, adjectives, cardinal numbers and relative pronouns connected to a verb (identified by its part-of-speech tag) with an nsubj , nsubjpass or dobj dependency edge, and recorded the plurality of those arguments.",
"Verbs that were the head of a clausal complement without a subject ( xcomp dependencies) were excluded.",
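A sketch of this collection step follows; the token attributes are a hypothetical parse interface, and the xcomp check is an approximation of the exclusion just described.

```python
# Sketch of argument collection over a UD-style parse; tokens are assumed to
# carry .pos, .deprel, .head, and .plurality attributes (hypothetical API).
CORE_EDGES = {"nsubj", "nsubjpass", "dobj"}
ARG_POS = {"NOUN", "PROPN", "PRON", "ADJ", "NUM"}

def collect_arguments(sentence):
    relations = []
    for tok in sentence:
        if (tok.deprel in CORE_EDGES and tok.pos in ARG_POS
                and tok.head.pos == "VERB"
                and tok.head.deprel != "xcomp"):   # skip subject-less complements
            relations.append((tok.head, tok.deprel, tok.plurality))
    return relations
```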
"We recorded the plurality of the dependents of the verb regardless of whether the tense and person of the verb condition agreement in English (that is, not only for third-person present-tense verbs); the generation code is available at https://github.com/Shaul1321/rnn_typology .",
"(Table 1: Results of the polypersonal agreement experiments; Joint refers to multitask prediction of subject and object plurality. Subject task: subject accuracy 94.7±0.3. Object task: object accuracy 88.9±0.26, object recall 81.8±1.4. Joint: subject accuracy 95.7±0.23, object accuracy 90.0±0.1, object recall 85.4±2.3.)",
"(Table 2: Case suffixes used in the experiments. Subject: -kar singular, -kon plural; Object: -kin singular, -ker plural; Indirect Object: -ken singular, -kre plural. Verbs are marked by a concatenation of the suffixes of their corresponding arguments.)",
"For relative pronouns that function as subjects or objects, we recorded the plurality of their referent; for instance, in the phrase Treasury bonds, which pay lower interest rates , we considered the verb pay to have a plural subject.",
"Prediction Task We experimented both with prediction of one of the arguments of the verb (subject or object), and with a joint setting in which the model predicted both arguments of each verb.",
"Consider, for example, the prediction problem (3) (the verb in the original sentence was gave ): (3) The state (cid:104) verb (cid:105) CenTrust 30 days to sell the Rubens .",
"In the joint prediction setting the system is expected to make the prediction (cid:104) subject: singular, object: plural (cid:105) .",
"For each argument, the model predicts one of three categories: SINGULAR , PLURAL or NONE .",
"The NONE label was used in the object prediction task for intransitive verbs, which do not have an object; it was never used in the subject prediction task.",
"Model We used bidirectional LSTMs with 150 hidden units.",
"The bidirectional LSTM's representation of the left and right contexts of the verb was fed into a multilayer perceptron (MLP) with two hidden layers of sizes 100 and 50.",
"We used independent MLPs to predict subject and object plurality.",
"To capture morphological information, words were represented as the sum of the word embedding and embeddings of the character n-grams that made up the word.",
"The model (including the embedding layer) was trained end-to-end using the Adam optimizer (Kingma and Ba, 2014).",
"For each of the experiments described in the paper, we trained four models with different random initializations; we report averaged results alongside standard deviations.",
"In languages with polypersonal agreement, verbs agree not only with their subject (as in English), but also with their direct object.",
"Consider the following Basque example: (4) Kutxazain-ek (cashier-PL.ERG) bezeroa-ri (customer-SG.DAT) liburu-ak (book-PL.ABS) eman dizkiote (gave, they-them-to-her/him): 'The cashiers gave the books to the customer.'",
"Information about the grammatical role of certain constituents in the sentence may disambiguate the function of others; most trivially, if a word is the subject of a given verb, it cannot simultaneously be its object.",
"The goal of the present experiment is to determine whether jointly predicting both object and subject plurality improves the overall performance of the model.",
"Corpus Creation In sentences with multiple verbs, agreement markers on verbs other than the prediction target could plausibly help predict the features on the target verb.",
"In a preliminary experiment, we did not observe clear differences between different verb marking schemes (e.g., avoiding marking agreement on verbs other than the prediction target).",
"We thus opted for full marking in all experiments: verbs are modified with suffixes that encode the number of all their arguments (see Figure 1).",
"The suffixes we used for verbs are a concatenation of the respective case suffixes of their arguments (Table 2).",
"For consistency, we remove plurality markers from English.",
"(Footnote: specifically, let $E_t$ and $E_{ng}$ be word and n-gram embedding matrices, and let $t_w$ and $NG_w$ be the word and the set of all n-grams of lengths 1 to 5, for a given word $w$; the final vector representation of $w$ is $e_w = E_t[t_w] + \sum_{ng \in NG_w} E_{ng}[ng]$.)",
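A minimal PyTorch sketch of this representation, assuming vocabulary dictionaries that map strings to indices:

```python
# Sketch of the word representation: word embedding plus the sum of
# embeddings of its character n-grams (lengths 1 to 5).
import torch
import torch.nn as nn

class NGramWordEmbedder(nn.Module):
    def __init__(self, word_vocab, ngram_vocab, dim):
        super().__init__()
        self.word_vocab, self.ngram_vocab = word_vocab, ngram_vocab
        self.word_emb = nn.Embedding(len(word_vocab), dim)
        self.ngram_emb = nn.Embedding(len(ngram_vocab), dim)

    def forward(self, word):
        ngrams = {word[i:i + n] for n in range(1, 6)
                  for i in range(len(word) - n + 1)}
        e = self.word_emb(torch.tensor(self.word_vocab[word]))
        ids = [self.ngram_vocab[g] for g in ngrams if g in self.ngram_vocab]
        if ids:  # add the summed n-gram embeddings when any n-gram is known
            e = e + self.ngram_emb(torch.tensor(ids)).sum(dim=0)
        return e
```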
"(Footnote: the verb in Basque agrees with the indirect object as well.",
"In preliminary experiments, the recall of models trained on indirect object prediction was very low, due to the small number of indirect objects in the training corpus; we therefore do not include this task.)",
"Single Task Results. The basic results are summarized in Table 1. Recall is calculated as the proportion of the sentences with a direct object for which the model predicted either SINGULAR or PLURAL, but not NONE.",
"Since all verbs included in the experiment had a subject, subject recall was 100% and is therefore not reported.",
"Plurality prediction accuracy was higher for subjects than objects.",
"Recall for object prediction was 81.8%, indicating that in many sentences the model was unable to identify the direct object.",
"The lower performance on object plurality prediction is likely due to the fact that only about a third of the sentences contain a direct object.",
"This hypothesis is supported by the results of a preliminary experiment, in which the model was trained only on transitive sentences (with a direct object).",
"Transitive-only training led to a reversal of the pattern: object plurality was predicted with higher accuracy than subject plurality.",
"We conjecture that this is due to the fact that most noun modifiers in English follow the head, making the head of the object, which in general determines the plurality of the phrase, closer on average to the verb than the head of the subject (see Table 3 below).",
"The accuracy we report for subject prediction, 94.7%, is lower than the accuracy of over 99% reported by Linzen et al. (2016).",
"This may be due to one of several reasons.",
"First, our training set was smaller: 35,000 sentences in our treebank corpus compared to 121,000 in their automatically parsed corpus.",
"Second, sentences in the Wall Street Journal corpus may be more syntactically complex on average than sentences in Wikipedia, making it more challenging to identify the verb's arguments.",
"Finally, we predicted agreement in all tenses, whereas Linzen et al. (2016) limited their study to the present tense (where English does in fact show agreement); it may be the case that sentences with past tense verbs are on average more complex than those with present tense verbs, regardless of the corpus.",
"Multitask Training Accuracy was higher in the joint setting: polypersonal agreement prediction is easier for the model.",
"Subject prediction accuracy rose from 94.7% to 95.7%, object precision was slightly higher (90.0% compared to 88.9%), and object recall was significantly higher, increasing from 81.8% to 85.4%.",
"We hypothesize that supervision signals from the prediction of both arguments lead to more robust abstract syntactic representations that transfer across the two tasks (Enguehard et al., 2017); for example, the model may be better able to identify the head of a noun phrase, regardless of whether it is the subject or the object.",
"These findings suggest that when training on an auxiliary agreement prediction task in order to improve a language model's syntactic performance, additional supervision, in the form of predicting both subject and object, may be beneficial.",
"Languages vary in the typical order of the core elements of a clause: the subject, the object and the verb (Dryer, 2013).",
"For example, whereas in English the canonical order is Subject-Verb-Object (SVO, 'The priests are reading the book'), in Irish it is Verb-Subject-Object (VSO; Dillon and Ó Cróinín, 1961): (5) Léann ('read').",
"While there are six possible orderings of these three elements, in most human languages the subject precedes both the object and the verb: about 86.5% of the languages use either SOV or SVO orders, 9% of the languages use VOS order, and OVS and OSV languages are extremely rare (Tomlin, 1986).",
"To test whether RNNs have inductive biases favoring certain word orders over others, we created synthetic versions of English with all six possible orders of core elements.",
"While natural languages often allow at least a limited degree of word order flexibility, our experiments used a simplified setting in which word order was either completely fixed (e.g., always SVO) or fully flexible, where one of the six orders was selected uniformly at random for each sentence in the corpus (the same order is used for all of the clauses of the sentence).",
"Given a dependency parse for a sentence, we modulated the order of the subject and object nodes with respect to their verb.",
"When changing the position of an argument node, we moved the entire subtree rooted in that node, including verbs and other arguments in this subtree.",
"In the permutation process, we moved to the subject position not only nominal subjects ( nsubj and nsubjpass edges in UD), but also clausal subjects ( csubj edges).",
"Similarly, we moved to the object position not only nominal objects ( dobj edge), but also clausal complements ( ccomp and xcomp ).",
"We kept negations, adverbial modifiers, particles and auxiliaries in their original position with respect to the verb.",
"Other non-core dependents of the verb (i.e. not the subject or the object), such as prepositional phrases, were placed according to their original position relative to the verb.",
"For instance, in the clause the broker took them out for lunch , the phrase for lunch appeared directly following the verb and the arguments of the subtree in which it resides ( took , them , the broker ) in all word orders, reflecting its original position relative to the verb took (see Figure 1).",
"Relative pronouns and complementizers remained in their original position.",
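A minimal sketch of the core reordering operation, assuming the subject and object subtree spans have already been extracted from the dependency tree:

```python
# Sketch of the reordering step: emit the subject subtree, verb tokens, and
# object subtree in a chosen order; span extraction happens upstream.
def reorder_clause(verb_tokens, subj_span, obj_span, order="SOV"):
    parts = {"S": subj_span, "V": verb_tokens, "O": obj_span or []}
    return [tok for slot in order for tok in parts[slot]]

# e.g., reorder_clause(["took", "out"], ["the", "broker"], ["them"], "OVS")
```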
"In all experiments in this section, we trained the model to jointly predict the plurality of the subject and the object.",
"For consistency across the object and subject plurality prediction tasks, we used the polypersonal agreement markers on all verbs in the sentence (except, of course, for the prediction target, which was withheld completely).",
"For example, in the OVS version of the sentence presented in Figure 1, the input was (6), where kon marks the fact that say has a plural subject: (6) them ⟨verb⟩ out frequently the broker for lunch say kon they.",
"Performance varied significantly across word orders (Table 3).",
"Subject plurality prediction accuracy was inversely correlated with the frequency of attractors (intervening nouns of the opposite plurality) in the language: accuracy was lowest for subject prediction in the VOS and SOV languages, in which objects intervene between the subject and the verb (Figure 2).",
"The degraded performance in these languages is consistent with the attraction effects found in previous studies of agreement in natural languages (Linzen et al., 2016; Gulordava et al., 2018), and supports the hypothesis that RNNs have an inductive bias favoring dependencies with recent elements; we test this hypothesis in a more controlled way in Section 4.3.",
"Attractors affected object prediction accuracy as well.",
"The highest accuracy among the synthetic languages was in the SVO language and the worst performance observed in the OSV language.",
"As in Section 3, subjects were easier to predict than objects, likely because all verbs in the training set had a subject, but only 35% had an object.",
"Flexible word order was especially challenging for the model, with a subject plurality prediction accuracy of 88.6%, object plurality prediction accuracy of 74.1%, and object recall of 60.2%.",
"This does not necessarily bear on the RNNs' inductive biases: flexible word order without case marking would make it difficult for any learner to infer syntactic relations.",
"Without overt cues, the model must resort to selectional restrictions (e.g., in the apples ate the man , the only plausible subject is the man ), but those are difficult to learn from a small corpus.",
"What's more, some sentences are truly ambiguous when there are no case markers or word order cues; this happens for example when both arguments are animate, as in the lawyer saw the doctor (Gibson et al., 2013; Ettinger et al., 2018).",
"The previous experiments suggested that the RNN has a tendency to identify the more recent argument as the subject, leading to attraction effects",
"caused by the object.",
"We conjectured that this is due to the fact that many verbs are intransitive, that is, have a subject but not an object.",
"The clauses in which those verbs appear provide ambiguous evidence: they are equally compatible with a generalization in which the subject is the most recent core element before the verb, and with a generalization in which the subject is the first core constituent of the clause.",
"Attraction effects suggest that the inductive bias of the RNN leads it to adopt the incorrect recency-based generalization.",
"To test this hypothesis in a controlled way, we adopt the poverty of the stimulus paradigm (Wilson, 2006; Culbertson and Adger, 2014; McCoy et al., 2018): we withhold all evidence that disambiguates these two hypotheses (namely, all transitive sentences), and test how the RNN generalizes to the withheld sentence type.",
"We used the SOV and VOS corpora described before; in both of these languages, the object intervened between the subject and the verb, potentially causing agreement attraction.",
"Crucially, we train only on sentences without a direct object, and test on the following three types of sentences: 1. Sentences with an object of the opposite plurality from the subject (object attractor).",
"2. Sentences with an object of the same plurality as the subject (non-attractor object).",
"3. Sentences without an object, but with one or more nouns of the opposite plurality intervening between the subject and the verb (non-object attractor); e.g., 'The gap between winners and losers will grow' is intransitive, but the plural words 'winners' and 'losers', which are part of a modifier of the subject, may serve as attractors for the singular subject 'gap' (see the sketch after this list).",
"(Footnote: when the object is a noun-noun compound, it is considered a non-attractor if its head is not of the opposite plurality of the subject, regardless of the plurality of other elements; this can only make the task harder compared with the alternative of considering compound objects such as 'screen displays' as attractors for plural subjects.)",
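A sketch of this condition assignment, under the assumption that argument pluralities and intervening-noun pluralities have been extracted upstream:

```python
# Sketch of test-set condition assignment; obj_plurality is None for
# intransitive clauses, and interveners holds the pluralities of non-core
# nouns between the subject and the verb.
def condition(subj_plurality, obj_plurality, interveners):
    if obj_plurality is not None:
        if obj_plurality != subj_plurality:
            return "object attractor"
        return "non-attractor object"
    if any(p != subj_plurality for p in interveners):
        return "non-object attractor"
    return "no attractor"
```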
"The results are shown in Table 4. Withholding direct objects during training dramatically degraded the performance of the model on sentences with an object attractor: the accuracy decreased from 90.6% for the model trained on the full SOV corpus (Table 3) to 60.0% for the model trained only on intransitive sentences from the same corpus.",
"There was an analogous drop in performance in the case of VOS (89.5% compared to 48.3%).",
"By contrast, attractors that were not core arguments, or objects that were not attractors, did not hurt performance in a comparable way.",
"This suggests that in our poverty of the stimulus experiments RNNs were able to distinguish between core and non-core elements, but struggled on instances in which the object directly preceded the verb (the instances that were withheld in training).",
"This constitutes strong evidence for the RNN's recency bias: our models extracted the generalization that subjects directly precede the verb, even though the data were equally compatible with the generalization that the subject is the first core argument in the clause.",
"These findings align with the results of Khandelwal et al. (2018), who demonstrated that RNN language models are more sensitive to perturbations in recent input words than to perturbations in more distant parts of the input.",
"While in their case the model's recency preference can be a learned property (since recent information is more relevant for the task of next-word prediction), our experiment focuses on the inherent inductive biases of the model, as the cues that are necessary for differentiating between the two generalizations were absent in training.",
"(Table 4: subject prediction accuracy under the object attractor, non-attractor object, and non-object attractor conditions for the SOV and VOS languages.)",
"Our reordering manipulation was limited to core elements (subjects, objects and verbs).",
"Languages also differ in word order inside other types of phrases, including noun phrases (e.g., does an adjective precede or follow the noun?), adpositional phrases (does the language use prepositions or postpositions?), and so on.",
"Greenberg (1963) pointed out correlations between head-modifier orders across phrase categories; while a significant number of exceptions exist, these correlations have motivated proposals for a language-wide setting of a Head Directionality Parameter (Stowell, 1981; Baker, 2001).",
"In future work, we would like to explore whether consistent reordering across categories improves the model's performance.",
"In practice, even languages with a relatively rigid word order almost never enforce this order in every clause.",
"The order of elements in English, for example, is predominantly SVO, but constructions in which the verb precedes the subject do exist, e.g., 'Outside were three police officers'.",
"Other languages are considerably more flexible than English (Dryer, 2013).",
"Given that word order flexibility makes the task more difficult, our setting is arguably simpler than the task the model would face when learning a natural language.",
"The fact that the agreement dependency between the subject and the verb was more challenging to establish in the SOV order compared to the SVO order is consistent with the hypothesis that SVO languages make it easier to distinguish the subject from the object (Gibson et al., 2013); indeed, to compensate for this issue, SOV languages more frequently employ case marking (Matthew Dryer, quoted in Gibson et al. 2013).",
"There was not a clear relationship between the prevalence of a particular word order in the languages of the world and the difficulty that our models experienced with that order.",
"The model performed best on the OVS word order, which is present in a very small number of languages (about 1%).",
"SOV languages were more difficult for our RNNs to learn than SVO languages, even though SOV languages are somewhat more common (Dryer, 2013).",
"These results weakly support functional explanations of these typological tendencies; such explanations appeal to communicative efficiency considerations rather than learning biases (Maurits et al., 2010).",
"Of course, since the inductive biases of humans and RNNs are likely to be different in many respects, our results do not rule out the possibility that the distribution of word orders is driven by a human learning bias after all.",
"The vast majority of noun phrases in English are not overtly marked for grammatical function (case), with the exception of pronouns; e.g., the first-person singular pronoun is I when it is a subject and me when it is an object.",
"Other languages mark case on most nouns.",
"Consider, for example, the following example from Russian: (7) a. ya (I) kupil (bought) knig-u (book-OBJECT): 'I bought the book.' b. knig-a (book-SUBJECT) ischezla (disappeared): 'The book disappeared.'",
"Overt case marking reduces ambiguity and facilitates parsing in languages with flexible word order.",
"To investigate the influence of case on agreement prediction, and on the ability to infer sentence structure, we experimented with different case systems.",
"In all settings, we used fused suffixes, which encode both plurality and grammatical function.",
"We considered three case systems (see Figure 1): 1. An unambiguous case system, with a unique suffix for each combination of number and grammatical function.",
"(Footnote: the standard grammatical terms for these cases are nominative, for subjects, and accusative, for objects; we use SUBJECT and OBJECT for clarity.)",
"2. A partially syncretic (ambiguous) case system, in which the same suffix was attached to both singular subjects and plural objects (modeled after Basque).",
"3. A fully syncretic case system (argument marking only): the suffix indicated only the plurality of the argument, regardless of its grammatical function (cf. subject/object syncretism in Russian neuter nouns).",
"In the typological survey reported in Baerman and Brown (2013), 62% of the languages had no or minimal case marking, 20% had syncretic case systems, and 18% had case systems with no syncretism.",
"Corpus Creation The suffixes we used are listed in Table 2. We only attached the suffix to the head of the relevant argument; adjectives and other modifiers did not carry case suffixes.",
"The same suffix was used to mark plurality/case on nouns and the agreement features on the verb; e.g., if the verb eat had a singular subject and plural object, it appeared as eat karker (the singular subject suffix was kar and the plural object suffix was ker ).",
"We stripped off plurality and case markers from the original English noun phrases before adding these suffixes.",
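A small sketch of this marking scheme, using the suffixes from Table 2:

```python
# Sketch of fused case/number marking with the suffixes from Table 2.
SUFFIX = {("subject", "singular"): "kar", ("subject", "plural"): "kon",
          ("object", "singular"): "kin", ("object", "plural"): "ker"}

def mark_argument(head_word, role, number):
    return head_word + " " + SUFFIX[(role, number)]

def mark_verb(verb, argument_features):
    # the verb suffix concatenates its arguments' suffixes, e.g. "eat karker"
    return verb + " " + "".join(SUFFIX[f] for f in argument_features)

print(mark_verb("eat", [("subject", "singular"), ("object", "plural")]))
```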
"Setup We evaluated the interaction between different case marking schemes and three word orders: flexible word order and the two orders on which the model achieved the best (OVS) and worst (VOS) subject prediction accuracy.",
"We trained one model for each combination of case system and word order.",
"We jointly predicted the plurality of the subject and the object.",
"Results and Analysis The results are summarized in Table 5.",
"Unambiguous case marking dramatically improved subject and object plurality prediction compared with the previous experiments; accuracy was above 98% for all three word orders.",
"Partial syncretism hurt performance somewhat relative to the unambiguous setting (except with flexible word order), especially for object prediction.",
"The fully syncretic case system, which marked only the plurality of the head of each argument, further decreased performance.",
"At the same time, even this limited marking scheme was helpful: accuracy in the most challenging setting, flexible word order (subject: 96.0%; object: 86.1%), was not very different from the results on unmodified English (95.7% and 90.0%).",
"This contrasts with the poor results on the flexible setting without cases (subject: 88.6%; object: 60.2%).",
"On the rigid orders, a fully syncretic system still significantly improved agreement prediction.",
"The moderate effect of case syncretism on performance suggests that most of the benefit of case marking stems from the overt marking of the heads of all arguments.",
"Overall, these results are consistent with the observation that languages with explicit case marking tend to allow more flexible word order than languages such as English that use word order to express the grammatical function of words.",
"Our approach of constructing synthetic languages by parametrically modifying parsed corpora for natural languages is closely inspired by Wang and Eisner (2016) (see also Wang and Eisner 2017).",
"While they trained a model to mimic the POS-tag order statistics of the target language, we manually modified the parsed corpora; this allows us to control for selected parameters, at the expense of reduced generality.",
"Simpler synthetic languages (not based on natural corpora) have been used in a number of recent studies to examine the inductive biases of different neural architectures (Bowman et al., 2015; Lake and Baroni, 2018; McCoy et al., 2018).",
"In another recent study, Cotterell et al. (2018) measured the ability of RNN and n-gram models to perform character-level language modeling in a sample of languages, using a parallel corpus; the main typological property of interest in that study was morphological complexity.",
"Finally, a large number of studies, some mentioned in the introduction, have used syntactic prediction tasks to examine the generalizations acquired by neural models (see also Bernardy and Lappin 2017; Futrell et al. 2018; Lau et al. 2017; Conneau et al. 2018; Ettinger et al. 2018; Jumelet and Hupkes 2018).",
"We have proposed a methodology for generating parametric variations of existing languages and evaluating the performance of RNNs in syntactic feature prediction in the resulting languages.",
"We used this methodology to study the grammatical inductive biases of RNNs, assessed whether certain grammatical phenomena are more challenging for RNNs to learn than others, and began to compare these patterns with the linguistic typology literature.",
"In our experiments, multitask training on polypersonal agreement prediction improved performance, suggesting that the models acquired syntactic representations that generalize across argument types (subjects and objects).",
"Performance varied significantly across word orders.",
"This variation was not correlated with the frequency of the word orders in the languages of the world.",
"Instead, it was inversely correlated with the frequency of attractors, demonstrating a recency bias.",
"Further supporting this bias, in a poverty-of-the-stimulus paradigm where the data were equally consistent with two generalizations (first, that the subject is the first argument in the clause, and second, that the subject is the most recent argument preceding the verb), RNNs adopted the recency-based generalization.",
"Finally, we found that overt case marking on the heads of arguments dramatically improved plurality prediction performance, even when the case system was highly syncretic.",
"Agreement feature prediction in some of our synthetic languages is likely to be difficult not only for RNNs but for many other classes of learners, including humans.",
"For example, agreement in a language with very flexible word order and without case marking is impossible to predict in many cases (see 4.2), and indeed such languages are very rare.",
"In future work, a human experiment based on the agreement prediction task can help determine whether the difficulty of our languages is consistent across humans and RNNs.",
"This work is supported by the Israeli Science Foundation (grant number 1555/15) and by Theo Hoffenberg, the founder & CEO of Reverso."
]
| [
"abstain",
"abstain",
"objective",
"method",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"objective",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other"
]
|
[
"While traditional corpus-level evaluation metrics for machine translation (MT) correlate well with fluency, they struggle to reflect adequacy.",
"Model-based MT metrics trained on segment-level human judgments have emerged as an attractive replacement due to strong correlation results.",
"These models, however, require potentially expensive re-training for new domains and languages.",
"Furthermore, their decisions are inherently non-transparent and appear to reflect unwelcome biases.",
"We explore the simple type-based classifier metric, MACROF 1 , and study its applicability to MT evaluation.",
"We find that MACROF 1 is competitive on direct assessment, and outperforms others in indicating downstream cross-lingual information retrieval task performance.",
"Further, we show that MACROF 1 can be used to effectively compare supervised and unsupervised neural machine translation, and reveal significant qualitative differences in the methods' outputs.",
"Introduction. Model-based metrics for evaluating machine translation such as BLEURT (Sellam et al., 2020), ESIM (Mathur et al., 2019), and YiSi (Lo, 2019) have recently attracted attention due to their superior correlation with human judgments (Ma et al., 2019).",
"However, BLEU (Papineni et al., 2002) remains the most widely used corpus-level MT metric.",
"It correlates reasonably well with human judgments, and moreover is easy to understand and cheap to calculate, requiring only reference translations in the target language.",
"By contrast, model-based metrics require tuning on thousands of examples of human evaluation for every new target language or domain (Sellam et al., 2020).",
"(Footnote: tools and analysis are available at https://github.com/thammegowda/007-mt-eval-macro ; MT evaluation metrics are at https://github.com/isi-nlp/sacrebleu/tree/macroavg-naacl21 .)",
"Model-based metric scores are also opaque and can hide undesirable biases, as can be seen in Table 1 (one of whose reference sentences is 'You must be a doctor.').",
"The source of model-based metrics' (e.g. BLEURT) correlative superiority over model-free metrics (e.g. BLEU) appears to be the former's ability to focus evaluation on adequacy , while the latter are overly focused on fluency .",
"BLEU and most other generation metrics consider each output token equally.",
"Since natural language is dominated by a few high-count types, an MT model that concentrates on getting its 'if's, 'and's and 'but's right will benefit from BLEU in the long run more than one that gets its 'xylophone's, 'peripatetic's, and 'defenestrate's right.",
"Can we derive a metric with the discriminating power of BLEURT that does not share its bias or expense and is as interpretable as BLEU?",
"As it turns out, the metric may already exist and be in common use.",
"Information extraction and other areas concerned with classification have long used both micro averaging , which treats each token equally, and macro averaging , which instead treats each type equally, when evaluating.",
"The latter in particular is useful when seeking to avoid results dominated by overly frequent types.",
"In this work we take a classification-based approach to evaluating machine translation in order to obtain an easy-to-calculate metric that focuses on adequacy as much as BLEURT but does not have the expensive overhead, opacity, or bias of model-based methods.",
"Our contributions are as follows: We consider MT as a classification task, and thus admit MACROF 1 as a legitimate approach to evaluation (Section 2).",
"We show that MACROF 1 is competitive with other popular methods at tracking human judgments in translation (Section 3.2).",
"We offer an additional justification of MACROF 1 as a performance indicator on adequacy-focused downstream tasks such as cross-lingual information retrieval (Section 3.3).",
"Finally, we demonstrate that MACROF 1 is just as good as the expensive BLEURT at discriminating between structurally different MT approaches in a way BLEU cannot, especially regarding the adequacy of generated text, and provide a novel approach to qualitative analysis of the effect of metrics choice on quantitative evaluation (Section 4).",
"Neural machine translation (NMT) models are often viewed as pairs of encoder-decoder networks.",
"Viewing NMT as such is useful in practice for implementation; however, such a view is inadequate for theoretical analysis.",
"Gowda and May (2020) provide a high-level view of NMT as two fundamental ML components: an autoregressor and a classifier.",
"Specifically, NMT is viewed as a multi-class classifier that operates on representations from an autoregressor.",
"We may thus consider classifier-based evaluation metrics.",
"Consider a test corpus $T = \{(x^{(i)}, h^{(i)}, y^{(i)}) \mid i = 1, 2, \ldots, m\}$, where $x^{(i)}$, $h^{(i)}$, and $y^{(i)}$ are source, system hypothesis, and reference translation, respectively.",
"Let $x = \{x^{(i)}\ \forall i\}$, and similarly for $h$ and $y$.",
"Let $V_h$, $V_y$, $V_{h \cap y}$, and $V$ be the vocabulary of $h$, the vocabulary of $y$, $V_h \cap V_y$, and $V_h \cup V_y$, respectively.",
"For each class $c \in V$: $\mathrm{PREDS}(c) = \sum_{i=1}^{m} C(c, h^{(i)})$; $\mathrm{REFS}(c) = \sum_{i=1}^{m} C(c, y^{(i)})$; $\mathrm{MATCH}(c) = \sum_{i=1}^{m} \min\{C(c, h^{(i)}), C(c, y^{(i)})\}$, where $C(c, a)$ counts the number of tokens of type $c$ in sequence $a$ (Papineni et al., 2002).",
"For each class $c \in V_{h \cap y}$, precision ($P_c$), recall ($R_c$), and F-measure ($F_{\beta;c}$) are computed as $P_c = \frac{\mathrm{MATCH}(c)}{\mathrm{PREDS}(c)}$, $R_c = \frac{\mathrm{MATCH}(c)}{\mathrm{REFS}(c)}$, and $F_{\beta;c} = \frac{(1+\beta^2)\, P_c R_c}{\beta^2 P_c + R_c}$. The macro-average consolidates individual performance by averaging by type, while the micro-average averages by token: $\mathrm{MACROF}_\beta = \frac{\sum_{c \in V} F_{\beta;c}}{|V|}$ and $\mathrm{MICROF}_\beta = \frac{\sum_{c \in V} f(c)\, F_{\beta;c}}{\sum_{c \in V} f(c)}$, where $f(c) = \mathrm{REFS}(c) + k$ for smoothing factor $k$.",
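A minimal Python sketch of these definitions (with $\beta = 1$, $k = 1$, and whitespace tokenization assumed):

```python
# Sketch of MACROF1 / MICROF1 over hypothesis/reference corpora, following
# the definitions above (beta = 1, smoothing k = 1).
from collections import Counter

def macro_micro_f1(hyps, refs, k=1):
    preds, golds, match = Counter(), Counter(), Counter()
    for h, y in zip(hyps, refs):
        ch, cy = Counter(h.split()), Counter(y.split())
        preds += ch
        golds += cy
        match += ch & cy          # per-type min of hypothesis/reference counts
    vocab = set(preds) | set(golds)
    f1 = {}
    for c in vocab:
        p = match[c] / preds[c] if preds[c] else 0.0
        r = match[c] / golds[c] if golds[c] else 0.0
        f1[c] = 2 * p * r / (p + r) if p + r else 0.0
    macro = 100 * sum(f1.values()) / len(vocab)
    weights = {c: golds[c] + k for c in vocab}
    micro = 100 * sum(weights[c] * f1[c] for c in vocab) / sum(weights.values())
    return macro, micro
```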
"We scale MACROF and MICROF values to the 0-100 range, similar to BLEU , for the sake of easier readability.",
"In the following sections, we verify and justify the utility of MACROF 1 while also offering a comparison with popular alternatives such as MICROF 1 , BLEU , CHRF 1 , and BLEURT.",
"We use Kendall's rank correlation coefficient, $\tau$, to compute the association between metrics and human judgments; correlations with p-values smaller than $\alpha = 0.05$ are considered statistically significant.",
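A small sketch of this computation with SciPy; the score lists are toy numbers, not results from the paper:

```python
# Sketch: Kendall's tau between metric scores and human judgments.
from scipy.stats import kendalltau

metric_scores = [28.0, 23.9, 9.4]   # e.g., one MT metric across systems (toy)
human_scores = [0.92, 0.80, 0.35]   # direct-assessment style judgments (toy)

tau, p_value = kendalltau(metric_scores, human_scores)
print(tau, p_value < 0.05)          # significant at alpha = 0.05?
```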
"We use the 2017 WebNLG Challenge dataset (Gardent et al., 2017; Shimorina, 2018) to analyze the differences between micro- and macro-averaging.",
"WebNLG is a task of generating English text for sets of triples extracted from DBPedia.",
"Human annotations are available for a sample of 223 records each from nine NLG systems.",
"(Footnotes: we consider $F_{\beta;c}$ for $c \notin V_{h \cap y}$ to be 0; we use $k = 1$, and as $k \to \infty$, $\mathrm{MICROF}_\beta \to \mathrm{MACROF}_\beta$.)",
"BLEU and CHRF 1 scores reported in this work are computed with SACREBLEU ; see the Appendix for details.",
"BLEURT scores are from the base model (Sellam et al., 2020).",
"We consider two varieties of averaging to obtain a corpus-level metric from the segment-level BLEURT: mean and median of segment-level scores per corpus.",
"(Footnote: the dataset is available at https://gitlab.com/webnlg/webnlg-human-evaluation .)",
"(Table 2: WebNLG data-to-text task, Kendall's τ between system-level MT metric scores and human judgments. Fluency & Grammar / Semantics: BLEU .444/.500; CHRF 1 .278/.778; MACROF 1 .222/.722; MICROF 1 .333/.611; BLEURT-mean .444/.833; BLEURT-median .611/.667.)",
"The human judgments provided cover three linguistic aspects (fluency, grammar, and semantics), which enable us to perform a fine-grained analysis of our metrics.",
"We compute Kendall's between metrics and human judgments, which are reported in Table",
"2. As seen in Table 2, the metrics exhibit much variance in agreements with human judgments.",
"For instance, BLEURT median is the best indicator of fluency and grammar, however BLEURT mean is best on semantics.",
"BLEURT, being a model-based measure that is directly trained on human judgments, scores relatively higher than others.",
"Considering the model-free metrics, CHRF 1 does well on semantics but poorly on fluency and grammar compared to BLEU .",
"Not surprisingly, both MICROF 1 and MACROF 1 , which rely solely on unigrams, are poor indicators of fluency and grammar compared to BLEU , however MACROF 1 is clearly a better indicator of semantics than BLEU .",
"The discrepancy between MICROF 1 and MACROF 1 regarding their agreement with fluency, grammar, and semantics is expected: micro-averaging pays more attention to function words (as they are frequent types) that contribute to fluency and grammar whereas macro-averaging pays relatively more attention to the content words that contribute to semantic adequacy.",
"The take away from this analysis is as follows: MACROF 1 is a strong indicator of semantic adequacy, however, it is a poor indicator of fluency.",
"We recommend using either MACROF 1 or CHRF 1 when semantic adequacy and not fluency is a desired goal.",
"In this section, we verify how well the metrics agree with human judgments using Workshop on Machine Translation (WMT) metrics task datasets for 20172019 (Bojar et al., 2017; Ma et al., 2018,",
"2019).",
"7 We first compute scores from each MT metric, and then calculate the correlation with human judgments.",
"As there are many language pairs and translation directions in each year, we report only the mean and median of , and number of wins per metric for each year in Table",
"3. We have excluded BLEURT from comparison in this section since the BLEURT models are fine-tuned on the same datasets on which we are evaluating the other methods.",
"8 CHRF 1 has the strongest mean and median agreement with human judgments across the years.",
"In 2018 and 2019, both MACROF 1 and MICROF 1 mean and median agreements outperform BLEU whereas in 2017 BLEU was better than MACROF 1 and MICROF 1 .",
"As seen in Section 3.1, MACROF 1 weighs towards semantics whereas MICROF 1 and BLEU weigh towards fluency and grammar.",
"This indicates that recent MT systems are mostly fluent, and adequacy is the key discriminating factor amongst them.",
"BLEU served well in the early era of statistical MT when fluency was a harder objective.",
"Recent advancements in neural MT models such as Transformers (Vaswani et al., 2017) produce fluent outputs, and have brought us to an era where semantic adequacy is the focus.",
"In this section, we determine correlation between MT metrics and downstream cross-lingual information retrieval (CLIR) tasks.",
"CLIR is a kind of information retrieval (IR) task in which documents in one language are retrieved given queries in another (Grefenstette, 2012).",
"A practical solution to CLIR is to translate source documents into the query language using an MT model, then use a monolingual IR system to match queries with translated documents.",
"Correlation between MT and IR metrics is accomplished in the following steps:",
"1. Build a set of MT models and measure their performance using MT metrics.",
"2. Using each MT model in the set, translate all source documents to the target language, build an IR model, and measure IR performance on translated documents.",
"3. For each MT metric, find the correlation between the set of MT scores and their corresponding set of IR scores.",
"The MT metric that has a 7 http://www.statmt.org/wmt19/metrics-task.html 8 https://github.com/google-research/bleurt Year Pairs BLEUBLEUMACROF 1 MICROF 1 CHRF 1 2019 18 Mean .751 .771 .821 .818 .841 Median .782 .752 .844 .844 .875 Wins 3 3 6 3 5 2018 14 Mean .858 .857 .875 .873 .902 Median .868 .868 .901 .879 .919 Wins 1 2 3 2 6 2017 13 Mean .752 .713 .714 .742 .804 Median .758 .733 .735 .728 .791 Wins 5 4 2 2 6 Table 3: WMT 201719 Metrics task: Mean and median Kendall's between MT metrics and human judgments.",
"stronger correlation with the IR metric(s) is more useful than the ones with weaker correlations.",
"4. Repeat the above steps on many languages to verify the generalizability of findings.",
"An essential resource of this analysis is a dataset with human annotations for computing MT and IR performances.",
"We conduct experiments on two datasets: firstly, on data from the 2020 workshop on Cross-Language Search and Summarization of Text and Speech (CLSSTS) (Zavorin et al., 2020), and secondly, on data originally from Europarl, prepared by Lignos et al. (2019) (Europarl).",
"CLSSTS datasets contain queries in English (EN), and documents in many source languages along with their human translations, as well as query-document relevance judgments.",
"We use three source languages: Lithuanian (LT), Pashto (PS), and Bulgarian (BG).",
"The performance of this CLIR task is evaluated using two IR measures: Actual Query Weighted Value (AQWV) and Mean Average Precision (MAP).",
"AQWV 9 is derived from Actual Term Weighted Value (ATWV) metric (Weg-mann et al., 2013).",
"We use a single CLIR system (Boschee et al., 2019) with the same IR settings for all MT models in the set, and measure Kendall's between MT and IR measures.",
"The results, in Table 4, show that MACROF 1 is the strongest indicator of CLIR downstream task performance in five out of six settings.",
"AQWV and MAP have a similar trend in agreement to the MT metrics.",
"CHRF 1 and BLEURT, which are strong contenders when generated text is directly evaluated by humans, do not indicate 9 https://www.nist.gov/system/files/documents-/2017/10/26/aqwv_derivation.pdf CLIR task performance as well as MACROF 1 , as CLIR tasks require faithful meaning equivalence across the language boundary, and human translators can mistake fluent output for proper translations (Callison-Burch et al., 2007).",
"3.3.2 Europarl Datasets We perform a similar analysis to Section 3.3.1 but on another cross-lingual task set up by Lignos et al. (2019) for Czech English (CS-EN) and German English (DE-EN), using publicly available data from the Europarl v7 corpus (Koehn, 2005).",
"This task differs from the CLSSTS task (Section 3.3.1) in several ways.",
"Firstly, MT metrics are computed on test sets from the news domain, whereas IR metrics are from the Europarl domain.",
"The domains are thus intentionally mismatched between MT and IR tests.",
"Secondly, since there are no queries specifically created for the Europarl domain, GOV2 TREC topics 701850 are used as domain-relevant English queries.",
"And lastly, since there are no query-document relevance human judgments for the chosen query and document sets, the documents retrieved by BM25 (Jones et al., 2000) on the English set for each query are treated as relevant documents for computing the performance of the CS-EN and DE-EN CLIR setup.",
"As a result, IR metrics that rely on boolean query-document relevance judgments as ground truth are less informative, and we use Rank-Based Overlap (RBO; p = 0 . 98) (Webber et al., 2010) as our IR metric.",
"We perform our analysis on the same experiments as Lignos et al. (2019).",
"10 NMT models for CS-EN and DE-EN translation are trained using a convolutional NMT architecture (Gehring 10 https://github.com/ConstantineLignos/ mt-clir-emnlp-2019 Domain IR Score BLEUMACROF 1 MICROF 1 CHRF 1 BLEURTmean BLEURTmedian LT-EN In AQWV .429 .363 .508 .385 .451 .420 MAP .495 .429 .575 .451 .473 .486 In+Ext AQWV .345 .527 .491 .491 .491 .477 MAP .273 .455 .418 .418 .418 .404 PS-EN In AQWV .559 .653 .574 .581 .584 .581 MAP .493 .632 .487 .494 .558 .554 In+Ext AQWV .589 .682 .593 .583 .581 .571 MAP .519 .637 .523 .482 .536 .526 BG-EN In AQWV .455 .550 .527 .382 .418 .418 MAP .491 .661 .564 .491 .527 .527 In+ext AQWV .257 .500 .330 .404 .367 .367 MAP .183 .426 .257 .330 .294 .294 Table 4: CLSSTS CLIR task: Kendall's between IR and MT metrics under study. The rows with Domain=In are where MT and IR scores are computed on the same set of documents, whereas Domain=In+Ext are where IR scores are computed on a larger set of documents that is a superset of segments on which MT scores are computed. Bold values are the best correlations achieved in a row-wise setting; values with are not significant at = 0 . 05. BLEUMACROF 1 MICROF 1 CHRF 1 BT BT CS-EN .850 .867 .850 .850 .900 .867 DE-EN .900 .900 .900 .912 .917 .900 Table 5: Europarl CLIR task: Kendall's between MT metrics and RBO. BT and BT are short for BLEURT mean and BLEURT median . All correlations are significant at = 0 . 05. et al., 2017) implemented in the FAIRSeq (Ott et al., 2019) toolkit.",
"For each of CS-EN and DE-EN, a total of 16 NMT models that are based on different quantities of training data and BPE hyperparame-ter values are used.",
"The results in Table 5 show that BLEURT has the highest correlation in both cases.",
"Apart from the trained BLEURT median metric, MACROF 1 scores higher than the others on CS-EN, and is competitive on CS-EN.",
"MACROF 1 is not the metric with highest IR task correlation in this setting, unlike in Section 3.3.1, however it is competitive with BLEU and CHRF 1 , and thus a safe choice as a downstream task performance indicator.",
"Unsupervised neural machine translation (UNMT) systems trained on massive monolingual data without parallel corpora have made significant progress recently (Artetxe et al., 2018; Lample et al., 2018a,b; Conneau and Lample, 2019; Song et al., 2019; Liu et al., 2020).",
"In some cases, UNMT yields a BLEU score that is comparable with strong 11 supervised neural machine transla-11 though not, generally, the strongest tion (SNMT) systems.",
"In this section we leverage MACROF 1 to investigate differences in the translations from UNMT and SNMT systems that have similar BLEU .",
"We compare UNMT and SNMT for English German (EN-DE, DE-EN), English French (EN-FR, FR-EN), and English Romanian (EN-RO, RO-EN).",
"All our UNMT models are based on XLM (Conneau and Lample, 2019), pretrained by Yang (2020).",
"We choose SNMT models with similar BLEU on common test sets by either selecting from systems submitted to previous WMT News Translation shared tasks (Bojar et al., 2014, 2016) or by building such systems.",
"12 Specific SNMT models chosen are in the Appendix (Table 12).",
"Table 6 shows performance for these three language pairs using a variety of metrics.",
"Despite comparable scores in BLEU and only minor differences in MICROF 1 and CHRF 1 , SNMT models have consistently higher MACROF 1 and BLEURT than the UNMT models for all six translation directions.",
"In the following section, we use a pairwise maximum difference discriminator approach to compare corpus-level metrics BLEU and MACROF 1 on a segment level.",
"Qualitatively, we take a closer look at the behavior of the two metrics when comparing a translation with altered meaning to a translation with differing word choices using the metric.",
"12 We were unable to find EN-DE and DE-EN systems with comparable BLEU in WMT submissions so we built standard Transformer-base (Vaswani et al., 2017) models for these using appropriate quantity of training data to reach the desired BLEU performance.",
"We report EN-RO results with diacritic removed to match the output of UNMT.",
"We consider cases where a metric has a strong opinion of one translation system over another, and analyze whether the opinion is well justified.",
"In order to obtain this analysis, we employ a pairwise segment-level discriminator from within a corpus-level metric, which we call favoritism .",
"We extend the definition of T from Section 2 to T = { x , h S , h U , y } where each of h S and h U is a separate system's hypothesis set for x .",
"13 Let M be a corpus-level measure such that M ( h , y ) R and a higher value implies better translation quality.",
"M ( h ( i ) , y ( i ) ) is the corpus-level score obtained by excluding h ( i ) and y ( i ) from h and y , respectively.",
"We define the benefit of segment i , M ( i ; h ) : M ( i ; h ) = M ( h , y ) M ( h ( i ) , y ( i ) ) If M ( i ; h ) > 0, then i is beneficial to h with respect to M , as the inclusion of h ( i ) increases the corpus-13 The subscripts represent SNMT and UNMT in this case, though the definition is general.",
"level score.",
"We define the favoritism of M toward i as M ( i ; h S , h U ) : M ( i ; h S , h U ) = M ( i ; h S ) M ( i ; h U ) (1) If M ( i ; h S , h U ) > 0 then M favors the translation of x ( i ) by system S over that in system U .",
"Table 7 reflects the results of a manual examination of the ten sentences in the DE-EN test set with greatest magnitude favoritism; complete results are in the Appendix, Tables 15 and 16.",
"Meaning-altering changes such as untranslation' , (wrong) time' , and (wrong) translation' are marked in italics , while changes that do not fundamentally alter the meaning, such as synonym,' (different) inflec-tion,' and (different) word order' are marked in plain text.",
"14 The results indicate that MACROF 1 generally favors SNMT, and with good reasons, as the favored translation does not generally alter sentence meaning, while the disfavored translation does.",
"On 14 Some changes, such as word order' may change meaning; these are italicized or not on a case-by-case basis.",
"the other hand, for the ten most favored sentences according to BLEU , four do not contain meaning-altering divergences in the disfavored translation.",
"Importantly, none of the sentences with greatest favoritism according to MACROF 1 , all of which having meaning altering changes in the disfavored alternatives, appears in the list for BLEU .",
"This indicates relatively bad judgment on the part of BLEU .",
"One case of good judgment from MACROF 1 and bad judgment from BLEU regarding truncation is shown in Table 8.",
"From our qualitative examinations, MACROF 1 is better than BLEU at discriminating against untranslations and trucations in UNMT.",
"The case is similar for FR-EN and RO-EN, except that ROEN has more untranslations for both SNMT and UNMT, possibly due to the smaller training data.",
"Complete tables and annotated sentences are in the Appendix, in Section C. 5 Related Work 5.1 MT Metrics Many metrics have been proposed for MT evaluation, which we broadly categorize into model-free or model-based .",
"Model-free metrics compute scores based on translations but have no significant parameters or hyperparameters that must be tuned a priori ; these include BLEU (Papineni et al., 2002), NIST (Doddington, 2002), TER (Snover et al., 2006), and CHRF 1 (Popovic, 2015).",
"Model-based metrics have a significant number of parameters and, sometimes, external resources that must be set prior to use.",
"These include METEOR (Banerjee and Lavie, 2005), BLEURT (Sellam et al., 2020), YiSi (Lo, 2019), ESIM (Mathur et al., 2019), and BEER (Stanojevic and Sima'an, 2014).",
"Model-based metrics require significant effort and resources when adapting to a new language or domain, while model-free metrics require only a test set with references."
]
| [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"result",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
]
|
[
"While national politics often receive the spotlight, the overwhelming majority of legislation proposed, discussed, and enacted is done at the state level.",
"Despite this fact, there is little awareness of the dynamics that lead to adopting these policies.",
"In this paper, we take the first step towards a better understanding of these processes and the underlying dynamics that shape them, using data-driven methods.",
"We build a new large-scale dataset, from multiple data sources, connecting state bills and legislator information, geographical information about their districts, and donations and donors' information.",
"We suggest a novel task, predicting the legislative body's vote breakdown for a given bill, according to different criteria of interest, such as gender, rural-urban and ideological splits.",
"Finally, we suggest a shared relational embedding model, representing the interactions between the text of the bill and the legislative context in which it is presented.",
"Our experiments show that providing this context helps improve the prediction over strong text-based models.",
"Despite the fact that state-level legislation is rarely discussed, it has a dramatic influence on the everyday life of residents of the respective states.",
"The policies enacted at the state-level touch on all aspects, from mundane topics, such as trash removal and state mascots, to highly ideologically-charged topics such as education, religious liberties, and health-care access.",
"Moreover, state-legislatures discuss and vote-on significantly more bills than their Federal counterparts, adding up to over 120,000 bills per year (King, 2019).",
"Also, the lack of general interest, as well as the complexity of the processes that differ across states, often leads to public disengagement from local politics.",
"This results in decisions being made with little understanding of Republican Democrat",
"the processes that shape them and how they are likely to influence different demographics.",
"Similarly, most effort directed at understanding political processes using data was directed at the Federal level.",
"In the NLP community, several works looked at analyzing political texts (Iyyer et al., 2014) and the resulting behaviors of legislators (Gerrish and Blei, 2011, 2012).",
"The only exception is recent work (Eidelman et al., 2018), predicting whether a bill would pass the preliminary stage, legislative committee, to a full-body vote.",
"State-level demographic cleavages: Our goal in this paper is to take a first step towards understanding the processes and interests that underlie how decisions are passed using data-driven methods.",
"Our main intuition is that the impact of bills on different demographics will be reflected in the behavior and voting patterns of their representatives.",
"Thus, providing the ability to automatically identify bills, before they are put to a vote, that will have a positive or negative influence on a specific demographic can help inform public responses and increase engagement with local political processes.",
"To help achieve this goal, we define two novel text classification tasks, characterizing the breakdown of votes, based on different cleavages or demographic indicators such as gender, geography (i.e., rural vs. urban districts), party membership and ideological splits.",
"With respect to each one of these splits, we define two aggregate-level properties of a vote, competitive and inverse-competitive cleavages.",
"Both of these measures capture the lack of consensus in the legislature body around a specific bill, but in different ways.",
"We say that a bill is competitive in a vote (Fig. 1b) if the majority of legislators from a logical group (e.g., democrats, women, urban districts, liberals) vote differently from the majority of legislators from the opposite group (e.g., republican, men, rural districts, con-servatives).",
"A bill is inverse-competitive (Fig. 1c) if there is a partial or complete tie within the legislators from the same group (e.g., women).",
"To help explain these concepts, consider a bill restricting access to abortion clinics.",
"This bill is likely to results in a competitive vote, based on ideology.",
"On the other hand, a bill granting tax breaks for farmers might result in a inverse-competitive vote, based on ideology.",
"In that case, a competitive vote, based on geography is more likely.",
"In Table 1, we provide examples of the different splits associated with real bills that were brought to a vote.",
"Unsurprisingly, a benign bill, such as #1 is widely accepted and does not result in any contention.",
"A contentious bill, such as #2, touching on the way religion is taught is split ideologically (i.e., the vote is almost unanimous inside each ideological group), but mixed based on economic and gender splits.",
"Bill #4 addressing nepotism issues and regulating public contracts is contentious across all splits.",
"Alerting the public when such bills are brought to a vote can help ensure that legislators take into account the opinions and voiced raised in their constituencies.",
"Technical Contributions Although a text classification scheme is a reasonable starting point to determine demographic cleavages of bills only based on their content, it is not sufficient.",
"Our key insight in this paper is that the context or relations through which specific information is propagated among different players in the legislative process (e.g., money donors and legislators), can be leveraged to further improve the performance.",
"Thus, we build a shared relational architecture that models the text of a bill and its context into a graph; Our model captures the behavior of individual legislators, language of bills, and influence of contributions on the decision to identify demographic cleavages.",
"While there are different ways to realize our relational model, we chose to build on recent advances in the NLP space, Relational Graph Convolutional Network (RGCN) (Schlichtkrull et al., 2018) and pretrained BERT transformers (Devlin et al., 2018).",
"RGCN allows us to define multiple relations between each pair of entities (e.g., a legislator sponsorship and casting a vote on a bill) and BERT enables us to represent the textual information more efficiently.",
"With the help of the attention-based architecture, BERT has been shown to outperform LSTM models.",
"To operationalize our relational settings, we collected information from different sources and introduced a new dataset combining information about legislators, bills, donations, and donors as well as demographic information about the legislators and their districts.",
"In our experiments, we analyze the implication of different relations on the performance and show that our shared architecture outperforms existing text and graph models.",
"Bill analysis at the state level has received little attention and our work, while conducting a new in-depth modeling and analysis, is inspired by the following works:",
"Classification of congress roll calls.",
"(Eidelman et al., 2018) combines the text of the bill with partisan identity of the bill's sponsor(s) in a model predicting the likelihood of a member of the U.S. Congress voting in support of a non-unanimous congress bill or resolution.",
"They find that the models that combine text with sponsorship data significantly outperform several alternative models.",
"Similarly, (Gerrish and Blei, 2011) uses topics associated with congress bills to infer its location in ideological space and then uses ideal point models to predict the likelihood of a U.S. Senator or House member voting in support of a bill.",
"They find that their model increases predictive accuracy by about 4% over a nave baseline model.",
"(Patil et al., 2019; Kraft et al., 2016; Karimi et al., 2019; Kornilova et al., 2018; Peng et al., 2016) extend this congress model to learn embeddings for legislators and congress bills using other sources of data (e.g., Twitter, knowledge graphs).",
"More recently, (Budhwar et al., 2018) evaluates different models for predicting roll-call votes based on verbal statements that legislators make during questioning.",
"Predicting progress of bills Rather than using bill text in models to explain the roll-call behavior of individual legislators, (Yano et al., 2012) include the legislation's text in a model that predicts whether a bill emerges from a standing committee, a point in the legislative process that most bills do not pass.",
"In particular, they use features based on the urgency and importance of the issue being addressed by the bill as well as a set of features extracted from co-sponsors of the bill.",
"Examining the fate of bills between the 103rd and 111th congresses, they find that including features of the bill drawn from the text improves the model's predictive accuracy over their baseline model.",
"(Eidelman et al., 2018) repeat a similar analysis for the states and they show that combining contextual information about the legislators and the legislatures with bill text consistently provides the best predictions.",
"(Nay, 2017) examines the text of congressional to identify the text structure most associated with a congress bill's enactment and then embeds it using Word2Vec for the classification based on Random Forests; Nay concludes that the full text of a congress bill enables better prediction efficiency.",
"Demographic bill cleavages.",
"Demographic bill cleavages is a well-studied topic in the political science space.",
"Research has properly differentiated between the multiple ways demographic background of legislators can influence roll-call voting.",
"(Pinney and Serra, 1999) finds that Congressional Black Caucus members vote more consistently with the caucus than they do with fellow partisans or with representatives from their state.",
"(Jenk-ins, 2012) discusses gender moderates the effect of party and ideology in roll-call voting.",
"Similarly, (Frederick, 2010) discusses gender influences the roll-call vote in the Senate by moderating the effect of partisanship for GOP women.",
"(Broach, 1972) demonstrates that urban-rural cleavages structure vote in less partisan states and on bills that clearly divide urban and rural interests.",
"NLP applications of GCN.",
"Recently, GCNs have been explored in different NLP tasks.",
"Semantic role labeling (SRL) (Marcheggiani and Titov, 2017), relation classification in clinical narratives (Li et al., 2018), and machine translations (Bastings et al., 2017) are a few instances.",
"In such tasks, GCN is used to encode syntactic structure of sentences.",
"In a similar context, some works explored the idea of graph neural networks (GNNs) (Peng et al., 2018; Henaff et al., 2015; Def-ferrard et al., 2016), where each part of a document (e.g., sentences) is collapsed into a graph of words or the citation relations (Kipf and Welling, 2016) creates a network among different documents.",
"We model the legislative process as a graph that consists of bills, legislators, and money donors in all states.",
"Building a global graph captures contextual information and relationships that interconnect different states.",
"For instance, money donation by a contributor to two legislators from different states could indicate they have a similar roll call behaviors on abortion bills.",
"Given this intuition, after a brief overview of the legislative process in US states, we describe how we collapse it into a graph structure.",
"Although there are some specific differences across state legislatures, a common process, shown in Figure 2, prevails.",
"This process starts with one or more legislators (Representatives or Senators) who sponsor and file a bill.",
"The idea of a bill could be original or come from a constituent, public official, or an interest group.",
"Each state consists of two cham-bers: the House of Representatives (House\") and the Senate. To become law, the bill goes through a reviewing process in the origin chamber, where it can die at different stages.",
"If the bill gets a pass vote, it is sent to the other chamber and the same process repeats.",
"Finally, the bill is reviewed by the state Governor for signature.",
"In parallel to these efforts, external contributors, e.g., money donors and lobbyists, play an important yet indirect role in the process.",
"By sourcing information and money into the process, they leave an impact on legislators, which can change the progression of a bill.",
"First Reading by title.",
"Then, the chamber president may refer the bill to a committee for review.",
"If the committee casts a vote on the bill, it can be defeated or advance to Second Reading by the full body of legislators.",
"Next, the chamber leadership may decide to approve the bill for Third Reading, where it again comes to a vote by the full body of legislators and a majority vote can advance the bill.",
"A close look reveals that the legislative process cannot be captured in a simple graph as there can be multiple relations between a pair of nodes (e.g., sponsorship and vote between legislators and bills), and the graph consists of several nodes types with different attributes and labels (e.g., bills with competitive labels).",
"Thus, we model the process using a heterogeneous multi-relational graph, as follows: Node attributes : The nodes in our proposed legislative graph come with a rich set of features and information: (1) Bill nodes contain title, description, and full text of the house and senate state bills.",
"(2) Legislator nodes contain diverse textual information abstracting the behavior of a legislator such as his biography, political interests, committee assignments, and demographic attributes (gender, party, and ideology and the district information).",
"(3) Contributors nodes come with different information (in the textual format) on money donors such as their specific and general business interests, party, and their type (individual vs non-individual).",
"Relations : Based on the legislative process, we identify that legislator and bill nodes participate in three main relations: sponsorship (R1) , negative (Nay) vote (R2) , and positive (Yea) vote (R3) .",
"Similarly, we establish two types of relations between contributors and legislators: positive donation edges (R4) , which are realized based on the real data, and negative or lack of donation edges (R5) , inferred when a contributor shows lack of interest in specific legislators (e.g., always donates to Democrats).",
"In this case, we randomly sample such legislators and link them to the contributor.",
"Based on our data analysis, more than 62% of unique contributors always contribute to one party in our dataset.",
"We also conducted an ablation study, not included due to space constraints, and the donor information contributed between 2 to 11 F1 points.",
"For a bill and one of its roll calls in the legislative graph, we seek to predict if (1) it evinces identi-fiable voting cleavages or (2) it can advance by getting a pass.",
"For voting cleavages, we defined four demographic attributes (gender, party, ideology, and the urban/rural nature of the district) to divide legislators into groups.",
"We assign nine labels to each bill as follows: (1) Competitive labels : For an attribute (e.g., party), a voting round of a bill is defined as competitive if the majority of legislators from one group (e.g., Democrats) votes differently from the majority of the other group (e.g., Republicans).",
"For example, in Figure 1b, 70% of Democrats vote Yea and 80% Republicans vote Nay on a roll call, then the bill is competitive and the disagreement between the groups is 10% (=80%-70%).",
"(2) Inverse-competitive labels : Similarly, for an attribute (e.g., party), we call a voting round as inverse-competitive if there is a partial or full cleavage among the legislators of the same group.",
"For instance, consider a bill with 55% of Democrats voting Yea and 45% of them voting Nay (Figure 1c).",
"In this case, the bill turns out to be inverse-competitive and the disagreement is 45% (the percentage of minority votes).",
"(3) Survival label : Depending on the progress, a bill passes a certain voting round if it gets a majority vote (e.g., in 2nd/3rd Reading) or if two-thirds of legislators agree to it (e.g., in amendments).",
"We argue for a joint graph and text embedding model to represent the nodes and their textual attributes in the legislative graph, which is used for the roll-call prediction and aggregation.",
"Embedding models that only leverage textual information ignore important relations in the legislative graph.",
"Graph-based models make textual information less distinguishable at the classification stage, where it matters.",
"At a high level, our approach combines the complementary strengths of both approaches.",
"Our architecture (Figure 4a) uses BERT's pretrained embedding to represent the textual information of nodes in the graph; and text-attributed RGCN to generate an embedding for them based on their relations.",
"Finally, we combine them to build a representation of edges in the graph for our relation prediction and then aggregate vote relations.",
"The lower half of our architecture is based on BERT, which leverages transformers and acts as an efficient replacement for sequential models.",
"In our case, we use the BERT's pretrained embedding to form an initial representation for the textual information of the nodes in the legislative graph.",
"Bill representation: We represent a bill by averaging three different vectors (Figure 4b) corresponding to: (1) title, (2) description, and (3) body of the bill.",
"For each of these components, we compute the average word vector based on BERT's pretrained word embedding.",
"Thus, the bill representation is X bill = Avg ( e title + e description + e body ) .",
"Legislator representation : To represent a legislator, we compute BERT's pretrained embedding for his textual information: (1) attributes, (2) biography, and (3) committee information.",
"Finally, we take the average of these vectors, X legislator = Avg ( e attributes + e biography + e cmte info ) , as illustrated in Figure 4c.",
"Contributor representation : Similarly, We transform different pieces of textual information on a contributor, i.e., partyand type-related attributes, business information, and industry data, into separate vectors, e attributes , e business , e industry and then take their average as the final representation, X contributor (Figure 4d).",
"We feed the text representation of the bill, legislator, and contributor nodes, as their initial representation, into Relational Graph Convolutional Network (RGCNs) to better represent them given the legislative graph structure.",
"In parallel, a feed-forward neural network (FFNN) processes these text representations and takes them to a concatenation layer for the joint text-graph optimization.",
"From the message passing perspective, each (non-relational) GCN layer performs two operations: propagation and aggregation .",
"In the propagation phase, the neighborhood nodes send their feature/hidden representation to the node that needs to be updated.",
"In the aggregation phase, the node sums up all the messages coming from its neighborhood with its properties.",
"The aggregated message is passed through a non-linear activation function which forms the new representation of the node.",
"If the graph edges are not typed, the hidden representation of each node i , at ( l + 1) 'th layer, is computed by: h il +1 = (cid:32) (cid:88) j N i 1 c i W l h lj (cid:33) (1) In which the weight matrix W l is shared by all edges in layer l",
".Also, c i is a normalization factor, which is often set to c i = | N i | .",
"Relational GCN (RGCN) generalizes GCNs to handle different relations between any pair of nodes, and thus being a better fit for our problem.",
"Unlike GCNs, RGCNs use a different weight matrix and normalization factors (e.g., c ri = | N ri | ) for each relation type and thus the hidden representation of nodes in ( l +1) 'th Table 2: Statistics of the legislative graphs, aggregated over the 2011-2018 period.",
"layer is computed as:",
"By having a K -layer RGCN (stacking layers onto each other), we can capture k th -order relations from a node in the graph.",
"However, a 2-layer RGCN turns out to be sufficient in our case as it fully realizes the 2nd order relations between contributors and bills.",
"By combining the outputs of the RGCN and FFNN, we train a model for predicting relations in the legislative graph through FFNN+softmax.",
"One could leverage DistMult scoring functions (Schlichtkrull et al., 2018; Yang et al., 2014) as well.",
"Next, we post-process the roll-call relations and aggregate them to form the demographic and pass/fail vote breakdowns and determine the final class labels.",
"In more detail, the representation of an edge or relation ( s, d ) is the dot product of e joints and e jointd , which are the embedding of the corresponding nodes.",
"The representation of a node comes from the concatenation of two components: (1) text embedding (hidden states) coming from the BERT layer after being fine-tuned through FFNN, and (2) the graph embedding (hidden state of the node) from the last RGCN layer.",
"Loss function : At a high level, our loss function is L = L Cls + L Text + L Graph and jointly optimizes the text and graph embeddings as well as the relation prediction and roll-call aggregation.",
"L Cls is the cross-entropy loss of the relation prediction; L Graph and L Text are the L2 regularizations of RGCN's and FFNN's weights that generate the graph and text representations, respectively.",
"In this section, we describe our comprehensive legislative dataset, combining different sources of data (e.g., money donors data, diverse information on",
"legislators).",
"Table 2 shows the statistics of our dataset after pruning and forming the legislative graph (discussed in Section 3).",
"Next, we focus on our joint embedding model and its great ability in outperforming existing prediction models.",
"Bills and legislator data.",
"From the LegiScan web-site (LegiScan, 2019), we collected data on the text and lower chamber disposition of all bills introduced in Indiana, Oregon, and Wisconsin from the 2011 through 2018 sessions.",
"To do so, we developed a comprehensive crawler in Python that performs multiple operations.",
"First, it uses the LegiScan API to collect legislative information on every bill that covers: (1) bill metadata that includes the bill type, title, description, sponsors, and links to its texts; (2) vote metadata that consists of the individual legislator's vote Yea, Nay, Absent, or NV; and",
"(c) legislator metadata containing party and district information.",
"Then, our crawler accurately converts bill texts that are stored in the PDF format to text files, using open-source libraries.",
"To identify the fine-grained progression of bills in the legislative process, our crawler downloads and processes the History section of each bill on the LegiScan Website, which consists of a series of events associated with a bill's history (e.g., committee report, roll-call vote).",
"Such information is not readily available in the LegiScan API.",
"Overall, we collected 34443 bills introduced in the target states from 2011 to 2018.",
"We studied 58% of the bills that had both the votes of individual legislators and full texts, which are necessary for determining vote breakdowns and cleavage labels; However, our focus in this paper is on the 2nd/3rd reading, in which all members of the chambers vote, so we selected 32% of the bills that reached this stage to build the legislative graph (Table 2).",
"Biography, ideology and geography data.",
"Finally, our crawler uses Ballotpedia (Ballotpedia, 2019) to collect texts on each legislator's biography, political interests, and committee assignments.",
"Also, it aggregates other publicly available datasets to identify each legislator's attributes such as ideology, gender, and district nature (urban/rural).",
"The ideology scores for legislators were taken (Shor and McCarty, 2011) and they were grouped into conservatives, moderates, and liberals.",
"The district identi-fier was combined with GIS census data (Census, 2019) to categorize each legislator as representing either an urban or rural district.Table 3 shows the breakdown of legislators' party, gender, ideology, and district information in our target states.",
"For less than 10% of legislators, Ballotpedia profiles were missing.",
"Thus, we used other public textual information about them (e.g., Twitter).",
"Donors data : FollowTheMoney (FollowThe-Money, 2019) captures and keeps tracks of donations to legislators and candidates in the US states.",
"Our crawler consumes the FollowTheMoney API to collect the information of donors for each legislator and cosponsors of our bills.",
"This includes multiple textual attributes and information for each contributor: type that could be individual or nonindividual, general party, and economic and business information.",
"While the contributor data can be used in more sophisticated ways, in this work, we focused on major contributors by setting a donation threshold ($10000) and removing those who contributed to a single legislator; We also separated between ideological contributors and pragmatic ones (donating to both parties) by inferring negative (lack of) donation relations (see Section 3); We set the fraction of negative donations to 30% of the positive ones extracted from the real data.",
"Table 2 shows the final per-state statistics of contributors.",
"We build different graph and textual models on top of PyTorch, DGL (Deep Graph Library), and spaCy.",
"In our joint text-graph model (Figure 4) and other baselines, the initial embedding dimension of both BERT (bert-large-uncased) and the first-layer RGCN are set to 1024.",
"The FFNN (fully connected layer) and the second-layer RGCN take the initial text and graph embeddings to a 256-dimensional space.",
"We have also experimented with different settings, which while resulting in lower overall performance, retained the same trend when comparing the other models.",
"We used Adam to optimize our model and for each observed relation (Table 2), we sampled a negative example.",
"Data splits .",
"Our focus is on the bill cleavage and survival and thus we split legislative graphs based on bill nodes.",
"To evaluate different scenarios, we have three configurations: (1) random where we select 20% of the bills for testing and keep the rest for training and validation.",
"(2) time-based where 20% of most recent bills are considered for testing; and (3) state-based: where the test bills come from one specific state and train bills from the other states.",
"The test bills and corresponding legislators appear in the test graph, and the difference of the original and test graphs is used for training.",
"Note that vote relations of sponsoring legislators and a bill are known, and appear in training.",
"To demonstrate the benefits of our joint text-graph embedding, we implement a series of text and graph embedding architectures as the baseline.",
"Category 1: text embedding models : We realize our bill encoder (Figure 4b) using three text embedding models and then train a logistic regression classifier to directly predict if a bill text shows a certain cleavage or passes/fails:",
"(a) BoW , where unigram and bigram features (top 10K highest scoring ones using scikit-learn (Pedregosa et al., 2011)) used to represent bill texts.",
"(b) GloVe (Penning-ton et al., 2014) that is a popular word embedding model using the square loss; We used the GloVe-840B-300D pre-trained word vectors in our experiments.",
"(c) BERT (Devlin et al., 2018) that is a transformer based architecture capable of capturing contextualized embedding.",
"Category 2: featureless graph embedding models : We build a edge classifier over edge embeddings generated by models that assume nodes in the legislative graph are homogeneous and featureless, and then aggregate the roll call results:",
"(a) DeepWalk (Perozzi et al., 2014) is an embedding model that generates node vectors by running Skip-Gram on random walks formed at different nodes in the graph.",
"(b) GCN (Kipf and Welling, 2016) is the basic two-layer GCN model that uses a single weight matrix in each layer and begins with the random node features in the first layer.",
"(c) RGCN (Schlichtkrull et al., 2018) is the relational version of the GCN that captures different relations in our legislative graph.",
"Category 3: text-attributed (TA) graph embedding models : We use the same edge classifier Table 4: Macro-F1 in bill survival and cleavage prediction for the random split and known sponsors' relations.",
"but use the graph models that can consume the text-based node features generated by our BERT-based node encoders:",
"(a) TA-DeepWalk (Yang et al., 2015) that changes the graph factorization in DeepWalk to support node features.",
"(b) TAGCN (Kipf and Welling, 2016) is the original GCN that takes as input an initial node features.",
"(c) TA-RGCN (Schlichtkrull et al., 2018) is a relational GCN that captures node features initialized by our text-based node encoders.",
"Category 4: naive baselines .",
"We evaluate two other naive classifiers:",
"(a) Majority : A baseline predicting the most frequent class in the training data:",
"(b) Sponsor : A logistic regression classifier that directly predicts bill survival and cleavages based on the one-hot encoded sponsors' info.",
"encoded.",
"Performance of different textual and graph models .",
"Table 4 shows macro F1 for different bill cleavages and pass/fail.",
"We first analyze the performance of different models in each category: (1) Among the naive models, the sponsor-based classifier improves the bill survival prediction compared to the majority model but has no positive impact on bill cleavages as expected intuitively.",
"(2) In the textual models, we observe BERT improves the F1 performance by 2%-8% compared to GloVe and BoW.",
"By leveraging a bidirectional operation, BERT more efficiently captures the context of each word in the bill title, summary, and body.",
"(3) In the featureless graph models, RGCN consistently outperforms the standard GCN and DeepWalk models as it treats each of the relations in the legislative graph (e.g., donation and voting) differently and does not mix their weight matrices with each other.",
"This benefit of RGCN is entirely enabled by our new dataset that explicitly tracks different legislative relations; (4) Unlike the second category, the text attributed graph models capture implicit relations between different nodes in the graph through their text features.",
"By leveraging our node encoders, they begin with better initial representations of the nodes and relations (e.g., particularly votes) and thus provide an improvement by up to 15% in the performance compared to their featureless counterparts.",
"(5) Finally, our proposed model by combining and jointly optimizing the graph and textual representations consistently provides a higher F1 score.",
"Compared to the other models, it improves recall while maintaining high precision, e.g., in the case of the bill survival prediction, the macro precision and recall values for BERT, TA-RGCN, and our model are (0.72, 0.67), (0.92, 0.66), (0.82, 0.84), respectively.",
"Language and implications of different cleavages .",
"We can make a few observations: it is slightly more challenging to identify inverse-competitive bills compared to competitive ones.",
"This happens across different graph and text models, and thus indicating the language of these bills and the dynamics of relations behind them is rather complex.",
"To help provide an intuition, we summarized in Table 6 the top bigrams and unigrams used in competitive and inverse-competitive bills across the different cleavages.",
"Interestingly, the top n-grams of competitive bills align better with the cleavages (e.g., abortion is competitive both based on ideology and gender) compared to the top inverse-competitive n-grams, which often focus on mundane issues such as taxes and services, suggesting that when non-polarizing legislation is discussed, group agreement takes a secondary role.",
"From another angle, Figure 5 further illustrates the differences between these two categories of cleavages.",
"Overall, there are 10%-20% more com-Table 5: Macro F1 for bill survival and party cleavages for the best model in each category based on the state-and time-based data splits.",
"petitive bills compared to inverse-competitive ones under the party and ideology attributes, indicating cross-group disagreements (e.g., conservatives VS. moderates VS. liberals) are more likely than intragroup disagreement.",
"This pattern is reversed for the gender and geography attributes.",
"Implication of stateand time-based data splits .",
"For the pass/fail and party cleavages with the best model in each category, Table 5 shows a sharp drop in the F1 score for the state-based and time-based data split, particularly for graph-based models (RGCN and TA-RGCN).",
"By training the model with the two states and testing it with another one, the graph-based embedding models are challenged with representing many unseen legislators.",
"While GCN-based solutions are capable of creating such representations in the test time (us-ing the same weight matrix), they are sub-optimal particularly in featureless GCN settings.",
"One interesting observation is that when the model is tested with the OR data, the drop is even sharper as OR tends to be a democratic state; While WI and IN are often republican states.",
"For the time-based data split, we observe a similar but slightly better performance as the number of unseen nodes are fewer.",
"In all these different configurations, our joint model still improves the F1 score but it is limited on how the underlying graph model behaves.",
"In this paper, we take the first step towards understanding the dynamics of state-level legislative processes in the US through a data-driven approach.",
"We proposed to collapse the legislative process into a heterogeneous multi-relational graph and suggest several tasks for capturing disagreement over several ideological and demographic cleavages, as well as predicting the outcome of the legislative process.",
"We approach these problems by formulating them as aggregate roll-call prediction.",
"To fully realize the potential of graph-based modeling, we created a new dataset, used to characterize the real-world context in which the legislative process takes place, consisting of bills, donors, and legislators and their behavior.",
"We model the rich relationship between these entities and the content of the bills using a joint text and graph prediction model on top of BERT and RGCN, outperforming each one of the models in isolation."
]
| [
"abstain",
"abstain",
"objective",
"objective",
"objective",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"method",
"method",
"abstain",
"objective",
"result",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain"
]
|
[
"Neural-based end-to-end approaches to natural language generation (NLG) from structured data or knowledge are data-hungry, making their adoption for real-world applications difficult with limited data.",
"In this work, we propose the new task of few-shot natural language generation .",
"Motivated by how humans tend to summarize tabular data, we propose a simple yet effective approach and show that it not only demonstrates strong performance but also provides good generalization across domains.",
"The design of the model architecture is based on two aspects: content selection from input data and language modeling to compose coherent sentences, which can be acquired from prior knowledge.",
"With just 200 training examples, across multiple domains, we show that our approach achieves very reasonable performances and outperforms the strongest baseline by an average of over 8.0 BLEU points improvement.",
"Our code and data can be found at https: //github.com/czyssrs/Few-Shot-NLG 1 Introduction Natural language generation (NLG) from structured data or knowledge (Gatt and Krahmer, 2018) is an important research problem for various NLP applications.",
"Some examples are task-oriented dialog, question answering (He et al., 2017; Ghazvininejad et al., 2018; Su et al., 2016; Saha et al., 2018; Yin et al., 2016) and interdisciplinary applications such as medicine (Hasan and Farri, 2019; Cawsey et al., 1997) and healthcare (Hasan and Farri, 2019; DiMarco et al., 2007).",
"There is great potential to use automatic NLG systems in a wide range of real-life applications.",
"Recently, deep neural network based NLG systems have been developed, such as those seen in the E2E challenge (Novikova et al., 2017), WEATHERGOV (Liang et al., 2009), as well as more complex ones such as WIKIBIO (Liu et al., 2018) and ROTOWIRE (Wiseman et al., 2017).",
"Compared to traditional slot-filling pipeline approaches, such neural-based systems greatly reduce feature engineering efforts and improve text diversity as well as fluency.",
"Although they achieve good performance on benchmarks such as E2E challenge (Novikova et al., 2017) and WIKIBIO (Lebret et al., 2016), their performance depends on large training datasets, e.g., 500k table-text training pairs for WIKIBIO (Lebret et al., 2016) in a single domain.",
"Such data-hungry nature makes neural-based NLG systems difficult to be widely adopted in real-world applications as they have significant manual data curation overhead.",
"This leads us to formulate an interesting research question:",
"1. Can we significantly reduce human annotation effort to achieve reasonable performance using neural NLG models?",
"2. Can we make the best of generative pre-training, as prior knowledge, to generate text from structured data?",
"Motivated by this, we propose the new task of few-shot natural language generation : given only a handful of labeled instances (e.g., 50 200 training instances), the system is required to produce satisfactory text outputs (e.g., BLEU 20).",
"To the best of our knowledge, such a problem in NLG community still remains under-explored.",
"Herein, we propose a simple yet very effective approach that can generalize across different domains.",
"In general, to describe information in a table, we need two skills to compose coherent and faithful sentences.",
"One skill is to select and copy factual content from the table this can be learned quickly by reading a handful of tables.",
"The other is to compose grammatically correct sentences that bring those facts together this skill is not re-Input Table Attribute (R) Value (V) Name Walter Extra Nationality German Occupation Aircraft designer and manufacturer ... ...",
"Figure 1 : Overview of our approach: Under the base framework with switch policy, the pre-trained language model serves the generator.",
"We follow the same encoder as in (Liu et al., 2018).",
"The architecture is simple in terms of both implementation and parameter space that needs to be learned from scratch, which should not be large given the few-shot learning setting.",
"stricted to any domain.",
"One can think of a latent switch that helps us alternate between these two skills to produce factually correct and coherent sentences.",
"To do this, we use the pre-trained language model (Chelba et al., 2013; Radford et al., 2019) as the innate language skill, which provides strong prior knowledge on how to compose fluent and coherent sentences.",
"The ability to switch and select/copy from tables can be learned successfully using only a few training instances, freeing the neural NLG model from data-intensive training.",
"Previous best performing methods based on large training data, such as (Liu et al., 2018), which does not apply such switch mechanism but trains a strong domain-specific language model, perform very poorly under few-shot setting.",
"Since we are operating under a highly data-restricted few-shot regime, we strive for simplicity of model architecture.",
"This simplicity also implies better generalizability and reproducibility for real-world applications.",
"We crawl multi-domain table-to-text data from Wikipedia as our training/test instances.",
"With just 200 training instances, our method can achieve very reasonable performance.",
"In a nutshell, our contributions are summarized as the following: We propose the new research problem of few-shot NLG, which has great potential to benefit a wide range of real-world applications.",
"To study different algorithms for our proposed problem, we create a multi-domain table-to-text dataset.",
"Our proposed algorithm can make use of the external resources as prior knowledge to significantly decrease human annotation effort and improve the baseline performance by an average of over 8.0 BLEU on various domains.",
"As it is a core objective in many NLP applications, natural language generation from structured data/-knowledge (NLG) has been studied for many years.",
"Early traditional NLG systems follow the pipeline paradigm that explicitly divides generation into content selection, macro/micro planning and surface realization (Reiter and Dale, 1997).",
"Such a pipeline paradigm largely relies on templates and hand-engineered features.",
"Many works have been proposed to tackle the individual modules, such as (Liang et al., 2009; Walker et al., 2001; Lu et al., 2009).",
"Later works (Konstas and Lapata, 2012, 2013) investigated modeling context selection and surface realization in an unified framework.",
"Most recently, with the success of deep neural networks, data-driven, neural based approaches have been used, including the end-to-end methods that jointly model context selection and surface realization (Liu et al., 2018; Wiseman et al., 2018; Puduppully et al., 2018).",
"Such data-driven approaches achieve good performance on several benchmarks like E2E challenge (Novikova et al., 2017), WebNLG challenge (Gardent et al., 2017) and WIKIBIO (Lebret et al., 2016).",
"However, they rely on massive amount of training data.",
"ElSahar et al. (2018) propose zero-shot learning for question generation from knowledge graphs, but their work applies on the transfer learning setting for unseen knowledge base types, based on seen ones and their textual contexts, which still requires large in-domain training dataset.",
"This is different from our few-shot learning setting.",
"Ma et al. (2019) propose low-resource table-to-text generation with 1,000 paired examples and large-scale target-side examples.",
"In contrast, in our setting, only tens to hundreds of paired training examples are required, meanwhile without the need for any target examples.",
"This is especially important for real-world use cases where such large target-side gold references are mostly hard to obtain.",
"Therefore, our task is more challenging and closer to real-world settings.",
"Many of the current best-performing methods for various NLP tasks adopt a combination of pretraining followed by supervised fine-tuning, using task-specific data.",
"Different levels of pre-training include word embeddings (Mikolov et al., 2013; Pennington et al., 2014; Peters et al., 2018), sentence embeddings (Le and Mikolov, 2014; Kiros et al., 2015), and most recently, language modeling based pre-training like BERT (Devlin et al., 2018) and GPT-2 (Radford et al., 2019).",
"Such models are pre-trained on large-scale open-domain corpora, and provide down-streaming tasks with rich prior knowledge while boosting their performance.",
"In this paper, we adopt the idea of employing a pre-trained language model to endow in-domain NLG models with language modeling ability, which cannot be well learned from few shot training instances.",
"We are provided with semi-structured data: a table of attribute-value pairs { R i : V i } ni =1 .",
"Both R i and V i can be either a string/number, a phrase or a sentence.",
"Each value is represented as a sequence of words V i = { v j } mj =1 .",
"For each word v j , we have its corresponding attribute name R i and position information of the word in the value sequence.",
"The target is to generate a natural language description based on the semi-structured data, provided with only a handful of training instances.",
"We start with the field-gated dual attention model proposed in (Liu et al., 2018), which achieves state-of-the-art performance (BLEU) on WIKIBIO dataset.",
"Their method uses an LSTM decoder with dual attention weights.",
"We first apply a switch policy that decouples the framework into table content selection/copying and language model based generation.",
"Inspired by the pointer generator (See et al., 2017), at each time step, we maintain a soft switch p copy to choose between generating from softmax over vocabulary or copying from input table values with the attention weights as the probability distribution.",
"Where c t = (cid:80) i a it h i , { h i } is the encoder hidden states, x t , s t , a t is the decoder input, state and attention weights respectively at time step t",
"W c , W s , W x and b are trainable parameters.",
"The pointer generator learns to alternate between copying and generating based on large training data and shows its advantage of copying out-of-vocabulary words from input.",
"In our task, the training data is very limited, and many of the table values are not OOV.",
"We need to explicitly teach the model where to copy and where to generate.",
"Therefore, to provide the model accurate guidance of the behavior of the switch, we match the target text with input table values to get the positions of where to copy.",
"At these positions, we maximize the copy probability p copy via an additional loss term.",
"Our loss function: L = L c + (cid:88) w j m m { V i } (1 p jcopy ) Where L c is the original loss between model outputs and target texts.",
"w j is the target token at position j , { V i } is the input table value list defined in Section 3.1, and m means a matched phrase.",
"is hyperparameter as the weight for this copy loss term.",
"We also concatenate the decoder input with its matched attribute name and position information in the input table as x t to calculate p copy .",
"We use a pre-trained language model as the generator, serving as the innate language skill.",
"Due to the vocabulary limitation of few training instances, we leave the pre-trained word embedding fixed while fine-tuning other parameters of the pre-trained language model, so that it can generalize with tokens unseen during training.",
"Figure 1 shows our model architecture.",
"We use the pre-trained language model GPT-2 1 proposed in (Radford et al., 2019), which is a 12-layer transformer.",
"The final hidden state of the transformer is used to calculate attention weights and the copy 1 https://github.com/openai/gpt-2 Domain Humans Books Songs # of training instances 50 100 200 500 50 100 200 500 50 100 200 500 Template 16.3 --25.6 --30.1 --Base-original -2.2 3.7 4.9 5.1 -5.8 6.1 7.4 6.7 -9.2 10.7 11.1 11.3 Base -2.9 5.1 6.1 8.3 -7.3 6.8 7.8 8.8 -10.4 12.0 11.6 13.1 Base + switch -15.6 17.8 21.3 26.2 -24.7 26.9 30.5 33.2 -29.7 30.6 32.5 34.9 Base + switch + LM-scratch -6.6 11.5 15.3 18.6 -7.1 9.2 14.9 21.8 -11.6 16.2 20.6 23.7 Base + switch + LM (Ours) -25.7 29.5 36.1 41.7 -34.3 36.2 37.9 40.3 -36.1 37.2 39.4 42.2 Table 1 : BLEU-4 results on three domains.",
"Base-original: the original method in (Liu et al., 2018); Base: applies pre-trained word embedding; Base+switch: adds the switch policy; Base+switch+LM-scratch: makes the same architecture as our method, but trains the model from scratch without pre-trained weights for the generator.",
"Template: manually crafted templates switch p copy .",
"We first feed the embedded attribute-value list serving as the context for generation.",
"In this architecture, the generator is fine-tuned from pre-trained parameters while the encoder and attention part is learned from scratch, the initial geometry of the two sides are different.",
"Therefore we need to apply larger weight to the copy loss p copy , to give the model a stronger signal to teach it to copy facts from the input table.",
"The original WIKIBIO dataset (Lebret et al., 2016) contains 700k English Wikipedia articles of wellknown humans, with the Wiki infobox serving as input structured data and the first sentence of the article serving as target text.",
"To demonstrate generalizability, we collect datasets from two new domains: Books and Songs by crawling Wikipedia pages.",
"After filtering and cleanup, we end up with 23,651 instances for Books domain and 39,450 instances for Songs domain 2 .",
"Together with the Humans domain of the original WIKIBIO dataset, for all three domains we conduct experiments by varying the training dataset size to 50, 100, 200 and 500.",
"The rest of data is used for validation (1,000) and testing.",
"The weight of the copy loss term is set to 0.7.",
"Other parameter settings can be found in Appendix A. To deal with vocabulary limitation of few-shot training, for all models we adopt the Byte Pair Encoding (BPE) (Sennrich et al., 2016) and subword vocabulary in (Radford et al., 2019).",
"We compare the proposed method with other approaches investigated in Section 3, serving as the baselines Base-original: the original model 2 Note that the target text sometimes contains information not in the infobox.",
"This is out of the scope of the fewshot generation in this work.",
"Therefore we further filter the datasets and remove the ones with rare words out of infobox.",
"Check (Dhingra et al., 2019) for a related study of this issue on the WikiBio dataset in (Liu et al., 2018); Base: uses the same architecture, but in addition applies the pre-trained word embedding and fix it during training; Base + switch: adds the switch policy; Base + switch + LM-scratch: makes the architecture same as our method, except training the model from scratch instead of using pre-trained weights for generator.",
"Template: template-based non-neural approach, manually crafted for each domain.",
"Following previous work (Liu et al., 2018), we first conduct automatic evaluations using BLEU-4, shown in Table",
"1. The ROUGE-4 (F-measure) results follow the same trend with BLEU-4 results, which we show in Appendix B. As we can see, the original model Base-original (Liu et al., 2018), which obtains the state-of-the-art result on WIKIBIO full set, performs very poorly under few-shot setting.",
"It generates all tokens from softmax over vocabulary, which results in severe overfitting with limited training data, and the results are far behind the template-based baseline.",
"With the switch policy, Base+switch first brings an improvement of an average of over 10.0 BLEU points.",
"This indicates that the content selection ability is easier to be learned with a handful of training instances.",
"However, it forms very limited, not fluent sentences.",
"With the augmentation of the pre-trained language model, our model Base+switch+LM brings one more significant improvement of an average over 8.0 BLEU points.",
"We provide sample outputs of these methods using 200 training instances in Table",
"2. Table 3 shows the effect of the copy switch loss p copy introduced in Section 3.2, giving the model a stronger signal to learn to copy from input table.",
"Ma et al. (2019) propose the Pivot model, for low-resource NLG with 1,000 paired examples and large-scale target-side examples.",
"We compare our Attribute Value Attribute Value name andri ibo fullname andri ibo birth date 3 april 1990 birth place sentani , jayapura , indonesia height 173 cm currentclub persipura jayapura position defender ...",
"Table 3 : Ablation study: Effect of the copy loss term on Humans domain, measured by BLEU-4.",
"The loss term brings an average improvement of over 4.0 BLEU points.",
"method with the Pivot model in table",
"4. Note that here we train and evaluate the models on the original WikiBio dataset used in their work, in order to maintain the size of the target side examples for their settings.",
"Table 4 : Comparison with the Pivot model (Ma et al., 2019).",
"Compared to their method using additional large-scale target side examples, our method requires no additional target side data, while achieving better performance.",
"We also conduct human evaluation studies using Amazon Mechanical Turk, based on two aspects: Factual correctness and Language naturalness .",
"We evaluate 500 samples.",
"Each evaluation unit is assigned to 3 workers to eliminate human variance.",
"The first study attempts to evaluate how well the generated text correctly conveys information in the table, by counting the number of facts in the text supported by the table, and contradicting with or missing from the table.",
"The 2nd and 3rd columns of Table 5 show the average number of supporting and contradicting facts for our method, comparing to the strongest baseline and the gold reference.",
"The second study evaluates whether the generated text is grammatically correct and fluent, regardless of factual correctness.",
"We conduct pairwise comparison among all methods, and calculate the average times each method is chosen to be better than another, shown in the 4th column of Table",
"5. Our method brings a significant improvement over the strongest baseline ( p < 0 . 01 in Tukey's HSD test for all measures).",
"The copy loss term further alleviates producing incorrect facts.",
"The language naturalness result of our method without the copy loss is slightly better, because this evaluation does not consider factual correctness; thus the generated texts with more wrong facts can still get high score.",
"See Appendix C for more details of our evaluation procedure.",
"Table 5 : Human evaluation results: Average number of supporting facts (column 2, the larger the better), contradicting facts (column 3, the smaller the better), and language naturalness score (column 4, the larger the better).",
"In this paper, we propose the new research problem of few-shot natural language generation.",
"Our approach is simple, easy to implement, while achieving strong performance on various domains.",
"Our basic idea of acquiring language modeling prior can be potentially extended to a broader scope of generation tasks, based on various input structured data, such as knowledge graphs, SQL queries, etc.",
"The deduction of manual data curation efforts for such tasks is of great potential and importance for many real-world applications.",
"We thank the anonymous reviewers for their thoughtful comments.",
"We thank Shuming Ma for releasing the processed data and code for the Pivot model.",
"This research was supported by the Intel AI Faculty Research Grant.",
"The authors are solely responsible for the contents of the paper and the opinions expressed in this publication do not reflect those of the funding agencies."
]
| [
"abstain",
"objective",
"objective",
"abstain",
"result",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"method",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"objective",
"other",
"abstain",
"other",
"other",
"other",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"other",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"objective",
"abstain",
"other",
"other",
"other",
"other"
]
|
[
"By describing the features and abstractions of our world, language is a crucial tool for human learning and a promising source of supervision for machine learning models.",
"We use language to improve few-shot visual classification in the underexplored scenario where natural language task descriptions are available during training, but unavailable for novel tasks at test time.",
"Existing models for this setting sample new descriptions at test time and use those to classify images.",
"Instead, we propose language-shaped learning (LSL), an end-to-end model that regularizes visual representations to predict language.",
"LSL is conceptually simpler, more data efficient, and outperforms baselines in two challenging few-shot domains.",
"Humans are powerful and efficient learners partially due to the ability to learn from language (Chopra et al., 2019; Tomasello, 1999).",
"For instance, we can learn about robins not by seeing thousands of examples, but by being told that a robin is a bird with a red belly and brown feathers .",
"This language further shapes the way we view the world, constraining our hypotheses for new concepts: given a new bird (e.g. seagulls ), even without language we know that features like belly and feather color are relevant (Goodman, 1955).",
"In this paper, we guide visual representation learning with language, studying the setting where no language is available at test time , since rich linguistic supervision is often unavailable for new concepts encountered in the wild.",
"How can one best use language in this setting?",
"One option is to just regularize, training representations to predict language descriptions.",
"Another is to exploit the compositional nature of language directly by using it as a bottleneck in a discrete latent variable model.",
"For example, the recent Learning with Latent Language (L3; Andreas et al., 2018) model does both: during training, language is used to classify images; at test time, with no language, descriptions are sampled from a decoder conditioned on the language-shaped image embeddings.",
"Whether the bottleneck or regularization most benefits models like L3 is unclear.",
"We disentangle these effects and propose language-shaped learning (LSL), an end-to-end model that uses visual representations shaped by language (Figure 1), thus avoiding the bottleneck.",
"We find that discrete bottlenecks can hurt performance, especially with limited language data; in contrast, LSL is architecturally simpler, faster, uses language more efficiently, and outperforms L3 and baselines across two few-shot transfer tasks.",
"Language has been shown to assist visual classification in various settings, including traditional visual classification with no transfer (He and Peng, 2017) and with language available at test time in the form of class labels or descriptions for zero-(Frome et al., 2013; Socher et al., 2013) or few-shot (Xing et al., 2019) learning.",
"Unlike past work, we have no language at test time and test tasks differ from training tasks, so language from training cannot be used as additional class information (cf. He and Peng, 2017) or weak supervision for labeling additional in-domain data (cf. Hancock et al., 2018).",
"Our setting can be viewed as an instance of learning using privileged information (LUPI; Vapnik and Vashist, 2009), where richer supervision augments a model only during training.",
"In this framework, learning with attributes and other domain-specific rationales has been tackled extensively (Zaidan et al., 2007; Donahue and Grau-man, 2011; Tokmakov et al., 2019); language less so.",
"Gordo and Larlus (2017) use METEOR scores between captions as a similarity measure for specializing embeddings for image retrieval, but do not directly ground language explanations.",
"Srivastava et al. (2017) explore a supervision setting similar to ours, except in simple text and symbolic domains where descriptions can be easily converted to executable logical forms via semantic parsing.",
"Another line of work studies the generation of natural language explanations for interpretability across language (e.g. entailment; Camburu et al., 2018) and vision (Hendricks et al., 2016, 2018) tasks, but here we examine whether predicting language can actually improve task performance; similar ideas have been explored in text (Rajani et al., 2019) and reinforcement learning (Bahdanau et al., 2019; Goyal et al., 2019) domains.",
"We are interested in settings where language explanations can help learn representations that generalize more efficiently across tasks, especially when training data for each task is scarce and there are many spurious hypotheses consistent with the input.",
"Thus, we study the few-shot (meta-)learning setting, where a model must learn from a set of train tasks, each with limited data, and then generalize to unseen tasks in the same domain.",
"with K examples each: S ( t ) n = { x ( t ) n, 1 , . . . , x ( t ) n,K } .",
"Each task has M query examples Q ( t ) = { ( x ( t ) 1 , y ( t ) 1 ) , . . . , ( x ( t ) M , y ( t ) M ) } .",
"Given the m -th query example x ( t ) m as input, the goal is to predict its class y ( t ) m { 1 , . . . , N } .",
"After learning from a set of tasks T train , a model is evaluated on unseen tasks T test .",
"While the language approach we propose is applicable to nearly any meta-learning framework, we use prototype networks (Snell et al., 2017), which have a simple but powerful inductive bias for few-shot learning.",
"Prototype networks learn an embedding function f for examples; the embeddings of the support examples of a class n are averaged to form a class prototype (omitting task ( t ) for clarity): c n = 1 KK (cid:88) k =1 f ( x n,k ) .",
"Given a query example ( x m , y m ) , we predict class n with probability proportional to some similarity function s between c n and f ( x m ) :",
"Now assume that during training we have for each class S n a set of J n associated natural language descriptions W n = { w 1 , . . . , w J n } .",
"Each w j should explain the relevant features of S n and need not be associated with individual examples.",
"1 In Figure 1, we have one description w 1 = ( A , red , . . . , square ) .",
"Our approach is simple: we encourage f to learn prototypes that can also decode the class language descriptions.",
"Let c n be the prototype formed by averaging the support and query examples of class n .",
"Then define a language model g (e.g., a recurrent neural network), which conditioned on 1 If we have language associated with individual examples, we can regularize at the instance-level, essentially learning an image captioner.",
"We did not observe major gains with instance-level supervision (vs class-level) in the tasks explored here, in which case class-level language is preferable, since it is much easier to obtain.",
"There are likely tasks where instance-level supervision is superior, which we leave for future work.",
"c n provides a probability distribution over descriptions g ( w j | c n ) with a corresponding natural language loss : LNL ( , ) = N (cid:88) n =1 J n (cid:88) j =1 log g ( w j | c n ) , (4) i.e. the total negative log-likelihood of the class descriptions across all classes in the task.",
"Since LNL depends on parameters through the prototype c n , this objective should encourage our model to better represent the features expressed in language.",
"Now we jointly minimize both losses: arg min , [ LCLS ( ) + NLLNL ( , )] , (5) where the hyperparameter NL controls the weight of the natural language loss.",
"At test time, we simply discard g and use f to classify.",
"We call our approach language-shaped learning (LSL; Figure 1).",
"L3 (Andreas et al., 2018) has the same basic components of LSL, but instead defines the concepts c n to be embeddings of the language descriptions themselves, generated by an additional recurrent neural network (RNN) encoder h : c n = h ( w n ) .",
"During training, the ground-truth description is used for classification, while g is trained to produce the description; at test time, L3 samples candidate descriptions w n from g , keeping the description most similar to the images in the support set according to the similarity function s (Figure 1).",
"Compared to L3, LSL is simpler since it (1) does not require the additional embedding module h and (2) does not need the test-time language sampling procedure.",
"2 This also makes LSL much faster to run than L3 in practice: without the language machinery, LSL is up to 50x faster during inference in our experiments.",
"Here we describe our two tasks and models.",
"For each task, we evaluate LSL, L3, and a prototype network baseline trained without language (Meta; Figure 1).",
"For full details, see Appendix A. 2 LSL is similar to the Meta+Joint model of Andreas et al. (2018), which did not improve over baseline.",
"However, they used separate encoders for the support and query examples, with only the support encoder trained to predict language, resulting in overfitting of the query encoder.",
"ShapeWorld.",
"First, we use the ShapeWorld (Kuhnle and Copestake, 2017) dataset used by Andreas et al. (2018), which consists of 9000 training, 1000 validation, and 4000 test tasks (Figure 2).",
"3 Each task contains a single support set of K = 4 images representing a visual concept with an associated (artificial) English language description, generated with a minimal recursion semantics representation of the concept (Copestake et al., 2016).",
"Each concept is a spatial relation between two objects, each object optionally qualified by color and/or shape, with 2-3 distractor shapes present.",
"The task is to predict whether a query image x belongs to the concept.",
"For ease of comparison, we report results with models identical to Andreas et al. (2018), where f is the final convolutional layer of a fixed ImageNet-pretrained VGG-16 (Simonyan and Zisserman, 2015) fed through two fully-connected layers: f ( x ) = FC(ReLU(FC(VGG-16( x )))) .",
"However, because fixed ImageNet representations may not be the most appropriate choice for artificial data, we also run experiments with convolutional networks trained from scratch: either the 4-layer convolutional backbone used in much of the few-shot literature (Chen et al., 2019), as used in the Birds experiments we describe next, or a deeper ResNet-18 (He et al., 2016).",
"This is a special binary case of the few-shot learning framework, with a single positive support class S and prototype c .",
"Thus, we define the similarity function to be the sigmoid function s ( a, b ) = ( a b ) and the positive prediction P ( y = 1 | x ) = s ( f ( x ) , c ) .",
"g is a 512-dimensional gated recurrent unit (GRU) RNN (Cho et al., 2014) trained with teacher forcing.",
"Through a grid search on the validation set, we set NL = 20 .",
"Birds.",
"To see if LSL can scale to more realistic scenarios, we use the Caltech-UCSD Birds dataset (Wah et al., 2011), which contains 200 bird species, each with 4060 images, split into 100 train, 50 validation, and 50 test classes.",
"During training, tasks are sampled dynamically by selecting N classes from the 100 train classes.",
"K support and 16 query examples are then sampled from each class (sim-ilarly for val and test).",
"For language, we use the descriptions collected by Reed et al. (2016), where 3 This is a larger version with 4x as many test tasks for more stable confidence intervals (see Appendix A).",
"AMT crowdworkers were asked to describe individual images of birds in detail, without reference to the species (Figure 2).",
"While 10 English descriptions per image are available, we assume a more realistic scenario where we have much less language available only at the class level: removing associations between images and their descriptions, we aggregate D descriptions for each class, and for each K -shot training task we sample K descriptions from each class n to use as descriptions W n .",
"This makes learning especially challenging for LSL due to noise from captions that describe features only applicable to individual images.",
"Despite this, we found improvements with as few as D = 20 descriptions per class, which we report as our main results, but also vary D to see how efficiently the models use language.",
"We evaluate on the N = 5 -way, K = 1 -shot setting, and as f use the 4-layer convolutional backbone proposed by Chen et al. (2019).",
"Here we use a learned bilinear similarity function, s ( a, b ) = a (cid:62) W b , where W is learned jointly with the model.",
"g is a 200-dimensional GRU, and with another grid search we set NL = 5 .",
"Results are in Table 1.",
"For ShapeWorld, LSL outperforms the meta-learning baseline (Meta) by 6.7%, and does at least as well as L3; Table 2 shows similar trends when f is trained from scratch.",
"For Birds, LSL has a smaller but still significant 3.3% increase over Meta, while L3 drops below baseline.",
"Furthermore, LSL uses language more efficiently: Figure 3 shows Birds performance as the captions per class D increases from 1 (100 total) to 60 (6000 total).",
"LSL benefits from a remarkably small number of captions, with limited gains past 20; in contrast, L3 requires much more language to 50 52 54 56 58 60 1 5 10 20 30 40 50 60 D descriptions/class B i r d s A cc u r a cy Model LSLL3 Figure 3: Varying the descriptions per class, D , for Birds.",
"In the low-data regime, L3's lower performance is unsurprising, since it must generate language at test time, which is difficult with so little data.",
"Example output from the L3 decoder in Figure 4 highlights this fact: the language looks reasonable in some cases, but in others has factual errors ( dark gray bird; black pointed beak ) and fluency issues.",
"These results suggest that any benefit of L3 is likely due to the regularizing effect that language has on its embedding model f , which has been trained to predict language for test-time inference; in fact, the discrete bottleneck actually hurts in some settings.",
"By using only the regularized visual representations and not relying exclusively on the generated language, LSL is the simpler, more efficient, and overall superior model.",
"To identify which aspects of language are most helpful, in Figure 5 we examine LSL performance under ablated language supervision: (1) keeping only a list of common color words, (2) filtering out color words, (3) shuffling the words in each caption, and (4) shuffling the captions across tasks (see Figure 6 for examples).",
"We find that while the benefits of color/no-color language varies across tasks, neither component provides the benefit of complete language, demonstrating that LSL leverages both colors and other attributes (e.g. size, shape) described in language.",
"Word order is important for Birds but surprisingly unimportant for ShapeWorld, suggesting that even with decoupled colors and shapes, the model can often infer the correct relation from the shapes that consistently appear in the examples.",
"Finally, when captions are shuffled across tasks, LSL for Birds does no worse than Meta, while ShapeWorld suffers, suggesting that language is more important for ShapeWorld than for the fine-grained, attribute-based Birds task.",
"We presented LSL, a few-shot visual recognition model that is regularized with language descriptions during training.",
"LSL outperforms baselines across two tasks and uses language supervision more efficiently than L3.",
"We find that if a model is trained to expose the features and abstractions in language, a linguistic bottleneck on top of these Birds ShapeWorld a cyan pentagon is to the right of a magenta shape cyan magenta a pentagon is to the right of a shape shape right the is a pentagon a of cyan to magenta a green square is below a triangle The bird has a white underbelly, black feathers in the wings, a large wingspan, and a white beak.",
"language-shaped representations is unnecessary, at least for the kinds of visual tasks explored here.",
"The line between language and sufficiently rich attributes and rationales is blurry, and recent work (Tokmakov et al., 2019) suggests that similar performance gains can likely be observed by regularizing with attributes.",
"However, unlike attributes, language is (1) a more natural medium for annotators, (2) does not require preconceived restrictions on the kinds of features relevant to the task, and (3) is abundant in unsupervised forms.",
"This makes shaping representations with language a promising and easily accessible way to improve the generalization of vision models in low-data settings.",
"We thank Pang Wei Koh, Sebastian Schuster, and Dan Iter for helpful discussions and feedback, Mike Wu and Jacob Andreas for discussions and code, and our anonymous reviewers for insightful comments.",
"This work was supported by an NSF Graduate Research Fellowship for JM, a SAIL-Toyota Research Award, and the Office of Naval Research grant ONR MURI N00014-16-1-2007.",
"Toyota Research Institute (TRI) provided funds to assist the authors with their research but this article solely reflects the opinions and conclusions of its authors and not TRI or any other Toyota entity.",
"Code, data, and experiments are available at https: //github.com/jayelm/lsl and on CodaLab at https://bit.ly/lsl_acl20 ."
]
| [
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"other",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other"
]
|
[
"Research on overlapped and discontinuous named entity recognition (NER) has received increasing attention.",
"The majority of previous work focuses on either overlapped or discontinuous entities.",
"In this paper, we propose a novel span-based model that can recognize both overlapped and discontinuous entities jointly.",
"The model includes two major steps.",
"First, entity fragments are recognized by traversing over all possible text spans, thus, overlapped entities can be recognized.",
"Second, we perform relation classification to judge whether a given pair of entity fragments to be overlapping or succession.",
"In this way, we can recognize not only discontinuous entities, and meanwhile doubly check the overlapped entities.",
"As a whole, our model can be regarded as a relation extraction paradigm essentially.",
"Experimental results on multiple benchmark datasets (i.e., CLEF, GENIA and ACE05) show that our model is highly competitive for overlapped and discontinuous NER.",
"Named entity recognition (NER) (Sang and De Meulder, 2003) is one fundamental task for natural language processing (NLP), due to its wide application in information extraction and data mining (Lin et al., 2019b; Cao et al., 2019).",
"Traditionally, NER is presented as a sequence labeling problem and widely solved by conditional random field (CRF) based models (Lafferty et al., 2001).",
"However, this framework is difficult to handle overlapped and discontinuous entities (Lu and Roth, 2015; Muis and Lu, 2016), which we illustrate using two examples as shown in Figure",
"1. The two entities Pennsylvania and Pennsylvania radio station are nested with each other, 1 and the secCorresponding author.",
"ond example shows a discontinuous entity mitral leaflets thickened involving three fragments.",
"There have been several studies to investigate overlapped or discontinuous entities (Finkel and Manning, 2009; Lu and Roth, 2015; Muis and Lu, 2017; Katiyar and Cardie, 2018; Wang and Lu, 2018; Ju et al., 2018; Wang et al., 2018; Fisher and Vlachos, 2019; Luan et al., 2019; Wang and Lu, 2019).",
"The majority of them focus on overlapped NER, with only several exceptions to the best of our knowledge.",
"Muis and Lu (2016) present a hypergraph model that is capable of handling both overlapped and discontinuous entities.",
"Wang and Lu (2019) extend the hypergraph model with long short-term memories (LSTMs) (Hochreiter and Schmidhuber, 1997).",
"Dai et al. (2020) proposed a transition-based neural model for discontinuous NER.",
"By using these models, NER could be conducted universally without any assumption to exclude overlapped or discontinuous entities, which could be more practical in real applications.",
"The hypergraph (Muis and Lu, 2016; Wang and Lu, 2019) and transition-based models (Dai et al., 2020) are flexible to be adapted for different tasks, achieving great successes for overlapped or discontinuous NER.",
"However, these models need to manually define graph nodes, edges and transition actions.",
"Moreover, these models build graphs or generate transitions along the words in the sentences gradually, which may lead to error propagation (Zhang et al., 2016).",
"In contrast, the span-based scheme might be a good alternative, which is much simpler including only span-level classification.",
"Thus, it needs less manual intervention and meanwhile span-level classification can be fully parallelized without error propagation.",
"Recently, Luan et al. (2019) utilized the span-based model for information extraction effectively.",
"tinuous entities simultaneously in an end-to-end way.",
"The model utilizes BERT (Devlin et al., 2019) to produce deep contextualized word representations, and then enumerates all candidate text spans (Luan et al., 2019), classifying whether they are entity fragments.",
"Following, fragment relations are predicted by another classifier to determine whether two specific fragments involve a certain relation.",
"We define two relations for our goal: Overlapping or Succession , which are used for overlapped and discontinuous entities, respectively.",
"In essence, the joint model can be regarded as one kind of relation extraction models, which is adapted for our goal.",
"To enhance our model, we utilize the syntax information as well by using a dependency-guided graph convolutional network (Kipf and Welling, 2017; Zhang et al., 2018; Jie and Lu, 2019; Guo et al., 2019).",
"We evaluate our proposed model on several benchmark datasets which includes both overlapped and discontinuous entities (e.g., CLEF (Suominen et al., 2013)).",
"The results show that our model outperforms the hypergraph (Muis and Lu, 2016; Wang and Lu, 2019) and transition-based models (Dai et al., 2020).",
"Besides, we conduct experiments on two benchmark datasets including only overlapped entities (i.e., GENIA (Kim et al., 2003) and ACE05).",
"Experimental results show that our model can also obtain comparable performances with the state-of-the-art models (Luan et al., 2019; Wadden et al., 2019; Strakova et al., 2019).",
"In addition, we observe that our approaches for model enhancement are effective in the benchmark datasets.",
"Our code is available at https://github.com/foxlf823/sodner .",
"In the NLP domain, NER is usually considered as a sequence labeling problem (Liu et al., 2018; Lin et al., 2019b; Cao et al., 2019).",
"With well-designed features, CRF-based models have achieved the leading performance (Lafferty et al., 2001; Finkel et al., 2005; Liu et al., 2011).",
"Recently, neural network models have been exploited for feature representations (Chen and Manning, 2014; Zhou et al., 2015).",
"Moreover, contextualized word representations such as ELMo (Peters et al., 2018), Flair (Akbik et al., 2018) and BERT (Devlin et al., 2019) have also achieved great success.",
"As for NER, the end-to-end bi-directional LSTM CRF models (Lample et al., 2016; Ma and Hovy, 2016; Yang et al., 2018) is one representative architecture.",
"These models are only capable of recognizing regular named entities.",
"For overlapped NER, the earliest model to our knowledge is proposed by Finkel and Manning (2009), where they convert overlapped NER as a parsing task.",
"Lu and Roth (2015) propose a hypergraph model to recognize overlapped entities and lead to a number of extensions (Muis and Lu, 2017; Katiyar and Cardie, 2018; Wang and Lu, 2018).",
"Moreover, recurrent neural networks (RNNs) are also used for overlapped NER (Ju et al., 2018; Wang et al., 2018).",
"Other approaches include multi-grained detection (Xia et al., 2019), boundary detection (Zheng et al., 2019), anchor-region network (Lin et al., 2019a) and machine reading comprehension (Li et al., 2020).",
"The state-of-the-art models for overlapped NER include the sequence-to-sequence (seq2seq) model (Strakova et al., 2019), where the decoder predicts multiple Input WordRep Graph Convolutional Network \" # $ Span Representation \"% #% $% &'( # &'( $ ) $,$ ) $,# ) $,\" ) #,# ) \",\" Entity Fragment Recognition Fragment Relation Prediction Training Dependency Parsing Syntax Information The mitral thickened Decoding '(-,.,) '1)) # '1)) $ 2 3 mitral leaflets thickened Entity Fragment Relation Graph The mitral value leaflets are mildly thickened The 1 0 0 1 0 0 0 mitral 0 1 0 1 0 0 0 value 0 0 1 1 0 0 0 leaflets 1 1 1 1 0 0 1 are 0 0 0 0 1 0 1 mildly 0 0 0 0 0 1 1 thickened 0 0 0 1 1 1 1 BERT ELMo Word2Vec Bidirectional LSTM Figure 2: The architecture of our model. The input is The [ mitral ] 1 valve [ leaflets ] 1 are mildly [ thickened ] 1 . h 1 denotes the original word representation and h (cid:48) 1 denotes the syntax-enhanced word representation. s 1 , 2 denotes the span representation. and control the loss weights of two tasks, namely recognizing entity fragments from text spans and predicting the relation between each pair of fragments. labels for a word and move to next word until it outputs the end of word label, and the span-based model (Luan et al., 2019; Wadden et al., 2019), where overlapped entities are recognized by classification for enumerated spans. Compared with the number of related work for overlapped NER, there are no related studies for only discontinuous NER, but several related studies for both overlapped and discontinuous NER. Early studies addressed such problem by extending the BIO label scheme (Tang et al., 2013; Metke-Jimenez and Karimi, 2016). Muis and Lu (2016) first proposed a hypergraph-based model for recognizing overlapped and discontinuous entities, and then Wang and Lu (2019) utilized deep neural networks to enhance the model. Very recently, Dai et al. (2020) proposed a transition-based neural model with manually-designed actions for both overlapped and discontinuous NER. In this work, we also aim to design a competitive model for both overlapped and discontinuous NER. Our differences are that our model is span-based (Luan et al., 2019) and it is also enhanced by dependency-guided graph convolutional network (GCN) (Zhang et al., 2018; Guo et al., 2019). To our knowledge, syntax information is commonly neglected in most previous work for overlapped or discontinuous NER, except Finkel and Manning (2009). The work employs a constituency parser to transform a sentence into a nested entity tree, and syntax information is used naturally to facilitate NER. By contrast, syntax information has been utilized in some studies for traditional regular NER. Under the traditional statistical setting, syntax information is used by manually-crafted features (Hacioglu et al., 2005; Ling and Weld, 2012) or auxiliary tasks (Florian et al., 2006) for NER. Recently, Jie et al. (2017) build a semi-CRF model based on dependency information to optimize the research space of NER recognition. Jie and Lu (2019) stack the dependency-guided graph convolutional network (Zhang et al., 2018; Guo et al., 2019) on top of the BiLSTM layer. These studies have demonstrated that syntax information could be an effective feature source for NER. 3 Method The key idea of our model includes two mechanisms. First, our model enumerates all possible text spans in a sentence and then exploits a multi-classification strategy to determine whether one span is an entity fragment as well as the entity type. 
Based on this mechanism, overlapped entities could be recognized. Second, our model performs pairwise relation classifications over all entity fragments to recognize their relationships. We define three kinds of relation types: Succession , indicating that the two entity fragments belong to one single named entity. Overlapping , indicating that the two entity fragments have overlapped parts. Other , indicating that the two entity fragments have other relations or no relations. With the Succession relation, we can recognize discontinuous entities. Through the Overlapping relation, we aim to improve the recognition of overlapped entities with double supervision. The proposed model is essentially a relation extraction model being adapted for our task. The architecture of our model is illustrated in Figure 2, where the main components include the following parts: (1) word representation, (2) graph convolutional network, (3) span representation, and (4) joint decoding, which are introduced by the following subsections, respectively. 3.1 Word Representation We exploit BERT (Devlin et al., 2019) as inputs for our model, which has demonstrated effective for a range of NLP tasks. 2 Given an input sentence x = { x 1 , x 2 , ..., x N } , we convert each word x i into word pieces and then feed them into a pretrained BERT module. After the BERT calculation, each sentential word may involve vectorial representations of several pieces. Here we employ the representation of the beginning word piece as the final word representation following (Wadden et al., 2019). For instance, if fevers is split into fever and ##s, the representation of fever is used as the whole word representation. Therefore, all the words in the sentence x correspond to a matrix H = { h 1 , h 2 , ..., h N } RN d h , where d h denotes the dimension of h i . 3.2 Graph Convolutional Network Dependency syntax information has been demonstrated to be useful for NER previously (Jie and Lu, 2019). In this work, we also exploit it to enhance our proposed model. 3 Graph convolutional network (GCN) (Kipf and Welling, 2017) is one representative method to encode dependency-based graphs, which has been shown effective in information extraction (Zhang et al., 2018). Thus, we choose it as one standard strategy to enhance our word representations. Concretely, we utilize the 2 We also investigate the effects of different word encoders in the experiments. Please refer to Appendix A. 3 Some cases are shown in Appendix B. Self-Attention Self-Attention ! \"#$% &' ( &' )",
"In order to illustrate the network of AGGCN (Figure 3), we start with the standard GCN module.",
"Given the word representations H = { h 1 , h 2 , ..., h N } , the standard GCN uses the following equation to update them: h ( l ) i = ( N (cid:88) j =1 A ij W ( l ) h ( l 1) j + b ( l ) ) , (1) where W ( l ) and b ( l ) are the weight and bias of the l -th layer.",
"A RN N is an adjacency matrix obtained from the dependency graph, where A ij = 1 indicates there is an edge between the word i and j in the dependency graph.",
"Figure 2 offers an example of the matrix which is produced by the corresponding dependency syntax tree.",
"In fact, A can be considered as a form of hard attention in GCN, while AGGCN (Guo et al., 2019) aims to improve the method by using A in the lower layers and updating A at the higher layers via multi-head self-attention (Vaswani et al., 2017) as below: A t = softmax( H t W tQ ( H t W tK ) T d head ) , (2) where W tQ and W tK are used to project the input H t RN d head ( d head = d h N head ) of the t -th head into a query and a key.",
"A t RN N is the updated adjacency matrix for the t -th head.",
"For each head t , AGGCN uses A t and a densely connected layer to update the word representations, which is similar to the standard GCN as shown in Equation",
"1. The output of the densely connected layer is H t RN d h .",
"Then a linear combination layer is used to merge the output of each head, namely H = [ H 1 , , HN head ] W 1 , where W 1 R ( N head d h ) d h is the weight and H RN d h is the final output of AGGCN.",
"After that, H is concatenated with the original word representations H to form final word representations H (cid:48) RN ( d h + d f ) = [ H , HW 2 ] , where W 2 R d h d f indicates a linear transformation for dimensionality reduction.",
"4 3.3 Span Representation We employ span enumeration (Luan et al., 2019) to generate text spans.",
"Take the sentence The mitral valve leaflets are mildly thickened in Figure 2 as an example, the generated text spans will be The, The mitral, The mitral valve, ..., mildly, mildly thickened and thickened.",
"To represent a text span, we use the concatenation of word representations of its startpoint and endpoint.",
"For example, given word representations H = { h 1 , h 2 , ..., h N } RN d h (or H (cid:48) = { h (cid:48) 1 , h (cid:48) 2 , ..., h (cid:48) N } ) and a span ( i, j ) that starts at the position i and ends at j , the span representation will be s i,j = [ h i , h j , w ] or [ h (cid:48) i , h (cid:48) j , w ] , (3) where w is a 20-dimensional embedding to represent the span width following previous work (Luan et al., 2019; Wadden et al., 2019).",
"Thus, the dimension d s of s i,j is 2 d h + 20 (or 2( d h + d f ) + 20 ).",
"Our decoding consists of two parts.",
"First, we recognize all valid entity fragments, and then perform pairwise classifications over the fragments to uncover their relationships.",
"4 We employ third-party tools to perform parsing for the corpora that do not contain gold syntax annotations.",
"Since sometimes parsing may fail, dependency-guided GCN will be noneffective.",
"Concatenation can remedy such problem since H still works even if H is invalid.",
"classify whether the span is an entity fragment and what is the entity type, formalized as:",
"p 1 = softmax(MLP 1 ( s i,j )) , (4) where p 1 indicates the probabilities of entity types such as Organization , Disease and None (i.e., not",
"an entity fragment).",
"Fragment Relation Prediction: Given two entity fragments ( i, j ) and ( i, j ) represented as s i,j and s i, j , we utilize another MLP to classify their relations: p 2 = softmax(MLP 2 ([ s i,j , s i,j s i, j , s i, j ])) , (5) where p 2 indicates the probabilities of three classes, namely Succession , Overlapping and Other , and the feature representations are mostly referred from Luan et al. (2019) and Wadden et al. (2019).",
"Noticeably, although the overlapped entities can be recognized at the first step, here we use the Overlapping as one auxiliary strategy to further enhance the model.",
"During decoding (Algorithm 1), our model recognizes entity fragments from text spans (lines 2-4) in the input sentence and selects each pair of these fragments to determine their relations (lines 5-7).",
"Therefore, the prediction results can be considered as an entity fragment relation graph (line 8), where a node denotes an entity fragment and an edge denotes the relation between two entity fragments.",
"5 The decoding object is to find all the subgraphs in which each node connects with any other node (line 9).",
"Thus, each of such subgraph composes an entity (line 10).",
"In particular, the entity fragment that has no edge with others composes an entity by itself.",
"5 We only use the Succession relations during decoding while ignore the Overlapping relations.",
"The Overlapping relations are only used during training.",
"During training, we employ multi-task learning (Caruana, 1997; Liu et al., 2017) to jointly train different parts of our model.",
"6 The loss function is defined as the negative log-likelihood of the two classification tasks, namely Entity Fragment Recognition and Fragment Relation Prediction : L = (cid:88) log p 1 ( y ent ) + log p 2 ( y rel ) , (6) where y ent and y rel denote the corresponding gold-standard labels for text spans and span pairs, and are the weights to control the task importance.",
"During training, we use the BertAdam algorithm (Devlin et al., 2019) with the learning rate 5 10 5 to finetune BERT and 1 10 3 to finetune other parts of our model.",
"The training process would terminate if the performance does not in-crease by 15 epochs.",
"Datasets: To evaluate our model for simultaneously recognizing overlapped and discontinuous entities, we follow prior work (Muis and Lu, 2016; Wang and Lu, 2019; Dai et al., 2020) and employ the data, called CLEF , from the ShARe/CLEF eHealth Evaluation Lab 2013 (Suominen et al., 2013), which consists of 199 and 99 clinical notes for training and testing.",
"Note that Dai et al. (2020) used the full CLEF dataset in their experiments (179 for training, 20 for development and 99 for testing), while Muis and Lu (2016) and Wang and Lu (2019) used a subset of the union of the CLEF dataset and SemEval 2014 Task 7 (Pradhan et al., 6 Please refer to Appendix C for the effect of multi-task learning. 2014).",
"Concretely, they used the training set and test set of the ShARe/CLEF eHealth Evaluation Lab 2013 as the training and development set, and they also used the development set of the SemEval 2014 Task 7 as the test set.",
"In addition, they selected only the sentences that contain at least one discontinuous entity.",
"Finally, the training, development and test sets contain 534, 303 and 430 sentences, respectively.",
"We call this dataset as CLEF-Dis in this paper.",
"Moreover, we also follow Dai et al. (2020) to evaluate models using the CADEC dataset proposed by Karimi et al. (2015).",
"We follow the setting of Dai et al. (2020) to split the dataset and conduct experiments.",
"To show our model is comparable with the state-of-the-art models for overlapped NER, we conduct experiments on GENIA (Kim et al., 2003) and ACE05 .",
"For the GENIA and ACE05 datasets, we employ the same experimental setting in previous works (Lu and Roth, 2015; Muis and Lu, 2017; Wang and Lu, 2018; Luan et al., 2019), where 80%, 10% and 10% sentences in 1,999 GENIA documents, and the sentences in 370, 43 and 51 ACE05 documents are used for training, development and test, respectively.",
"The statistics of all the datasets we use in this paper is shown in Table",
"1. Evaluation Metrics: In terms of evaluation metrics, we follow prior work (Lu and Roth, 2015; Muis and Lu, 2016; Wang and Lu, 2018, 2019) and employ the precision (P), recall (R) and F1-score (F1).",
"A predicted entity is counted as true-positive if its boundary and type match those of a gold entity.",
"For a discontinuous entity, each span should match a span of the gold entity.",
"All F1 scores reported in Section 5 are the mean values from five runs of the same setting.",
"Table 2 shows the results on the CLEF dataset.",
"As seen, Tang et al. (2013) and Tang et al. (2015) adapted the CRF model, which is usually used for flat NER, to overlapped and discontinuous NER.",
"They modified the BIO label scheme to BIOHD and BIOHD1234 , which use H to label overlapped entity segments and D to label discontinuous entity segments.",
"Surprisingly, the recently-proposed transition-based model (Dai et al., 2020) does not perform better than the CRF model (Tang et al., 2015), which may be because Tang et al. (2015) have conducted elaborate feature engineering for their model.",
"In contrast, our model outperforms all the strong baselines with at least about 5% margin in F1.",
"Our model does not rely on feature engineering or manually-designed transitions, which is more suitable for modern end-to-end learning.",
"We further perform ablation studies to investigate the effect of dependency-guided GCN and the overlapping relation, which can be removed without influencing our major goal.",
"As shown in Table 2, after removing either of them, the F1 scores 7 Dai et al. (2020) found that BERT did not perform better than ELMo in their experiments.",
"go down by 0.7% and 1.0%.",
"The observation suggests that both dependency-guided GCN and the overlapping relation are effective for our model.",
"Moreover, after we replace BERT with the word embeddings pretrained on PubMed (Chiu et al., 2016), the F1 score goes down by 4.6%, which demonstrates that BERT plays an important role in our model.",
"Table 3 shows the results on the CLEF-Dis dataset.",
"As seen, our model outperforms the previous best model (Dai et al., 2020) by 0.4% in F1, which indicates that our model is very competitive, leading to a new state-of-the-art result on the dataset.",
"Similarly, we further perform ablation studies to investigate the effect of dependency-guided GCN, the overlapping relation and BERT on this dataset.",
"As shown, after removing either of the GCN or overlapping relation, the F1 score decreases by 0.4% or 0.7%, which is consistent with the observations in Table",
"2. In addition, to fairly compare with Wang and Lu (2019), we also replace BERT with the word embeddings pretrained on PubMed (Chiu et al., 2016).",
"As we can see, our model also outperforms their model by 0.3%.",
"As shown in Table 4, Metke-Jimenez and Karimi (2016) employed the similar method in (Tang et al., 2013) by expanding the BIO label scheme to BIOHD .",
"Tang et al. (2018) also experimented the BIOHD label scheme, but they found that the result of the BIOHD -based method was slightly worse than that of the Multilabel method (65.5% vs. 66.3% in F1).",
"Compared with the method in (Metke-Jimenez and Karimi, 2016), the performance improvement might be mainly because they used deep neural networks (e.g., LSTM) instead of shallow non-neural models.",
"Compared with the above baselines, the transition-based model Dai et al. (2020) is still the best.",
"Our full model slightly outperforms the transition-based model by 0.5%.",
"In this dataset, we do not observe mutual benefit between the dependency-guided GCN and overlapped relation prediction modules, since our model achieves better results when using them separately (69.9%) than using them jointly (69.5%).",
"However, when using them separately, the F1 is still 0.6% higher than the one using neither of them.",
"Without BERT, the performance of our model drops by about 3% but it is still comparable with the performances of the methods without contextualized representations.",
"Comparing with BiLSTM-CRF To show the necessity of building one model to recognize regular, overlapped and discontinuous entities simultaneously, we analyze the predicted entities in the CLEF-Dis dataset and classify them based on their types, as shown in Figure",
"4. In addition, we compare our model with BiLSTM-CRF (Lample et al., 2016; Ma and Hovy, 2016; Yang et al., 2018), to show our model does not influence the performance of regular NER significantly.",
"For a fair comparison, we replace BERT with Glove (Pennington et al., 2014) and keep the setting of our model the same with the setting of the BiLSTM-CRF model used in previous work (Yang et al., 2018).",
"As seen, if only considering regular entities, the 8 Many discontinuous entities are also overlapped, but we do not count them as overlapped entities in this figure.",
"BiLSTM-CRF model can achieve a better performance compared with our model, especially the precision value is much higher.",
"One likely reason might be that the BiLSTM-CRF model is capable of using the label dependence to detect entity boundaries accurately, ensuring the correctness of the recognized entities, which is closely related to the precision.",
"Nevertheless, our model can lead to higher recall, which reduces the gap between the two models.",
"If considering both regular and overlapped entities, the recall of our model is greatly boosted, and thus the F1 increases concurrently.",
"If both regular and discontinuous entities are included, the performance of our model rises significantly to 50.9% due to the large scale of discontinuous entities.",
"When all types of entities are concerned, the F1 of our model further increases by 0.8%, indicating the effectiveness of our model in joint recognition of overlapped, discontinuous and regular entities.",
"Comparing with the Transition-Based Model As shown in Figure 5, we also compare our model with the transition-based model (Dai et al., 2020) based on entity types by analyzing the results from one run of experiments.",
"Note that since we do not tune the hyper-parameters of the transition-based model elaborately, the performance is not as good as the one that they have reported.",
"As seen, our model performs better in all of the four groups, namely regular, regular+overlapped, regular+discontinuous, regu-lar+overlapped+discontinuous entity recognition.",
"However, based on the observation on the bars in different groups, we find that the main superiority Related Work Method GENIA ACE05 Finkel and Manning (2009) Constituency parsing 70.3 Lu and Roth (2015) Hypergraph 70.3 58.7 Muis and Lu (2017) Hypergraph 70.8 61.3 Katiyar and Cardie (2018) Hypergraph, RNN 73.8 70.5 Wang et al. (2018) Transition-based parsing, RNN 73.9 73.0 Ju et al. (2018) Dynamically stacking, RNN 74.7 72.2 Zheng et al. (2019) Boundary detection, RNN 74.7 Lin et al. (2019a) Anchor-region detection, RNN, CNN 74.8 74.9 Wang and Lu (2018) Hypergraph, RNN 75.1 74.5 Xia et al. (2019) Multi-grained detection, RNN, ELMo 78.2 Fisher and Vlachos (2019) Merge and label, BERT 82.4 Luan et al. (2019) Span-based, ELMo, Coref 76.2 82.9 Wadden et al. (2019) Span-based, BERT, Coref 77.9 Strakova et al. (2019) Seq2Seq, ELMo, BERT, Flair 78.3 84.3 Our Model Span-based, BERT 77.8 83.0 0 Dep-guided GCN 77.4 82.6 0 Overlap Relation 77.4 82.7 Table 5: Comparisons with prior work on the GENIA and ACE05 datasets.",
"of our model comes from regular entity recognition.",
"In recognizing overlapped entities, our model is comparable with the transition-based model, but in recognizing discontinuous entities, our model performs slightly worse than the transition-based model.",
"This suggests that a combination of span-based and transition-based models may be a potential method for future research.",
"Table 5 shows the results of the GENIA and ACE05 datasets, which include only regular and overlapped entities.",
"Our final model achieves 77.8% and 83.0% F1s in the GENIA and ACE05 datasets, respectively.",
"By removing the dependency-guided GCN, the model shows an averaged decrease of 0.4%, indicating the usefulness of dependency syntax information.",
"The finding is consistent with that of the CLEF dataset.",
"Interestingly, we note that the overlapping relation also brings a positive influence in this setting.",
"Actually, the relation extraction architecture is not necessary for only regular and overlapped entities, because the decoding can be finished after the first entity fragment recognition step.",
"The observation doubly demonstrates the advantage of our final model.",
"We also compare our results with several state-of-the-art results of the previous work on the two datasets in Table",
"5. Only the studies with the same training, development and test divisions are listed.",
"We can see that our model can achieve very competitive performances on both datasets.",
"Note that Luan et al. (2019) and Wadden et al. (2019) use extra coreference resolution information, and Strakova et al. (2019) exploit much richer word representations by a combination of ELMo, BERT and Flair.",
"In this work, we proposed an efficient and effective model to recognize both overlapped and discontinuous entities simultaneously, which can be applied to any NER dataset theoretically, since no extra assumption is required to limit the type of named entities.",
"First, we enumerate all spans in a given sentence to determine whether they are valid entity fragments, and then relation classifications are performed to check the relationships between all fragment pairs.",
"The results show that our model is highly competitive to the state-of-the-art models for overlapped or discontinuous NER.",
"We have conducted detailed studies to help comprehensive understanding of our model.",
"We thank the reviewers for their comments and recommendation.",
"This work is supported by the National Natural Science Foundation of China (No. 61772378), the National Key Research and Development Program of China (No. 2017YFC1200500), the Research Foundation of Ministry of Education of China (No. 18JZD015)."
]
| [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"abstain",
"result",
"method",
"result",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"method",
"other",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"objective",
"objective",
"result",
"method",
"other",
"other"
]
|
[
"Language has the power to reinforce stereotypes and project social biases onto others.",
"At the core of the challenge is that it is rarely what is stated explicitly, but rather the implied meanings, that frame people's judgments about others.",
"For example, given a statement that we shouldn't lower our standards to hire more women, most listeners will infer the implicature intended by the speaker that women (candidates) are less qualified.",
"Most semantic formalisms, to date, do not capture such pragmatic implications in which people express social biases and power differentials in language.",
"We introduce SOCIALBIASFRAMES , a new conceptual formalism that aims to model the pragmatic frames in which people project social biases and stereotypes onto others.",
"In addition, we introduce the Social Bias Inference Corpus to support large-scale modelling and evaluation with 150k structured annotations of social media posts, covering over 34k implications about a thousand demographic groups.",
"We then establish baseline approaches that learn to recover SOCIALBIASFRAMES from unstructured text.",
"We find that while state-of-the-art neural models are effective at high-level categorization of whether a given statement projects unwanted social bias (80% F 1 ), they are not effective at spelling out more detailed explanations in terms of SOCIALBIASFRAMES .",
"Our study motivates future work that combines structured pragmatic inference with commonsense reasoning on social implications.",
"Language has enormous power to project social biases and reinforce stereotypes on people (Fiske,",
"1993).",
"The way such biases are projected is rarely in what is stated explicitly, but in all the implied layers of meanings that frame and influence people's judgments about others.",
"For example, on hearing a statement that an all-Muslim movie was a box office bomb, most people can instantly post off?",
"recognize the implied demonizing stereotype that Muslims are terrorists (Figure 1).",
"Understanding these biases with accurate underlying explanations is necessary for AI systems to adequately interact in the social world (Pereira et al., 2016), and failure to do so can result in the deployment of harmful technologies (e.g., conversational AI systems turning sexist and racist; Vincent, 2016).",
"Most previous approaches to understanding the implied harm in statements have cast this task as a simple toxicity classification (e.g., Waseem and Hovy, 2016; Founta et al., 2018; Davidson et al., 2017).",
"However, simple classifications run the risk of discriminating against minority groups, due to high variation and identity-based biases in annotations (e.g., which cause models to learn associations between dialect and toxicity; Sap et al., 2019a; Davidson et al., 2019).",
"In addition, detailed explanations are much more informative for people to understand and reason about why a statement is potentially harmful against other people (Gregor and Benbasat, 1999; Ribeiro et al., 2016).",
"Thus, we propose SOCIALBIASFRAMES , a novel conceptual formalism that aims to model pragmatic frames in which people project social biases and stereotypes on others.",
"Compared to semantic frames (Fillmore and Baker, 2001), the meanings projected by pragmatic frames are richer, and thus cannot be easily formalized using only categorical labels.",
"Therefore, as illustrated in Figure 1, our formalism combines hierarchical categories of biased implications such as intent and offensiveness with implicatures described in free-form text such as groups referenced and implied statements .",
"In addition, we introduce SBIC, 1 a new corpus collected using a novel crowdsourcing framework.",
"SBIC supports large-scale learning and evaluation with over 150k structured annotations of social media posts, spanning over 34k implications about a thousand demographic groups.",
"We then establish baseline approaches that learn to recover SOCIALBIASFRAMES from unstructured text.",
"We find that while state-of-the-art neural models are effective at making high-level categorization of whether a given statement projects unwanted social bias (80% F 1 ), they are not effective at spelling out more detailed explanations by accurately decoding SOCIALBIASFRAMES .",
"Our study motivates future research that combines structured pragmatic inference with commonsense reasoning on social implications.",
"Important implications of this study.",
"We recognize that studying SOCIALBIASFRAMES necessarily requires us to confront online content that may be offensive or disturbing (see 7 for further discussion on the ethical implications of this study).",
"However, deliberate avoidance does not eliminate such problems.",
"Therefore, the important premise we take in this study is that assessing social media content through the lens of SOCIAL 1 SBIC: S ocial B ias I nference C orpus, available at http://tinyurl.com/social-bias-frames .",
"BIASFRAMES is important for automatic flagging or AI-augmented writing interfaces, where potentially harmful online content can be analyzed with detailed explanations for users or moderators to consider and verify.",
"In addition, the collective analysis over large corpora can also be insightful for educating people on reducing unconscious biases in their language.",
"To better enable models to account for socially biased implications of language, 2 we design a new pragmatic formalism that distinguishes several related but distinct inferences, shown in Figure 1.",
"Given a natural language utterance, henceforth, post , we collect both categorical as well as free text inferences (described below), inspired by recent efforts in free-text annotations of commonsense knowledge (e.g., Speer and Havasi, 2012; Rashkin et al., 2018; Sap et al., 2019b) and argumentation (Habernal and Gurevych, 2016; Becker et al., 2017).",
"The free-text explanations are crucial to our formalism, as they can both increase trust in predictions made by the machine (Kulesza et al., 2012; Bussone et al., 2015; Nguyen et al., 2018) and encourage a poster's empathy towards a targeted group, thereby combating biases (Cohen-Almagor, 2014).",
"We base our initial frame design on social science literature of pragmatics (Lakoff, 1973; de Marneffe et al., 2012) and impoliteness (Kasper, 1990; Gabriel, 1998; Dynel, 2015; Vonasch and Baumeister, 2017).",
"We then refine the frame structure (including number of possible answers to questions) based on the annotator (dis)agreement in multiple pilot studies.",
"We describe each of the included variables below.",
"Offensiveness is our main categorical annotation, and denotes the overall rudeness, disrespect, or toxicity of a post.",
"We consider whether a post could be considered offensive to anyone, as previous work has shown this to have higher recall (Sap et al., 2019a).",
"This is a categorical variable with three possible answers ( yes , maybe , no ).",
"2 In this work, we employ the U.S. sociocultural lens when discussing bias and power dynamics among demographic groups.",
"1990; Dynel, 2015), yet distinct from offensiveness (Gabriel, 1998; Daly, 2018).",
"This is a categorical variable with four possible answers ( yes , probably , probably not , no ).",
"Lewd or sexual references are a key subcategory of what constitutes potentially offensive material in many cultures, especially in the United States (Strub, 2008).",
"This is a categorical variable with three possible answers ( yes , maybe , no ).",
"Group implications are distinguished from individual-only attacks or insults that do not invoke power dynamics between groups (e.g., F*ck you vs. F*ck you, f*ggot).",
"This is a categorical variable with two possible answers: individual-only ( no ), group targeted ( yes ).",
"Targeted group describes the social or demographic group that is referenced or targeted by the post.",
"Here we collect free-text answers , but provide a seed list of demographic or social groups to encourage consistency.",
"Implied statement represents the power dynamic or stereotype that is referenced in the post.",
"We collect free-text answers in the form of simple Hearst-like patterns (e.g., women are ADJ, gay men VBP ; Hearst, 1992).",
"In-group language aims to capture whether the author of a post may be a member of the same so-cial/demographic group that is targeted, as speaker identity changes how a statement is perceived (O'Dea et al., 2015).",
"Specifically, in-group language (words or phrases that (re)establish belonging to a social group; Eble, 1996) can change the perceived offensiveness of a statement, such as reclaimed slurs (Croom, 2011; Galinsky et al., 2013) or self-deprecating language (Greengross and Miller, 2008).",
"Note that we do not attempt to categorize the identity of the speaker.",
"This variable takes three possible values ( yes , maybe , no ).",
"To create SBIC, we design a crowdsourcing framework to distill the biased implications of posts at a large scale.",
"We draw from various sources of potentially biased online content, shown in Table 2, to select",
"posts to annotate.",
"Since online toxicity can be relatively scarce (Founta et al., 2018), 3 we start by annotating English Reddit posts, specifically three intentionally offensive subReddits and a corpus of potential microaggressions from Breitfeller et al. (2019).",
"By nature, the three offensive subreddits are very likely to have harmful implications, as posts are often made with intents to deride adversity or social inequality (Bicknell, 2007).",
"Microaggressions, on the other hand, are likely to contain subtle biased implicationsa natural fit for SOCIALBIASFRAMES .",
"In addition, we include posts from three existing English Twitter datasets annotated for toxic or abusive language, filtering out @-replies, retweets, and links.",
"We mainly annotate tweets released by Founta et al. (2018), who use a bootstrapping approach to sample potentially offensive tweets.",
"We also include tweets from Waseem and Hovy (2016) and Davidson et al. (2017), who collect datasets of tweets containing racist or sexist hashtags and slurs, respectively.",
"Finally, we include posts from known English hate communities: Stormfront (de Gibert 3 Founta et al. (2018) find that the prevalence of toxic content online is < 4%.",
"et al., 2018) and Gab, 4 which are both documented white-supremacist and neo-nazi communities (Bowman-Grieve, 2009; Hess, 2016), and two English subreddits that were banned for inciting violence against women (r/Incels and r/MensRights; Fingas, 2017; Center, 2012).",
"We design a hierarchical annotation framework to collect biased implications of a given post (snippet shown in Figure 2) on Amazon Mechanical Turk (MTurk).",
"The full task is shown in the appendix (Figure 4).",
"For each post, workers indicate whether the post is offensive, whether the intent was to offend, and whether it contains lewd or sexual content.",
"Only if annotators indicate potential offensiveness do they answer the group implication question.",
"If the post targets or references a group or demographic, workers select or write which one(s); per selected group, they then write two to four stereotypes.",
"Finally, workers are asked whether they think the speaker is part of one of the minority groups referenced by the post.",
"We collect three annotations per post, and restrict our worker pool to the U.S. and Canada.",
"We ask workers to optionally provide coarse-grained demographic information.",
"5 4 https://files.pushshift.io/gab/ GABPOSTS_CORPUS.xz 5 This study was approved by our institutional review board.",
"Annotator demographics In our final annotations, our worker pool was relatively gender-balanced and age-balanced (55% women, 42% men, < 1% non-binary; 36 10 years old), but racially skewed (82% White, 4% Asian, 4% Hispanic, 4% Black).",
"Annotator agreement Overall, the annotations in SBIC showed 82.4% pairwise agreement and Krippendorf's =0.45 on average, which is substantially higher than previous work in toxic language detection (e.g., =0.22 in Ross et al., 2017).",
"Broken down by each categorical question, workers agreed on a post being offensive at a rate of 76% (Krippendorf's =0.51), its intent being to offend at 75% ( =0.46), and it having group implications at 74% ( =0.48).",
"For categorizing posts as lewd, workers agreed substantially (94%, =0.62).",
"However, flagging potential ingroup speech had lower agreement, likely because this is a very nuanced annotation, and because highly skewed categories (only 5% yes; see Table 3) lead to low s (here, =0.17 with agreement 94%).",
"6 Finally, workers agreed on the exact same targeted group 80.2% of the time ( =0.50).",
"After data collection, SBIC contains 150k structured inference tuples, covering 34k free text group-implication pairs (see Table 3).",
"We show example inference tuples in Table 1.",
"Additionally, we show a breakdown of the types of targeted groups in Figure 3.",
"While SBIC covers a variety of types of biases, gender-based, race-based, and culture-based biases are the most represented, which parallels the types of discrimination happening in the real world (RWJF, 2017).",
"We find that our dataset is predominantly written in White-aligned English (78% of posts), as measured by a lexical dialect detector by Blodgett et al. (2016), with < 10% of posts having indicators of African-American English.",
"We caution researchers to consider the potential for dialector identity-based biases in labelling (Davidson et al., 2019; Sap et al., 2019a) before deploying technology based on SBIC (see Section 7).",
"Given a post, we establish baseline performance of models at inferring SOCIALBIASFRAMES .",
"An ideal model should be able to both generate the implied power dynamics in textual form, as well as classify the post's offensiveness and other categorical variables.",
"Satisfying these conditions, we use the OpenAI-GPT transformer networks (Vaswani et al., 2017; Radford et al., 2018, 2019) as a basis for our experiments, given their recent successes at model offensive intent lewd group in-group 42.2% pos.",
"classification, commonsense generation, and conditional generation (Bosselut et al., 2019; Keskar et al., 2019).",
"Training We cast our frame prediction task as a hybrid classification and language generation task, where we linearize the variables following the frame hierarchy.",
"7 At training time, our model takes as input a sequence of N tokens: x = { [ STR ] , w 1 , w 2 , ..., w n , [ SEP ] , w [ lewd ] , w [ off ] , w [ int ] , w [ grp ] , [ SEP ] , w [ G ] 1 , w [ G ] 2 , ..., [ SEP ] , w [ S ] 1 , w [ S ] 2 , ..., [ SEP ] , w [ ing ] , [ END ] } (1) where [ STR ] is our start token, w 1: n is the sequence of tokens in a post, w [ G ] i the tokens representing the group, and w [ S ] i the implied statement.",
"We add two task-specific vocabulary items for each of our five classification tasks ( w [ lewd ] , w [ off ] , w [ int ] , w [ grp ] , w [ ing ] ), each representing the negative and positive values of the class (e.g., for offensiveness, [offY] and [offN] ).",
"8 The model relies on a stack of transformer blocks of multi-headed attention and fully connected layers to encode the input tokens (for a detailed modelling description, see Radford et al., 2018, 2019).",
"Since GPT is a forward-only language model, the attention is only computed over preceding tokens.",
"At the last layer, the model projects the embedding into a vocabulary-sized vector, which is turned into a probability distribution over the vocabulary using a softmax layer.",
"We minimize the cross-entropy of the contextual probability of the correct token in our full linearized frame objective (of length N ): L = 1 N (cid:88) i log p GPT ( w i | w 0: i 1 ) During training, no loss is incurred for lower-level variables with no values, i.e., variables that cannot take values due to earlier variable values (e.g., there is no targeted group for posts marked as non-offensive).",
"In our experiments we use pretrained versions of OpenAI's GPT and GPT2 (Radford et al., 2018, 2019) for our model variants, named SBF-GPT 1 and SBF-GPT 2 , respectively.",
"While their architectures are similar (stack of Transformers), GPT was trained on a large corpus of fiction books, whereas GPT2 was trained on 40Gbs of English web text.",
"Inference We frame our inference task as a conditional language generation task.",
"Conditioned on the post, we generate tokens one-by-one either by greedily selecting the most probable one, or by sampling from the next word distribution, and appending the selected token to the output.",
"We stop when the [ END ] token is generated, at which point our entire frame is predicted.",
"For greedy decoding, we only generate our frames once, but for sampling, we repeat the generation procedure to yield ten candidate frame predictions and choose the highest scoring one under our model.",
"In contrast to training time, where all inputs are consistent with our frames' structure, at test time, our model can sometimes predict combinations of variables that are inconsistent with the constraints of the frame (e.g., predicting a post to be inoffensive, but still predict it to be offensive to a group).",
"To mitigate this issue, we also experiment with a constrained decoding algorithm (denoted con-str) that considers various global assignments of group targeted implied statement BLEU Rouge-L WMD BLEU Rouge-L WMD dev.",
"variables.",
"Specifically, after greedy decoding, we recompute the probabilities of each of the categorical variables, and search for the most probable assignment given the generated text candidate and variable probabilities.",
"9 This can allow variables to be assigned an alternative value that is more globally optimal.",
"10 4.1 Evaluation We evaluate performance of our models in the following ways.",
"For classification, we report precision, recall, and F 1 scores of the positive class.",
"Following previous generative inference work (Sap et al., 2019b), we use automated metrics to evaluate model generations.",
"We use BLEU-2 and RougeL ( F 1 ) scores to capture word overlap between the generated inference and the references, which captures quality of generation (Gal-ley et al., 2015; Hashimoto et al., 2019).",
"We additionally compute word mover's distance (WMD; Kusner et al., 2015), which uses distributed word representations to measure similarity between the generated and target text.",
"11 4.2 Training Details As each post can contain multiple annotations, we define a training instance as containing one post-group-statement triple (along with the five categorical annotations).",
"We then split our dataset into",
"train/dev./test (75:12.5:12.5), ensuring that no post is present in multiple splits.",
"For evaluation (dev., test), we combine the categorical variables by averaging their binarized values and re-binarizing using a .5 threshold, and compare the generated 9 We only use the possible assignments in the same for-ward pass; we do not use assignments from different samples.",
"inferences (hypotheses) to all targeted groups and implied statements (references).",
"All experiments are carried out using Hugging-Face's Transformers library.",
"12 We tune hyperpa-rameters on the dev.",
"set, and report performance for the best performing setting (according to average F 1 ).",
"We train or finetune our models using a batch size of 4, a learning rate of 5 10 6 for GPT and 10 5 for GPT2 (both with linear warm up), and consider training for e { 1 , 2 , 5 } epochs.",
"Listed in Tables 4 and 5, our modelling results indicate that making inferences about social biases in language is challenging for these models.",
"Classification Shown in Table 4, models perform well on higher-level variables such as offensiveness and lewdness, despite the latter being heavily skewed.",
"We hypothesize that correctly predicting lewdness might require more lexical matching (e.g., detecting words with sexual con-notations).",
"Whether a group is targeted is slightly less easy for models to predict, and whether the language is in-group is even more challenging, with most of the models defaulting to never predicting it.",
"This highly skewed category poses a challenge for all models, likely due to subtlety of the task and the lack of positive instances.",
"SBF-GPT 2 -gdy is the only model that predicts positive values for in-group language, for which it benefits from constrained decoding with a 1.9% improvement in F 1 score (we show results with all constrained decoding variants in Table 7 in the appendix).",
"12 https://github.com/huggingface/ transformers post predictedgroup predictedimplication referencegroups reference implications",
"Generation When evaluating our models on the generation tasks (i.e., targeted group and implied statement), we find that no one model outperforms others across all metrics (Table 5).",
"Overall, models do well at generating the targeted groups, likely because of the more limited generation space (there are only 1.4k possible groups in SBIC).",
"Conversely, for implied statement generation (where output space is much larger), model performance is slightly worse.",
"Similar to the classification tasks, SBF-GPT 2 gdy shows a slight increase in RougeL score when using constrained decoding, but we see a slight drop in BLEU scores.",
"Error analysis Since small differences in automated evaluation metrics for text generation sometimes only weakly correlate with human judgments (Liu et al., 2016), we manually perform an error analysis on a manually selected set of generated development-set examples from the SBF-GPT 2 -gdy-constr model (Table 6).",
"Overall, the model seems to struggle with generating textual implications that are relevant to the post, instead generating very generic stereotypes about the demographic groups (e.g., in examples b and",
"c).",
"The model generates the correct stereotypes when there is high lexical overlap with the post (e.g., examples d and",
"e).",
"This is in line with previous research showing that large language models rely on correlational patterns in data (Sap et al., 2019c; Sakaguchi et al., 2020).",
"Bias and toxicity detection Detection of hateful, abusive, or other toxic language has received increased attention recently (Schmidt and Wie-gand, 2017), and most dataset creation work has cast this detection problem as binary classification (Waseem and Hovy, 2016; Davidson et al., 2017; Founta et al., 2018).",
"Moving beyond a single binary label, Wulczyn et al. (2017) and the PerspectiveAPI use a set of binary variables to annotate Wikipedia comments for several toxicity-related categories (e.g., identity attack, profanity).",
"Similarly, Zampieri et al. (2019) hierarchically annotate a dataset of tweets with offensiveness and whether a group or individual is targeted.",
"Most related to our work, Ousidhoum et al. (2019) create a multilingual dataset of 13k tweets annotated for five different emotionand toxicity-related aspects, including a 16-class variable representing social groups targeted.",
"In comparison, SOCIALBIASFRAMES not only captures binary toxicity and hierarchical information about whether a group is targeted, but also free-text implications about 1.4k different targeted groups and the implied harm behind statements.",
"Similar in spirit to this paper, recent work has tackled more subtle bias in language, such as microaggressions (Breitfeller et al., 2019) and condescension (Wang and Potts, 2019).",
"These types of biases are in line with the biases covered by SOCIALBIASFRAMES , but more narrowly scoped.",
"Inference about social dynamics Various work has tackled the task of making inferences about power and social dynamics.",
"Particularly, previous work has analyzed power dynamics about spe-cific entities, either in conversation settings (Prab-hakaran et al., 2014; Danescu-Niculescu-Mizil et al., 2012) or in narrative text (Sap et al., 2017; Field et al., 2019; Antoniak et al., 2019).",
"Additionally, recent work in commonsense inference has focused on mental states of participants of a situation (e.g., Rashkin et al., 2018; Sap et al., 2019b).",
"In contrast to reasoning about particular individuals, our work focuses on biased implications of social and demographic groups as a whole.",
"Risks in deployment Automatic detection of offensiveness or reasoning about harmful implications of language should be done with care.",
"When deploying such algorithms, ethical aspects should be considered including which performance metric should be optimized (Corbett-Davies et al., 2017), as well as the fairness of the model on speech by different demographic groups or in different varieties of English (Mitchell et al., 2019).",
"Additionally, deployment of such technology should discuss potential nefarious side effects, such as censorship (Ullmann and Tomalin, 2019) and dialect-based racial bias (Sap et al., 2019a; Davidson et al., 2019).",
"Finally, offensiveness could be paired with promotions of positive online interactions, such as emphasis of community standards (Does et al., 2011) or counter-speech (Chung et al., 2019; Qian et al., 2019).",
"Risks in annotation Recent work has highlighted various negative side effects caused by annotating potentially abusive or harmful content (e.g., acute stress; Roberts, 2016).",
"We mitigated these by limiting the number of posts that one worker could annotate in one day, paying workers above minimum wage ($712), and providing crisis management resources to our annotators.",
"13 Additionally, we acknowledge the implications of using data available on public forums for research (Zimmer, 2018) and urge researchers and practitioners to respect the privacy of the authors of posts in SBIC (Ayers et al., 2018).",
"To help machines reason about and account for societal biases, we introduce SOCIALBIASFRAMES , a new structured commonsense formalism that distills knowledge about the biased implications of language.",
"Our frames combine categorical knowledge about the offensiveness, intent, and targets of statements, as well as free-text inferences about which groups are targeted and biased implications or stereotypes.",
"We collect a new dataset of 150k annotations on social media posts using a new crowdsourcing framework and establish baseline performance of models built on top of large pretrained language models.",
"We show that while classifying the offensiveness of statements is easier, current models struggle to generate relevant social bias inferences, especially when implications have low lexical overlap with posts.",
"This indicates that more sophisticated models are required for SOCIALBIASFRAMES inferences.",
"We thank the anonymous reviewers for their insightful comments.",
"Additionally, we are grateful to Hannah Rashkin, Lucy Lin, Jesse Dodge, Hao Peng, and other members of the UW NLP community for their helpful comments on the project.",
"This research was supported in part by NSF (IIS-1524371, IIS-1714566), DARPA under the CwC program through the ARO (W911NF-15-1-0543), and DARPA under the MCS program through NIWC Pacific (N66001-19-2-4031)."
]
| [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"objective",
"abstain",
"method",
"result",
"objective",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"method",
"other",
"method",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"objective",
"method",
"objective",
"result",
"abstain",
"other",
"other",
"other"
]
|
[
"Generating explanations for neural networks has become crucial for their applications in real-world with respect to reliability and trustworthiness.",
"In natural language processing, existing methods usually provide important features which are words or phrases selected from an input text as an explanation, but ignore the interactions between them.",
"It poses challenges for humans to interpret an explanation and connect it to model prediction.",
"In this work, we build hierarchical explanations by detecting feature interactions.",
"Such explanations visualize how words and phrases are combined at different levels of the hierarchy, which can help users understand the decision-making of blackbox models.",
"The proposed method is evaluated with three neural text classifiers (LSTM, CNN, and BERT) on two benchmark datasets, via both automatic and human evaluations.",
"Experiments show the effectiveness of the proposed method in providing explanations that are both faithful to models and interpretable to humans.",
"Deep neural networks have achieved remarkable performance in natural language processing (NLP) (Devlin et al., 2018; Howard and Ruder, 2018; Peters et al., 2018), but the lack of understanding on their decision making leads them to be characterized as blackbox models and increases the risk of applying them in real-world applications (Lipton, 2016; Burns et al., 2018; Jumelet and Hupkes, 2018; Jacovi et al., 2018).",
"Understanding model prediction behaviors has been a critical factor in whether people will trust and use these blackbox models (Ribeiro et al., 2016).",
"A typical work on understanding decision-making is to generate prediction explanations for each input example, called local explanation generation.",
"In NLP, most of existing work on local explanation generation focuses on producing word-level or phrase-level explanations by quantifying contributions of individual words or phrases to a model prediction (Ribeiro et al., 2016; Lundberg and Lee, 2017; Lei et al., 2016; Plumb et al., 2018).",
"Figure 1",
"(a) and",
"(b) present a word-level and a phrase-level explanation generated by the LIME (Ribeiro et al., 2016) and the Contextual Decomposition (CD) (Murdoch et al., 2018) respectively for explaining sentiment classification.",
"Both explanations provide scores to quantify how a word or a phrase contributes to the final prediction.",
"For example, the explanation generated by LIME captures a keyword waste and the explanation from CD identifies an important phrase waste of .",
"However, neither of them is able to explain the model decision-making in terms of how words and phrases are interacted with each other and composed together for the final prediction.",
"In this example, since the final prediction is NEGATIVE , one question that we could ask is that how the word good or a phrase related to the word good contributes to the model prediction.",
"An explanation being able to answer this question will give users a better understanding on the model decision-making and also more confidence to trust the prediction.",
"The goal of this work is to reveal prediction behaviors of a text classifier by detecting feature (e.g., words or phrases) interactions with respect to model predictions.",
"For a given text, we propose a model-agnostic approach, called HEDGE (for Hierarchical Explanation via Divisive Generation), to build hierarchical explanations by recursively detecting the weakest interactions and then dividing large text spans into smaller ones based on the interactions.",
"As shown in Figure 1",
"(c), the hierarchical structure produced by HEDGE provides a comprehensive picture of how different granularity of features interacting with each other within the model.",
"For example, it shows how the word good is dominated by others in the model prediction, which eventually leads to the correct prediction.",
"Furthermore, the scores of text spans across the whole hierarchy also help identify the most important feature waste of good , which can be served as a phrase-level explanation for the model prediction.",
"The contribution of this work is three-fold: (1) we design a top-down model-agnostic method of constructing hierarchical explanations via feature interaction detection; (2) we propose a simple and effective scoring function to quantify feature contributions with respect to model predictions; and (3) we compare the proposed algorithm with several competitive methods on explanation generation via both automatic and human evaluations.",
"The experiments were conducted on sentiment classification tasks with three neural network models, LSTM (Hochreiter and Schmidhuber, 1997), CNN (Kim, 2014), and BERT (Devlin et al., 2018), on the SST (Socher et al., 2013) and IMDB (Maas et al., 2011) datasets.",
"The comparison with other competitive methods illustrates that HEDGE provides more faithful and human-understandable explanations.",
"Our implementation is available at https:// github.com/UVa-NLP/HEDGE .",
"Over the past years, many approaches have been explored to interpret neural networks, such as contextual decomposition (CD) for LSTM (Mur-doch et al., 2018) or CNN model (Godin et al., 2018), gradient-based interpretation methods (Hechtlinger, 2016; Sundararajan et al., 2017), and attention-based methods (Ghaeini et al., 2018; Lee et al., 2017; Serrano and Smith, 2019).",
"However, these methods have limited capacity in real-world applications, as they require deep understanding of neural network architectures (Murdoch et al., 2018) or only work with specific models (Alvarez-Melis and Jaakkola, 2018).",
"On the other hand, model-agnostic methods (Ribeiro et al., 2016; Lundberg and Lee, 2017) generate explanations solely based on model predictions and are applicable for any black-box models.",
"In this work, we mainly focus on model-agnostic explanations.",
"The core of generating model-agnostic explanations is how to efficiently evaluate the importance of features with respect to the prediction.",
"So far, most of existing work on model-agnostic explanations focus on the word level.",
"For example, Li et al. (2016) proposed Leave-one-out to probe the black-box model by observing the probability change on the predicted class when erasing a certain word.",
"LIME proposed by Ribeiro et al. (2016) estimates individual word contribution locally by linear approximation from perturbed examples.",
"A line of relevant works to ours is Shapley-based methods, where the variants of Shapley values (Shapley, 1953) are used to evaluate feature importance, such as SampleShapley (Kononenko et al., 2010), KernelSHAP (Lundberg and Lee, 2017), and L/C-Shapley (Chen et al., 2018).",
"They are still in the category of generating word-level explanations, while mainly focus on addressing the challenge of computational complexity of Shapley values (Datta et al., 2016).",
"In this work, inspired by an extension of Shapley values (Owen, 1972; Grabisch, 1997; Fujimoto et al., 2006), we design a function to detect feature interactions for building hierarchical model-agnostic explanations in subsection 3.1.",
"While, different from prior work of using Shapley values for feature importance evaluation, we propose an effective and simpler way to evaluate feature importance as described in subsection 3.3, which outperforms Shapley-based methods in selecting important words as explanations in subsection 4.2.",
"Addressing the limitation of word-level explanations (as discussed in section 1) has motivated the work on generating phrase-level or hierarchical explanations.",
"For example, Tsang et al. (2018) generated hierarchical explanations by considering the interactions between any features with exhaustive search, which is computationally expensive.",
"Singh et al. (2019) proposed agglomerative contextual decomposition (ACD) which utilizes CD scores (Murdoch et al., 2018; Godin et al., 2018) for feature importance evaluation and employ a hierarchical clustering algorithm to aggregate features together for hierarchical explanation.",
"Furthermore, Jin et al. (2019) indicated the limitations of CD and ACD in calculating phrase interactions in a formal context, and proposed two explanation algorithms by quantifying context independent importance of words and phrases.",
"A major component of the proposed method on feature interaction detection is based on the Shapley interaction index (Owen, 1972; Grabisch, 1997; Fujimoto et al., 2006), which is extended in this work to capture the interactions in a hierarchical structure.",
"Lundberg et al. (2018) calculated features interactions via SHAP interaction values along a given tree structure.",
"Chen and Jordan (2019) suggested to utilize a linguistic tree structure to capture the contributions beyond individual features for text classification.",
"The difference with our work is that both methods (Lundberg et al., 2018; Chen and Jordan, 2019) require hierarchical structures given, while our method constructs structures solely based on feature interaction detection without resorting external structural information.",
"In addition, different from Singh et al. (2019), our algorithm uses a top-down fashion to divide long texts into short phrases and words based on the weakest interactions, which is shown to be more effective and efficient in the experiments in section 4.",
"This section explains the proposed algorithm on building hierarchical explanations (subsection 3.1) and two critical components of this algorithm: detecting feature interaction (subsection 3.2) and",
"quantifying feature importance (subsection 3.3).",
"Algorithm 1 Hierarchical Explanation via Divisive Generation 1: Input : text x with length n , and predicted label y 2: Initialize the original partition P 0 { x (0 ,n ] } 3: Initialize the contribution set C 0 = 4: Initialize the hierarchy H = [ P 0 ] 5: for t = 1 , . . . , n 1 do 6: Find x ( s i ,s i +1 ] and j by solving Equation 1 7: Update the partition P (cid:48) t P t 1 \\{ x ( s i ,s i +1 ] } P t P (cid:48) t { x ( s i ,j ] , x ( j,s i +1 ] } 8: H",
".add ( P t ) 9: Update the contribution set C with C (cid:48) t C t 1 { ( x ( s i ,j ] , ( x ( s i ,j ] )) } C t C (cid:48) t { ( x ( j,s i +1 ] , ( x ( j,s i +1 ] )) } 10: end for 11: Output : C n 1 , H 3.1 Generating Hierarchical Explanations For a classification task, let x = ( x 1 , . . . , x n ) denote a text with n words and y be the prediction label from a well-trained model.",
"Furthermore, we define P = { x (0 ,s 1 ] , x ( s 1 ,s 2 ] , . . . , x ( s P 1 ,n ] } be a partition of the word sequence with P text spans, where x ( s i ,s i +1 ] = ( x s i +1 , . . . , x s i +1 ) .",
"For a given text span x ( s i ,s i +1 ] , the basic procedure of HEDGE is to divide it into two smaller text spans x ( s i ,j ] and x ( j,s i +1 ] , where j is the dividing point ( s i < j < s i +1 ), and then evaluate their contributions to the model prediction y .",
"Algorithm 1 describes the whole procedure of dividing x into different levels of text spans and evaluating the contribution of each of them.",
"Starting from the whole text x , the algorithm first divides x into two segments.",
"In the next iteration, it will pick one of the two segments and further split it into even smaller spans.",
"As shown in algorithm 1, to perform the top-down procedure, we need to answer the questions: for the next timestep, which text span the algorithm should pick to split and where is the dividing point?",
"Both questions can be addressed via the following optimization problem: min x ( si,si +1] P min j ( s i ,s i +1 ) ( x ( s i ,j ] , x ( j,s i +1 ] | P ) , (1) where ( x ( s i ,j ] , x ( j,s i +1 ] | P ) defines the interaction score between x ( s i ,j ] and x ( j,s i +1 ] given the current partition P .",
"The detail of this score function will be explained in subsection 3.2.",
"For a given x ( s i ,s i +1 ] P , the inner optimization problem will find the weakest interaction point to split the text span x ( s i ,s i +1 ] into two smaller ones.",
"It answers the question about where the dividing point should be for a given text span.",
"A trivial case of the inner optimization problem is on a text span with length 2, since there is only one possible way to divide it.",
"The outer optimization answers the question about which text span should be picked.",
"This optimization problem can be solved by simply enumerating all the elements in a partition P .",
"A special case of the outer optimization problem is at the first iteration t = 1 , where P 0 = { x (0 ,n ] } only has one element, which is the whole input text.",
"Once the partition is updated, it is then added to the hierarchy H .",
"The last step in each iteration is to evaluate the contributions of the new spans and update the contribution set C as in line 9 of the algorithm 1.",
"For each, the algorithm evaluates its contribution to the model prediction with the feature importance function ( ) defined in Equation 5.",
"The final output of algorithm 1 includes the contribution set C n 1 which contains all the produced text spans in each timestep together with their importance scores, and the hierarchy H which contains all the partitions of x along timesteps.",
"A hierarchical explanation can be built based on C n 1 and H by visualizing the partitions with all text spans and their importance scores along timesteps, as Figure 1",
"(c) shows.",
"Note that with the feature interaction function ( , ) , we could also design a bottom-up approach to merge two short text spans if they have the strongest interaction.",
"Empirically, we found that this bottom-up approach performs worse than the algorithm 1, as shown in Appendix A. 3.2 Detecting Feature Interaction For a given text span x ( s i ,s i +1 ] P and the dividing point j , the new partition will be N = P\\{ x ( s i ,s i +1 ] } { x ( s i ,j ] , x ( j,s i +1 ] } = { x (0 ,s 1 ] , . . . , x ( s i ,j ] , x ( j,s i +1 ] , . . . , x ( s P 1 ,n ] } .",
"We consider the effects of other text spans in N when calculate the interaction between x ( s i ,j ] and x ( j,s i +1 ] , since the interaction between two words/phrases is closely dependent on the context (Hu et al., 2016; Chen et al., 2016).",
"We adopt the Shapley interaction index from coalition game theory (Owen, 1972; Grabisch, 1997; Fujimoto et al., 2006) to calculate the interaction.",
"For simplicity, we denote x ( s i ,j ] and x ( j,s i +1 ] as j 1 and j 2 respectively.",
"The interaction score is defined as (Lund-berg et al., 2018), ( j 1 ,j 2 |P )= (cid:88) S N\\{ j 1 ,j 2 } | S |",
"( j 1 ,j 2 ,S ) = E [ f ( x (cid:48) ) | S { j 1 ,j 2 } ] E [ f ( x (cid:48) ) | S { j 1 } ] E [ f ( x (cid:48) ) | S { j 2 } ] + E [ f ( x (cid:48) ) | S ] , (3)",
"where x (cid:48) is the same as x except some missing words that are not covered by the given subset (e.g. S ), f ( ) denotes the model output probability on the predicted label y , and E [ f ( x (cid:48) ) | S ] is the expectation of f ( x (cid:48) ) over all possible x (cid:48) given S .",
"In practice, the missing words are usually replaced with a special token <pad> , and f ( x (cid:48) ) is calculated to estimate E [ f ( x (cid:48) ) | S ] (Chen et al., 2018; Datta et al., 2016; Lundberg and Lee, 2017).",
"We also adopt this method in our experiments.",
"Another way to estimate the expectation is to replace the missing words with substitute words randomly drawn from the full dataset, and calculate the empirical mean of all the sampling data (Kononenko et al., 2010; Strumbelj and Kononenko, 2014), which has a relatively high computational complexity.",
"With the number of text spans (features) increasing, the exponential number of model evaluations in Equation 2 becomes intractable.",
"We calculate an approximation of the interaction score based on the assumption (Chen et al., 2018; Singh et al., 2019; Jin et al., 2019): a word or phrase usually has strong interactions with its neighbours in a sentence.",
"The computational complexity can be reduced to polynomial by only considering m neighbour text spans of j 1 and j 2 in N .",
"The interaction score is rewritten as ( j 1 ,j 2 |P )= (cid:88) S N m \\{ j 1 ,j 2 } | S |",
"where N m is the set containing j 1 , j 2 and their neighbours, and M = |N m | .",
"In section 4, we set m = 2 , which performs well.",
"The performance can be further improved by increasing m , but at the cost of increased computational complexity.",
"To measure the contribution of a feature x ( s i ,s i +1 to the model prediction, we define the importance score as",
"where f y ( x ( s i ,s i +1 ] ) is the model output on the predicted label y ; max y (cid:48) (cid:54) = y,y (cid:48) Y f y (cid:48) ( x ( s i ,s i +1 ] ) is the highest model output among all classes excluding y .",
"This importance score measures how far the prediction on a given feature is to the prediction boundary, hence the confidence of classifying x ( s i ,s i +1 ] into the predicted label y .",
"Particularly in text classification, it can be interpreted as the contribution to a specific class y .",
"The effectiveness of Equation 5 as feature importance score is verified in subsection 4.2, where HEDGE outperforms several competitive baseline methods (e.g. LIME (Ribeiro et al., 2016), SampleShapley (Kononenko et al., 2010)) in identifying important features.",
"The proposed method is evaluated on text classification tasks with three typical neural network models, a long short-term memories (Hochreiter and Schmidhuber, 1997, LSTM), a convolutional neural network (Kim, 2014, CNN), and BERT (Devlin et al., 2018), on the SST (Socher et al., 2013) and IMDB (Maas et al., 2011) datasets, via both automatic and human evaluations.",
"Datasets.",
"We adopt the SST-2 (Socher et al., 2013) which has 6920/872/1821 examples in the train/dev/test sets with binary labels.",
"The IMDB (Maas et al., 2011) also has binary labels with 25000/25000 examples in the train/test sets.",
"We hold out 10% of the training examples as the development set.",
"Models.",
"The CNN model (Kim, 2014) includes a single convolutional layer with filter sizes ranging from 3 to 5.",
"The LSTM (Hochreiter and Schmidhu-ber, 1997) has a single layer with 300 hidden states.",
"Both models are initialized with 300-dimensional pretrained word embeddings (Mikolov et al., 2013).",
"We use the pretrained BERT model 1 with 12 trans-1 https://github.com/huggingface/ pytorch-transformers former layers, 12 self-attention heads, and the hidden size of 768, which was then fine-tuned with different downstream tasks to achieve the best performance.",
"Table 1 shows the best performance of the models on both datasets in our experiments, where BERT outperforms CNN and LSTM with higher classification accuracy.",
"We adopt two metrics from prior work on evaluating word-level explanations: the area over the perturbation curve (AOPC) (Nguyen, 2018; Samek et al., 2016) and the log-odds scores (Shrikumar et al., 2017; Chen et al., 2018), and define a new evaluation metric called cohesion-score to evaluate the interactions between words within a given text span.",
"The first two metrics measure local fidelity by deleting or masking top-scored words and comparing the probability change on the predicted label.",
"They are used to evaluate Equation 5 in quantifying feature contributions to the model prediction.",
"The cohesion-score measures the synergy of words within a text span to the model prediction by shuf-fling the words to see the probability change on the predicted label.",
"AOPC.",
"By deleting top k % words, AOPC calculates the average change in the prediction probability on the predicted class over all test data as follows, AOPC ( k ) = 1 NN (cid:88) i =1 { p ( y | x i ) p ( y | x ( k ) i ) } , (6) where y is the predicted label, N is the number of examples, p ( y | ) is the probability on the predicted class, and x ( k ) i is constructed by dropping the k % top-scored words from x i .",
"Higher AOPCs are better, which means that the deleted words are important for model prediction.",
"To compare with other word-level explanation generation methods under this metric, we select word-level features from the bottom level of a hierarchical explanation and sort them in the order of their estimated importance to the prediction.",
"Log-odds.",
"Log-odds score is calculated by averaging the difference of negative logarithmic probabilities on the predicted class over all of the test data before and after masking the top r % features with zero paddings, Log-odds ( r ) = 1 NN (cid:88) i =1 log p ( y | x ( r ) i ) p ( y | x i ) .",
"The notations are the same as in Equation 6 with the only difference that x ( r ) i is constructed by replacing the top r % word features with the special token (cid:104) pad (cid:105) in x i .",
"Under this metric, lower log-odds scores are better.",
"Cohesion-score.",
"We propose cohesion-score to justify an important text span identified by HEDGE .",
"Given an important text span x ( a,b ] , we randomly pick a position in the word sequence ( x 1 , . . . , x a , x b +1 , . . . , x n ) and insert a word back.",
"The process is repeated until a shuffled version of the original sentence x is constructed.",
"The cohesion-score is the difference between p ( y | x ) and p ( y | x ) .",
"Intuitively, the words in an important text span have strong interactions.",
"By perturbing such interactions, we expect to observe the output probability decreasing.",
"To obtain a robust evaluation, for each sentence x i , we construct Q different word sequences { x ( q ) i } Qq =1 and compute the average as Cohesion-score = 1 NN (cid:88) i =1 1 QQ (cid:88) q =1 ( p ( y | x i ) p ( y | x ( q ) i )) , (8) where x ( q ) i is the q th perturbed version of x i , Q is set as 100, and the most important text span in the contribution set C is considered.",
"Higher cohesion-scores are better.",
"We compare HEDGE with several competitive baselines, namely Leave-one-out (Li et al., 2016), LIME (Ribeiro et al., 2016), CD (Murdoch et al., 2018), Shapley-based methods, (Chen et al., 2018, L/C-Shapley), (Lundberg and Lee, 2017, Ker-nelSHAP), and (Kononenko et al., 2010, Sample-Shapley), using AOPC and log-odds metrics; and use cohesion-score to compare HEDGE with another hierarchical explanation generation method ACD (Singh et al., 2019).",
"The AOPCs and log-odds scores on different models and datasets are shown in Table 2, where k = r = 20 .",
"Additional results of AOPCs and log-odds changing with different k and r are shown in Appendix B. For the IMDB dataset, we tested on a subset with 2000 randomly selected samples due to computation costs.",
"HEDGE achieves the best performance on both evaluation metrics.",
"Sam-Methods Models Cohesion-score SST IMDB HEDGECNN 0.016 0.012 BERT 0.124 0.103 LSTM 0.020 0.050 ACD LSTM 0.015 0.038 Table 3: Cohesion scores of HEDGE and ACD in interpreting different models on the SST and IMDB datasets.",
"pleShapley also achieves a good performance with the number of samples set as 100, but the computational complexity is 200 times than HEDGE .",
"Other variants, L/C-Shapley and KernelSHAP, applying approximations to Shapley values perform worse than SampleShapley and HEDGE .",
"LIME performs comparatively to SampleShapley on the LSTM and CNN models, but is not fully capable of interpreting the deep neural network BERT.",
"The limitation of context decomposition mentioned by Jin et al. (2019) is validated by the worst performance of CD in identifying important words.",
"We also observed an interesting phenomenon that the simplest baseline Leave-one-out can achieve relatively good performance, even better than HEDGE when k and r are small.",
"And we suspect that is because the criteria of Leave-one-out for picking single keywords matches the evaluation metrics.",
"Overall, experimental results demonstrate the effectiveness of Equation 5 in measuring feature importance.",
"And the computational complexity is only O ( n ) , which is much smaller than other baselines (e.g. SampleShapley, and L/C-Shapley with polynomial complexity).",
"Table 3 shows the cohesion-scores of HEDGE and ACD with different models on the SST and IMDB datasets.",
"HEDGE outperforms ACD with LSTM, achieving higher cohesion-scores on both datasets, which indicates that HEDGE is good at capturing important phrases.",
"Comparing the results of HEDGE on different models, the cohesion-scores of BERT are significantly higher than LSTM and CNN.",
"It indicates that BERT is more sensitive to perturbations on important phrases and tends to utilize context information for predictions.",
"For qualitative analysis, we present two typical examples.",
"In the first example, we compare HEDGE with ACD in interpreting the LSTM model.",
"Figure 2 visualizes two hierarchical explanations, generated by HEDGE and ACD respectively, on a negative movie review from the SST dataset.",
"In this case, LSTM makes a wrong prediction ( POSITIVE ).",
"Figure",
"2(a) shows HEDGE correctly captures the sentiment polarities of bravura and emptiness , and the interaction between them as bravura exercise flips the polarity of in emptiness to positive.",
"It explains why the model makes the wrong prediction.",
"On the other hand, ACD incorrectly marks the two words with opposite polarities, and misses the feature interaction, as Figure",
"2(b) shows.",
"In the second example, we compare HEDGE in interpreting two different models (LSTM and BERT).",
"Figure 3 visualizes the explanations on a positive movie review.",
"In this case, BERT gives the correct prediction ( POSITIVE ), while LSTM makes",
"a wrong prediction ( NEGATIVE ).",
"The comparison between Figure",
"3(a) and",
"3(b) shows the difference of feature interactions within the two models and explains how a correct/wrong prediction was made.",
"Specifically, Figure",
"3(b) illustrates that BERT captures the key phrase not a bad at step 1, and thus makes the positive prediction, while LSTM (as shown in Figure",
"3(a)) misses the interaction between not and bad , and the negative word bad pushes the model making the NEGATIVE prediction.",
"Both cases show that HEDGE is capable of explaining model prediction behaviors, which helps humans understand the decision-making.",
"More examples are presented in Appendix C due to the page limitation.",
"We had 9 human annotators from the Amazon Mechanical Turk (AMT) for human evaluation.",
"The features (e.g., words or phrases) with the highest importance score given by HEDGE and other baselines are selected as the explanations.",
"Note that HEDGE and ACD can potentially give very long top features which are not user-friendly in human evaluation, so we additionally limit the maximum length of selected features to five.",
"We provided the input text with different explanations in the user interface (as shown in Appendix D) and asked human annotators to guess the model's prediction (Nguyen, 2018) from { Negative, Positive, N/A } based on each explanation, where N/A was selected when annotators cannot guess the model's prediction.",
"We randomly picked 100 movie reviews from the IMDB dataset for human evaluation.",
"There are two dimensions of human evaluation.",
"We first compare HEDGE with other baselines using the predictions made by the same LSTM model.",
"Second, we compare the explanations generated by HEDGE on three different models: LSTM, CNN, and BERT.",
"We measure the number of human annotations that are coherent with the actual model predictions, and define the coherence score as the ratio between the coherent annotations and the total number of examples.",
"Table 4 shows the coherence scores of eight different interpretation methods for LSTM on the IMDB dataset.",
"HEDGE outperforms other baselines with higher coherence score, which means that HEDGE can capture important features which are highly consistent with human interpretations.",
"LIME is still a strong baseline in providing interpretable explanations, while ACD and Shapley-based methods perform worse.",
"Table 5 shows both the accuracy and coherence scores of different models.",
"HEDGE succeeds in interpreting black-box models with relatively high coherence scores.",
"Moreover, although BERT can achieve higher prediction accuracy than the other two models, its coherence score is lower, manifesting a potential tradeoff between accuracy and interpretability of deep models.",
"In this paper, we proposed an effective method, HEDGE , building model-agnostic hierarchical interpretations via detecting feature interactions.",
"In Methods Coherence Score Leave-one-out 0.82 ACD 0.68 LIME 0.85 L-Shapley 0.75 C-Shapley 0.73 KernelSHAP 0.56 SampleShapley 0.78 HEDGE 0.89 Table 4: Human evaluation of different interpretation methods with LSTM model on the IMDB dataset.",
"this work, we mainly focus on sentiment classification task.",
"We test HEDGE with three different neural network models on two benchmark datasets, and compare it with several competitive baseline methods.",
"The superiority of HEDGE is approved by both automatic and human evaluations."
]
| [
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"objective",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"method",
"abstain"
]
|
[
"Reliably evaluating Machine Translation (MT) through automated metrics is a long-standing problem.",
"One of the main challenges is the fact that multiple outputs can be equally valid.",
"Attempts to minimise this issue include metrics that relax the matching of MT output and reference strings, and the use of multiple references.",
"The latter has been shown to significantly improve the performance of evaluation metrics.",
"However, collecting multiple references is expensive and in practice a single reference is generally used.",
"In this paper, we propose an alternative approach: instead of modelling linguistic variation in human reference we exploit the MT model uncertainty to generate multiple diverse translations and use these: ( i ) as surrogates to reference translations; ( ii ) to obtain a quantification of translation variability to either complement existing metric scores or ( iii ) replace references altogether.",
"We show that for a number of popular evaluation metrics our variability estimates lead to substantial improvements in correlation with human judgements of quality by up 15%.",
"Translation is an open-ended task with multiple valid solutions.",
"There are often multiple equivalent translations for the same source sentence.",
"This is due to inherent differences between languages and various sources of ambiguity, which is often impossible to solve without access to additional context.",
"Furthermore, the source might suffer substantial changes in translation due to translator's need to adapt it to the target audience.",
"With rare exceptions, translations are not literal, they can differ from the source text at any linguistic level lexical, syntactic, semantic or even discourse and still be considered correct.",
"The ability to produce non-literal, more natural translations is one of the goals in the field of Machine Translation (MT).",
"Neural MT (NMT) approaches have certainly made significant progress in this direction.",
"However, the diversity of possible outcomes makes it harder to evaluate MT models.",
"Evaluation metrics (or humans in the case of monolingual manual evaluation) are given a single reference translation against which to compare the MT output.",
"Fomicheva and Specia (2016) found differences of up to 1 point on a 1-5 point quality scale (i.e. 20%) between groups of annotators who use different references for manual evaluation.",
"In automatic evaluation, which computes a similarity score between MT output and human reference, they found differences of up to 6 BLEU points depending on the reference used, showing that metrics strongly penalise perfectly correct translations that happen to be different from the reference provided.",
"Dreyer and Marcu (2012) showed that if multiple human translations are used, any automatic MT evaluation metric achieves a substantially higher correlation with human judgments.",
"However, multiple translations are hardly ever available in practice due to the cost of collecting them.",
"Alternatives strategies for modelling linguistic variation in automatic MT evaluation include using paraphrasing, synonyms, or comparing linguistic structures of MT output and the reference translation (e.g. semantic role labels) instead of surface forms ( 2).",
"It is worth noticing that this line of work focuses on varying the reference translation.",
"No existing work accounts for the diversity of possible MT outputs.",
"Instead of using multiple references or relaxing the string matching process, we use the MT system to generate multiple additional hypotheses representing potentially valid translation variations.",
"We do so by exploring model uncertainty in output probability distributions.",
"1 To generate a diverse set 1 We focus on sentence-level evaluation, as system-level Figure 1: Hypothetical similarity space where a low-quality MT output (left) and a high-quality MT output (right) are equally distant from the reference but can be distinguished based on the similarity to additional MT hypotheses.",
"of hypotheses from neural MT (NMT) systems, we leverage recent work on uncertainty quantification for neural networks ( 3).",
"The additional hypotheses produced for a given source sentence are then used for evaluation with or without human references.",
"Intuitively, if some of the hypotheses match the reference, it is probable that the MT output under evaluation is also of high quality.",
"2 Furthermore, we posit that the differences between system hypotheses produced for the same source capture uncertainty.",
"The more similar they are among themselves, the higher the confidence of the model.",
"As illustrated in Figure 1, this could provide additional information for discriminating translation quality when measuring the distance to the reference translation does not suffice.",
"We devise various new metrics based on this intuition and obtain large improvements in correlation with human judgments over traditional reference-based metrics.",
"Our main contributions are as follows: (1) We study different ways to generate additional MT hypotheses by exploring uncertainty in NMT models.",
"We show that a light-weight Bayesian approximation method Monte Carlo Dropout , which allows for uncertainty quantification by using dropout at inference time (Gal and Ghahramani, 2016) works the best for the purpose of automatic MT evaluation; (2) We devise methods to effectively explore multiple MT hypotheses to better evaluate MT output quality with existing evaluation metrics.",
"On two different datasets, we achieve a large improvement in correlation with human judgments automatic evaluation can be by and large considered a solved problem (Ma et al., 2019a).",
"2 The goal of this paper is not to evaluate the search space of the MT system, but to improve the evaluation of the given MT output by using additional hypotheses.",
"Evaluating the NMT search space beyond the generated output could be an interesting direction to explore in future work.",
"over using both single reference and multiple references.",
"To the best of our knowledge, this is the first work to leverage NMT model uncertainty for automatic MT evaluation.",
"Meteor (Banerjee and Lavie, 2005) was the first MT evaluation metric to relax the exact match constraint between MT system output and reference translation by allowing matching of lemmas, synonyms or paraphrases.",
"However, this requires linguistic resources which do not exist for most languages.",
"Character-based metrics (Popovic, 2015; Wang et al., 2016) also relax the exact word match constraint by allowing the matching of characters.",
"However, ultimately they still assume a surface-level similarity between reference and MT. A more recent direction compares MT and reference sentences in the embedding space.",
"Chen and Guo (2015) extract word embedding representations for the two sentences and measure the (cosine) similarity between them.",
"Similarly, in (Fomicheva et al., 2015; Servan et al., 2016; Tattar and Fishel, 2017) two words are considered to match if their cosine distance in the embedding space is above a certain threshold.",
"The embeddings are thus used to provide a binary decision.",
"MEANT 2.0 (Lo, 2017) and YISI (Lo, 2019) also relies on matching of words in the embedding space, but this is only used to score the similarity between pairs of words that have already been aligned based on their semantic roles, rather than to find the alignments between words.",
"Finally, Chow et al. (2019) and Echizen'ya et al. (2019) perform the alignment in the embedding space using Earth Mover's Distance with some special treatment for word order.",
"All of these metrics are however still limited to variance in the words used (even in the continuous space), rather than more general stylistic or structural variations which can only be captured with multiple references.",
"Another way of incorporating linguistic variation is pseudo-reference approach by Albrecht and Hwa (2007).",
"They leverage various off-the-shelf MT systems to generate additional imperfect references and use them instead or alongside the original reference during evaluation.",
"Evaluation scores obtained using each of the pseudo references and the available human references are combined as features by training a classifier to predict human judgments.",
"Thus, this line of work implicitly learns the quality of the MT systems used to generate pseudo references.",
"We revisit this idea in our paper by having pseudo-references as one type of diverse MT output.",
"We posit that using multiple MT hypotheses can help automatic MT evaluation in two ways.",
"First, the difference between them may reflect model confidence and potential ambiguity or complexity of the source.",
"Second, they provide an additional point of comparison with the reference, such that if the initial MT output is different from the provided reference due to acceptable linguistic variation, the risk of over-penalising this translation is lower.",
"Most recent work on NMT is based on the sequence-to-sequence approach with encoder and decoder networks (Bahdanau et al., 2014; Luong et al., 2015; Vaswani et al., 2017b).",
"In these models probability of generating the output sequence (cid:126)y given the input sequence (cid:126)x is decomposed as follows: p ( (cid:126)y | (cid:126)x, ) = J (cid:89) j =1 p ( y j | (cid:126)y <j , (cid:126)x, ) where represents model parameters.",
"The decoder produces the probability distribution p ( y j | (cid:126)y <j , (cid:126)x, ) over system vocabulary at each time step using softmax function .",
"In this work we use state-of-art Transformer architecture proposed by Vaswani et al. (2017b), an encoder-decoder model that uses stacked self-attention and fully connected layers for both encoder and decoder.",
"One way to obtain multiple MT hypotheses is by taking top MT hypotheses resulting from the search algorithms used in NMT for decoding.",
"Beam Search.",
"Hypotheses spaces in NMT are very large and it is not feasible to explore them exhaustively.",
"Beam search is traditionally used for decoding in NMT by exploring the search space in a greedy left-to-right manner retaining the top-N candidates with the highest probability.",
"While effective to select a likely translation, beam search tends to result in a list of N-best translations which lack linguistic diversity (Vijayakumar et al., 2016).",
"Diverse Beam Search.",
"Vijayakumar et al. (2016) proposed the Diverse Beam Search algorithm to improve the diversity of top hypotheses.",
"The algorithm promotes diversity by optimising a diversity-augmented objective.",
"We propose that a better method for obtaining diverse MT hypotheses for automatic MT evaluation is by exploiting uncertainty in NMT.",
"For the intuition, consider three different cases.",
"First, if there is only one correct translation at each time step, the output probabilities will have peakier distributions with low entropy and a single word receiving a large portion of the probability mass.",
"In this case, there is very little variation in the hypotheses space.",
"Second, if there are various correct translation options at a given generation step, the output probability distribution will have higher entropy, with multiple target words receiving similar probabilities.",
"In this case, generating hypotheses from the model will result in similar sentences containing synonyms or paraphrases.",
"Finally, if the NMT model has not seen enough data during training for a given combination of words, we would expect output probabilities to exhibit high entropy, approximating a uniform probability distribution.",
"In this case, generating MT hypotheses from the model should result in a highly diverse set with lower quality translations.",
"Below we explore various approaches to uncertainty quantification in neural networks in order to generate a set of additional hypotheses for MT evaluation.",
"Monte Carlo Dropout.",
"It has been shown that softmax function used in neural networks to generate output probability distribution does not properly capture uncertainty as it produces overconfident predictions (Gal and Ghahramani, 2016).",
"Most of the work on uncertainty quantification in deep learning relies on Bayesian formalism (MacKay, 1992; Graves, 2011; Welling and Teh, 2011; Gal and Ghahramani, 2016; Tran et al., 2019).",
"Representing uncertainty through Bayesian neural networks usually comes with prohibitive computational costs and various approximations have been developed to alleviate this issue.",
"One such approximation by Gal and Ghahramani (2016) is called Monte Carlo (MC) dropout.",
"Dropout is a method developed to reduce overfitting when training neural models Srivastava et al. (2014).",
"It consists in randomly masking neurons to zero based on Bernoulli distribution.",
"Gal and Ghahramani (2016) use dropout at test time before every weight layer.",
"They perform N forward passes through the network and collect posterior probabilities generated by the model with parameters perturbed by dropout: { p ( (cid:126)y | (cid:126)x, ) Ni =1 } where represents the perturbed parameters.",
"They show that this is equivalent to an approximation to the probabilistic deep Gaussian process.",
"Previous work has applied this method to quantify model uncertainty by taking the variance of the resulting probability distribution (Dong et al., 2018; Wang et al., 2019).",
"We instead look at the linguistic differences between MT hypotheses generated as a result of N forward passes through the model with perturbed parameters.",
"If the top MT output for a given source sentence is of high quality, it is probable that other hypotheses will be similar.",
"Ensembling.",
"Ensemble model combination is another strategy commonly used for estimating predictive uncertainty (Lakshminarayanan et al., 2017; Pearce et al., 2018; Liu et al., 2019).",
"We take an ensembling strategy typically applied in NMT to improve translation quality: we train four NMT models initialised with different random seeds.",
"At decoding time, prediction distributions from the four models are combined by averaging.",
"To generate additional hypotheses, the four models in the ensemble are used separately, each generating an independent set of translations.",
"Mixture of Experts.",
"Shen et al. (2019) applied mixture of experts (MoE) framework to capture the inherent uncertainty of the MT task and generate diverse hypotheses.",
"A mixture model introduces a multinomial latent variable z 1 , ..., K .",
"The marginal likelihood is then decomposed as: p ( (cid:126)y | (cid:126)x ; ) = K (cid:88) z =1 p ( (cid:126)y, z | (cid:126)x ; ) = K (cid:88) z =1 p ( z | (cid:126)x ; ) p ( (cid:126)y | z, (cid:126)x ; ) The model is trained with the EM algorithm where the E-step estimates the responsibilities of each mixture component (expert) and M-step updates parameters with gradients weighted by their responsibilities.",
"For our experiments, one of the mixture components was randomly selected to produce the MT output for human evaluation and the rest of them were used for the generation of additional hypotheses ( 5).",
"Here we revisit the approach previously used for statistical MT (Albrecht and Hwa, 2007) where outputs of other off-the-shelf MT systems are used as additional reference translations, with some differences.",
"First, NMT outputs on average have substantially higher quality.",
"Second, to avoid the need for labelled data, we do not rely on supervised training and treat the outputs of other MT systems in the same way we treat additional hypotheses that were produced using the methods described in the previous sections.",
"We use publicly available online NMT systems ( 5).",
"Using the methods described above we are able to produce a set of MT hypotheses for each given source segment.",
"The final dataset which we use for evaluation contains a human reference translation ( r ), the top MT output ( o ) and this set of alternative N MT hypotheses ( H = { h 1 ..h N } ).",
"We devise the following ways of combining similarities between possible translations and between these and the reference to obtain more accurate evaluation.",
"This accuracy will be measured by Pearson correlation with a direct assessment (DA) score collected for the o translation, as is common practice in the evaluation metrics field (Ma et al., 2019b).",
"Here we compute the similarity against the reference translation for the set of all generated translation candidates, including the initial MT output and additional hypotheses, and take the average",
"similarity score (micro-average).",
"If the MT output is of high quality but does not match the provided human reference due to acceptable linguistic variation, other hypotheses may serve as paraphrases to match the reference.",
"However, it is important to assign a higher weight to the MT output that was actually evaluated ( o ), as compared to the alternative MT hypotheses.",
"This is done using a simple variant of the above metric where we first take an average of the hypotheses-reference similarities, and then average this score with the MT output-reference similarity score (macro-average).",
"This results in two metrics: hyp ref micro = N 1 N +1 (cid:88) i =1 sim ( h (cid:48) i , r ) , h (cid:48) i H (cid:48) hyp ref macro = N 1 (cid:80) Ni =1 sim ( h i , r )+ sim ( o, r ) 2 where H (cid:48) = { h (cid:48) 1",
"..h (cid:48) N , o } is a set including additional hypotheses and the MT output, and sim corresponds to a similarity function of choice ( 4.3).",
"The represents different ways of combining hypotheses-reference similarities: average (as shown in the equations above), minimum (i.e. choosing the score for the most distant hypothesis) and maximum (i.e. choosing the score of the closest hypotheses).",
"As discussed in Section 3.3, similarity between translation hypotheses capture model confidence and could thus be indicative of translation quality.",
"We propose two metric variants to capture this idea.",
"First, we compute the similarity between all translations candidates including the additional hypotheses and the MT output: hyp self = 1 C | H (cid:48) | (cid:88) i =1 | H (cid:48) | (cid:88) j =1 sim ( h (cid:48) i , h (cid:48) j ) where h (cid:48) i H (cid:48) , i (cid:54) = j and C = 2 1 | H (cid:48) | ( | H (cid:48) | 1) is the number of pairwise comparisons for H (cid:48) hypotheses.",
"As before, corresponds to different ways of combining similarity scores: average, minimum and maximum.",
"Second, as before, we give a higher weight to the MT output whose quality we wish to evaluate ( o ).",
"To that end we compare the MT output against additional generated hypotheses.",
"This comparison indirectly captures the similarity between MT hypotheses themselves: hyp mt = N 1 N (cid:88) i =1 sim ( h i , o ) Both of these variants can be used with and without reference translation.",
"Interestingly, as will be shown in 6.2, they perform comparably to other methods even without the reference, putting into question the need for human reference in MT evaluation.",
"As in the previous section, to add human reference translations into the mix, we average the results as follows: hyp mt ref = N 1 (cid:80) Ni =1 sim ( h i , o )+ sim ( o, r ) 2 Figure 2 summarises the methods discussed above.",
"To measure similarity amongst hypotheses and against the reference(s), we experiment with the following standard MT evaluation metrics: 3",
"sentBLEU (Papineni et al., 2002).",
"BLEU measures the similarity between MT and the reference translation based on the number of matching n-grams.",
"We use a smoothed version of BLEU as described by Lin and Och (2004) with N = 4.",
"3 We use these metrics out of the box.",
"Better results could possibly be achieved by adapting them to our settings, e.g. by changing the weight of precision and recall depending on the direction of the comparison between MT output, hypotheses and the reference.",
"For instance, when using BLEU as similarity function for computing hyp mt , we are evaluating recall on the MT output, whereas BLEU is designed as a precision-oriented metric.",
"But the choice of similarity function is orthogonal to the goal of this paper, and we leave further refinements in this direction to future work.",
"TER (Translation Edit Rate) (Snover et al., 2006).",
"TER computes the edit distance defined as the minimum number of word substitutions, deletions, insertions and shifts that are needed to convert MT into the reference.",
"ChrF (Popovic, 2015).",
"ChrF calculates the F-score of character n-grams of maximum length 6.",
"Meteor (Denkowski and Lavie, 2014).",
"Meteor aligns MT output to the reference translation using synonyms and paraphrases besides exact word matching.",
"The similarity is based on the proportion of aligned words in the candidate and in the reference and a fragmentation penalty.",
"BERTScore.",
"(Zhang et al., 2019).",
"We also looked at this very recent metric (published after the submission of this paper), which uses powerful pre-trained embeddings.",
"BERTScore computes a cosine similarity score for each token in the MT output with each token in the reference sentence using contextual embeddings from BERT (Devlin et al., 2019), which can generate different vector representations for the same word depending on the context, thus better capturing meaning.",
"Maximum similarity values for MT and reference words are then used to compute a soft F1-score.",
"We use the implementation available at https://github.com/Tiiiger/bert score.",
"To test whether our methods improve correlation with human judgments, we need to have access to the NMT model and human judgments for the translations generated by this model.",
"This data is not generally readily available in evaluation campaigns such as Metrics Task at WMT conferences.",
"Below we describe two datasets that satisfy these conditions.",
"They cover two different language pairs and two different domains.",
"News English-Czech dataset.",
"We use available data from the WMT19 News Translation Task.",
"We focus on the University of Edinburgh's submission (Bawden et al., 2019) to the English-Czech translation task, since its NMT model is available.",
"The system was trained using the MarianMT toolkit with a standard Transformer architecture (Vaswani et al., 2017a).",
"Details on model training and architecture are described in (Bawden et al., 2019).",
"For producing pseudo-references, we use all five online systems whose submissions were provided as part of the WMT19 Translation Task.",
"Human judgments were collected in the form of Direct Assessments (DA) following the methodology proposed by Graham et al. (2015), which suggests that 15 segment-level DA judgements are required for trustworthy correlation analysis.",
"However the number of DA judgements in the WMT19 Metrics Task was much smaller.",
"We select segments with at least two DA annotations (795 segments with an average DA score of 80.22) to minimise this issue, but the results reported here for English-Czech should be interpreted with caution.",
"Wikipedia Estonian-English dataset.",
"This is a new dataset we collected which contains 1K sentences randomly selected from Wikipedia articles in Estonian and translated into English.",
"Two human reference translations were generated independently by two professional translators.",
"All the NMT models were trained using the Fairseq toolkit based on the standard Transformer architecture (Vaswani et al., 2017a) and the training settings described in Ott et al. (2018).",
"We used publicly available parallel datasets for training the models: the Rapid corpus of EU press releases (Rozis and Skadins, 2017) and Europarl (Koehn, 2005), which amount to around 4M parallel sentences in total.",
"A set of 400 segments were translated by the model variants described in 3 to assess the im-pact of uncertainty types.",
"The following settings were used for model variants.",
"For MC dropout we use dropout rate of 0.3, same as for training the basic Transformer model.",
"Additional hypotheses were produced by performing N stochastic forward passes through the network with dropout, as described in 3.",
"For this analysis we use N = 30 , which was shown to perform well for uncertainty quantification (Dong et al., 2018).",
"We also test how the number of hypotheses affects the results (see Appendix B).",
"For MoE we use hard mixture model with uniform prior and K = 5 mixture components.",
"To produce the translations we generate from a randomly chosen component with greedy search following the settings in Shen et al. (2019).",
"For generating additional hypotheses with beam search the top-K sentences K [2..5] from the beam were used ( K = 1 corresponds to the initial MT output).",
"For pseudo-reference approach we use three online systems: Systran, Google and Bing.",
"Human judgements were given by professional translators following the FLORES setup (Guzman et al., 2019) which presents a form of DA judgements (Graham et al., 2013).",
"The annotators were asked to rate each sentence from 0100 according to the perceived translation quality.",
"Specifically, the 010 range represents an incorrect translation; 1129, a translation with few correct keywords, but the overall meaning is different from the source; 3050, a translation with major mistakes; 5169, a translation which is understandable and conveys the overall meaning of the source but contains typos or grammatical errors; 7090, a translation that closely preserves the semantics of the source sentence; and 90100, a perfect translation.",
"Each segment was annotated by up to 6 translators.",
"Raw scores were converted into z-scores , i.e. standardised according to each individual annotator's overall mean and standard deviation.",
"The scores collected for each segment were averaged to obtain the final score.",
"The judgments were collected for the 1K segments translated by the standard Transformer model and for the 400 segments produced by four MT model variants in 3, resulting in a total of 1000 + 4 400 = 2600 source-MT pairs annotated with DA judgments.",
"The distribution of DA scores for English-Czech and 1K Estonian-English datasets is shown in the Appendix A. 4 6 Results In this section, we present the results of our experiments for generating additional MT hypotheses ( 6.1) and the methods for exploiting similarities between them ( 6.2).",
"We start by comparing the different strategies for generating multiple MT hypotheses described in 3 for the Estonian-English dataset.",
"Note that some variants also produce different top MT outputs ( o ), as they were trained using different architectures or decoding algorithms.",
"As a result we have four sets of DA annotations collected for 400 segments for system variants with different MT outputs: standard Transformer, Transformer with diverse beam search, MoE and ensembling.",
"MT outputs for beam search and MC dropout variants correspond to the same underlying NMT model.",
"4 The dataset and the NMT models required to reproduce our results are available at https://github.com/ facebookresearch/mlqe/tree/master/data-multi-hyp.",
"Table 1 presents the results.",
"First, beam search performs the poorest.",
"This is in line with the well known fact that beam suffers from low diversity of produced hypotheses (Vijayakumar et al., 2016).",
"As expected, diverse beam search results in a higher difference in correlation compared to mt-ref .",
"However, it is still outperformed by all other methods that capture model uncertainty, with MC dropout achieving the highest difference in correlation against mt-ref.",
"We note that this is not related to the number of generated hypotheses (see Appendix B for details).",
"We suggest that this is due to the fact that linguistic differences between additional hypotheses for high vs. low-quality MT outputs is more discriminating when the hypotheses are generated using MC dropout for representing model uncertainty (see example in Table 3).",
"The difference in correlation observed between different system variants is not related with the quality of MT outputs, as demonstrated by the average DA scores in Table 1.",
"Pseudo-references also perform very well, potentially due to the high quality of the MT systems used to generate them.",
"We select MC dropout and pseudo-references as the two best performing options to conduct a more detailed analysis below.",
"Table 2 shows the results for the 1K Estonian-English dataset and for English-Czech dataset.",
"5 mt-ref stands for the standard reference-based evaluation.",
"The remaining methods correspond to those described in 4.",
"The methods pseudo-mt-max and pseudo-mt-max-ref are equivalent to the hyp-mt-* and hyp-mt-*-ref but instead of dropout-based hypotheses, the outputs of other MT systems are used.",
"For Estonian-English, since we have two human references we compute the correlation for each of them separately ( mt-ref-1 and mt-ref-2 ), as well as in a multi-reference scenario ( mt-ref-multi ).",
"6 We use mt-ref-1 to calculate all the remaining methods that involve a reference translation.",
"Significance of the differences in correlation for the proposed methods with respect to mt-ref-1 and mt-ref-multi is assessed using Hotelling-Williams 5 For the full set of results see the Appendix C. 6 In the multi-reference scenario, BLEU score is computed by counting the n-gram matches between the MT output and all references as in (Papineni et al., 2002).",
"For the rest of the metrics, the closest reference is used for each segment to compute the score, as in (Denkowski and Lavie, 2014).",
"test (Williams, 1959), as described in Graham et al. (2015).",
"First, we observe that the methods based on the similarity against the reference ( hyp-ref-* ) do not perform as well as those relying more on the relation between MT hypotheses ( hyp-mt-* ).",
"As discussed in 4, the latter capture the uncertainty of NMT models when generating the output for a given source sentence.",
"Overall, hyp-mt-avg-ref consistently outperforms all the other variants by a large margin, for all automatic evaluation metrics considered.",
"Logically, the improvement is larger for exact-matching metrics, but also significant for Meteor, ChrF and BERTScore, which attempt to capture linguistic variation.",
"Surprisingly, hyp-mt-avg-ref performs better than the mt-ref-multi .",
"Reasons may be that it can potentially cover a larger number of paraphrases than one additional reference translation, and that besides computing similarity to a reference translation, it incorporates information on model uncertainty.",
"Interestingly, our reference-free metric hyp-mt-avg , which only compares the MT output against additional generated hypotheses and does not rely on human references, also performs competitively.",
"This result confirms the important role played by the model confidence component in measuring MT quality.",
"Note that for Estonian-English dataset it performs better than the evaluation with single reference, indicating that model confidence alone can be more reliable for assessing MT quality than using a single reference translation.",
"Finally, we observe that using translations from online MT systems also outperforms reference-based evaluation.",
"The differences are larger for Estonian-English.",
"This could be because for into-English translation the quality of pseudo-references is higher, making them as good as actual reference translations, while yet closer to the MT output under evaluation.",
"For English-Czech, pseudo-references are closer to mt-ref and generally worse than hyp-mt-avg-ref .",
"Table 3 illustrates the advantage of our uncertainty-aware evaluation over standard reference-based scoring.",
"We show MC dropout and top beam hypotheses for a high quality and for a low quality MT output.",
"First, note that MC dropout hypotheses are very different for a low-quality MT output and fairly similar for good-quality translation.",
"By contrast, beam hypotheses are similar or the same in both cases.",
"Second, the evaluation scores obtained using MC dropout hypotheses result in a large difference between low-quality and high-quality MT outputs, whereas Meteor assigns a higher score to the low-quality example due to surface word and synonym matches that are in this case not indicative of MT quality.",
"The proposed approach has some limitations.",
"First, it requires access to the NMT system that was used to generate the translations.",
"Second, we note that this idea works better if the NMT model is reasonably well trained, as additional hypotheses could be less informative otherwise.",
"Finally, it is not clear how the methods presented here would work for comparing the output quality of different MT systems, but this is a different application of our proposed approach and we leave this question to future work.",
"We proposed to explore NMT model uncertainty to generate additional hypotheses for MT evaluation.",
"We showed that by exploiting similarities in the space of translation hypotheses generated by the model, along with methods to effectively combine information from these multiple hypotheses, we can achieve more accurate estimation on the quality of MT output than standard reference-based comparison, including cases with multiple references.",
"This suggests that model uncertainty alone can be more reliable for assessing MT quality than standard reference-based evaluation.",
"This work can be extended in numerous ways.",
"First, we plan to test whether similar observations will hold for more language pairs and text domains.",
"Second, the score combination strategies could be improved by learning weights for each component.",
"Finally, we would like to test this approach for comparing different MT systems.",
"Marina Fomicheva and Lucia Specia were supported by funding from the Bergamot project (EU H2020 Grant No. 825303).",
"We thank Mark Fishel (University of Tartu) for collecting one set of Estonian-English references."
]
| [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"objective",
"result",
"abstain",
"objective",
"objective",
"abstain",
"method",
"other",
"other"
]
|
[
"Concept mapbased multi-document summarization has recently been proposed as a variant of the traditional summarization task with graph-structured summaries.",
"As shown by previous work, the grouping of coreferent concept mentions across documents is a crucial subtask of it.",
"However, while the current state-of-the-art method suggested a new grouping method that was shown to improve the summary quality, its use of pairwise comparisons leads to polynomial runtime complexity that prohibits the application to large document collections.",
"In this paper, we propose two alternative grouping techniques based on locality sensitive hashing, approximate nearest neighbor search and a fast clustering algorithm.",
"They exhibit linear and log-linear runtime complexity, making them much more scalable.",
"We report experimental results that confirm the improved runtime behavior while also showing that the quality of the summary concept maps remains comparable.",
"1 1 Introduction Concept maps are labeled graphs with nodes representing concepts and edges showing relationships between them (Novak and Gowin, 1984).",
"Following earlier work on the automatic extraction of concept maps from text (Rajaraman and Tan, 2002; Valerio and Leake, 2006; Villalon, 2012; Zubrinic et al., 2015), concept maps have recently been promoted as an alternative representation for summaries (Falke and Gurevych, 2017; Handler and O'Connor, 2018).",
"In the corresponding task, concept mapbased multi-document summarization (CM-MDS), a set of documents has to be automatically summarized as a concept map that does not exceed a pre-defined size limit.",
"An important subtask of CM-MDS is concept mention grouping , in which all mentions that refer to a specific concept should be grouped together.",
"Without grouping, duplicates can appear in a summary concept map that make the map harder to understand and that waste valuable space.",
"To approach the mention grouping subtask, Falke et al. (2017) proposed to make pairwise coreference classifications between mentions and to induce a partitioning from those predictions.",
"Their experiments showed that this leads to better summary concept maps, establishing the current state-of-the-art for CM-MDS.",
"However, the computational costs of the approach are high, as it exhibits a O ( n 4 ) worst-case time complexity.",
"When the number of documents that should be summarized is large, applying that technique can quickly become impractical.",
"But exactly for those large document sets, a summary would be most helpful.",
"As the first contribution of this paper, we propose two faster grouping techniques.",
"First, we apply locality sensitive hashing (LSH) (Charikar, 2002) to word embeddings in order to find similar mentions without making all pairwise comparisons.",
"That directly leads to a simple O ( n ) grouping method.",
"Second, we also propose a novel grouping technique that combines the hashing approach with a fast partitioning algorithm called Chinese Whispers (CW) (Biemann, 2006).",
"It has O ( n log n ) time complexity and the advantage of being more transparently controllable.",
"Since the reduced complexity of the two proposed techniques is gained through approximations, the resulting grouping could of course be of lower quality.",
"As the second contribution of this paper, we therefore carry out end-to-end experiments in the context of CM-MDS to analyze this trade-off.",
"We compare both techniques against the state-of-the-art approach in automatic and manual evaluations.",
"For both, we observe orders of magnitude faster runtimes with only small reductions in summary quality.",
"In the future, the techniques could also be applied beyond CM-MDS to speed up other similarity-based partitioning problems in NLP and its applications.",
"Given a set of concept mentions M identified in the input documents, the goal of concept mention grouping is to derive a partitioning C of M such that for every set of mentions in C , the set contains all mentions and only mentions of one unique concept.",
"Let n denote the number of mentions | M | .",
"Previous work on concept map mining used stemming (Villalon, 2012), substring matches (Va-lerio and Leake, 2006) or WordNet (Aguiar et al., 2016) to detect coreferences between mentions.",
"Falke et al. (2017) combined several of those features, including semantic similarities based on WordNet (Miller et al., 1990), latent semantic analysis (Deerwester et al., 1990) and word2vec embeddings (Mikolov et al., 2013), in a log-linear classifier to predict coreferences of mentions.",
"Since such pairwise predictions can be inconsistent, e.g. the model might classify ( m 1 , m 2 ) and ( m 2 , m 3 ) as coreferent, but not ( m 1 , m 3 ) , Falke et al. (2017) further induce a transitive relation from the predictions to obtain a valid partitioning of M .",
"They note that simply ignoring conflicting negative classifications by building the transitive closure over all positive ones typically yields undesired partitionings in which too many mentions are being lumped together.",
"Following previous work on related NLP tasks (Barzilay and Lapata, 2006; Denis and Baldridge, 2007), they instead formulate an integer linear program (ILP) to find the transitive relation that maximally agrees with all pairwise predictions.",
"However, as the resulting ILPs cannot be efficiently solved on the data they work with, they propose a local search algorithm that incrementally improves a greedy solution rather than finding the optimal partitioning, This technique requires making classifications for all pairs of mentions in O ( n 2 ) time and running the local search, which has a worst-case complexity of O ( n 4 ) .",
"As we will show in Section 6, that can quickly become prohibitively expensive.",
"Let u, v be k -dimensional vectors.",
"First, choose d unit random vectors r 1 , . . . , r d of k dimensions by sampling every dimension independently from a standard normal distribution.",
"Then, for a vector u , compute a d -dimensional bit vector h ( u ) , the hash, with the i -th dimension defined as h ( u ) [ i ] = (cid:40) 1 : u r i 0 0 : u r i < 0 , (1) where u r i is the dot product with the i -th random vector.",
"The Hamming distance ham between two hashes h ( u ) and h ( v ) , i.e. the number of differing bits, can then be used to approximate the cosine similarity of u and v (Charikar, 2002): u v | u || v | cos (cid:18) ham( h ( u ) , h ( v )) d (cid:19) (2) The longer the hashes are, i.e. the larger d is, the more accurate is the estimation of the similarity.",
"In the past, LSH has been successfully used to speed up a range of NLP tasks, including noun similarity list construction (Ravichandran et al., 2005), word sense induction (Mouton et al., 2009), gender classification (van Durme, 2012) and text classification (Bollegala et al., 2018).",
"Given the mapping h from vectors to their bit hashes, we can partition a set of vectors by hash identity.",
"Every unique hash becomes a group consisting of all vectors mapped to that hash.",
"Since the hashes reflect similarity, the most similar vectors will be grouped together.",
"The parameter d controls the degree of grouping: the smaller it is, the less unique hashes and thus fewer groups exist.",
"In order to apply this technique to concept mention grouping, every mention m M has to be represented by a vector in a space where the cosine similarity is indicative of coreference.",
"Since the classifier of Falke et al. (2017) already uses cosine similarity of word2vec embeddings as a feature, we also use those vectors for LSH.",
"2 Both the computation of the hashes and building groups can be done with a single pass over the mentions.",
"Assuming d and k to be fixed, the overall time complexity of the grouping technique is thus O ( n ) .",
"2 Following their work, we represent a mention by the mean of the embedding vectors of the mention's tokens.",
"When grouping similar elements together, one typically wants to control the degree of grouping by defining a similarity threshold .",
"For the naive LSH-based partitioning, we can only set d , which does not directly correspond to a similarity.",
"Therefore, we propose a second, more transparent grouping technique with this property.",
"Given vectors and their LSH-based hashes, we can use approximate nearest neighbor search (ANNS) to find pairs with a cosine similarity of at least (Charikar, 2002; Ravichandran et al., 2005) without making all pairwise comparisons:",
"1. Sample q permutations of the bit hashes.",
"2. For each permutation, sort all mentions M according to their permuted hashes.",
"3. In each sorted list, estimate the cosine similarity of each m M with the next b mentions based on the hashes.",
"Keep pairs with a similarity of at least .",
"Since comparing neighbors in a sorted list of bit hashes will primarily find those that differ in the last positions, the random permutations are the key part of the algorithm that ensures similar hashes differing at varying positions are found.",
"Rather than comparing each vector to all others in O ( n 2 ) , only qb comparisons are made for each.",
"The dominant part becomes the sort, resulting in O ( n log n ) time complexity as q and b are constants.",
"Using ANNS we can obtain an undirected graph of mentions connected with edges if their similarity is at least .",
"However, as Falke et al. (2017) observed, simply taking the transitive closure over these pairs tends to yield too big groups that lump many mentions of different concepts together.",
"Rather than relying on the expensive O ( n 4 ) local search of Falke et al. (2017) to address this problem, we here resort to the fast graph partitioning algorithm CW (Biemann, 2006).",
"Given a graph G = ( V, E ) , it proceeds as follows:",
"1. Label nodes initially as l ( v i ) = i v i V .",
"2. Iterate over V in randomized order.",
"For each v V , set l ( v ) to the label most frequent among the nodes reachable via a direct edge.",
"While it cannot be guaranteed in general, the algorithm typically converges to a stable labeling after a few iterations.",
"Then, nodes having the same label form a group of the partitioning.",
"In contrast to the local search, CW does not directly optimize the objective function proposed by Falke et al. (2017), however, we empirically found that it yields partitionings that score very well with regard to that objective.",
"To guarantee termination, the number of iterations is bound by a parameter (cid:15) .",
"Then, CW iterates at most (cid:15) times over n nodes and their at most n 1 edges, resulting in O ( n 2 ) complexity.",
"For concept mention grouping, we combine these techniques as follows: First, we represent each mention with a vector and compute its LSH-based hash.",
"Second, we use ANNS to find pairs with a similarity of at least .",
"Finally, we partition the resulting nearest neighbor graph with CW.",
"That grouping technique has four parameters , d, q and b .",
"While determines the degree of grouping, d influences the quality of the similarity estimates and q and b define the size of the search space explored to find nearest neighbors.",
"Note that the construction of the nearest neighbor graph guarantees that a node has at most qb edges, reducing the runtime of CW to O ( n ) in this setting.",
"The runtime behavior of the combination is therefore dominated by ANNS and thus O ( n log n ) .",
"Data and Metrics We use the benchmark corpus introduced by Falke and Gurevych (2017), the only existing dataset with manually created reference summary concept maps.",
"It provides reference summaries for document sets of web pages on 30 different topics.",
"As metrics, we compute the ROUGE and METEOR variants proposed with the dataset and also perform a human evaluation following the protocol of Falke et al. (2017).",
"Implementation As the reference , we use the state-of-the-art pipeline of Falke et al. (2017).",
"3 We test the naive LSH-based partitioning ( LSH-only ) 3 https://github.com/UKPLab/ ijcnlp2017-cmaps Average Smallest Largest Approach Count Runtime Count Runtime Count Runtime Mentions 5299 2475 13572 Reference 4029 3h 12m 32s 1847 24m 21s 10131 22h 48m 08s LSH-only 3694 1s 1752 1s 7827 2s LSH-CW 4085 23s 1875 11s 9861 58s Table 1: Concept mention grouping runtimes on average and for the smallest and largest set.",
"and the combined approach ( LSH-CW ) by substituting them into that pipeline.",
"For a fair comparison, we use the same 300-dimensional word2vec embeddings (Mikolov et al., 2013) for LSH that have also been used in the log-linear model.",
"Tuning In the reference pipeline, the regularization constant of the scoring SVM was tuned with leave-one-out cross-validation on the training set.",
"For LSH-only, we use the same procedure to tune d (together with regularization) and found d = 17 to be best (testing 10, 11, ..., 25).",
"For LSH-CW, where four hyper-parameters have to be set, running cross-validation for the whole grid is too expensive.",
"We instead evaluate a grid of 130 d / q / b / combinations by concept F1-score after grouping and tune the SVM with cross-validation only for the three best settings, leading to the parameters d = 200 , q = 20 , b = 200 , = .",
"89 .",
"Runtime Table 1 shows the runtimes for grouping concept mentions.",
"4 It demonstrates two problems of the reference: First, even on the smallest document set (37 docs, 50k tokens), the grouping already takes hours.",
"And second, on the biggest set (42 docs, 220k tokens), the runtime grows to almost a day, illustrating the analyzed time com-4 Measured on an Intel Xeon ES-2620 2.1GHz processor.",
"plexity.",
"Applying the technique to more documents quickly becomes infeasible.",
"Our newly proposed techniques, LSH-only and LSH-CW, are orders of magnitude faster in absolute terms and also show a more moderate runtime growth as expected given their preferable time complexity.",
"Quality A crucial question is which price we have to pay for improving runtimes through approximations.",
"Table 2 shows the automatic evaluation results for the created summaries.",
"We included lemma-only , a baseline from previous work using lemmatization for grouping, and w2v-only , a variation of the reference grouping approach that uses embeddings as the only feature in the coreference classifier.",
"The latter is important for comparison, as it uses the same information as the LSH-based techniques.",
"While lemma-only and w2v-only perform significantly worse than the reference, the two LSH-based techniques come much closer to the more expensive reference.",
"Table 3 shows the results of our human evaluation.",
"Following previous work, we collected pairwise preferences among the created summaries via Mechanical Turk (150 per pairing) for the dimensions focus (Fo), grammaticality (Gr), meaningfulness (Me) and non-redundancy (NR).",
"5 As shown, the preferences we collected are almost balanced and annotators repeatedly noted during the study that the summaries are very similar.",
"None of the 12 preferences are significant at = 5 We payed $0.60 per comparison and anonymized worker IDs.",
"The study was approved by the university's ethics committee and we obtained informed consent from participants.",
"0 .",
"05 (binomial test), showing that the alternative summary concept maps are practically indistinguishable.",
"In contrast, Falke et al. (2017) observed preferences of up to 79% in their study.",
"Conclusion Based on the automatic and human evaluations, we conclude that both fast grouping techniques proposed in this paper do not substantially decrease the quality of the summaries.",
"Since there is also no clear difference between LSH-only and LSH-CW, we recommend both techniques, which allows practitioners to choose between more transparency or even faster runtimes.",
"Future Work The comparison of w2v-only and the reference in Table 2 reveals that relying only on word2vec and dropping the other features of the log-linear model hurts performance, suggesting that also adding the remaining features to the LSH techniques could lead to further improvements.",
"However, all other features of the reference model are pairwise features, which makes it difficult to incorporate them in the LSH-based techniques that only use mention features.",
"As an alternative direction, one could instead rely on more powerful word embeddings.",
"While we used word2vec to ensure comparability to previous work, using more recent embedding methods such as fastText (Bo-janowski et al., 2017), InferSent (Conneau et al., 2017) or ELMO (Peters et al., 2018) seems to be worth exploring in the future.",
"In this paper, we proposed two fast concept mention grouping techniques for CM-MDS, the direct application of LSH and a novel combination of LSH and Chinese Whispers.",
"Our analysis and experiments show that they are orders of magnitude faster than previous techniques with only small effects the quality of the resulting summary concept maps.",
"Using these techniques, summary concept maps can now be created for much larger document sets than what was possible before.",
"We would like to thank Kevin Mayer for his support during preliminary experiments leading to this paper.",
"This work has been supported by the German Research Foundation as part of the Research Training Group Adaptive Preparation of Information from Heterogeneous Sources (AIPHES) under grant No.",
"GRK 1994/1."
]
| [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other"
]
|
[
"Generating inferential texts about an event in different perspectives requires reasoning over different contexts that the event occurs.",
"Existing works usually ignore the context that is not explicitly provided, resulting in a context-independent semantic representation that struggles to support the generation.",
"To address this, we propose an approach that automatically finds evidence for an event from a large text corpus, and leverages the evidence to guide the generation of inferential texts.",
"Our approach works in an encoder-decoder manner and is equipped with a Vector Quantised-Variational Autoencoder, where the encoder outputs representations from a distribution over discrete variables.",
"Such discrete representations enable automatically selecting relevant evidence, which not only facilitates evidence-aware generation, but also provides a natural way to uncover rationales behind the generation.",
"Our approach provides state-of-the-art performance on both Event2Mind and ATOMIC datasets.",
"More importantly, we find that with discrete representations, our model selectively uses evidence to generate different inferential texts.",
"Inferential text generation aims to understand daily-life events and generate texts about their underlying causes, effects, and mental states of event participants, which is crucial for automated commonsense reasoning.",
"Taking Figure 1 as an example, given an event PersonX reads PersonY's diary , the cause of the participant PersonX is to obtain Person Y's secrets and the mental state of PersonX is guilty .",
"Standard approaches for inferential text generation (Rashkin et al., 2018; Sap et al., 2019; Bosselut et al., 2019; Du et al., 2019) typically only Work done while this author was an intern at Microsoft Research.",
"PersonX stole PersonY's diary secretly PersonYinvites PersonX to read his diary know more about PersonY PersonX feels Event Background Inferences obtain PersonY's secrets PersonX wants to PersonX reads PersonY's diary guilty curious Figure 1: An examples of inferential text generation on mental states of event participants.",
"take the event as the input, while ignoring the background knowledge that provides crucial evidence to generate reasonable inferences.",
"For example, if the background knowledge of this example is PersonY invites PersonX to read his diary , the outputs should be different.",
"In this paper, we present an evidence-aware generative model, which first retrieves relevant evidence from a large text corpus and then leverages retrieved evidence to guide the generation of inferential texts.",
"Our model is built upon Transformer-based (Vaswani et al., 2017) encoder-decoder architecture, and is equipped with Vector Quantised-Variational Autoencoder to map an event to a discrete latent representation (van den Oord et al., 2017).",
"These discrete representations embody the latent semantic distribution of inferences given the event, thus supporting selection of relevant evidence as background knowledge to guide the generation in different perspectives.",
"Furthermore, our model has two attractive properties: (1) it avoids the problem of posterior collapse, caused by latent variables being ignored, in traditional variational autoencoder with continuous latent variables (van den Oord et al., 2017), and more importantly (2) it uncovers the rationale of a generation to some extent through tracing back the evidence that guides the generation and the selected discrete representation of the event.",
"We evaluate our approach on Event2Mind (Rashkin et al., 2018) and ATOMIC (Sap et al., 2019) datasets, both of which focus on reasoning about causes and effects of events and mental states of event participants.",
"Experimental results show that our approach achieves state-of-the-art performances on both datasets.",
"Further analysis shows that our approach can equip the generation with an explicit control over the semantics of latent variables and selected evidence to generate inferential texts in different perspective.",
"The source codes are available at https: //github.com/microsoft/EA-VQ-VAE .",
"Figure 1 shows an example of the task, which aims to generate inferential texts about causes and effects of daily-life events and mental states of the events participants.",
"Formally, given an event x = { x 1 , x 2 ,",
".., x n } and an inference dimension r such as causes of the event, the goal is to generate multiple inferential texts Y = { y (1) , y (2) , ..., y ( m ) } 1 , where the background knowledge of the event is absent in the dataset.",
"We conduct experiments on Event2Mind 2 (Rashkin et al., 2018) and ATOMIC 3 (Sap et al., 2019) datasets.",
"Both datasets contain about 25,000 unique events extracted from multiple data sources and provide multiple inferences under different inference dimensions by crowd-sourcing on Amazon Mechanical Turk.",
"Event2Mind and ATOMIC contain 2.6 and 3.6 inferences on average per example, respectively.",
"Event2Mind focuses on three inference dimensions related to mental states of participants (i.e. intents and reactions of the events participants), while ATOMIC has broader inference dimensions including mental states, probable pre-and post conditions of the event, and persona status.",
"More details about the two datasets are provided in the Appendix A. 3 Overview of the Approach We present our approach in this section, which first retrieves relevant evidence from a large text corpus, and then utilizes retrieved evidence as background knowledge to generate inferences.",
"First, our encoder takes an event as the input and outputs a semantic representation z from a distribution over discrete latent variables, which is based on Vector Quantised-Variational Autoencoder (VQ-VAE) (van den Oord et al., 2017).",
"We then use the event as a query to retrieve top K evidence from a large text corpus as background knowledge.",
"Lastly, the evidence-aware decoder takes the semantic representation and evidence as the input and generates the inference y , where the semantic representation selectively uses relevant evidence as background knowledge to guide the generation of inferences.",
"Figure 3 illustrates the model architecture of our approach.",
"The model is based on encoder-decoder framework equipped with Vector Quantised-Variational Autoencoder (VQ-VAE) (van den Oord et al., 2017), where the VQ-VAE is learned to model the latent semantic distribution within inferences given an event.",
"Latent variables z from the VQ-VAE will be used to calculate the relevant of retrieved evidence in the semantic space to guide the generation.",
"Compared with continuous VAEs, VQ-VAE does not suffer from posterior collapse issues that latent variables are often ignored with a powerful decoder (van den Oord et al., 2017).",
"VQ-VAE mainly consists of three parts: a codebook for modeling the latent semantic distribution within inferences over discrete latent variables, a recognition network for modeling a posterior distribution q ( z | x, y ) , and a prior network for inferring a prior distribution p ( z | x ) .",
"Codebook A codebook aims to model the latent semantic discrete distribution within inferences, which is composed of k discrete latent variables (i.e. k -way categorical).",
"We define the codebook as an embedding table T R k d , where d is the dimension of latent variables.",
"The semantic latent variable z is indexed from the posterior distribution q ( z | x, y ) in the training phase and the prior distribution p ( z | x ) in the inference phase over the codebook, respectively.",
"Posterior Distribution We follow van den Oord et al. (2017) to model a discrete posterior distribution q ( z | x, y ) over the codebook.",
"First, we use Transformer (Vaswani et al., 2017) with two layers as our encoder, where the input sequence is the concatenation of an event x and its inference y .",
"In order to obtain the representation of an example ( x, y ) , we add a special token in the last of the input sequence and take the hidden state h ( x,y ) of the special token as the representation of the example.",
"The posterior categorical probability distribution q ( z | x, y ) is defined as one-hot as follows.",
"(1)",
"z (cid:3) = z k where k = arg min j || h ( x,y ) z j || 2 (2) Prior Distribution In the inference phase, only the event x is given, which requires a prior distribution estimator to infer the prior distribution p ( z | x ) .",
"As we can see, the hidden state h ( x,y ) of the example is mapped onto the nearest element z (cid:3) of the codebook under the posterior distribution q ( z | x, y ) .",
"Since the prior distribution is crucial for the inference phase, we use a powerful pre-trained language model such as RoBERTa (Liu et al., 2019) to encode the event into a hidden state h .",
"Since the prior distribution is categorical, we then use a k -way classifier following a softmax function to infer the prior distribution, where W k R d k is the model parameters.",
"In this section, we describe how to retrieve event-related evidence as background knowledge.",
"Given an event, we expect that retrieved evidence can contain the event and provide its context as a clue to guide the generation.",
"To retrieve event-related evidence, we use the event as a query to search evidence from a large text corpus.",
"Specifically, we first remove stop words in the given event and then concatenate the words as a query to search evidence from the corpus by Elastic Search engine 4 .",
"The engine ranks the matching scores between the query and all sentences using BM25 and select top K sentences as evidence C = { c 1 , c 2 , ..., c K } .",
"To provide detailed context about the event, we build our corpus upon BooksCorpus (Zhu et al., 2015) that consists of 11,038 story books, since stories usually give a detailed account of an event such as causes and effects of the event.",
"In this section, we propose an evidence-aware decoder, which consists of two components, evidence selection and a generator, respectively.",
"Evidence selection aims to calculate a context distribution 4 https://www.elastic.co/ 6121 p s ( c | z ) given a latent variable z to model the relevance of retrieved evidence, while the generator p m ( y | x, c ) takes an event x and evidence c as the input to generate the inferential text y .",
"The relevance of retrieved evidence is different depending on the semantics of inference, which requires a context distribution to model the relevance.",
"For examples, given an event PersonX reads PersonY's diary and its inference PersonX feels guilty , the relevance of the evidence PersonX stole PersonY's diary should be higher than that of the evidence PersonY invites PersonX to read his diary .",
"However, inferences are unseen in the inference phase, thus we cannot use inferences to model the context distribution.",
"Instead, we utilize semantic latent variables from the VQ-VAE that models the latent semantic distribution of inferences given an event to calculate the relevance of retrieved evidence.",
"Evidence selection aims to calculate a context distribution p s ( c | z ) over retrieved evidence given a semantic latent variable z to model the relevance of retrieved evidence.",
"Considering that term-based retrieval (i.e. BM25) may fail to retrieve relevant evidences and all retrieved evidence cannot support the generation, we add an empty evidence c into the set C of retrieved evidence as the placeholder.",
"We first use Transformer with two layers to encode retrieved evidence into context vectors HC = { h c 1 , h c 2 ,",
".., h c K , h c } in the semantic space.",
"Then, the context distribution p s ( c | z ) over retrieved evidence given the semantic latent variable z is calculated as one-hot as follows.",
"p s ( c k | z ) = 1 if k = arg min j || h c j z || 2 0 otherwise (4) As we can see, the latent variable z is mapped onto the nearest element c z of the retrieved evidence under the context distribution p s ( c | z ) .",
"Another soft distribution such as using an attention mechanism to calculate the relevance of retrieved evidence can also model the context distribution, but we choose the one-hot distribution as our context distribution since it maps the latent variable z onto the nearest element of the retrieved evidence, the property of which can help effectively learn the model (described in the Section 3.4).",
"Recently, Transformer-based (Vaswani et al., 2017) language models like GPT-2 (Radford et al., 2019) have achieved strong performance in text generation, which is pre-trained from a large-scale text corpus and then fine-tuned on downstream tasks.",
"In this work, we use the GPT-2 p m ( y | x, c ) as the backbone of our generator and further take retrieved evidence into account.",
"A general approach to utilize evidence to guide the generation is to calculate the context vector h c = (cid:5) K +1 i =1 p s ( c i | z ) h c i as the input of GPT-2 according to the relevance p s ( c | z ) of retrieved evidence.",
"However, this approach changes the architecture of GPT-2, invalidating the original weights of pre-trained GPT-2.",
"Instead, we sample an evidence c from the context distribution p s ( c | z ) and then concatenate the event and the selected evidence as the input.",
"To make the paper self-contained, we briefly describe the GPT-2, which takes an evidence and an event as the input and generates the inference y = { y 1 , y 2 ,",
".., y n } .",
"This model applies N transformer layers over the input tokens to produce an output distribution over target tokens: h 0 = [ c ; x ; y <t ] W e + W p h l = transformer l 1 ( h l 1 ) p ( y t ) = softmax ( h N 1 last W Te ) (6) where W e is the token embedding matrix, W p is the position embedding matrix, and h N 1 last is the hidden state of the last token on the top layer.",
"Each transformer layer transformer l 1 contains an architecturally identical transformer block that applies a masked multi-headed self-attention operation followed by a feed forward layer over the input h l 1 in the l -th layer.",
"g l = MultiAttn ( h l 1 ) g l = LN ( g l + h l 1 ) h l = F F N ( g l ) h l = LN ( h l + g l ) (7) where MultiAttn is a masked multi-headed self-attention mechanism, which is similar to Vaswani et al. (2017), F F N is a two layers feed forward network, and LN represents a layer normalization operation (Ba et al., 2016).",
"Our entire approach corresponds to the following generative process.",
"Given an event x , we first sample a latent variable z from the VQ-VAE p ( z | x ) .",
"We then select relevant evidence c according to the semantics of the latent variable from the context distribution p s ( c | z ) .",
"Finally, the generator p m ( y | x, c ) takes the event x and the selected evidence c as the input and generate the inference y .",
"Therefore, the probability distribution p ( y | x ) over inferences y given the event x is formulated as follow.",
"p ( y | x ) = (cid:6) z T (cid:6) c C p m ( y | x, c ) p s ( c | z ) p ( z | x ) (8) A straightforward method for learning our model might be maximizing the marginal likelihood by joint learning, but it is computationally intractable.",
"Instead, we first learn the VQ-VAE with the prior distribution p ( z | x ) in isolation, which can enable the codebook to capture the latent semantics within inferences.",
"Then, we train the evidence-aware decoder under the posterior distribution q ( z | x, y ) .",
"Training VQ-VAE To enable the codebook to capture the latent semantics within inferences, we train the VQ-VAE by reconstructing the inferential text y using the latent variable z .",
"We use the pre-trained language model GPT-2 (Radford et al., 2019) as our decoder to generate the inference p ( y | x, z ) , where the input is the sum of token embedding, position embedding and the latent variable z .",
"To make reconstruction better conditioned on the latent variable, we replace each query in the multi-head self-attention mechanism with the sum of the latent variable and the query, as well for keys, values and hidden states on the top layer.",
"We follow van den Oord et al. (2017) to learn the VQ-VAE by minimizing the loss function.",
"where sg stands for the stop gradient operator that has zero partial derivatives during differentiation, and is a hyperparameter which controls the speed to change the latent variable.",
"We set the as 0.25 in all experiments.",
"The decoder optimizes the first loss term (reconstruction) only, the encoder optimizes the first and the last loss terms, and the codebook are updated by the middle loss term.",
"We obtain the posterior distribution q ( z | x, y ) after optimizing the encoder and the codebook.",
"Afterward, we learn the prior distribution estimator to infer the prior distribution p ( z | x ) .",
"Since the posterior distribution is categorical, we can calculate approximate prior distributions as follow in the training dataset D , where N ( x ) is the number of examples that includes the event x .",
"p ( z | x ) = (cid:6) ( x,y i ) D q ( z | x, y i ) N ( x ) (10) Therefore, we can fit the prior distributions by minimizing the KL divergence.",
"Training Evidence-Aware Decoder After training VQ-VAE, we jointly learn the context distribution p s ( c | z ) and the generator p m ( y | x, c ) by maximizing the following marginal likelihood under the posterior distribution q ( z | x, y ) .",
"logp ( y | x ) = E z q [ (cid:6) c C logp m ( y | x, c ) p s ( c | z )] (12) According to the Equation 2, the example ( x, y ) is mapped onto the nearest element z (cid:3) of the codebook under the posterior distribution q ( z | x, y ) .",
"Meanwhile, according to the Equation 5, the latent variable z (cid:3) is mapped onto the nearest element c z (cid:3) of retrieved evidence.",
"Therefore, the objective in Equation 12 can be simplified as follow.",
"(13) Since the ground truth evidence for the example is unobserved, we cannot directly train the model by maximizing the marginal likelihood.",
"To remedy this problem, we use reinforcement learning algorithm to optimize the objective.",
"where R is the reward designed to guide the model training, ( x ) is 1 if x is larger than 0 otherwise 1 , and c r is a randomly selected evidence where c r (cid:3) = c z (cid:3) .",
"The idea of designing the reward is that correct evidence should increase the probability of the gold inference compared with other evidence.",
"Note that there is no real gradient defined for p s ( c | z ) , instead, we approximate the gradient similar to the straight-through estimator (Bengio et al., 2013).",
"Thus, we can optimize the evidence-aware decoder by maximizing the marginal likelihood in the Equation 15.",
"Please see more details about the model hyperparameters in Appendix B. 4 Experiment 4.1 Model Comparisons Following Sap et al. (2019), we first use the average BLEU-2 score between each sequence in the top 10 predictions and the gold generations to evaluate the accuracy of generations.",
"We report the result of existing methods on ATOMIC and Event2Mind datasets in the Table 1 and Table 2, respectively.",
"These approaches are divided into two groups.",
"The first group trains distinct models for each inference dimension separately, while the second group trains a model in a multi-task learning way for all inference dimensions.",
"S2S is a RNN-based sequence-to-sequence model (Sutskever et al., 2014).",
"VRNMT (Su et al., 2018) introduces a sequence of recurrent latent variables to model the semantic distribution of inferences.",
"CWVAE propose a context-aware variational autoencoder (Du et al., 2019) to acquire context information, which is first pre-trained on the auxiliary dataset and then fine-tuned for each inference dimension.",
"COMET (Bosselut et al., 2019) concatenate the event with an inference dimension as the input and fine-tune the pre-trained GPT-2.",
"Since COMET does not report the performance for each inference dimension, we re-implement the model for better comparison.",
"Our approach is abbreviated as EA-VQ-VAE , short for Evidence-Aware Vector Quantised Variational AutoEncoder.",
"As we can see in the Table 1 and Table 2, the multi-task learning performs better than single-task learning overall.",
"Therefore, we train our model in a multi-task way and compare our approach with multi-task learning based methods.",
"From the Table 1, we can see that our approach performs better on the majority of inference dimensions, achieving the state-of-the-art result on ATOMIC dataset.",
"For the Event2Mind dataset, results in the Table 2 show that our approach brings a gain of 1% BLEU score overall compared with the state-of-the-art method.",
"Besides, in order to evaluate the diversity of generations, we use the number of distinct unigrams (dist-1) and bigrams (dist-2) as evaluation metrics (Li et al., 2015).",
"Since we train our model in a multi-task way, we compare our approach with multi-task learning based methods for fair comparison.",
"Results in the Table 3 show that our approach could increase the diversity of generations overall on both datasets.",
"Since automatic evaluation of generated language is limited (Liu et al., 2016), we also perform a human evaluation on model performance.",
"Following the setup of (Sap et al., 2019), we evaluate 100 randomly selected examples from the test set and use beam search to generate 10 candidates from different models.",
"Five human experts are asked to identify whether a model generation is correct given an event with an inference dimension.",
"Table 4 shows the result of the human evaluation on both datasets, where our approach achieves a gain of 1.5% 2% accuracy compared with COMET .",
"We conduct ablation analysis to better understand how various components in our approach impact overall performance.",
"We remove evidence and VQ-VAE, respectively, to analyze their contribution.",
"Table 5 shows that the overall performance drops from 11.3% to 10.5% on Event2Mind dev dataset when removing the evidence totally (w/o evidence), which reveals the importance of evidence for inferential texts generation.",
"After ablating the VQ-VAE and selecting top-1 evidence as background (w/o VQ-VAE), we can see that the performance drops from 11.3% to 10.6%, which means VQ-VAE can automatically select relevant and useful evidence.",
"In order to demonstrate the effectiveness of our learning method, we also train our model by joint learning (w/o SL).",
"The overall BLEU score drops from 11.3% to 10.7%, which shows that our learning method can effectively train our model.",
"We also study how the amount of evidence retrieved from the corpus impacts the performance.",
"From Figure 4, we can see that overall BLEU score Figure 4: Overall performance with different number of retrieved evidence on Event2Mind dev dataset.",
"increases as the number of retrieved evidence expands.",
"This is consistent with our intuition that the performance of our approach is improved by expanding retrieved examples, since our approach can select relevant and useful evidence from more retrieved evidence.",
"When the number of retrieved evidence is larger than 20, the overall performance does not improve.",
"The main reason is that the quality and relevance of retrieved evidence decreases as the number of retrieved evidence expands.",
"We give a case study to illustrate the entire procedure of our approach.",
"Figure 5 provides an example of the generations given an event PresonX is away from home on the xIntent dimension (i.e. PersonX wants ).",
"We first sample two latent variables from the codebook (i.e. z 29 and z 125 ) according to the prior distribution of VQ-VAE.",
"We visualize the semantics of latent variables by displaying word cloud of examples that are under the same latent assignment.",
"As we can see, z 29 captures the positive semantics like play and friend , while z 125 captures the negative semantics like devas-tated and offended .",
"Then, two latent variables are respectively used to select relevant evidence as background knowledge.",
"As we can see, the first latent variable selects an evidence about playing , which provides a clue for the model to generate texts such as to have fun and to spend time with friends .",
"Another latent variable selects another evidence in a quarrel scene, which can help the model reason about PersonX wants to be alone .",
"The case study shows that our approach not only equips the generation with an explicit control over the semantics of evidence but select relevant evi-Event Latent Variable and Visualization Selected Evidence Generation PersonX is away from home (cid:28705) Rog playing away from home, is he?",
"dence to guide the generation.",
"Please find another case on other inference dimension on Appendix C. 4.4 Error Analysis We analyze 100 incorrectly predicted instances randomly selected from the ATOMIC dataset, and summary two main classes of errors.",
"The first problem is that some examples cannot retrieve relevant evidence since the scale of text corpus is limited.",
"We can leverage more sources like Wikipedia to retrieve evidence.",
"Another cause of this problem is that term-based retrieval (e.g. BM25) calculates the matching score using words overlap and cannot capture semantics of sentences.",
"For examples, the evidence the lights began to shift away from the fire, like a line of fireflies will be retrieved for the event PersonX lights a fire since of the high overlap, but the event does not occur in the evidence.",
"This problem might be mitigated by using better semantic-based retrieval model.",
"The second problem is that the model cannot effectively leverage selected evidence.",
"Although the selected evidence is closely related to the event and the inference can be obtained from the evidence, the model still generate incorrect texts since lacking of supervised information.",
"A potential direction to mitigate the problem is to annotate background knowledge of events in the training dataset.",
"Recently, event-related text understanding has attracted much attention (Chambers and Jurafsky, 2008; Segers et al., 2016; Wang et al., 2017; Li et al., 2018; Rashkin et al., 2018; Sap et al., 2019; Guo et al., 2020), which is crucial to artificial intelligence systems for automated commonsense reasoning.",
"There are a variety of tasks that focus on event-related text understanding in different forms.",
"Script (Schank and Abelson, 1977) uses a line to represent temporal and causal relations between events, and the task of script event prediction (Chambers and Jurafsky, 2008) requires models to predict the subsequent event given an event context.",
"Previous works on the task are mainly based on event pairs (Chambers and Jurafsky, 2008; Granroth-Wilding and Clark, 2016), event chains (Wang et al., 2017), and event evolutionary graph (Li et al., 2018) to predict script event.",
"In addition, our task relates to story ending prediction (Sharma et al., 2018; Mostafazadeh et al., 2016; Zellers et al., 2018).",
"Mostafazadeh et al. (2016) introduce a dataset for story ending prediction, which requires models to choose the most sensible ending given a paragraph as context.",
"In this work, we study inferential text generation proposed by Rashkin et al. (2018) and Sap et al. (2019), both of which focus on generating texts about causes and effects of events and mental states of event participants.",
"Natural Language Generation, also known as text generation (McKeown, 1992; Sutskever et al., 2011), has recently become popular in NLP community (Feng et al., 2018; Duan et al., 2020).",
"Recently, Variational Autoencoder (VAE) (Kingma and Welling, 2013) has achieved promising performance on various text generation tasks, including machine translation (Zhang et al., 2016; Su et al., 2018), text summarization (Miao and Blunsom, 2016; Li et al., 2017), and dialogue generation (Ser-ban et al., 2017; Zhao et al., 2017).",
"For machine translation, Zhang et al. (2016) and Su et al. (2018) introduce a continuous latent variable to explicitly model the semantics of a source sentence, which is used to guide the translation.",
"In dialogue genration, Serban et al. (2017) apply a latent variable hierarchical encoder-decoder model to facilitate longer response, while Zhao et al. (2017) uses latent vari-6126 ables to capture potential conversational intents and generates diverse responses.",
"A recent work CWVAE (Du et al., 2019) on event-centered If-Then reasoning is the most related to our work, which introduces an additional context-aware latent variable to implicitly guide the generation by a two-stage training procedure.",
"Different with previous works, we introduce a discrete latent variable to capture underlying semantics within inferences based on VQ-VAE that does not suffer from posterior collapse issues (van den Oord et al., 2017).",
"These discrete latent variables are used to selectively leverage evidence as background knowledge to explicitly guide the generation.",
"Besides, our approach provides a way to uncover the rationale of a generation to some extent through tracing back the evidence that supports the generation and the selected discrete latent variable.",
"In this paper, we present an evidence-aware generative model based on VQ-VAE, which utilizes discrete semantic latent variables to select evidence as background knowledge to guide the generation.",
"Experimental results show that our approach achieves state-of-the-art performance on Event2Mind and ATOMIC datasets.",
"Further analysis shows that our approach selectively uses evidence to generate different inferential texts from multiple perspectives.",
"Daya Guo and Jian Yin are supported by the National Natural Science Foundation of China (U1711262, U1611264, U1711261, U1811261, U1811264, U1911203), National Key R&D Program of China (2018YFB1004404), Guangdong Basic and Applied Basic Research Foundation (2019B1515130001), Key R&D Program of Guangdong Province (2018B010107005).",
"Jian Yin is the corresponding author."
]
| [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"other",
"method",
"result",
"result",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"abstain",
"other",
"abstain",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"objective",
"method",
"result",
"result",
"other",
"other"
]
|
[
"Transfer learning has yielded state-of-the-art (SoTA) results in many supervised NLP tasks.",
"However, annotated data for every target task in every target language is rare, especially for low-resource languages.",
"We propose UXLA a novel unsupervised data augmentation framework for zero-resource transfer learning scenarios.",
"In particular, UXLA aims to solve cross-lingual adaptation problems from a source language task distribution to an unknown target language task distribution, assuming no training label in the target language.",
"At its core, UXLA performs simultaneous self-training with data augmentation and unsupervised sample selection.",
"To show its effectiveness, we conduct extensive experiments on three diverse zero-resource cross-lingual transfer tasks.",
"UXLA achieves SoTA results in all the tasks, outperforming the baselines by a good margin.",
"With an in-depth framework dissection, we demonstrate the cumulative contributions of different components to its success.",
"Self-supervised learning in the form of pretrained language models (LM) has been the driving force in developing state-of-the-art NLP systems in recent years.",
"These methods typically follow two basic steps, where a supervised task-specific finetuning follows a large-scale LM pretraining (Rad-ford et al., 2019).",
"However, getting labeled data for every target task in every target language is difficult, especially for low-resource languages.",
"Recently, the pretrain-finetune paradigm has also been extended to multi-lingual setups to train effective multi-lingual models that can be used for zero-shot cross-lingual transfer.",
"Jointly trained deep multi-lingual LMs like mBERT (Devlin et al., 2019) and XLM-R (Conneau et al., 2020) coupled Equal contribution with supervised fine-tuning in the source language have been quite successful in transferring linguistic and task knowledge from one language to another without using any task label in the target language.",
"The joint pretraining with multiple languages allows these models to generalize across languages.",
"Despite their effectiveness, recent studies (Pires et al., 2019; K et al., 2020) have also highlighted one crucial limiting factor for successful cross-lingual transfer.",
"They all agree that the cross-lingual generalization ability of the model is limited by the (lack of) structural similarity between the source and target languages.",
"For example, for transferring mBERT from English, K et al. (2020) report about 23 .",
"6% accuracy drop in Hindi (structurally dissimilar) compared to 9% drop in Spanish (struc-turally similar) in cross-lingual natural language inference (XNLI).",
"The difficulty level of transfer is further exacerbated if the (dissimilar) target language is low-resourced, as the joint pretraining step may not have seen many instances from this language in the first place.",
"In our experiments (3.2), in cross-lingual NER (XNER), we report F1 reductions of 28.3% in Urdu and 30.4% in Burmese for XLM-R, which is trained on a much larger multilingual dataset than mBERT.",
"One attractive way to improve cross-lingual generalization is to perform data augmentation (Simard et al., 1998), and train the model on examples that are similar but different from the labeled data in the source language.",
"Formalized by the Vicinal Risk Minimization (VRM) principle (Chapelle et al., 2001), such data augmentation methods have shown impressive results in vision (Zhang et al., 2018; Berthelot et al., 2019).",
"These methods enlarge the support of the training distribution by generating new data points from a vicinity distribution around each training example.",
"For images, the vicinity of a training image can be defined by a set of operations like rotation and scaling, or by linear mixtures of features and labels (Zhang et al., 2018).",
"However, when it comes to text, such unsupervised augmentation methods have rarely been successful.",
"The main reason is that unlike images, linguistic units are discrete and a smooth change in their embeddings may not result in a plausible linguistic unit that has similar meanings.",
"In NLP, to the best of our knowledge, the most successful augmentation method has so far been back-translation (Sennrich et al., 2016) which paraphrases an input sentence through round-trip translation.",
"However, it requires parallel data to train effective machine translation systems, acquiring which can be more expensive for low-resource languages than annotating the target language data.",
"Furthermore, back-translation is only applicable in a supervised setup and to tasks where it is possible to find the alignments between the original labeled entities and the back-translated entities, such as in question answering (Yu et al., 2018).",
"Other related work includes contextual augmentation (Kobayashi, 2018), conditional BERT (Wu et al., 2018) and AUG-BERT (Shi et al., 2019).",
"These methods use a constrained augmentation that alters a pretrained LM to a label-conditional LM for a specific task.",
"Since they rely on labels, their application is limited by the availability of enough task labels.",
"In this work, we propose UXLA , a robust u nsupervised cross l ingual a ugmentation framework for improving cross-lingual generalization of multilingual LMs.",
"UXLA augments data from the unlabeled training examples in the target language as well as from the virtual input samples generated from the vicinity distribution of the source and target language sentences.",
"With the augmented data, it performs simultaneous self-learning with an effective distillation strategy to learn a strongly adapted cross-lingual model from noisy (pseudo) labels for the target language task.",
"We propose novel ways to generate virtual sentences using a multilingual masked LM (Conneau et al., 2020), and get reliable task labels by simultaneous multilingual co-training.",
"This co-training employs a two-stage co-distillation process to ensure robust transfer to dissimilar and/or low-resource languages.",
"We validate the effectiveness and robustness of UXLA by performing extensive experiments on three diverse zero-resource cross-lingual transfer tasksXNER, XNLI, and PAWS-X, which posit different sets of challenges, and across many (14 in total) language pairs comprising languages that are similar/dissimilar/low-resourced.",
"UXLA yields impressive results on XNER, setting SoTA in all tested languages outperforming the baselines by a good margin.",
"The relative gains for UXLA are particularly higher for structurally dissimilar and/or low-resource languages: 28.54%, 16.05%, and 9.25% absolute improvements for Urdu, Burmese, and Arabic, respectively.",
"For XNLI, with only 5% labeled data in the source, it gets comparable results to the baseline that uses all the labeled data, and surpasses the standard baseline by 2.55% on average when it uses all the labeled data in the source.",
"We also have similar find-ings in PAWS-X.",
"We provide a comprehensive analysis of the factors that contribute to UXLA 's performance.",
"We open-source our framework at https://ntunlpsg.github.io/project/uxla/ .",
"While recent cross-lingual transfer learning efforts have relied almost exclusively on multi-lingual pretraining and zero-shot transfer of a fine-tuned source model, we believe there is a great potential for more elaborate methods that can leverage the unlabeled data better.",
"Motivated by this, we present UXLA , our unsupervised data augmentation framework for zero-resource cross-lingual task adaptation.",
"Figure 1 gives an overview of UXLA .",
"Let D s = ( X s , Y s ) and D t = ( X t ) denote the training data for a source language s and a target language t , respectively.",
"UXLA augments data from various origins at different stages of training.",
"In the initial stage (epoch 1), it uses the augmented training samples from the target language ( D (cid:48) t ) along with the original source ( D s ).",
"In later stages (epoch 2-3), it uses vicinal sentences generated from the vicinity distribution of source and target examples: ( x s n | x s n ) and ( x t n | x t n ) , where x sn X s and x tn X t .",
"It performs self-training on the augmented data to acquire the corresponding pseudo labels.",
"To avoid confirmation bias with self-training where the model accumulates its own errors, it simultaneously trains three task models to generate virtual training data through data augmentation and filtering of potential label noises via multi-epoch co-teaching (Zhou and Li, 2005).",
"In each epoch, the co-teaching process first performs co-distillation , where two peer task models are used to select reliable training examples to train the third model.",
"The selected samples with pseudo labels are then added to the target task Figure 1: Training flow of UXLA .",
"model's training data by taking the agreement from the other two models, a process we refer to as co-guessing .",
"The co-distillation and co-guessing mechanism ensure robustness of UXLA to out-of-domain distributions that can occur in a multilingual setup, e.g., due to a structurally dissimilar and/or low-resource target language.",
"Algorithm 1 gives a pseu-docode of the overall training method.",
"Each of the task models in UXLA is an instance of XLM-R finetuned on the source language task (e.g., English NER), whereas the pretrained masked LM parameterized by mlm ( i.e., before fine-tuning) is used to define the vicinity distribution ( x n | x n , mlm ) around each selected example x n .",
"In the following, we describe the steps in Algorithm 1.",
"We first train three instances of the XLM-R model ( (1) , (2) , (3) ) with an additional task-specific linear layer on the source language (English) labeled data.",
"Each model has the same architecture (XLM-R large) but is initialized with different random seeds.",
"For token-level prediction tasks ( e.g., NER), the token-level representations are fed into the classification layer, whereas for sentence-level tasks ( e.g., XNLI), the [CLS] representation is used as input to the classification layer.",
"Training with confidence penalty Our goal is to train the task models so that they can be used reliably for self-training on a target language that is potentially dissimilar and low-resourced.",
"In such situations, an overly confident (overfitted) model may produce more noisy pseudo labels, and the noise will then accumulate as the training progresses.",
"Overly confident predictions may also impose difficulties on our distillation methods (2.3) in isolating good samples from noisy ones.",
"However, training with the standard cross-entropy (CE) loss may result in overfitted models that produce overly confident predictions (low entropy), especially when the class distribution is not balanced.",
"We address this by adding a negative entropy term H to the CE loss as follows.",
"where x is the representation that goes to the output layer, and y c and p c ( x ) are respectively the ground truth label and model predictions with respect to class c .",
"Such regularizer of output distribution has been shown to be effective for training large models (Pereyra et al., 2017).",
"We also report significant gains with confidence penalty in 3.",
"Appendix B shows visualizations on why confidence penalty is helpful for distillation.",
"Our augmentated sentences come from two different sources: the original target language samples X t , and the virtual samples generated from the vicinity distribution of the source and target samples: ( x sn | x sn , mlm ) and ( x tn | x tn , mlm ) with x sn X s and x tn X t .",
"It has been shown that contextual LMs pretrained on large-scale datasets capture useful linguistic features and can be used to generate fluent grammatical texts (Hewitt and Manning, 2019).",
"We use XLM-R masked LM (Conneau et al., 2020) as our vicinity model mlm , which is trained on massive multilingual corpora (2.5 TB of Common-Crawl data in 100 languages).",
"The Algorithm 1 UXLA : a robust unsupervised data augmentation framework for cross-lingual NLP Input: source",
"In order to generate samples around each selected example, we first randomly choose P % of the input tokens.",
"Then we successively (one at a time) mask one of the chosen tokens and ask XLM-R masked LM to predict a token in that masked position, i.e., compute ( x m | x, mlm ) with m being the index of the masked token.",
"For a specific mask, we sample S candidate words from the output distribution, and generate novel sentences by following one of the two alternative approaches.",
"(i) Successive max In this approach, we take the most probable output token ( S = 1 ) at each prediction step, o m = arg max o ( x m = o | x, mlm ) .",
"A new sentence is constructed by P % newly generated tokens.",
"We generate (diversification factor) virtual samples for each original example x , by randomly masking P % tokens each time.",
"(ii) Successive cross In this approach, we divide each original (multi-sentence) sample x into two parts and use successive max to create two sets of augmented samples of size 1 and 2 , respectively.",
"We then take the cross of these two sets to generate 1 2 augmented samples.",
"Augmentation of sentences through successive max or cross is carried out within the GEN-LM (generate via LM) module in Algorithm 1.",
"For tasks involving a single sequence ( e.g., XNER), we directly use successive max.",
"Pairwise tasks like XNLI and PAWS-X have pairwise dependencies: dependencies between a premise and a hypothesis in XNLI or dependencies between a sentence and its possible paraphrase in PAWS-X.",
"To model such dependencies, we use successive cross, which uses cross-product of two successive max applied independently to each component.",
"Due to discrete nature of texts, VRM based augmentation methods that are successful for images such as MixMatch (Berthelot et al., 2019) that generates new samples and their labels as simple linear interpolation, have not been successful in NLP.",
"The meaning of a sentence can change entirely even with minor variations in the original sentence.",
"For example, consider the following example generated by our vicinity model.",
"Here, EU is an Organization whereas the newly predicted word Trump is a Person (different name type).",
"Therefore, we need to relabel the augmented sentences no matter whether the original sentence has labels (source) or not (target).",
"However, the relabeling process can induce noise, especially for dissimilar/low-resource languages, since the base task model may not be adapted fully in the early training stages.",
"We propose a 2-stage sample distillation process to filter out noisy augmented data.",
"Stage 1: Distillation by single-model The first stage of distillation involves predictions from a single model for which we propose two alternatives: ( i ) Distillation by model confidence: In this approach, we select samples based on the model's prediction confidence.",
"This method is similar in spirit to the selection method proposed by Ruder and Plank (2018a).",
"For sentence-level tasks ( e.g., XNLI), the model produces a single class distribution for each training example.",
"In this case, the model's confidence is computed by p = max c { 1 ...C } p c ( x ) .",
"For token-level sequence labeling tasks ( e.g., NER), the model's confidence is computed by: p = 1 T (cid:80) Tt =1 (cid:8) max c { 1 ...C } p c ( x t ) (cid:9) , where T is the length of the sequence.",
"The distillation is then done by selecting the top % samples with the highest confidence scores.",
"( ii )",
"Sample distillation by clustering: We propose this method based on the finding that large neural models tend to learn good samples faster than noisy ones, leading to a lower loss for good samples and higher loss for noisy ones (Han et al., 2018; Arazo et al., 2019).",
"We use a 1d two-component Gaussian Mixture Model (GMM) to model per-sample loss distribution and cluster the samples based on their goodness .",
"GMMs provide flexibility in modeling the sharpness of a distribution and can be easily fit using Expectation-Maximization (EM) (See more on Appendix C).",
"The loss is computed based on the pseudo labels predicted by the model.",
"For each sample x , its goodness probability is the posterior probability p ( z = g | x , GMM ) , where g is the component with smaller mean loss.",
"Here, distillation hyperparameter is the posterior probability threshold based on which samples are selected.",
"Stage 2: Distillation by model agreement In the second stage of distillation, we select samples by taking the agreement (co-guess) of two different peer models ( j ) and ( k ) to train the third ( l ) .",
"Formally, AGREEMENT (cid:0) D ( k ) , D ( j ) ) = { ( X ( k ) , Y ( k ) ) : Y ( k ) = Y ( j ) } s.t. k (cid:54) = j 2.4 Data Samples Manipulation UXLA uses multi-epoch co-teaching.",
"It uses D s and D (cid:48) t in the first epoch.",
"In epoch 2, it uses D t (tar-get virtual), and finally it uses all the four datasets D s , D (cid:48) t , D t , and D s (line 22 in Algorithm 1).",
"The datasets used at different stages can be of different sizes.",
"For example, the number of augmented samples in D s and D t grow polynomially with the successive cross masking method.",
"Also, the co-distillation produces sample sets of variable sizes.",
"To ensure that our model does not overfit on one particular dataset, we employ a balanced sampling strategy.",
"For N number of datasets {D i } Ni =1 with probabilities, { p i } Ni =1 , we define the following multinomial distribution to sample from: p i = f i (cid:80) Nj =1 f j , where f i = n i (cid:80) Nj =1 n j (2) where is the sampling factor and n i is the total number of samples in the i th dataset.",
"By tweaking , we can control how many samples a dataset can provide in the mix.",
"We consider three tasks in the zero-resource cross-lingual transfer setting.",
"We assume labeled training data only in English, and transfer the trained model to a target language.",
"For all experiments, we report the mean score of the three models that use different seeds.",
"XNER: We use the standard CoNLL datasets (Sang, 2002; Sang and Meulder, 2003) for English (en), German (de), Spanish (es) and Dutch (nl).",
"We also evaluate on Finnish (fi) and Arabic (ar) datasets collected from Bari et al. (2020).",
"Note that Arabic is structurally different from English, and Finnish is from a different language family.",
"To show how the models perform on extremely low-resource languages, we experiment with three structurally different languages from WikiANN (Pan et al., 2017) of different (unlabeled) training data sizes: Urdu (ur-20k training samples), Bengali (bn-10K samples), and Burmese (my-100 samples).",
"XNLI We use the standard dataset (Conneau et al., 2018).",
"For a given pair of sentences, the task is to predict the entailment relationship between the two sentences, i.e. , whether the second sentence ( hypothesis ) is an Entailment , Contradiction , or Model en es nl de ar fi Supervised Results LSTM-CRF (Bari et al., 2020) 89.77 84.71 85.16 78.14 75.49 84.21 XLM-R (Conneau et al., 2020) 92.92 89.72 92.53 85.81 XLM-R (our imp.) 92.9 89.2 92.9 86.2 86.8 92.4 Zero-Resource Baseline mBERT cased (our imp.) 91.13 74.76 79.58 70.99 45.48 65.95 XLM-R (our imp.) 92.23 79.29 80.87 73.40 49.04 75.57 XLM-R (ensemble) 92.76 80.62 81.46 75.40 52.30 76.85 Our Method mBERT cased +con-penalty 90.81 75.06 79.26 72.31 47.03 66.72 XLM-R+con-penalty 92.49 80.45 81.07 73.76 49.94 76.05 UXLA 83.05 85.21 80.33 57.35 79.75 UXLA (ensemble) 83.24 85.32 80.99 58.29 79.87 Table 1: F1 scores in XNER on the datasets from CoNLL and (Bari et al., 2020).",
"Neutral with respect to the first one ( premise ).",
"We experiment with Spanish, German, Arabic, Swahili (sw), Hindi (hi) and Urdu.",
"PAWS-X The Paraphrase Adversaries from Word Scrambling Cross-lingual task (Yang et al., 2019) requires the models to determine whether two sentences are paraphrases.",
"We evaluate on all the six (typologically distinct) languages: fr, es, de, Chinese (zh), Japanese (ja), and Korean (ko).",
"Evaluation setup Our goal is to adapt a task model from a source language distribution to an unknown target language distribution assuming no labeled data in the target.",
"In this scenario, there might be two different distributional gaps: ( i ) the generalization gap for the source distribution, and ( ii ) the gap between the source and target language distribution.",
"We wish to investigate our method in tasks that exhibit such properties.",
"We use the standard task setting for XNER, where we take 100% samples from the datasets as they come from various domains and sizes without any specific bias.",
"However, both XNLI and PAWS-X training data come with machine-translated texts in target languages.",
"Thus, the data is parallel and lacks enough diversity (source and target come from the same domain).",
"Cross-lingual models trained in this setup may pick up distributional bias (in the label space) from the source.",
"Artetxe et al. (2020) also argue that the translation process can induce subtle artifacts that may have a notable impact on models.",
"Therefore, for XNLI and PAWS-X, we experiment with two different setups.",
"First, to ensure distributional differences and non-parallelism, we use 5% of the training data from the source language and augment a different (nonparallel) 5% Model ur bn my Supervised Results XLM-R (our-impl) 97.1 97.8 76.8 Zero-Resource Results XLM-R (XTREME) 56.4 78.8 54.3 XLM-R (our imp.) 56.45 78.17 54.56 UXLA 84.99 82.68 70.61 Table 2: XNER results on WikiANN.",
"data for the target language.",
"We used a different seed each time to retrieve this 5% data.",
"Second, to compare with previous methods, we also evaluate on the standard 100% setup.",
"The evaluation is done on the entire test set in both setups.",
"We will refer to these two settings as 5% and 100% .",
"More details about model settings are in Appendix D. 3.2 Results XNER Table 1 reports the XNER results on the datasets from CoNLL and (Bari et al., 2020), where we also evaluate an ensemble by averaging the probabilities from the three models.",
"We observe that after performing warm-up with conf-penalty (2.1), XLM-R performs better than mBERT on average by 3.8% for all the languages.",
"UXLA gives absolute improvements of 3.76%, 4.34%, 6.94%, 8.31%, and 4.18% for es, nl, de, ar, and fi , respectively.",
"Interestingly, it surpasses supervised LSTM-CRF for nl and de without using any target language labeled data.",
"It also produces comparable results for es .",
"In Table 2, we report the results on the three low-resource langauges from WikiANN.",
"From these results and the results of ar and fi in Table 1, we see that UXLA is particularly effective for languages that are structurally dissimilar and/or low-resourced, especially when the base model is weak: Model en es de ar sw hi ur Supervised Results (TRANSLATE-TRAIN-ALL) XLM-R 89.1 86.6 85.7 83.1 78.0 81.6 78.1 Zero-Resource Baseline for Full (100%) English labeled training set XLM-R (XTREME) 88.7 83.7 82.5 77.2 71.2 75.6 71.7 XLM-R (our imp.) 88.87 84.34 82.78 78.44 72.08 76.40 72.10 XLM-R (ensemble) 89.24 84.73 83.27 79.06 73.17 77.23 73.07 XLM-R+con-penalty 88.83 84.30 82.86 78.20 71.83 76.24 71.62 UXLA 85.65 84.15 80.50 74.70 78.74 73.35 UXLA (ensemble) 86.12 84.61 80.89 74.89 78.98 73.45 Zero-Resource Baseline for 5% English labeled training set XLM-R (our imp.) 83.08 78.48 77.54 72.04 67.3 70.41 66.72 XLM-R (ensemble) 84.65 79.56 78.38 72.22 66.93 71.00 66.79 XLM-R+con-penalty 84.24 79.23 78.47 72.43 67.72 71.08 67.63 UXLA 81.53 80.88 77.42 72.31 74.70 70.84 UXLA (ensemble) 82.35 81.93 78.56 73.53 75.20 71.15 Table 3: Results in accuracy for XNLI.",
"28.54%, 16.05%, and 9.25% absolute improvements for ur, my and ar, respectively.",
"XNLI-5% From Table 3, we see that the performance of XLM-R trained on 5% data is surprisingly good compared to the model trained on full data (see XLM-R (our imp.)), lagging by only 5.6% on average.",
"In our single GPU implementation of XNLI, we could not reproduce the reported results of Conneau et al. (2020).",
"However, our results resemble the reported XLM-R results of XTREME (Hu et al., 2020).",
"We consider XTREME as our standard baseline for XNLI-100%.",
"We observe that with only 5% labeled data in the source, UXLA gets comparable results to the XTREME baseline that uses 100% labeled data (lagging behind by only 0.7% on avg.); even for ar and sw , we get 0.22% and 1.11% improvements, respectively.",
"It surpasses the standard 5% baseline by 4.2% on average.",
"Specifically, UXLA gets absolute improvements of 3.05%, 3.34%, 5.38%, 5.01%, 4.29%, and 4.12% for es, de, ar, sw, hi, and ur , respectively.",
"Again, the gains are relatively higher for low-resource and/or dissimilar languages despite the base model being weak in such cases.",
"XNLI-100% Now, considering UXLA 's performance on the full (100%) labeled source data in Table 3, we see that it achieves SoTA results for all of the languages with an absolute improvement of 2.55% on average from the XTREME baseline.",
"Specifically, UXLA gets absolute improvements of 1.95%, 1.68%, 4.30%, 3.50%, 3.24%, and 1.65% for es, de, ar, sw, hi, and ur , respectively.",
"PAWS-X Similar to XNLI, we observe sizable improvements for UXLA over the baselines on PAWS-X for both 5% and 100% settings (Table 4).",
"Specifically, in 5% setting, UXLA gets absolute gains of 5.33%, 5.94%, 5.04%, 6.85%, 7.00%, and 5.45% for de, es, fr, ja, ko, and zh , respectively, while in 100% setting, it gets 2.21%, 2.36%, 2.00%, 3.99%, 4.53%, and 4.41% improvements respectively.",
"In general, we get an average improvements of 5.94% and 3.25% in PAWS-X-5% and PAWS-X-100% settings respectively.",
"Moreover, our 5% setting outperforms 100% XLM-R baselines for es, ja, and zh .",
"Interestingly, in the 100% setup, our UXLA (ensemble) achieves almost similar accuracies compared to supervised finetuning of XLM-R on all target language training dataset.",
"In this section, we analyze UXLA by dissecting it and measuring the contribution of its each of the components .",
"For this, we use the XNER task and analyze the model based on the results in Table 1.",
"Model confidence vs. clustering We first analyze the performance of our single-model distillation methods (2.3) to see which of the two alternatives works better.",
"From Table 5, we see that both perform similarly with model confidence being slightly better.",
"In our main experiments (Tables 1-4) and subsequent analysis, we use model confidence for distillation.",
"However, we should not rule out the clustering method as it gives a more general Model en de es fr ja ko zh Supervised Results (TRANSLATE-TRAIN-ALL) XLM-R (our impl.) 95.8 92.5 92.8 93.5 85.5 86.6 87.6 Zero-Resource Baseline for Full (100%) English labeled training set XLM-R (XTREME) 94.7 89.7 90.1 90.4 78.7 79.0 82.3 XLM-R (our imp.) 95.46 90.06 89.92 90.85 79.89 79.74 82.49 XLM-R (ensemble) 96.10 90.75 90.55 91.80 80.55 80.70 83.45 XLM-R+con-penalty 95.38 90.75 90.72 91.71 81.77 82.07 84.25 UXLA 92.27 92.28 92.85 83.88 84.27 86.90 UXLA (ensemble) 92.55 92.35 93.35 84.30 84.35 86.95 Zero-Resource Baseline for 5% English labeled training set XLM-R (our imp.) 91.15 83.72 84.32 85.08 73.65 72.60 77.22 XLM-R (ensemble) 92.05 84.05 84.65 85.75 74.30 71.95 77.50 XLM-R+con-penalty 91.85 86.15 86.38 85.98 76.03 75.43 79.15 UXLA 89.05 90.27 90.12 80.50 79.60 82.65 UXLA (ensemble) 89.25 90.85 90.25 81.15 80.15 82.90 Table 4: Results in accuracy for PAWS-X.",
"solution to consider other distillation features ( e.g., sequence length, language) than model prediction scores, which we did not explore in this paper.",
"Distillation factor We next show the results for different distillation factor ( ) in Table 5.",
"Here 100% refers to the case when no single-model distillation is done based on model confidence.",
"We notice that the best results for each of the languages are obtained for values other than 100%, which indicates that distillation is indeed an effective step in UXLA .",
"See Appendix B for more analysis on .",
"Two-stage distillation We now validate whether the second-stage distillation ( distillation by model agreement ) is needed.",
"In Table 5, we also compare the results with the model agreement (shown as ) to the results without using any agreement ( ).",
"We observe better performance with model agreement in all the cases on top of the single-model distillation which validates its utility.",
"Results with = 100 , Agreement = can be considered as the tri-training (Ruder and Plank, 2018b) baseline.",
"Figure 2 presents the effect of different types of augmented data used by different epochs in our multi-epoch co-teaching framework.",
"We observe that in every epoch, there is a significant boost in F1 scores for each of the languages.",
"Arabic, being structural dissimilar to English, has a lower base score, but the relative improvements brought by UXLA are higher for Arabic, especially in epoch 2 Agreement es nl de ar fi Distillation by clustering 0.7 82.28 83.25 78.86 52.64 78.47 0.5 82.35 83.11 78.16 54.20 78.28 Distillation by model confidence 50% 82.52 82.46 75.95 52.00 77.51 81.66 82.26 77.19 52.97 77.77 80% 82.33 83.53 78.50 54.48 78.43 81.61 83.03 77.08 53.31 78.34 90% 81.90 82.80 79.03 52.41 78.66 81.21 82.77 77.28 52.20 77.93 100% 82.50 82.35 77.06 52.58 77.51 81.89 82.15 76.97 52.68 78.01 Table 5: Analysis of distillation on XNER.",
"For all the three tasks, we get reasonable improvements over the baselines by training with confidence penalty (2.1).",
"Specifically, we get 0.56%, 0.74%, 1.89%, and 1.18% improvements in XNER, XNLI-5%, PAWS-X-5%, and PAWS-X-100% respectively (Table 1,3,4).",
"The improvements in XNLI-100% are marginal and inconsistent, which we suspect due to the balanced class distribution.",
"From the results of ensemble models, we see that the ensemble boosts the baseline XLM-R.",
"However, our regular UXLA still outperforms the ensemble baselines by a sizeable margin.",
"Moreover, ensem-bling the trained models from UXLA further improves the performance.",
"These comparisons ensure that the capability of UXLA through co-teaching and co-distillation is beyond the ensemble effect.",
"Table 6 shows the robustness of the fine-tuned UXLA model on XNER task.",
"After fine-tuning in a specific target language, the F1 scores in English remain almost similar (see first row).",
"For some languages, UXLA adaptation on a different language also improves the performance.",
"For example, Arabic gets improvements for all UXLA -adapted models (compare 50.88 with others in row 5).",
"This indicates that augmentation of UXLA does not overfit on a target language.",
"More baselines, analysis and visualizations are added in Appendix.",
"Recent years have witnessed significant progress in learning multilingual pretrained models.",
"Notably, mBERT (Devlin et al., 2019) extends (English) BERT by jointly training on 102 languages.",
"XLM (Lample and Conneau, 2019) extends mBERT with a conditional LM and a translation LM (using parallel data) objectives.",
"Conneau et al. (2020) train the largest multilingual language model XLM-R with RoBERTa (Liu et al., 2019).",
"Wu and Dredze (2019), Keung et al. (2019), and Pires et al. (2019) evaluate zero-shot cross-lingual transferability of mBERT on several tasks and attribute its generalization capability to shared subword units.",
"Pires et al. (2019) also found structural similarity ( e.g., word order) to be another important factor for successful cross-lingual transfer.",
"K et al. (2020), however, show that the shared subword has a minimal contribution; instead, the structural similarity between languages is more crucial for effective transfer.",
"Older data augmentation approaches relied on distributional clusters (Tckstrm et al., 2012).",
"A number of recent methods have been proposed using contextualized LMs (Kobayashi, 2018; Wu et al., 2018; Shi et al., 2019; Ding et al., 2020; Liu et al., 2021).",
"These methods rely on labels to perform label-constrained augmentation, thus not directly comparable with ours.",
"Also, there are fundamental differences in the way we use the pretrained LM.",
"Unlike them our LM augmentation is purely unsupervised and we do not perform any fine-tuning of the pretrained vicinity model.",
"This disjoint characteristic gives our framework the flexibility to replace lm even with a better monolingual LM for a specific target language, which in turn makes UXLA extendable to utilize stronger LMs that may come in the future.",
"In a concurrent work (Mohiuddin et al., 2021), we propose a contextualized LM based data augmentation for neural machine translation and show its advantages over traditional back-translation gaining improved performance in low-resource scenarios.",
"We propose a novel data augmentation framework, UXLA , for zero-resource cross-lingual task adaptation.",
"It performs simultaneous self-training with data augmentation and unsupervised sample selection.",
"With extensive experiments on three different cross-lingual tasks spanning many language pairs, we have demonstrated the effectiveness of UXLA .",
"For the zero-resource XNER task, UXLA sets a new SoTA for all the tested languages.",
"For both XNLI and PAWS-X tasks, with only 5% labeled data in the source, UXLA gets comparable results to the baseline that uses 100% labeled data."
]
| [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"method",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain"
]
|
[
"Modern Irish is a minority language lacking sufficient computational resources for the task of accurate automatic syntactic parsing of user-generated content such as tweets.",
"Although language technology for the Irish language has been developing in recent years, these tools tend to perform poorly on user-generated content.",
"As with other languages, the linguistic style observed in Irish tweets differs, in terms of orthography, lexicon, and syntax, from that of standard texts more commonly used for the development of language models and parsers.",
"We release the first Universal Dependencies treebank of Irish tweets, facilitating natural language processing of user-generated content in Irish.",
"In this paper, we explore the differences between Irish tweets and standard Irish text, and the challenges associated with dependency parsing of Irish tweets.",
"We describe our bootstrapping method of treebank development and report on preliminary parsing experiments.",
"Irish is a minority language spoken mostly in small communities in Ireland called Gaeltachta' (CSO, 2016) but social media sites, such as Twitter, provide a platform for Irish speakers to communicate electronically from any location.",
"Users may reach a wide audience quickly, unconstrained by the conventions of standard language upheld by editors in publications, revealing the orthographic, lexical, and syntactic variation abundant in informal Irish.",
"Analysis of up-to-date, real-world language data can provide an insight into how Irish is used in everyday communication and how such informal texts compare to prescriptive norms of standardised language to which published texts tend to adhere.",
"User-generated content (UGC), such as tweets, is a valuable, highly available resource for training syntactic parsers that can accurately process social media text.",
"UGC is a genre with features different from those of both spoken language and standardised written language more traditionally found in natural language processing (NLP) corpora.",
"Plank (2016) notes the advantages of utilising fortuitous data in order to create more adaptive, robust language technology.",
"Given that the accuracy of syntactic parsing tools has been shown to decline when evaluated on noisy UGC data (Foster et al., 2011; Seddah et al., 2012) and that domain 1 adaptation has been shown to improve parser performance for dependency annotation of English tweets (Kong et al., 2014) and POS-tagging in Irish tweets (Lynn et al., 2015), the need for genre-specific resources is clear in order to reliably process this variety of data.",
"The prerequisite, therefore, for research in this area is a data set of Irish UGC.",
"This research attempts to fill this gap through the development of TwittIrish, a treebank of Irish tweets, within Universal Dependencies (UD) (Nivre et al., 2020), a cross-lingually consistent framework for dependency-based syntactic parsing.",
"TwittIrish provides linguistic information for Irish in a digitally accessible format valuable for linguistic research and the development of NLP tools.",
"Open-source projects such as UD facilitate collaboration and rapid evolution of ideas among linguists internationally.",
"In order to maintain optimum consistency with other UD treebanks, the annotation methodology employed in this research closely follows the general UD guidelines and the language-specific guidelines for Irish while aiming to incorporate the most up-to-date recommendations (Sanguinetti et al., 2022) for UGC in this evolving area of NLP.",
"UGC, especially social media text, has recently become a popular focus within UD and NLP research more broadly (Silveira et al., 2014; Luotolahti et al., 2015; Albogamy and Ram-1 The terms genre and domain are used interchangeably throughout this paper to refer to the category of text such as standard published text or Twitter text. 6869 say, 2017; Wang et al., 2017; Zeldes, 2017; Bhat et al., 2018; Blodgett et al., 2018; Van Der Goot and van Noord, 2018; Cignarella et al., 2019; Seddah et al., 2020) and has encouraged active conversation around how best to represent it within this framework among the UD community.",
"We carry out preliminary parsing experiments with TwittIrish, investigating the following two questions: How effective is a parser trained on the Irish UD Treebank (Lynn and Foster, 2016), which contains only edited text and no UGC, when applied to tweets?",
"And what difference do pretrained contextualised word embeddings make?",
"We observe a difference of approximately 23 LAS points between TwittIrish and the IUDT test set and find that the use of monolingual BERT embeddings (Barry et al., 2021) improves performance by over 10 LAS points.",
"The paper is structured as follows: Section 2 details the existing Irish NLP resources we use for our research, Section 3 outlines the development of the treebank, Section 4 describes the characteristics of UGC evident in Irish tweets, and Section 5 presents parsing experiments and error analysis.",
"Indigenous Tweets (IT) 2 This project compiles statistics on social media data of 185 minority and indigenous languages including Irish.",
"All tweets in the TwittIrish treebank were sourced via IT.",
"Lynn Twitter Corpus (LTC) 3 (Lynn et al., 2015) A corpus of 1,493 lemmatised and POS-tagged Irish language tweets randomly sampled from 950k tweets by 8k users posted between 2006 and 2014, identified by IT.",
"The LTC data also contains code-switching information (Lynn and Scannell, 2019).",
"Irish Universal Dependencies Treebank (IUDT) 4 (Lynn and Foster, 2016) A UD treebank consisting of 4,910 sentences sampled from a balanced mixed-domain corpus for Irish.",
"gaBERT (Barry et al., 2021) A monolingual Irish BERT model, trained on approximately 7.9 million sentences, which outperforms Multilingual 2 http://indigenoustweets.com/ 3 https://github.com/tlynn747/ IrishTwitterPOS 4 https://github.com/ UniversalDependencies/UD_Irish-IDT BERT (mBERT) (Devlin et al., 2019) and WikiBERT (Pyysalo et al., 2021) at the task of dependency parsing for Irish.",
"We combined 700 POS-tagged tweets from the LTC with 166 tweets more recently crawled by IT in order to leverage previous linguistic annotations while also including newer tweets.",
"This involved converting the LTC annotation scheme to that of the UD framework and then POS-tagging the new raw tweets.",
"We provide further detail in Appendix A. LTC conversion With regard to tokenisation, multiword expressions were automatically split into separate tokens following UD conventions.",
"Only minor manual adjustments were required for lemmatisation to ensure alignment with the IUDT (to enable bootstrapping see Section 3).",
"Finally, the POS tagset used in the LTC was automatically converted to the UD tagset.",
"Appendix A.2 describes this process.",
"Preprocessing of newly-crawled tweets Due to the lack of a tokeniser designed to deal specifically with UGC in Irish, we compared two tools for this task: UDPipe (Straka et al., 2016), 5 a language-agnostic trainable pipeline for tokenisation, tagging, lemmatisation and dependency parsing, and Tweettokenizer 6 from NLTK (Bird et al., 2009), a rule-based tokeniser designed for noisy UGC.",
"The latter proved to be more effective for tokenising UGC phenomena such as emoticons, URLs, and meta language tags.",
"Manual corrections were then applied in order to adhere to the Irish-specific tokenisation scheme within current UD guidelines.",
"In order to establish the best system to use for automatic lemmatising and POS-tagging, two tools, Morfette (Chrupala et al., 2008) and UDPipe (Straka et al., 2016), were analysed with Morfette achieving higher scores on both tasks.",
"Syntactic annotation As a method shown to reduce manual annotation efforts in syntactic annotation (Judge et al., 2006; Seraji et al., 2012), we carry out a bootstrapping approach to dependency parsing as recommended by UD.",
"7 The bootstrapping process is illustrated in Figure",
"1. After converting the LTC and new tweets 5 Trained on IUDT v2.8 with no pre-trained embeddings.",
"to the CoNLL-U format, we manually annotated a small set of 166 tweets and began the bootstrapping cycle.",
"8 (Step 1) A parsing model 9 was trained on the IUDT in combination with the newly annotated tweets.",
"(Step 2)",
"The parsing model was used to automatically annotate the next batch of 100 tweets.",
"(Step 3)",
"These tweets were manually corrected.",
"(Step 4)",
"The corrected tweets were added to the training data.",
"Steps 1 to 4 were repeated until all 866 tweets were fully parsed.",
"This dataset represents the TwittIrish test set in the UD version 2.8 release.",
"10 4 Annotating Irish UGC This section describes the linguistic features that can create challenges when parsing Irish social media text.",
"We provide Irish examples and discussion around the factors that influence these phenomena.",
"Orthographic variation refers to deviation from the conventional spelling system of the language and is observed at the token level.",
"Therefore, it can affect the lemmatisation of a token in an NLP pipeline, potentially affecting other downstream areas of annotation.",
"In the TwittIrish dataset, 2.5% of tokens contained some orthographic variation.",
"Table 1 exemplifies some frequently-occurring phenomena in Irish tweets that deviate from standard orthography.",
"8 Due to the limited funding available, all manual annotation and correction was performed by one linguist annotator.",
"9 Biaffine Parser (Dozat and Manning, 2017) with mBERT (Devlin et al., 2019) embeddings.",
"Diacritic variation Diacritic marks are often omitted or incorrectly added to tweets.",
"The acute accent or sneadh fada is used in Irish to indicate a long vowel and is necessary to disambiguate between certain words.",
"Example 1 shows the most probable intended word lacht lecture' rendered as leacht liquid'.",
"Abbreviation Predictable shorthand forms can occur in standard Irish texts e.g. lch as an abbreviated form of leathanach page'.",
"While more unconventional, and thus less predictable, abbreviations are observed in Irish tweets, as per Example 2 in which the word seachtain week' is shortened to seacht seven'.",
"Abbreviations are more common in tweets than standard text as the character limit and real-time, up-to-date nature of the platform encourages the user to be efficient with time and space.",
"(2) Bm de ghnth ach sa bhaile an tseacht seo I usually am but home this week' Lengthening This refers to the elongation of a token by repeating one or more characters.",
"This can be thought of as an encoding of sociophonetic information (Tatman, 2015) and is strongly linked to sentiment.",
"Despite incentives to save time and space while tweeting, users often elongate certain words for expressive purposes (Brody and Diakopoulos, 2011).",
"Example 3 demonstrates the lengthening of the word bu yellow'.",
"Case variation Nonstandard use of upperand lowercase text is another method of encoding sociophonetic information by focusing attention or emotion on a particular word or phrase.",
"Heath (2021) discusses the association between the use of all-caps and perceived shouting as in Example",
"4. (4) Nl todhcha na Gaeilge sa Ghaeltacht, ach in aon it AR DOMHAIN The future of Irish is not in the Gaeltacht but anywhere ON EARTH' 6871 Phenomenon Example Standard form Gloss Diacritic variation nior fhoghlaim tu nor fhoghlaim t you did not learn' Abbreviation fhoir rugba na hir fhoireann rugba na hireann Irish rugby team' Lengthening obairrrr obair work' Case variation ceolchoirm DEN SCOTH ceolchoirm den scoth excellent concert' Punctuation Variation ** folntas ** folntas vacancy' Transliteration go wil go bhfuil that is' Other spelling variation O ' Bama Obama Obama' Table 1: Examples of orthographic variation in Irish tweets.",
"Transliteration The practice of transliteration, in which a word in one language is written using the writing system of another, is common within the language pair of Irish and English.",
"In the TwittIrish treebank, the English language phrase fair play' occurs twice while variations fair pl', as shown in Example 5 and far pl' occur once each.",
"Punctuation variation Punctuation is used creatively in UGC to format or emphasise strings of text.",
"However, due to the lack of standardis-ation, occurrences of unconventional punctuation can make text difficult to parse for both human and machine, as in Example 6 which shows a phrase from an Irish tweet appended by two punctuation characters -)'.",
"It is unclear whether this should be interpreted as some form of punctuation, creative formatting, or a smiley e.g. :-)'.",
"Other spelling variation These are mostly slight variations very close to the intended word and may occur due to typographical error.",
"Typos are very common in UGC due to lack of editing or proofreading and may occur via insertion, deletion, substitution, or transposition of characters.",
"Example 7 shows sraith (season) rendered as *staith .",
"Due to their phonetic dissimilarity and the fact that t' and r' are adjacent on the QWERTY keyboard layout, it is reasonable to infer that the substitution was unintentional.",
"Less commonly, disguise or censorship of words or phrases may occur to encrypt profanity or taboo language.",
"Just 38.32% of the set of unique lemmata that make up the vocabulary of the TwittIrish treebank occur",
"Dialectal vocabulary Irish has three major dialects; Connaught, Munster, and Ulster.",
"Distinctive features of these dialects in the form of lexical variation are evident in spoken language and informal text such as tweets.",
"Example 8 shows the use of domh , the Ulster variant of dom to me'.",
"Initialism Multiword phrases are frequently represented by the initial letter of each of their constituent tokens.",
"Example 9 shows GRMA Thank you' used to represent its expanded form Go raibh maith agat .",
"Pictogram Emojis, emoticons, etc. can be added to text to emulate gesture (Gawne and McCulloch, 2019) or they may play a syntactic role in a phrase, replacing a word as in Example 10, in which the symbol, , acts as the object of a verb.",
"Pictograms tend not to have a one-to-one correspondence with natural language words.",
"Conas a deireann t ?",
"How do you say ' Truncation Due to the current limit of 280 characters per tweet, the end of a tweet may be unnaturally attenuated, sometimes mid-sentence as in Example 11 or even mid-word.",
"Code-switching vs. borrowing 66.74% of tokens in the TwittIrish treebank are in Irish, 4.85% of tokens are in English and the remainder (con-sisting of punctuation, meta language tags, etc.)",
"are classified as neither, or indeed both in the case of intraword code-switching or nonce borrowing in which the morphologies of two languages are combined in a single word.",
"In Example 12 the English verb root happen' is used instead of the Irish equivalent tarlaigh .",
"Insertional code-switching (Muysken et al., 2000) and borrowing are common in informal Irish.",
"74.71% of the tweets in the TwittIrish treebank were considered to be entirely in Irish, the remaining 25.29% of tweets being considered bior multilingual.",
"Example 13 shows a section of an Irish tweet utilising the English word Dubs', a nickname for Dubliners', and Example 14 shows the use of an eclipse and an acute accent applied to the foreign proper noun Barcelona'.",
"(12)",
"Eachtra i ndiaidh Happenil An event (is) after happening' (13) Roimh na Dubs Before the Dubs' (14) T sin i mBarcelna That is in Barcelona' Other nonstandard lexical forms Other unfamiliar terms may occur in the form of hypercorrection and neologisms.",
"Hypercorrection occurs when an autocorrection system is either not activated or available in a user's language of choice.",
"As a result, their attempts to type a word are corrected to a word with a similar spelling in another language.",
"Example 15 shows the Irish word coicse rendered as concise' probably due to automatic English spelling correction software.",
"It is often difficult to distinguish between hypercorrection, neologisms, typos, or other spelling variations.",
"Example 16 shows agus (and) rendered as agua which may have occurred due to automatic hypercorrection as agua' (water) is a frequent token in other languages such as Portuguese and Spanish.",
"However, it could also be a simple typo.",
"(15)",
"Mhscail m i mo leaba fin ar maidin i ndiaidh concise I woke up in my own bed after a fortnight' (16) t an teanga ag fil bhis agua the language is dying and' 4.3 Syntactic Variation Grammatical phenomena observed in Irish tweets are described in this section.",
"As these idiosyncrasies occur at the phrasal rather than token level, they may directly affect the structure of the parse tree.",
"Some phenomena, such as contraction and over-splitting, cause difficulty during the tokenisation stage, potentially having a negative downstream effect on parsing.",
"Table 3 exemplifies syntactic variation in Irish tweets.",
"Contraction Much like abbreviation at the token level, contraction is defined here as the fusion of several tokens for the purpose of brevity, sometimes mimicking spoken pronunciation.",
"Figure 2 shows the phrase go bhfuil siad that they are' reduced to gowil siad tokenised incorrectly.",
"Figure 3 shows the same contraction tokenised correctly.",
"Over-splitting The inclusion of extra white space within tokens is often observed in Irish tweets e.g. Nl m r chinnte .",
"The prefix r(too') is conventionally fused with the adjective it precedes in standardised text and so such tokens are annotated with the goeswith label as shown in Figure",
"4. 6873 Phenomenon Example Standard form Gloss Contraction go dt'n go dt an until the' Over-splitting ana shuimiil an-suimiil very interesting' Syntax-level code-switching T an tweet machine r-tapa T inneall na tvute r-tapa The tweet machine is too fast' Dialectal grammar N fhacthas n fhaca m I did not see' Ellipsis jab iontach danta aige t jab iontach danta aige he has done a wonderful job' Meta language tags #sonas sonas happiness' Non-sentential segmentation haha:) t sil agam go raibh s ann ha ha!",
"Syntax-level code-switching Alternational code-switching or congruent lexicalisation (Muysken et al., 2000) are likely to cause a change in the structure of the syntax tree, due to differing word orders of the languages involved, thus complicating the task of dependency parsing.",
"In Irish, the adjectival modifier usually follows the noun it modifies whereas the inverse is true for English.",
"Figure 5 exemplifies a case of congruent lexicalisation in which English adjective hippy-dippy' is positioned before an Irish noun rather than after as would be expected in classic' code-switching.",
"Dialectal grammar Figures 6 and 7 show semantically equivalent statements rendered using the synthetic, more common to the Munster dialect of Irish, and analytic verb forms respectively.",
"Ellipsis Example 17 shows a sentence fragment lacking a main verb.",
"The probable inferred full phrase is t bisteach anseo rain is here'.",
"Meta language tags Hashtags are used in tweets to render a topic searchable and at-mentions are used to address or refer to another user.",
"Either can play a syntactic role as exemplified in Figure 8.",
"Non-sentential structure In tweets, the sentence is not an appropriate unit of segmentation as frequently non-standard punctuation, or none at all, is used.",
"Figure 9 exemplifies a tweet utilising an emoji instead of punctuation.",
"Other grammatical variation Grammatical variation can also occur via unintentional deviation from conventional spelling or grammar by an L2 Irish speaker.",
"Example 18 shows a grammatically incorrect phrase roughly translating to I have to *going'.",
"In such cases, though the annotator may be able to infer the intended phrase Caithfidh m dul I have to go', no corrections are made by the 6874 annotator to the surface form, however this information can be represented in the annotation via the label CorrectForm as described by Sanguinetti et al. (2022).",
"Additionally, Irish tweets contain extremely unconventional constructions.",
"This can occur in the form of unnatural phrases that have been machine-translated or generated by bots.",
"Example 19 shows an ungrammatical construction that appears to have been translated automatically word by word.",
"A more natural construction might be conas tonna morgiste a fhil How to get a tonne of mortgage'.",
"Some examples of this variety are easy to identify from surrounding context such as links to websites with similar content however, tweets may consist of text alone making it difficult to infer whether the author is human or machine.",
"(18)",
"Caithfidh m ag dul I have to *going' (19) Conas a Faigh tonna de Morgiste *How to get a tonne of mortgage' 5 Parsing Experiments We compare the performance of two widely used neural dependency parsers on the TwittIrish test set, and examine the effect of using pre-trained contextualised word embeddings from a monolingual Irish BERT model (gaBERT).",
"We report parsing performance broken down by sentence/tweet length, UPOS tags, and dependency labels and carry out a manual error analysis.",
"Further information is detailed in Appendix B. 5.1 Parser Comparison We experiment with two neural dependency parsing architectures: UDPipe (Straka et al., 2016), an NLP pipeline that includes a transition-based nonprojective parser, and AllenNLP (Gardner et al., 2018), a biaffine dependency parser with a BiLSTM encoder (Dozat and Manning, 2017).",
"Both systems are trained on IUDT version 2.8 11 and tested on the IUDT and TwittIrish test sets for comparison.",
"Gold standard tokenisation is provided to the models which then predict UPOS tags and dependency relations.",
"As the TwittIrish test set is the only gold annotated treebank of Irish UGC, no UGC is used as training or development data in 11 Models were trained with and without XPOS and feature annotation.",
"The results shown here are without XPOS and features.",
"The addition of XPOS and features constituted a difference of approximately +/-1 LAS.",
"these experiments.",
"We opt to preserve it as a test set so that our results and results of future research in this area will be comparable.",
"To leverage the substantial advances in accuracy achieved in dependency parsing by the use of pretrained contexualised word representations (Che et al., 2018; Kondratyuk and Straka, 2019; Kul-mizev et al., 2019), we use AllenNLP with token representations obtained from the last hidden layer of the gaBERT model (Barry et al., 2021) which are then passed to the biaffine parsing component.",
"Table 4 shows that, when tested on the IUDT version 2.8 test set, UDPipe achieves 70.58 labelled attachment score (LAS).",
"In comparison, UDPipe achieves a much lower LAS of 47.33 on the TwittIrish test set.",
"Similarly to UDPipe, AllenNLP achieves 71.56 LAS on the IUDT test set with a similar decrease of 22.83 points on the TwittIrish test set.",
"The highest accuracy of 84.25 LAS is achieved by gaBERT with a difference of 24.91 points when tested on the TwittIrish test set.",
"The lower accuracy obtained by parsers on the TwittIrish test set is unsurprising given the linguistic differences between the training and test sets.",
"The 10+ LAS improvement provided by the gaBERT embeddings is seen in both test sets.",
"Analysis was carried out on the AllenNLP parser with gaBERT embeddings using Dependable (Choi et al., 2015).",
"LAS by Number of Tokens per Sentence/Tweet The mean sentence length of the IUDT is 23.5 tokens, whereas the mean tweet length in TwittIrish is 17.8.",
"Figure 10 shows that, when tested on the IUDT, parsing accuracy decreases as the length of the sentence increases.",
"The highest accuracy of 87.92 LAS is associated with sentences of 10 6875 Figure 10: LAS broken down by number of tokens per tree achieved by AllenNLP Parser with gaBERT embeddings on the IUDT and TwittIrish test sets.",
"tokens or fewer, and the lowest accuracy is observed in sentences of 40 tokens or more.",
"This is an unsurprising trend as a higher number of tokens increases the probability of longer dependency distances and more complex constructions within a sentence.",
"While the range of scores is smaller and trend less pronounced, the opposite effect is observed when the same parser is tested on TwittIrish, whereby LAS tends to increase as the length of the tweet increases.",
"The highest LAS of 59.97 is associated with tweets of 31 to 40 tokens in length and the lowest accuracy of 53.47 LAS is associated with tweets of 10 tokens or less.",
"This trend is also observed when gaBERT representations are not used, suggesting that, in this case, deep contextualised word embeddings do not cause this effect as observed in (Kulmizev et al., 2019).",
"From manual inspection of the data, we observe that the genre-specific phenomena which challenge the parser such as ellipsis, meta language tags, and URLs, occur in higher proportions in shorter tweets, which would explain this trend.",
"LAS by UPOS and dependency relation We observe a larger proportion of PROPN , SYM , and PUNCT tags in Irish tweets in comparison to standardised Irish text, which contains a higher proportion of NOUN , DET , and ADP tags.",
"This reflects the observations of Rehbein et al. (2019), who compare the distribution of POS tags in four German treebanks.",
"Additionally, we compare the POS tag distribution in treebanks of English (Liu et al., 2018) and Italian (Sanguinetti et al., 2018) tweets to treebanks of standard text in those languages.",
"We similarly observe that symbols, punctuation, and pronouns are more frequent in tweets and that nouns, determiners, and prepositions are more frequent in Figure 11: LAS broken down by UPOS tag achieved by AllenNLP Parser with gaBERT embeddings on the IUDT and TwittIrish test sets.",
"Figure 11 shows LAS associated with each UPOS tag when tested on the IUDT and TwittIrish.",
"LAS is higher when tested on the IUDT for all UPOS tags except CCONJ , ADV , and SYM and in these cases the difference is small (<10 LAS).",
"The most notable differences are X (71.6 LAS), INTJ (51.3 LAS), PROPN (43.5 LAS).",
"These differences are due to 1) the divergent genres of the treebanks e.g. in the TwittIrish treebank the UPOS tag X is used for all non-syntactic hashtags, and PROPN is used for all at-mentions, neither of which occur in the IUDT and 2) differing annotation conventions e.g. in the IUDT, the tag X is used mostly for foreign-language tokens, whereas, in TwittIrish, due to the high proportion of English language tokens, non-Irish words are annotated with their true UPOS tag where the language is known to the annotator.",
"The tag INTJ occurs very rarely in IUDT.",
"However, due to the conversational nature of tweets, phatic expressions and emotional signifiers (not normally present in standard text) are frequent.",
"Our analysis of the dependency relation distribution of standard English, German, and Italian text compared to that of tweets in those languages reveals that the parataxis , vocative , and advmod relations are more frequent in tweets and that the case , det , and nmod relations are more frequent in standard text.",
"We observe that this same effect is present in Irish tweets.",
"Figure 12 shows LAS broken down by dependency relation.",
"The parser obtains higher scores on the IUDT for all dependency relations except xcomp for which it is just one point higher when tested on TwittIrish.",
"The largest differences between the parsing performance on the two test sets are associated with the labels root , vocative , obl:tmod , csubj:cleft , conj , and punct .",
"As regards root and punct , the difference in accuracy could be attributed to the non-sentential nature of tweets.",
"In the IUDT each tree consists of a single sentence, whereas tweets may consist of sentence fragments or indeed several sentences, making root identification and establishing punctuation attachment more complex.",
"csubj:cleft tends to be mislabelled in the absence of the copula which is often elided in standard text.",
"This copula drop occurs even more frequently in tweets, negatively impacting on parsing accuracy.",
"With regard to conj , both nonstandard forms of coordinating conjunctions (e.g. and', +', misspellings etc.) and differing annotation styles between IUDT and TwittIrish lead to attachment errors.",
"As regards obl:tmod and vocative , the respective differences in accuracy are due to the infrequent occurrences in the IUDT of a speaker or author directly addressing someone in the text and references to time (e.g. 5pm), both of which are common occurrences in tweets.",
"Error Analysis In order to assess the effect of the UGC phenomena present in Irish tweets, we analyse the most and least accurate parses as shown in Table 5.",
"Seven tweets (76 tokens) were parsed with LAS between 0 and 5.",
"On investigation, we observed fifteen occurrences of emojis that were most commonly incorrectly labelled punct .",
"The ten English tokens were most commonly attached incorrectly via flat:foreign .",
"The nine (two syntactic) usernames were most commonly mislabelled as root .",
"There were five occurrences of ellipsis in the form of verb omission obfuscating the task of root selection.",
"The three hashtags were most commonly mislabelled as nmod as were the three Phenomenon Easiest Tweets Hardest Tweets Emoji 0 15 English Token 1 9 Username 3 10 Ellipsis 2 5 Hashtag 1 3 RT 0 3 URL 0 3 Spelling variation 2 2 Table 5: Number of occurrences of UGC phenomena where Easiest Tweets' refers to the 7 tweets that were parsed well with LAS between 95 and 100 and Hardest Tweets' refers to the 7 tweets (76 tokens) that were badly parsed with LAS between 0 and 5.",
"URLs.",
"One occurrence of spelling variation in the form of diacritic omission caused the parser to misinterpret the token r our' as ar on' meaning it was mislabelled as case instead of nmod:poss .",
"Seven tweets (89 tokens) were parsed with an accuracy between 95 and 100 LAS.",
"All of these were grammatical, well-formed sentences.",
"There were three usernames and one hashtag all of which were syntactically integrated and so they were parsed correctly.",
"There was one of insertional single-word code-switch which was accurately parsed.",
"There were two occurrences of spelling variation, both in the form of diacritic omission but, as these do not resemble any other words, they were parsed correctly.",
"Presented in this paper is the novel resource, TwittIrish, the first Universal Dependencies treebank for Irish UGC.",
"Analysis of this linguistic genre and anonymised examples of Irish tweets are presented.",
"This research facilitates the development of NLP tools such as dependency parsers for Irish by providing a test set on which future Irish language technology can be tested.",
"Future work will involve both further annotation and exploration of semi-supervised techniques.",
"We warmly thank the anonymous reviewers, as well as Steven Bird and Kevin Scannell for their valuable feedback.",
"This work was funded by the Irish Government Department of Tourism, Culture, Arts, Gaeltacht, Sport and Media under the Gael-Tech Project, and also supported by Science Foundation Ireland in the ADAPT Centre (Grant No. 13/RC/2106) at Dublin City University."
]
| [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"other"
]
|
[
"Multilayer transformer networks consist of interleaved self-attention and feedforward sublayers.",
"Could ordering the sublayers in a different pattern lead to better performance?",
"We generate randomly ordered transformers and train them with the language modeling objective.",
"We observe that some of these models are able to achieve better performance than the interleaved baseline, and that those successful variants tend to have more self-attention at the bottom and more feedforward sublayers at the top.",
"We propose a new transformer pattern that adheres to this property, the sandwich transformer , and show that it improves perplexity on multiple word-level and character-level language modeling benchmarks, at no cost in parameters, memory, or training time.",
"However, the sandwich reordering pattern does not guarantee performance gains across every task, as we demonstrate on machine translation models.",
"Instead, we suggest that further exploration of task-specific sublayer reorderings is needed in order to unlock additional gains.",
"1 1 Introduction The transformer layer (Vaswani et al., 2017) is currently the primary modeling component in natural language processing, playing a lead role in recent innovations such as BERT (Devlin et al., 2019) and GPT-2 (Radford et al., 2019).",
"Each transformer layer consists of a self-attention sublayer ( s ) followed by a feedforward sublayer ( f ), creating an interleaving pattern of self-attention and feedforward sublayers ( sfsfsf ) throughout a multilayer transformer model.",
"To the best of our knowledge, there is no reason to expect this particular pattern to be optimal.",
"We conduct a series of explorations to obtain insights about the nature of transformer orderings that work well, and based on this, we 1 Our code is available at https://github.com/ ofirpress/sandwich_transformer sfsfsfsfsfsfsfsfsfsfsfsfsfsf",
"sssssssfsfsfsfsfsfsfsfffffff",
"(b) Sandwich Transformer Figure 1: A transformer model",
"First, we generate random transformer models, varying the number of each type of sublayer, and their ordering, while keeping the number of parameters constant.",
"We train these models on the standard WikiText-103 word-level language modeling benchmark (Merity et al., 2016), and observe that some of these random models outperform the original interleaved transformer model, even when the number of self-attention and feedforward layers is not equal.",
"Our analysis shows that models with more self-attention toward the bottom and more feedforward sublayers toward the top tend to perform better in general.",
"Based on this insight, we design a new family of transformer models that follow a distinct sublayer ordering pattern: sandwich transformers (Figure 1).",
"Our experiments demonstrate that a sandwich transformer outperforms the baseline of Baevski and Auli (2019).",
"This result is made more interesting by the fact that our sandwich transformer is simply a reordering of the sublayers in the baseline model, and does not require more parameters, memory, or training time.",
"Finally, we demonstrate that even though the Model PPL fsfsfffsffsfsssffsfssfssssffsffs 20.74 sfssffsffffssssfsfffsfsffsfssssf 20.64 fsffssffssssffsssssffsfssfsfffff 20.33 fsffffffsssfssffsfssffsfsssffsss 20.27 fssffffffsfsssfffssssfffssssffss 19.98 sssfssfsffffssfsfsfsssffsfsfffsf 19.92 fffsfsssfsffsfsffsffsssssffssffs 19.69 fffsffssffsssfssfsssfffffsfsssfs 19.54 sfsfsfsfsfsfsfsfsfsfsfsfsfsfsfsf 19.13 fsffssfssfffssssfffsssffffsfssfs 19.08 sfsffssssffssffffsssffsssfsffsff 18.90 sfsfsfsfsfsfsfsfsfsfsfsfsfsfsfsf 18.83 sssssssffsffsfsfsffffsfffsfssffs 18.83 sffsfsffsfsssffssfssssssfffffffs 18.77 sssfssffsfssfsffsfffssffsfsffssf 18.68 fffsssssfffsfssssffsfsfsfssffsff 18.64 sfffsssfsfssfsssssfssfffffsfffsf 18.61 ssffssfssssffffffssffsssfsffssff 18.60 fsfsssssfsfsfffffsfffsffssffssss 18.55 sfsfsfsfsfsfsfsfsfsfsfsfsfsfsfsf 18.54 sfsfsfsfsfsfsfsfsfsfsfsfsfsfsfsf 18.49 fsfsssssfsfffssfsffsfsfsfsffffss 18.38 sfssffsfsfsffsssssfffsssfffsffsf 18.28 sfsfsfsfsfsfsfsfsfsfsfsfsfsfsfsf 18.25 sfsfssfsssffsfsfsfsffffssffsfssf 18.19 Table 1: Randomly generated models with 16 self-attention ( s ) sublayers and 16 feedforward ( f ) sublayers, and their perplexity on the WikiText-103 development set.",
"sandwich transformer is motivated by random search experiments on WikiText-103, it can improve performance on additional domains and tasks.",
"Sandwich transformers achieve state-of-the-art results on the enwik8 character-level language modeling dataset and on an additional word-level corpus, but have no significant effect on machine translation.",
"We conjecture that tuning transformer reorderings to specific tasks could yield even larger gains, and that further exploration of the ordering space may provide universally beneficial patterns.",
"Each transformer layer consists of a self-attention sublayer followed by a feedforward sublayer, modifying a sequence of vectors X 0 as follows: 2",
"X 1 = self-attention ( X 0 ) + X 0 X 2 = feedforward ( X 1 ) + X 1",
"Stacking multiple transformer layers creates an interleaved network of sublayers.",
"We denote these 2 We omit dropout (Srivastava et al., 2014) and layer normalization (Ba et al., 2016) to simplify the notation.",
"models as strings, with s and f representing self-attention and feedforward sublayers, respectively.",
"A three-layer transformer network, for example, would be denoted sfsfsf , with the flow of computation moving from input on the left to output on the right.",
"Thus, any string in the regular language ( s | f ) defines a valid network that uses the same building blocks as the original transformer.",
"For simplicity, we refer to these alternatives as transformers as well.",
"We conduct a series of experiments to understand which transformer networks work well and whether particular architectural patterns can improve performance.",
"First, we generate random transformer models while keeping the number of parameters constant.",
"We then train these random models to determine whether the interleaving pattern ( sfsfsf ) is optimal (Section 3.1), and whether balancing the number of self-attention and feedforward sublayers is desirable (Section 3.2).",
"Finally, we analyze additional properties of these random models, and find that those with more self-attention at the beginning and more feedforward sublayers near the end tend to outperform the standard interleaved model (Section 3.3).",
"Experimental Setup Our baseline is the strong transformer language model of Baevski and Auli (2019), trained on WikiText-103 (Merity et al., 2016).",
"WikiText-103 contains roughly 103 million tokens from English Wikipedia, split into train, development, and test sets by article.",
"The Baevski Model PPL sfffssfsfsfssffffsfsffsffffff 22.80 sffssfsssssssssssssfsfsssfsffsssfsssfs 21.02 ssssssffsffffssfffffsssfsfsssssssss 20.98 fffffffffsffssffsffssssfsfsssf 20.75 fssfsssffffffssfsssfsfffssssfsfss 20.43 sffsffffffsfsfssfsssfsfsfssfssfs 20.28 sffssffsfffsfsfssssffffffssssff 20.02 fsffsfssffffsfsfffsfffssfffsss 19.93 sffsffssffsfsffsssfsssssfsssfffsss 19.85 ssfffffffssfffssfssffsfsfsffsf 19.82 sfsfsfffsfffssfsfffsffssfsfsfss 19.77 sfsffsssffsffsssfssfffffssssfsssf 19.55 sffsfssfffsffsfssssfsfsffffsfsss 19.49 sffffsffssssfsssfssfffsssfssssfsfs 19.47 fsssffssssssfsfsfsffsffffssfsfssss 19.25 sfsfsfsfsfsfsfsfsfsfsfsfsfsfsfsf 19.13 fssssssfsfsfsfffsfsssfssffssssfsff 18.86 sfsfsfsfsfsfsfsfsfsfsfsfsfsfsfsf 18.83 ssfsfsssfsssssffsfsfsssfssfsfsssssssf 18.62 sfsfsfsfsfsfsfsfsfsfsfsfsfsfsfsf 18.54 sfsfsfsfsfsfsfsfsfsfsfsfsfsfsfsf 18.49 sssfsffsfssfsssffsffffffssfsfff 18.34 sssfsfsffsssfsfffffsfsffffsssff 18.31 sfsfsfsfsfsfsfsfsfsfsfsfsfsfsfsf 18.25 ssssssfsssffffsfsfffffffffffsf 18.12 Table 2: Randomly generated models with the same number of parameters as the baseline, and their perplexity on the WikiText-103 development set.",
"and Auli model contains 16 transformer layers of d = 1024 dimensions, with 16 heads in each self-attention sublayer, and feedforward sublayers with an inner dimension of 4096 .",
"In this setting, each self-attention sublayer contains 4 d 2 parameters, while each feedforward sublayer contains 8 d 2 parameters (excluding bias terms, which have a marginal contribution).",
"Thus, each f sublayer contains twice the parameters of a s sublayer, following the parameter ratio between self-attention and feedforward sublayers described in Vaswani et al. (2017).",
"All of our experiments use the same hyperparameters as Baevski and Auli's original model.",
"To set an accurate baseline, we train the baseline model (the standard interleaved transformer) with five different random seeds, achieving 18.65 0.24 perplexity on the development set.",
"In the baseline 16-layer transformer model, 16 sublayers of each type are interleaved.",
"Can we improve model performance by simply rearranging them?",
"We thus generate 20 random transformer models with 16 self-attention sublayers and 16 feedforward Random Models: Parameter Budget Baseline 18 19 20 21 22 23 P e r p l e x i t y Figure 3: The perplexities on the WikiText-103 development set of 20 randomly generated models with the same number of parameters as the baseline, and of the 5 baselines (the standard transformer trained with different random seeds).",
"sublayers, randomly permuted, and train these models from scratch, without modifying any of the hyperparameters.",
"Table 1 shows the entire sample, while Figure 2 plots the perplexity distributions of the shuffled transformers and the baseline side by side.",
"We observe that 7 of the 20 randomly-permuted models perform at least as well as the interleaved baseline's average performance, with the best model achieving 18 .",
"19 perplexity.",
"While the average performance of the baseline model beats the average performance of these random models, the fact that a third of our random models outperformed the average baseline suggests that a better ordering than interleaving probably exists.",
"Is it necessary to have an identical number of sublayers of each type, or could models with more self-attention (or more feedforward) sublayers yield better results?",
"To find out, we generate 20 unbalanced transformer models by randomly selecting one sublayer at a time (either s or f with equal probability) until the parameter budget is exhausted.",
"Since a feedforward sublayer contains double the parameters of a self-attention sublayer, the networks' depth is not necessarily 32 sublayers as before and can range from 24 (all f ) to 48 (all s ).",
"Table 2 shows the entire sample, while Figure 3 plots the perplexity distributions of the randomly-generated transformers and the baseline side by side.",
"We see that four of the generated unbalanced models outperform the average baseline transformer.",
"The best performing random model reaches Models that are worse than baseline Models that are better than baseline 0 2 4 6 8 10 12 14 16 A v e r a g e s u b l a y e r c o un t i n t h e b o tt o m h a l f o f t h e m o d e l Self-attention Feedforward",
"a perplexity of 18.12 and has 12 self-attention and 18 feedforward sublayers.",
"Both the average and the median perplexities of this sample of unbalanced models are worse than those of the balanced permuted models (Section 3.1).",
"We do not observe any preference for more sublayers of one type over the other; there are self-attention-heavy and feedforward-heavy models in both the top five and the bottom five of the results table.",
"While offering no guarantees given the small sample sizes and fixed hyperparameters we conclude that a balanced number of self-attention and feedforward sublayers seems to be a desirable property, though not a necessary one.",
"So far, it is not clear which characteristics make one transformer model more successful than another; for example, measuring the number of times each sublayer type appears in the network does not reveal any strong correlation with performance.",
"However, analyzing the bottom (or top) half of the network in isolation reveals an interesting property.",
"We first split the models to those that perform better than the average baseline and those that do not.",
"We then slice each one of the previously-generated random models in half by parameter count (e.g., ssssff would be split to ssss and ff , since every f contains twice as many parameters as an s ), and count how many sublayers of each type appear in each slice.",
"Figure 4 shows that models that outperform the average baseline tend to have more self-attention s in the first (bottom) half of the network and more f in the second (top) half.",
"While we do not have a good hypothesis to explain this phenomenon, we can exploit it to improve transformers (Section 4).",
"Our analysis in the previous section motivates designing a transformer model that is heavy on self-attention at the bottom and feedforward sublayers at the top, while at the same time containing a more-or-less balanced amount of both sublayer types.",
"As a first attempt to manually design a better transformer, we take this hypothesis to the extreme, and train a transformer model of 16 self-attention sublayers followed by 16 feedforward sublayers ( s 16 f 16 ).",
"This model achieves 18.82 perplexity, which is comparable to the performance of the baseline with the same number of parameters.",
"We next generalize this model and the original interleaved transformer, creating the family of sandwich transformers .",
"A sandwich nk transformer consists of 2 n sublayers in total ( n of each type), conforming to the regular expression s k ( sf ) n k f k .",
"The first k sublayers are purely self-attention ( s ), while the last k are feedforward sublayers ( f ).",
"In between, we use the original interleaving pattern ( sf ) to fill the remaining 2( n k ) sublayers.",
"When k = 0 , we get the original transformer model, and when k = n 1 (its maximal value) we get the previously mentioned s n f n model.",
"We refer to k as the transformer's sandwich coefficient .",
"We train sandwich transformers for n = 16 (to remain within the same parameter budget as our baseline language model) and all values of k { 0 , . . . , 15 } .",
"Figure 5 shows the transformer's performance as a function of the sandwich coefficient k .",
"With the exception of k = 14 , 15 , all sandwich transformers achieve lower perplexities Model Test Baseline (Baevski and Auli, 2019) 18.70 Transformer XL (Dai et al., 2019) 18.30 kNN-LM (Khandelwal et al., 2019) 15.79 Baseline (5 Runs) 18.63 0.26 Sandwich 166 17.96 Table 3: Performance on the WikiText-103 test set.",
"than the average baseline transformer.",
"Of those, 6 models outperform the best baseline transformer ( k = 5 , 6 , 8 , 9 , 10 , 11 ).",
"The best performance of 17.84 perplexity is obtained when k = 6 .",
"We compare this model to the baseline on WikiText-103's test set.",
"Table 3 shows that, despite its simple design, the sandwich transformer outperforms the original transformer baseline by roughly double the gap between the baseline (Baevski and Auli, 2019) and Transformer XL (Dai et al., 2019).",
"This improvement comes at no extra cost in parameters, data, memory, or computation; we did not even change any of the original hyperparameters, including the number of training epochs.",
"To check whether this advantage is consistent, we train 4 more sandwich 166 models with different random seeds (5 in total) and evaluate them on the development set, to avoid evaluating our model more than once on the test set.",
"This is the only experiment in which we modify our model's random seed.",
"Figure 6 shows that we obtain a mean perplexity value of 17.98 with a standard deviation of 0.10, while the baseline achieves 18.65 mean perplexity, with a larger standard deviation of 0.34 (these values reflect development set performance, not test set performance as in Table 3).",
"In very recent work, kNN-LM (Khandelwal et al., 2019) set a new state of the art on WikiText-103, surpassing other recent models by a wide margin.",
"The model achieves this result by storing the entire training set in an auxiliary memory component.",
"Since this approach appears orthogonal to ours, it is quite possible that kNN-LM could benefit from sublayer reordering as well.",
"sublayer reorderings of the Baevski and Auli (2019) model, trained on the WikiText-103 word-level language modeling benchmark (Merity et al., 2016).",
"Does this particular pattern improve performance in other settings as well?",
"To find out, we apply sandwich transformers to three other tasks: word-level language modeling on a different do-main (Section 5.1), character-level language modeling (Section 5.2), and machine translation (Sec-tion 5.3).",
"Results show that as we drift away from our original setting, sandwich transformers provide diminishing gains, but always perform at least as well as the baseline transformers (provided that the sandwich coefficient is properly tuned).",
"This finding suggests that different settings may benefit from different sublayer reordering patterns.",
"We first apply sandwich transformers to a different domain, while retaining the other architectural aspects and hyperparameter settings from Baevski and Auli (2019).",
"Specifically, we use the Toronto Books Corpus (Zhu et al., 2015), which has previously been used to train GPT (Radford et al., 2018) and also BERT (Devlin et al., 2019) (combined with Wikipedia).",
"The corpus contains roughly 700M tokens.",
"We use the same train/validation/test split as Khandelwal et al. (2019), as well as their to-kenization, which uses BERT's vocabulary of 29K byte-pair encodings.",
"Since the vocabulary is much smaller than WikiText-103's, we replace the adaptive word embedding and softmax of Baevski and Auli (2019) with a tied word embedding and softmax matrix (Press and Wolf, 2017; Inan et al., 2017).",
"Finally, we tune the sandwich coefficient on the development set for k { 4 , . . . , 8 } , i.e., a neighborhood of 2 around the best value we found for WikiText-103 ( k = 6 ).",
"Table 4 shows that the sandwich transformer transfers well to the books domain, improving performance by 1.06 perplexity, achieving similar performance to the datastore-augmented kNN-LM (Khandelwal et al., 2019), which is the state of the art on WikiText-103 (see Section 4).",
"Modeling text as a stream of characters, rather than word or subword tokens, presents a different modeling challenge: long-range dependencies become critical, and the vocabulary takes on a more uniform distribution.",
"We apply our sandwich reordering to the adaptive span model of Sukhbaatar et al. (2019), which is state of the art on the popular English-language benchmark text8 and is currently a close second on enwik8.",
"3 The adaptive span 3 Both datasets are taken from http://mattmahoney.",
"model learns to control each attention head's maximal attention span, freeing up memory in the bottom layers (which typically need very short attention spans) and applying it to the top layers, allowing the top-level attention heads to reach significantly longer distances.",
"The adaptive span model's efficient use of attention also results in a significant speed boost.",
"We tune the sandwich coefficient on the development set for k { 1 , . . . , 8 } (the baseline model has 24 transformer layers).",
"We do not modify any hyperparameters, including the number of training epochs.",
"Table 5 compares the baseline model's performance with the sandwich transformer's.",
"On text8, the sandwich transformer performs within the baseline's random seed variance.",
"On enwik8, the sandwich transformer gains an improvement of about 0.007 bits-per-character, matching the state of the art results obtained by the Transformer-XL-based Compressive Transformer of Rae et al. (2020).",
"However, our approach is able to achieve this result without applying the Transformer-XL's recurrent attention, which is much slower (Sukhbaatar et al., 2019), and without adding additional parameters (the compressive transformer uses 277M parameters, while our baseline and sandwich models use only 209M).",
"Sandwich Decoders Tranformer-based translation models (Vaswani et al., 2017) consist of an encoder and decoder, where the encoder has interleaved self-attention and feedforward sublayers (just as in language models), while the decoder includes an additional sublayer, cross-attention ( c ), between every pair of self-attention and feedforward sublayers.",
"Cross-attention sublayers attend to the encoder's representations of the input sen-tence's tokens.",
"Following our notation from Section 2, a transformer decoder layer modifies the sequence of tokens in the target language Y 0 , using the encoded source tokens X , as follows: Y 1 = self-attention ( Y 0 ) + Y 0 Y 2 = cross-attention ( Y 1 , X ) + Y 1 Y 3 = feedforward ( Y 2 ) + Y 2 Applying the sandwich pattern to the encoder follows the same methodology as our previous experiments.",
"However, for the decoder, we group the Model text8 (BPC) enwik8 (BPC) Transformer-XL (Dai et al., 2019) 1.08 0.99 Adaptive Span (Sukhbaatar et al., 2019) 1.07 0.98 Compressive (Rae et al., 2020) 0.97 Baseline (Adaptive Span; 5 Runs) 1.0802 0.0103 0.9752 0.0008 Sandwich 243 1.076 Sandwich 245 0.968 Table 5: Performance on character-level language modeling, evaluated on the enwik8 and text8 test sets.",
"self-attention ( s ) and cross-attention ( c ) sublayers, and treat them as a single unit for reordering purposes ( sc ).",
"For example, a three layer decoder ( scfscfscf ) with a sandwiching coefficient of k = 1 would be: scscfscff .",
"We apply the sandwich pattern to either the encoder or decoder separately, while keeping the other stack in its original interleaved pattern.",
"Experiment Setting As a baseline, we use the large transformer model (6 encoder/decoder layers, embedding size of 1024, feedforward inner dimension of 4096, and 16 attention heads) with the hyperparameters of Ott et al. (2018).",
"We also follow their setup for training and evaluation: we train on the WMT 2014 En-De dataset which contains 4.5M sentence pairs; we validate on newstest13 and test on newstest14.",
"We use a vocabulary of 32K symbols based on a joint source and target byte pair encoding (Sennrich et al., 2016).",
"For inference we use beam search with a beam width of 4 and length penalty of 0.6, following Vaswani et al. (2017) and Ott et al. (2018).",
"As before, we do not modify our model's hyperparameters or training procedure.",
"Results Table 6 shows that reordering of either the encoder or decoder does not have a significant impact on performance, across the board.",
"We also find that using the most extreme sandwich decoder ( sc ) 6 f 6 performs almost exactly the same as the average baseline; this result is consistent with our observation from Section 4, where we show that the extreme sandwich language model ( s 16 f 16 ) performs as well as the baseline.",
"Discussion This experiment indicates that a reordering pattern that benefits one particular task (language modeling) might not carry the same performance gains to another (machine translation).",
"However, it also demonstrates the general robustness of transformer architectures to sublayer reordering, as we did not observe any major perfor-Sandwich Encoder Decoder Coefficient Sandwich Sandwich 0 (Baseline) 28.74 0.15 1 28.71 28.64 2 28.71 28.56 3 28.81 28.67 4 28.48 28.66 5 28.45 28.76 Table 6: BLEU on newstest2014 En-De.",
"mance degradation.",
"Since the sandwich pattern naively groups selfand cross-attention sublayers together, it is also possible that a reordering pattern that takes all three sublayer types into account could potentially improve performance.",
"At the time of writing, we do not have an explanation for why sublayer reordering improves performance on language modeling.",
"However, we are able to determine that sandwich transformers spread their attention in a different fashion than interleaved models.",
"We analyze two baseline models and two sandwich 166 models trained with different seeds on the WikiText-103 dataset, by first recording the attention values that each token's heads assign to all other tokens during inference on the validation set.",
"Given the attention outputs of two models, we then compute the models' attention distance for each token, and for each self-attention sublayer.",
"This metric compares the attention distribution in the i th self-attention sublayer of the first model to that of the i th self-attention sublayer of the second model, for a specific token.",
"Given a token and a self-attention sublayer, Model Pair Average Attention Distance Baseline Baseline 1 .",
"we use the Hungarian algorithm (Kuhn, 1955) to find a matching of heads in the first model to heads in the second model [ a 1 , b 1 ] , . . . , [ a 8 , b 8 ] such that (cid:80) 8 i =1 EMD ( a i , b i ) is minimized, where EMD ( a i , b i ) is the earth mover's (Wasserstein) distance between the attention distributions of head a i in the first model and head b i in the second model.",
"That minimal value is the attention distance for that token, in that layer.",
"We then average the attention distances across all tokens and layers.",
"Table 7 shows the average attention distances between every pair of models.",
"We observe that models of the same architecture have significantly lower attention distances than models with different sublayer orderings.",
"This indicates that sublayer reordering has a strong effect on the attention function that the model learns in each head.",
"Future investigations of what this difference is, in a qualitative sense, could potentially provide important insights for designing better reordering patterns.",
"In this paper, we manually search through a constrained transformer architecture space, after analyzing the results of two small-scale random searches.",
"This human-in-the-loop method for architecture search has advantages over previous methods (Jozefowicz et al., 2015; Zoph and Le, 2016; Tan and Le, 2019) since it requires that only a few dozen models be trained, unlike typical architecture search methods that require training thousands of instances, consuming massive computational resources.",
"While we do find a better performing transformer, our goal is not only to do so, but to better understand how sublayer ordering affects transformer models.",
"Future work could apply methods from the architecture space literature to the sublayer ordering problem.",
"Furthermore, a better understanding of the inner workings of transformers could inspire more efficient, constrained architecture search.",
"Much recent work has been devoted to improving transformers by modifying their sublayers.",
"This includes sparsifying their attention patterns, either in an input-based manner (as in Correia et al., 2019), or in a static manner (as in Guo et al., 2019).",
"So et al. (2019) proposed modifying the transformer by adding convolutions and changing the activation function, while others have demonstrated that different initialization schemes (Zhang et al., 2019) and repositioning the layer normalization (Nguyen and Salazar, 2019) can also have a positive effect on performance.",
"In this paper, we do not modify the sublayers at all, but simply rearrange their order.",
"The performance gains from sublayer reordering are orthogonal to improving the sublayers themselves, and could be combined to achieve even better performance.",
"Recently, Lu et al. (2019) introduced a new transformer ordering, where instead of stacking layers of the form sf (as in the vanilla interleaved trans-former), they stack layers of the form fsf .",
"In order keep the total parameter count unchanged, Lu et al. cut the hidden dimension of their feedforward sublayers by half.",
"However, the overall depth of the network is increased by 50%, which causes a similar increase in the model's inference time (Sanh, 2019).",
"We train random transformer models with reordered sublayers, and find that some perform better than the baseline interleaved transformer in language modeling.",
"We observe that, on average, better models contain more self-attention sublayers at the bottom and more feedforward sublayer at the top.",
"This leads us to design a new transformer stack, the sandwich transformer, which significantly improves performance over the baseline at no cost in parameters, memory, or runtime.",
"We then show that the sandwich ordering also improves language modeling performance on a different word-level language modeling benchmark, and that the sandwich pattern can be used to achieve state of the art results on character-level language modeling.",
"Although sandwich ordering does not improve translation models, we show that they are robust to layer order changes, and that even extreme reorderings (all attention sublayers at the bottom, and all the feedforward sublayers at the top) perform as well as the baseline.",
"Sublayer reordering can improve the performance of transformer models, but an ordering that improves models on one group of tasks (word/character-level language modeling) might not improve the performance on another task.",
"By showing that sublayer ordering can improve models at no extra cost, we hope that future research continues this line of work by looking into optimal sublayer ordering for other tasks, such as translation, question answering, and classification.",
"We thank Tim Dettmers, Jungo Kasai, Sainbayar Sukhbaatar, and the anonymous reviewers for their valuable feedback."
]
| [
"abstain",
"abstain",
"objective",
"result",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"result",
"result",
"objective",
"abstain",
"result",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"result",
"result",
"objective",
"result",
"result",
"abstain",
"result",
"other"
]
|
[
"Semantic role labeling (SRL) is a task to recognize all the predicate-argument pairs of a sentence, which has been in a performance improvement bottleneck after a series of latest works were presented.",
"This paper proposes a novel syntax-agnostic SRL model enhanced by the proposed associated memory network (AMN), which makes use of inter-sentence attention of label-known associated sentences as a kind of memory to further enhance dependency-based SRL.",
"In detail, we use sentences and their labels from train dataset as an associated memory cue to help label the target sentence.",
"Furthermore, we compare several associated sentences selecting strategies and label merging methods in AMN to find and utilize the label of associated sentences while attending them.",
"By leveraging the attentive memory from known training data, Our full model reaches state-of-the-art on CoNLL-2009 benchmark datasets for syntax-agnostic setting, showing a new effective research line of SRL enhancement other than exploiting external resources such as well pre-trained language models.",
"Semantic role labeling (SRL) is a task to recognize all the predicate-argument pairs of a given sentence and its predicates.",
"It is a shallow semantic parsing task, which has been widely used in a series of natural language processing (NLP) tasks, such as information extraction (Liu et al., 2016) and question answering (Abujabal et al., 2017).",
"Corresponding author.",
"This paper was partially supported by National Key Research and Development Program of China (No. 2017YFB0304100), National Natural Science Foundation of China (No. U1836222 and No. 61733011) and Key Project of National Society Science Foundation of China (No. 15-ZDA041).",
"predicate identification, predicate disambiguation, argument identification, and argument classifica-tion.",
"In recent years, great attention (Zhou and Xu, 2015; Marcheggiani et al., 2017; He et al., 2017, 2018a,b) has been turned to deep learning method, especially Long Short-term Memory (LSTM) network for learning with automatically extracted features.",
"(Zhou and Xu, 2015) proposed the first end-to-end recurrent neural network (RNN) to solve the SRL task.",
"(Marcheggiani et al., 2017) studied several predicate-specified embedding and decoding methods.",
"(He et al., 2017) delivered a full study on the influence of RNN training and decoding strategies.",
"Whether to use the syntactic information for SRL is also studied actively (He et al., 2017, 2018b).",
"Since the recent work of (Marcheggiani et al., 2017), which surprisingly shows syntax-agnostic dependency SRL for the first time can be rival of syntax-aware models, SRL has been more and more formulized into standard sequence labeling task on a basis of keeping syntax unavailable.",
"A series of work on SRL received further performance improvement following this line through further refining neural model design (He et al., 2018a).",
"Different from all previous work, we propose to introduce an associated memory network which builds memory from known data through the inter-sentence attention to enhance syntax-agnostic model even further.",
"Inspired by the observation that people always refer to other similar problems and their solutions when dealing with a problem they have never seen, like query in their memory, we want to utilize similar known samples which include the associated sentences and their annotated labels to help model label target sentence.",
"To reach such a goal, we adopt a memory network component, and use inter-sentence attention to fully exploit the information in memory.",
"Based on Memory Network (Weston et al., 2014; Sukhbaatar et al., 2015), (Miller et al., 2016) proposed Key-Value Memory Network (KV-MemNN) to solve Question Answering problem and gain large progress.",
"Our proposed method is similar to KV-MemNN, but with a different definition of key-value and different information distilling process.",
"Thus, we propose a carefully designed inter-sentence attention mechanism to handle it.",
"Recently, there are also some attempts to make use of attention mechanism in SRL task.",
"(Tan et al., 2018; Strubell et al., 2018) focus on self-attention, which only uses the information of the input sentence as the source of attention.",
"(Cai et al., 2018) makes use of biaffine attention (Dozat and Manning, 2017) for decoding in SRL, which was the current state-of-the-art (SOTA) in CoNLL-2009 benchmark as this work was embarking.",
"Different from all previous work, we utilize inter-sentence attention to help model leverage associated information from other known sentences in the memory.",
"To our best knowledge, this is the first time to use memory network in the SRL task.",
"Our evaluation on CoNLL-2009 benchmarks shows that our model outperforms or reaches other syntax-agnostic models on English, and achieves competitive results on Chinese, which indicates that memory network learning from known data is indeed helpful to SRL task.",
"There are several SRL annotation conventions, such as PropBank (Bonial et al., 2012) and FrameNet (Baker et al., 1998).",
"This paper focuses on the former convention.",
"Under PropBank convention, there are two role representation forms, which are span-based SRL, such as CoNLL 2005 and CoNLL 2012 shared tasks, and dependency-based SRL, such as CoNLL 2009 shared task.",
"The former uses span to represent argument, while the latter uses the headword of the span to represent the argument.",
"As the latter has been more actively studied due to dependency style SRL for convenient machine learning, we will focus on dependency SRL only in this work.",
"Given a sentence S , the goal of dependency SRL task is to find all the predicate-argument pairs ( p, a ) .",
"The following shows an example sentence with semantic role labels marked in subscripts.",
"Here, v means the predicate, A0 means the agent, A1 means the patient and ARGM-MNR means how an action v is performed.",
"In the rest of this paper, we will describe our model in Section",
"2. Then, the experiment set-up and results are given in Section",
"3. Related works about SRL and attention mechanism will be given in Section",
"4. Conclusions and future work are drawn in Section",
"5. 2 Model An SRL system usually consists of four pipeline modules: predicate identification and disambiguation, argument identification and classification.",
"Following most of previous work, we focus on the last two steps in standard SRL task: argument identification and classification.",
"The predicate identification subtask is not needed in CoNLL-2009 shared task 1 , and we follow previous work (He et al., 2018b) to handle the predicate disambiguation subtask.",
"This work will only focus on the argument labeling subtask through sequence labeling formalization.",
"We first describe our base model in Section 2.1.",
"Then we introduce the proposed associated memory network including the inter-sentence attention design and label merging strategies in Section 2.2.",
"The full model architecture is shown in Figure 1.",
"We use the concatenation of the following embeddings as the representation for every word.",
"(1) Random-initialized word embedding x rei R d re (2) GloVe (Pennington et al., 2014) word embedding x pei R d pe pre-trained on 6B tokens (3) Random-initialized part-of-speech (POS) tag embedding x pos i R d pos (4) Random-initialized lemma embedding x lei R d le (5) Contextualized word embedding derived by applying fully connected layer on ELMo embedding x cei R d ce (Pe-ters et al., 2018), and (6) Random-initialized predicate specified flag embedding x predi R d pred .",
"The final representation of each word is: x i = x rei x pei x posi x lei x cei x predi where stands for concatenation operator.",
"LSTM network is known to handle the dependency over long sentence well, and can effectively model the context information when encoding.",
"Therefore, we leverage a stacked BiLSTM network LST M e to be our encoder.",
"It takes word embedding sequence x = [ x i ] n S i =1 of sentence S = [ w i ] n S i =1 as input ( n S is the length of sentence), and outputs two different hidden states h i and h i for word w i by processing the sequence in forward and backward directions.",
"The final contextual representation of word w i is the concatenation of two hidden states h i = h i h i .",
"Then, we use a final softmax layer after the BiLSTM encoding to predict the label of each word.",
"Using the base model as backbone, we introduce an associated memory network (AMN) component for further performance improvement.",
"The proposed AMN memorizes known associated sentences and their labels, then the useful clue in the memory will be delivered to the SRL module through an inter-sentence mechanism.",
"AMN processing includes three steps, associated sentence selection, inter-sentence attention and label merging.",
"We aim to utilize the associated sentences and their labels to help our model label the target sentences.",
"For the sake of fairness, we only use the sentences in train dataset as our source.",
"However, it is impossible to attend all the sentences in train dataset because of the extremely high computational and memory cost.",
"Therefore, we propose a filter to select the most useful sentences from the given dataset (train dataset in this paper) when given the label-unknown sentence S .",
"The filter algorithm is straightforward.",
"First, We compute the distance of every two sentences.",
"Then, we sort all the sentences in train dataset according to their distances with the target sentence S , and select top m sentences { A j } mj =1 with the minimum distances and their label sequences { L j } mj =1 as our associated attention.",
"m is the memory size.",
"As for the computation of distance between two sentences, we formally consider three types of distances, which are edit distance (ED) , word moving distance (WMD) and smooth inverse frequency distance (SD) , plus random distance (RD) as baseline.",
"These distances are defined as follows, edit distance This method uses the edit distance of the POS tag sequences of two sentences as the distance value.",
"word moving distance Following (Kusner et al., 2015), this method takes word moving distance of two sentences 2 .",
"smooth inverse frequency distance Following (Arora et al., 2017), we use Euclidean distance between the SIF embedding of two sentences as the distance value.",
"random distance This method returns a random value for distance computation thus lead to selecting sentences randomly in the train dataset.",
"This part aims to attain the inter-sentence attention matrix, which can be also regarded as the core memory part of the AMN.",
"The input sentence S and associated sentences { A j } mj =1 first go through a stacked BiSLTM network LST M a to encode the sentence-level information to each word representation 3 : S (cid:48) = LST M a ( S ) A (cid:48) j = LST M a ( A j ) j { 1 , 2 , ..., m } where S (cid:48) = [ x (cid:48) i ] n S i =1 and A (cid:48) j = [ x (cid:48) j,k ] n j k =1 are the lists of new word representations, with each word representation is a vector x (cid:48) R d a , where d a is the size of hidden state in LST M a .",
"Then, for each associated sentence A (cid:48) j , we multiply it with the input sentence representation S (cid:48) to get the raw attention matrix M raw j .",
"Every element M rawj ( i, k ) = x (cid:48) i x (cid:48) Tj,k can be regarded as an indicator of similarity between the i th word in input sentence S (cid:48) and the k th word in associated sentence A (cid:48) j .",
"Finally, we perform softmax operation on every row in M rawj to normalize the value so that it can 2 In this paper, we use relaxed word moving distance (rwmd) for efficiency 3 Here we abuse the symbol S and A j for meaning both the word sequence [ w i ] and the embedded sequence [ x i ] Figure 1: Semantic role labeling with associated memory network, where S is the input sentence with its length n S .",
"be considered as probability from input sentence S to associated sentence A j .",
"where f ( ) stands for softmax function.",
"i,j can be regarded as probability vector indicating the similarity between the i th word in sentence S and every word in the associated sentence A (cid:48) j .",
"In order to utilize the labels { L j } mj =1 of the associated sentences during decoding, a label merging needs to be done.",
"We use randomly initialized argument embedding x ae R d ae to embed each argument label.",
"Therefore, the label sequence L j of associated sentence A j can be written as L j = [ x aej,k ] n j k =1",
".We treat the probability vector i,j as weight to sum all the elements in L j to get the associated-sentence-specified argument embedding a i,j , which represents the attention embedding of word w i S calculated from the j th associated sentence A j and label L j .",
"a i,j = i,j L Tj = (cid:80) n j k =1 i,j ( k ) x aej,k Because the associated sentences are different, the overall contributions of these argument embeddings should be different.",
"We let the model itself learn how to make use of these argument embeddings.",
"Following attention combination mechanism from (Libovick`y and Helcl, 2017), we consider four ways to merge the label information.",
"1) Concatenation All the associated argument embedding are concatenated as the final attention embeddings.",
"2) Average The average value of all the associated argument embeddings is used as the final attention embedding.",
"3) Weighted Average The weighted average of all the associated argument embedding is used as the final attention embedding.",
"We calculate the mean value of every raw similarity matrix M rawj to indicate the similarity between input sentence S and associated sentence A j , and we use the softmax function to normalize them to get a probability vector indicating the similarity of input sentence S towards all the associated sentences { A j } mj =1 .",
"where f ( ) stands for softmax function and g ( ) represents the mean function.",
"Then, we use the probability vector as weight to sum all the associated-sentence-specified attention embedding a i,j to get the final attention embedding a i of the i th word w i in input sentence S .",
"Then, we perform softmax operation on every row in M raw to normalize the value so that it can be considered as probability from input sentence S to all associated sentences A j .",
"where f ( ) stands for softmax operation.",
"n all = (cid:80) mj =1 n j is the total length of all m associated sentences.",
"We also concatenate the associated label information, and use i as weight to sum the concatenated label sequence as final attention embedding.",
"After we have the final attention embedding a i , we concatenate it with word embedding x i as the input of the BiLSTM encoder LST M e .",
"We conduct experiments on CoNLL-2009 (Hajic et al., 2009) English and Chinese dataset.",
"We use the standard training, development and test data split provided by CoNLL-2009 shared task.",
"The word lemma, word POS are the predicted ones given in CoNLL-2009 dataset.",
"Adam optimizer (Kingma and Ba, 2014) is used for training to minimize the categorical cross entropy loss.",
"All the hyper-parameters we use are listed in Table 1.",
"All parameters are learned during training, and are randomly initialized except the pre-trained GloVe (Pennington et al., 2014) word embeddings.",
"For English, We independently determine the best distance calculating method and the best merging method one after another.",
"First, we select a distance according to the results on development set and then we determine the merging method with the selected distance method.",
"At last we explore the impact of memory size.",
"For Chinese, System (syntax-aware single) P R F 1 (Zhao et al., 2009a) -86.2 (Zhao et al., 2009c) -85.4 (FitzGerald et al., 2015) -86.7 (Roth and Lapata, 2016) 88.1 85.3 86.7 (Marcheggiani and Titov, 2017) 89.1 86.8 88.0 (He et al., 2018b) 89.7 89.3 89.5 (Li et al., 2018) 90.3 89.3 89.8 System (syntax-aware ensemble) P R F 1 (FitzGerald et al., 2015) -87.7 (Roth and Lapata, 2016) 90.3 85.7 87.9 (Marcheggiani and Titov, 2017) 90.5 87.7 89.1 System (syntax-agnostic single) P R F 1 (Marcheggiani et al., 2017) 88.7 86.8 87.7 (He et al., 2018b) 89.5 87.9 88.7 (Cai et al., 2018) 89.9 89.2 89.6 (Li et al., 2018) 89.5 87.9 88.7 Ours ( + AMN + ELMo) 90.0 89.2 89.6 Table 2: Results on CoNLL-2009 English in-domain (WSJ) test set.",
"we obtain the result with similar parameters as for the best model in English.",
"The English and Chinese GloVe word embeddings are both trained on Wikipedia.",
"The pretrained English ELMo model is from (Peters et al., 2018), and the Chinese one is from (Che et al., 2018), which is hosted at (Fares et al., 2017).",
"The model is trained for maximum 20 epochs for the nearly best model based on development set results.",
"We re-run our model using different initialized parameters for 4 times and report the average performance 4 .",
"For the predicate disambiguation, we use the same one from (He et al., 2018b) with the precisions",
"of 95.01% and 95.58% on development and test sets.",
"We compare our full model (using edit distance and average method ) with the reported state-of-the-art models on both English and Chinese dataset.",
"The results are in Tables 2, 3 and",
"4. For English in-domain test, our model outperforms the syntax-agnostic model in (He et al., 2018b), whose architecture is quite similar to our base model.",
"Our model achieves 89.6% in F 1 score, which is the same with current SOTA syntax-agnostic model (Cai et al., 2018).",
"Besides, our result is competitive with existing syntax-aware and better than ensemble models.",
"The advantage is more salient on English out-of-domain test set.",
"The F 1 score of our model is 79.7%, which is 0.7% higher than the current SOTA syntax-agnostic model (Cai et al., 2018).",
"The result is also competitive with the best syntax-aware model (Li et al., 2018).",
"The comparisons show that the proposed model has a greater generalization ability.",
"For Chinese, starting with the similar parameters as for the best model in English, we find that attending 5 associated sentences shows a better result on Chinese.",
"Our model achieves 83.8% F 1 score, outperforming (He et al., 2018b) with an improvement of 2.0% in F 1 score.",
"Our result is also competitive with that of (Cai et al., 2018).",
"Note that our method is not conflict with the one in (Cai et al., 2018), which leverages biaffine attention (Dozat and Manning, 2017) for decoding.",
"However, due to experiment cycle, we are not able to combine these two methods together.",
"We will leave the combination as future work.",
"In the following part, we conduct several ablation studies on our model.",
"All the experiments are re-run 2-4 times and the average values are re-System P R F 1 WMD (Kusner et al., 2015) 89.1 87.1 88.1 SD (Arora et al., 2017) 88.5 87.5 88.0 RD 89.1 87.2 88.1 Base Model 88.7 86.9 87.8 ED 89.0 87.5 88.3 Table 5: Ablations about distance on CoNLL-2009 English development set.",
"Table 5 shows the performance of different distance calculating methods.",
"All models use average method for label merging, and the memory size m is set to",
"4. It can be observed from Table 5 that edit distance performs best among all the distance calculating methods, with 88.3% F 1 score.",
"All the distance calculating methods have surpassed the base model, showing that the proposed AMN is effective.",
"Note that even the random distance model performs better than the base model, with an improvement of 0.3% in F 1 score, which shows that the proposed AMN can effectively extract useful information from even poorly related sentences.",
"Besides, associated sentence selection methods based on word embeddings like WMD and SD have similar performance with random distance (RD), which shows simple word embedding may not be good enough signal indicator to measure semantic structure similarity in SRL task.",
"On the contrary, we may also try to explain why even the random distance selection may work to some extent.",
"As sentences always have core arguments label such as A0, A1 and A2, associated sentences even from random selection may also have such labels, which makes them helpful to enhance SRL over these labels.",
"This may explain why our model with randomly selected associated sentences can distinguish core arguments better.",
"Table 6 shows the performance of different label merging methods.",
"All models use edit distance with 4 associated sentences.",
"The result shows that Average label merging strategy gives the best performance, achieving 88.3% in F 1 score with an improvement of 0.5% compared to the baseline model.",
"Note that our weighted average model does not outperform the average model, which is a surprise to us.",
"We speculate that the current weight calculation method needs to be more improved to fit the concerned task.",
"Table 7 compares the performance contribution from ELMo and AMN.",
"Our model can achieve better performance only using informative clue from training set in terms of AMN design, rather than focusing on external resource like ELMo.",
"However, even though our baseline SRL has been enhanced by ELMo, it can still receive extra performance improvement from the propose AMN.",
"Note that our enhancement from the proposed AMN keeps effective when ELMo is included (a 0.5% enhancement on baseline over the 0.3% enhancement on ELMo baseline) 3.5 Ablation on Memory Size We show the effect of different memory size in Figure",
"3. Note that more associated sentences means more cost on time and space.",
"We test memory size m from 2 to 6 (which reaches the limit under experiment setting in 11G GPU).",
"We also fit the measured points with a linear function (the blue line in Figure 3).",
"The performance of our model has a general trend of increasing when the System (syntax-aware) P R F 1 (He et al., 2018b) 86.8 85.8 86.3 (He et al., 2018b) + ELMo 87.7 87.0 87.3 (Li et al., 2018) 87.7 86.7 87.2 (Li et al., 2018) + ELMo 89.2 87.6 88.4 Ours (syntax-agnostic) P R F 1 Base 86.9 85.0 86.0 Base + AMN 86.9 85.6 86.3 Base + ELMo 88.7 86.9 87.8 Ours + AMN + ELMo 89.0 87.5 88.3 Table 7: AMN vs. ELMo, the performance comparison on English development set.",
"memory size becomes larger, which shows the potential of the proposed AMN.",
"To further understand the advance of the proposed method, we conduct an error type break down analysis.",
"Figures 4 and 5 show the confusion matrices of labeling errors in the baseline model and our model on development set, respectively.",
"We only show the main and most informative type of arguments.",
"Every number in these figures stands for the times of occurrence.",
"Comparing these two confusion matrixes shows that the proposed model makes fewer mistakes between core arguments such as A 0 , A 1 , and A 2 .",
"AMN indeed helps when labeling them.",
"It is also noted that, as in (He et al., 2017; Tan et al., 2018), the model still easily confuses ARG2 with AM-DIR, AM-LOC and AM-MNR.",
"We compare the performance concerning with the distance of argument and predicate on our best model and base model in Figure 2, from which we can observe that our model performs better nearly at any distance.",
"To explore how the AMN works in the model, we visualize the similarity matrix M of some sentences from development set in Figure 6.",
"The input sentence is it A 1 should AM MOD run v forever AM TMP .",
"The current predicates are run , happen respectively.",
"The visualization shows that inter-sentence attention can find and align the word in the similar context correctly, which shows that the proposed AMN is reasonable and effective.",
"Early attempts (Pradhan et al., 2005; Zhao et al., 2009a,b, 2013; Roth and Woodsend, 2014) to the SRL task were mainly linear classifiers.",
"The main focus was how to find proper feature templates that can best describe the sentences.",
"(Pradhan et al., 2005) utilized a SVM classifier with rich syntactic features.",
"(Toutanova et al., 2008) took the structural constraint into consideration by using a global reranker.",
"(Zhao et al., 2009c) adopted a maximum entropy model with large scale feature template selection.",
"(Roth and Woodsend, 2014) explored the distributional word representations as new feature to gain more powerful models.",
"Recently, a great attention has been paid on neural networks.",
"(Zhou and Xu, 2015) proposed an end-to-end model using stacked BiLSTM network combined with CRF decoder without any syntactic input.",
"(Marcheggiani et al., 2017) explored the predicate-specified encoding and decoding and also provided a syntax-agnostic LSTM model.",
"(He et al., 2017) followed (Zhou and Xu, 2015) and analyzed all popular methods for initialization and regularization in LSTM network.",
"borrows power from the memory, the proposed inter-sentence attention in our AMN shares features with memory networks, which was proposed in (Weston et al., 2014) with motivation that memory may reduce the long-term forgetting issues.",
"(Sukhbaatar et al., 2015) and (Miller et al., 2016) later further improved this work.",
"However, we use quite different mechanisms to store the memory, and the effectiveness of our model needs a carefully designed attention mechanism to handle the sequence-level information distilling.",
"Attention mechanism was first used by (Bah-danau et al., 2014) in machine translation.",
"Recently, (Tan et al., 2018) and (Strubell et al., 2018) proposed to use self-attention mechanism in SRL task.",
"(Cai et al., 2018) leveraged the biaffine attention (Dozat and Manning, 2017) for better decoding performance.",
"Different from all the existing work, we instead introduce an inter-sentence attention to further enhance the current state-of-the-art SRL.",
"This paper presents a new alternative improvement on strong SRL baselines.",
"We leverage memory network which seeks power from known data, the associated sentences, and thus is called associated memory network (AMN).",
"The performance of our model on CoNLL-2009 benchmarks shows that the proposed AMN is effective on SRL task.",
"As to our best knowledge, this is the first attempt to use memory network in SRL task.",
"There is still a large space to explore along this research line.",
"For example, our weighted average method may need more carefully improved.",
"Our model can be built over the biaffine attention which has been verified effective in (Cai et al., 2018) 5 , and the encoder in our model can be improved with more advanced forms such as Transformer (Vaswani et al., 2017).",
"At last, as this work is done on a basis of quite limited computational resources, only one piece of nVidia 1080Ti (11G graphic memory), much plentiful available computational resource will greatly enable us to explore more big model setting (i.e., larger memory size m ) for more hopefully better performance improvement.",
"5 As this paper is submitting, we get to know the work (Li et al., 2019), which has taken both strengths of biaffine and ELMo.",
"We leave the verification of our proposed method over this new strong baseline in the future."
]
| [
"abstain",
"objective",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"objective",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"result",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"objective",
"objective",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"objective",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"result",
"objective",
"method",
"objective"
]
|
[
"Research in the area of style transfer for text is currently bottlenecked by a lack of standard evaluation practices.",
"This paper aims to alleviate this issue by experimentally identifying best practices with a Yelp sentiment dataset.",
"We specify three aspects of interest (style transfer intensity, content preservation, and naturalness) and show how to obtain more reliable measures of them from human evaluation than in previous work.",
"We propose a set of metrics for automated evaluation and demonstrate that they are more strongly correlated and in agreement with human judgment: direction-corrected Earth Mover's Distance, Word Mover's Distance on style-masked texts, and adversarial classification for the respective aspects.",
"We also show that the three examined models exhibit tradeoffs between aspects of interest, demonstrating the importance of evaluating style transfer models at specific points of their tradeoff plots.",
"We release software with our evaluation metrics to facilitate research.",
"Style transfer in text is the task of changing an attribute (style) of an input, while retaining non-attribute related content (referred to simply as content for brevity in this paper).",
"1 For instance, previous work has modified text to make it more positive (Shen et al., 2017), romantic (Li et al., 2018), or politically slanted (Prabhumoye et al., 2018).",
"Some style transfer models enable modifica-tions by manipulating latent representations of the text (Shen et al., 2017; Zhao et al., 2018; Fu et al., 2018), while others identify and replace style-related words directly (Li et al., 2018).",
"Regardless of approach, they are hard to compare as there is 1 This definition of style transfer makes a simplifying as-sumption that style words can be distinguished from content words, or words carrying relatively less or no stylistic weight, such as caf ` e in What a nice caf ` e.",
"The definition is motivated by penalizing unnecessary changes to content words, e.g. What a nice caf ` e to This is an awful caf ` e. currently neither a standard set of evaluation practices, nor a clear definition of which exact aspects to evaluate.",
"In Section 2, we define three key aspects to consider.",
"In Section 3, we summarize issues with previously used metrics.",
"Many rely on human ratings, which can be expensive and time-consuming to obtain.",
"To address these issues, in Section 4, we consider how to obtain more reliable measures of human judgment for aspects of interest, and automated methods more strongly correlated with human judgment than previously used methods.",
"Lastly, in Section 5, we show that the three examined models exhibit aspect tradeoffs, highlighting the importance of evaluating style transfer models at specific points of their tradeoff plots.",
"We release software with our evaluation metrics at https://github.com/passeul/ style-transfer-model-evaluation .",
"We consider three aspects of interest on which to evaluate output text x (cid:48) of a style transfer model, potentially with respect to input text x :",
"1. style transfer intensity ST I ( SC ( x ) , SC ( x (cid:48) )) quantifies the difference in style, where SC ( ) maps an input to a style distribution 2. content preservation CP ( x, x (cid:48) ) quantifies the similarity in content between the input and the output 3. naturalness NT ( x (cid:48) ) quantifies the degree to which the output appears as if it could have been written by humans",
"Style transfer models should be compared across all three aspects to properly characterize differences.",
"For instance, if a model transfers from negative to positive sentiment, but alters content such as place names, it preserves content poorly.",
"If it preserves content well, but sequentially repeats words such as the, the output is unnatural.",
"Conversely, a model that overemphasizes text reconstruction would yield high content preservation and possibly high naturalness, but little to no style transfer.",
"All three aspects are thus critical to analyze in a system of style transfer evaluation.",
"We review previously used approaches for evaluating the outputs of style transfer models.",
"Due to the high costs related to obtaining human evaluations, we focus on three models: the cross-aligned autoencoder (CAAE), adversarially regularized autoencoder (ARAE), and delete-and-retrieve (DAR) models (Shen et al., 2017; Zhao et al., 2018; Li et al., 2018).",
"Table 1 illustrates the spread of evaluation practices in these papers using our notation from Section 2, showing that they all rely on a different combination of human and automated evaluation.",
"For human evaluation, the papers use different instruction sets and scales, making it difficult to compare scores.",
"Below we describe the automated metrics used for each aspect.",
"Some rely on training external models on the corpus of input texts, X , and/or the corpus of output texts, X (cid:48) .",
"We encourage readers seeking details on how to compute the metrics to reference the algorithms in the original papers.",
"Style Transfer Previous work has trained classifiers on X and corresponding style labels, and measured the number of outputs classified as having a target style (Shen et al., 2017; Zhao et al., 2018; Li et al., 2018).",
"Results from this target style scoring approach may not be directly comparable across papers due to different classifiers used in evaluations.",
"Content Preservation To evaluate content preservation between x and x (cid:48) , previous work has used BLEU (Zhao et al., 2018; Li et al., 2018), an n-gram based metric originally designed to evaluate machine translation models (Papineni et al., 2002).",
"BLEU does not take into account the aim of style transfer models, which is to alter style by necessarily changing words.",
"Intended differences between x and x (cid:48) are thus penalized.",
"Naturalness Past evaluations of naturalness have relied largely on human ratings on a variety of scales under different names: grammaticality, fluency/readability, and naturalness itself (Ta-ble 1).",
"An issue with measuring grammaticality is that text with proper syntax can still be semantically nonsensical, e.g. Colorless green ideas sleep furiously (Chomsky, 1957).",
"Furthermore, input texts may not demonstrate perfect grammaticality or readability, despite being written by humans and thus being natural by definition (Sec-tion 2).",
"This undermines the effectiveness of measures for such specific qualities of output texts.",
"Zhao et al. (2018) used perplexity to evaluate fluency, which, like grammaticality, we consider a subset of naturalness itself.",
"Low perplexity signi-fies less uncertainty over which words can be used to continue a sequence, quantifying the ability of a language model to predict gold or reference texts (Brown et al., 1992; Young et al., 2006).",
"However, style transfer outputs are not necessarily gold standard, and the correlation between perplexity and human judgments of those outputs is unknown in the style transfer setting.",
"We describe how to construct a style lexicon for use in human and automated evaluations.",
"We also describe best practices that we recommend for obtaining scores of those evaluations, as well as how they can be used for evaluating other datasets.",
"Please refer to Section 5 for experimental results.",
"Because the process of style transfer may result in the substitution or removal of more stylistically weighted words, it is ideal to have a lexicon of style-related words to reference.",
"Words in x and/or x (cid:48) that also appear in the lexicon can be ignored in evaluations of content preservation.",
"While building a new style lexicon or an extension of existing ones like WordNet-Affect (Strap-parava and Valitutti, 2004) may be feasible with binary sentiment as the style, it may not be scalable to manually do so for various other types of styles.",
"Static lexica also might not take context into account.",
"This is an issue for text with words or phrases that are ambiguous in terms of stylistic weight, e.g. dog in That is a man with a dog vs. That man is a dog.",
"It is more appropriate to automate the construction of a style lexicon per dataset of interest.",
"While multiple options may exist for doing so, we emphasize the simplicity and replicability of training a logistic regression classifier on X and corresponding style labels.",
"We populate the lexicon with features having the highest absolute weights, as those have the most impact on the outcome of the style labels.",
"(Table 2 shows sample words in the lexicon constructed for the dataset used in our experiments.)",
"While sentiment datasets have been widely used in the literature (Shen et al., 2017; Zhao et al., 2018; Li et al., 2018), a lexicon can be constructed for other datasets in the same manner, as long as the dataset has style labels.",
"Given existing NLP techniques, it may not be possible to correctly identify all style-related words in a text.",
"Consequently, there is a tradeoff between identifying more style-related words and incorrectly marking some other (content) words as style-related.",
"We opt for higher precision and lower recall to minimize the risk of removing content words, which are essential to evaluations of content preservation.",
"This issue is not critical because researchers can compare their style transfer methods using our lexicon.",
"As seen in Table 1, past evaluations of both style transfer and naturalness consider only output text x (cid:48) .",
"Existing work from other fields have, however, shown that asking human raters to evaluate two relative comparisons provides more accurate scores than asking them to provide a numerical score for a single observation (Stewart et al., 2005; Bijmolt and Wedel, 1995).",
"With this knowledge, we construct more reliable ways of obtaining human evaluations via relative scoring instead of absolute scoring .",
"Style Transfer Intensity Past evaluations have raters mark the degree to which x (cid:48) exhibits a target style (Li et al., 2018).",
"We instead ask raters to score the difference in style between x and x (cid:48) , on a scale of 1 (identical styles) to 5 (completely different styles).",
"This approach can also used for non-binary cases.",
"Consider text modeled as a distribution over multiple emotions (e.g. happy, sad, scared, etc.), where each emotion can be thought of as a style.",
"One task could be to make a scared text more happy.",
"Presented with x and x (cid:48) , raters would still rate the degree to which they differ in style.",
"Content Preservation We consider the diffi-culty of asking raters to ignore style-related words as done in (Shen et al., 2017).",
"Because not all raters may identify the same words as stylistic, their evaluations may vary substantially from one another.",
"To account for this, we ask raters to evaluate content preservation on the same texts, but where we have masked style words using our style lexicon.",
"Under this new masking approach, raters have a simpler task, as they are no longer responsible for taking style into account when they rate the similarity of two texts on a scale of 1 to 5.",
"Naturalness We ask raters to determine whether x or x (cid:48) (they are not told which is which) is more natural.",
"An x (cid:48) marked as more natural indicates some success on the part of the style transfer model, as it is able to fool the rater.",
"This is in contrast to previous work, where raters score the naturalness of x (cid:48) on a continuous scale without taking x into account at all, even though x serves as the basis for comparison of what is considered natural.",
"Style Transfer Intensity Rather than count how many output texts achieve a target style, we can capture more nuanced differences between the style distributions of x and x (cid:48) , using Earth Mover's Distance (Rubner et al., 1998; Pele and Werman, 2009).",
"EMD ( SC ( x ) , SC ( x (cid:48) )) is the minimum cost to turn one distribution into the other, or how intense the transfer is.",
"Distributions can have any number of values (styles), so EMD handles binary and non-binary datasets.",
"Note that even if argmax ( SC ( x (cid:48) )) is not the target style class, EMD still acknowledges movement towards the target style with respect to SC ( x ) .",
"However, we penalize (negate) the score if SC ( x (cid:48) ) displays a relative change of style in the wrong direction, away from the target style.",
"Depending on x , not a lot of rewriting may be necessary to achieve a different style.",
"This is not an issue, as ST I relies on a style classifier to quantify not the difference between the content of x and x (cid:48) , but their style distributions.",
"For the style classifier, we experiment with textcnn (Kim, 2014; Lee, 2018) and fastText (Joulin et al., 2017).",
"Content Preservation We first subject texts to different settings of modification: style removal and style masking.",
"This is to address undesired penalization of metrics on texts expected to demonstrate changes after style transfer (Section 3).",
"For style removal, we remove style words from x and x (cid:48) using the style lexicon.",
"For masking, we replace those words with a (cid:104) customstyle (cid:105) placeholder.",
"Table 3 exemplifies these modifications.",
"For measuring the degree of content preservation, in addition to the widely used BLEU, we consider METEOR and embedding-based metrics.",
"METEOR is an n-gram based metric like BLEU, but handles sentence-level scoring more robustly, allowing it to be both a sentence-level and corpus-level metric (Banerjee and Lavie, 2005).",
"For the embedding-based metrics, word embeddings can be obtained with methods like Word2Vec (Mikolov et al., 2013) or GloVe (Pen-nington et al., 2014).",
"Sentence-level embeddings can be comprised of the most extreme values of word embeddings per dimension ( vector extrema ) (Forgues et al., 2014), or word embedding averages (Sharma et al., 2017).",
"Word Mover's Distance (WMD), based on EMD , calculates the minimum distance between word embeddings of x and of x (cid:48) , where smaller distances signify higher similarity (Kusner et al., 2015).",
"Greedy matching greedily matches words in x and x (cid:48) based on their embeddings, calculates their similarity (e.g. cosine similarity), and averages all the similarities.",
"It repeats the process in the reverse direction and takes the average of those two scores (Rus and Lintean, 2012).",
"We evaluate with all these metrics to identify the one most strongly correlated with human judgment of content preservation.",
"Naturalness For a baseline understanding of what is considered natural, any method used for automated evaluation of naturalness requires the human-sourced input texts.",
"We train unigram and neural logistic regression classifiers (Bowman et al., 2016) on samples of X and X (cid:48) for each transfer model.",
"Via adversarial evaluation, these classifiers must distinguish human-generated inputs from machine-generated outputs.",
"The more natural an output is, the likelier it is to fool a classifier (Jurafsky and Martin, 2018).",
"We calculate agreement between each type of human evaluation (Section 4.2) and each classifier AC .",
"Agreement is the ratio of instances where humans and AC rate a text as more natural than the other.",
"We also train LSTM language models (Hochre-iter and Schmidhuber, 1997) on X and compute sentence-level perplexities for each text in X (cid:48) in order to determine the relative effectiveness of adversarial classification as a metric.",
"Due to high costs of human evaluation, we focus on CAAE, ARAE, and DAR models with transfer tasks based on samples from the Yelp binary sentiment dataset (Shen et al., 2017).",
"2 Below we detail 2 Like most literature, including the papers on CAAE, ARAE and DAR, we focus on the binary case.",
"Creating a high-quality, multi-label style transfer dataset for evaluation is a demanding task, which is out of scope for this paper.",
"the range of parameters each model is trained on in order to compare evaluation practices and generate aspect tradeoff plots.",
"Each of three Amazon Turk raters evaluated 244 texts per aspect, per model.",
"Of those texts, half are originally of positive sentiment transferred to negative, and vice versa.",
"For brevity, we reference average scores (cor-relation, kappa, and agreement, each of which is described below) from across all models in our analysis of results.",
"For detailed scores per model, please refer to the corresponding tables.",
"For each style transfer model, we choose a wide range of training parameters to allow for variation of content preservation, and indirectly, of style transfer intensity, in X .",
"We show sample outputs from the models for a given input text in Table 4.",
"CAAE uses autoencoders (Vincent et al., 2008) that are cross-aligned, assuming that texts already share a latent content distribution (Shen et al., 2017).",
"It uses latent states of the RNN and multiple discriminators to align distributions of texts in X (cid:48) exhibiting one style with distributions of texts in X exhibiting another.",
"Adversarial components help separate style information from the latent space where inputs are represented.",
"We train CAAE on various values (0.01, 0.1, 0.5, 1,",
"5) of , a weight on the adversarial loss.",
"CAAE is a baseline for other style transfer models, such as ARAE, which trains a separate decoder per style class (Zhao et al., 2018).",
"We train ARAE on various values (1, 5, 10) of , which is also a weight on adversarial loss.",
"The third model that we evaluate, which also uses CAAE as a baseline, avoids adversarial methods in an approach called Delete-and-Retrieve (DAR) (Li et al., 2018).",
"It identifies and removes style words from texts, searches for related words pertaining to a new target style, and combines the de-stylized text with the search results using a neural model.",
"We train DAR on = 15 , where is a threshold parameter for the maximum number of style words that can be removed from texts, with Model Text Modification Setting Unmasked Style Masked CAAE 0.158 0.289 ARAE 0.201 0.321 DAR 0.161 0.281 Average 0.173 0.297 Table 5: Fleiss' kappas for human judgments of content preservation of unmasked and style-masked texts.",
"respect to the size of the corpus vocabulary.",
"For this single training value, we experiment with a range of values (0.1, 1, 15, 500) during test time because, by design, the model does not need to be retrained (Li et al., 2018).",
"We use Fleiss' kappa of inter-rater reliability (see formula in L. Fleiss and Cohen, 1973) to identify the more effective human scoring task for different aspects of interest.",
"The kappa metric is often levied in a relative fashion, as there are no universally accepted thresholds for agreements that are slight, fair, moderate, etc.",
"For comprehensive experimentation, we compare kappas over the outputs of each style transfer model.",
"The kappa score for ratings of content preservation based on style-masked texts is 0 .",
"297 .",
"Given the kappa score of 0 .",
"173 for unmasked texts, style masking is a more reliable approach towards human evaluation for content preservation (Table 5).",
"For style transfer intensity, kappas for relative scoring do not show improvement over the previously used approach of absolute scoring of x (cid:48) .",
"However, we observe the opposite for the aspect of naturalness.",
"Kappas for relative naturalness scoring tasks exceed those of the absolute scoring ones (Table 6).",
"Despite the two types of tasks having Model fastText textcnn Target Style Scores Earth Mover's Distance Target Style Scores Earth Mover's Distance CAAE 0.566 0.038 0.573 0.038 0.587 0.037 0.589 0.037 ARAE 0.513 0.053 0.516 0.053 0.515 0.053 0.519 0.053 DAR 0.470 0.049 0.539 0.045 0.508 0.047 0.566 0.043 Average 0.516 0.047 0.543 0.045 0.537 0.046 0.558 0.044 Table 7: Correlations of automated style transfer intensity metrics with human scores.",
"Model BLEU METEOR Embed Average Greedy Match Vector Extrema WMD CAAE 0.458 0.044 0.498 0.042 0.370 0.048 0.489 0.043 0.496 0.042 0.496 0.042 ARAE 0.337 0.064 0.387 0.062 0.313 0.065 0.419 0.060 0.423 0.060 0.445 0.058 DAR 0.440 0.051 0.455 0.050 0.379 0.054 0.472 0.049 0.472 0.049 0.484 0.048 Average 0.412 0.053 0.447 0.051 0.354 0.056 0.460 0.051 0.464 0.050 0.475 0.049 Table 8: Absolute correlations of content preservation metrics with human scores on texts with style removal.",
"different numbers of categories (2 vs 5), we can compare them by using a threshold to bin the absolute score for each text into a natural group ( x (cid:48) is considered to be more natural than x ) or un-natural one (vice versa), like in relative scoring.",
"For example, = 2 places texts with absolute scores greater than or equal to 2 into the natural group.",
"Judgments for relative tasks yield greater inter-rater reliability than those of absolute tasks across multiple thresholds ( { 2 , 3 } ).",
"This suggests that the relative scoring paradigm is preferable in human evaluations of naturalness.",
"Per aspect of interest, we compute Pearson correlations between scores from the existing metric and human judgments.",
"(As there were three raters for any given scoring task, we take the average of their scores.)",
"We do the same for our proposed metrics to identify which metric is more reliable for automated evaluation of a given aspect.",
"For style transfer intensity, across both the fastText and textcnn classifiers, our proposed direction-corrected Earth Mover's Distance metric has higher correlation with human scores than the past approach of target style scoring (Table 7).",
"than BLEU for machine translation (Banerjee and Lavie, 2005), shows the same relationship for style transfer.",
"However, across various text modification settings, WMD generally shows the strongest correlation with human scores (Tables 8 and 9).",
"Because WMD is lower when texts are more similar, it is anti-correlated with human scores.",
"We take absolute correlations to facilitate comparison with other content preservation metrics.",
"With respect to text modification, style masking may be more suitable as it, on average for WMD, exhibits a higher correlation with human judgments.",
"For naturalness, both unigram and neural classifiers exhibit greater agreement on which texts are considered more natural with the humans given relative scoring tasks than with those given absolute scoring tasks (Table 10), although the neural classifier achieves higher agreements on average.",
"We also confirm that sentence-level perplexity is 0 1 1 0 1 1 0 1 1",
"not an appropriate metric.",
"It exhibits no significant correlation with human scores ( = 0 . 05 ).",
"These results suggest that adversarial classifiers can be useful for automating measurement of naturalness.",
"Previous work has compared models with respect to a single aspect of interest at a time, but has only, to a limited degree, considered how relationships between multiple aspects influence these comparisons.",
"In particular, concurrent work by (Li et al., 2018) examines tradeoff plots, but focuses primarily on variants of its own model, while including only a single point on the plots of style transfer models from other papers.",
"For a comprehensive comparison, it is ideal to have plots for all models.",
"It is helpful to first understand the tradeoff space.",
"For example, we define extreme cases for style transfer intensity and content preservation, where we assume measurement of the latter ignores stylistic content.",
"Consider two classes of suboptimal models.",
"One class produces outputs with a wide range of style transfer intensity, but poor content preservation (Figure 1a).",
"The other class of models produces outputs with low style transfer intensity, but a wide range of content preservation (Figure 1b).",
"This is in contrast to a model that yields a wide range of style transfer intensity and consistently high content preservation (Figure 1c).",
"If we take that to be an ideal model for a sentiment dataset, we can interpret models with better performance to be the ones whose tradeoff plots are closer to that of the ideal model and farther from those of the suboptimal ones.",
"The plot for an ideal model will likely vary by dataset, especially because the tradeoff between content preservation and style transfer intensity depends on the level of distinction between style words and content words of the dataset.",
"With this interpretation of the tradeoff space, we construct a plot for each style transfer model (Figure 2), where each point represents a different hyperparameter setting for training (Section 5.1).",
"We collect scores based on the automated metrics most strongly correlated with human judgment: direction-corrected EMD for style transfer intensity, WMD for content preservation, and percent of output texts marked by an adversarial classifier as more natural than input texts.",
"Because WMD scores are lower when texts are more similar, we instead take the normalized inverses of the scores to represent the degree of content preservation.",
"Across all models, there is a trend of reduction in content preservation and naturalness as style transfer intensity increases.",
"Without the plots, one might conclude that ARAE and DAR perform substantially differently, especially if hyperparameters are chosen such that ARAE achieves the leftmost point on its plot and DAR achieves the rightmost point on its plot.",
"With the plots, at least for the set of hyperparameters considered, it is evident that they perform comparably (Figure 2a) and do not exhibit the same level of decrease in naturalness as CAAE (Figure 2b).",
"Previous work on style transfer models used a variety of evaluation methods (Table 1), making it difficult to meaningfully compare results across papers.",
"Moreover, it is not clear from existing research how exactly to define particular aspects of interest, or which methods (whether human or automated) are most suitable for evaluating and comparing different style transfer models.",
"To address these issues, we specified key aspects of interest (style transfer intensity, content preservation, and naturalness) and showed how to obtain more reliable measures of them from human evaluation than in previous work.",
"Our proposed automated metrics (direction-corrected EMD, WMD on style-masked texts, and adversarial classification) exhibited stronger correlations with human scores than existing automated metrics on a binary sentiment dataset.",
"While human evaluation may still be useful in future research, automation facilitates evaluation when it is infeasible to collect human scores due to prohibitive cost or limited time.",
"For style transfer intensity, the relative scoring task (rating the degree of stylistic difference between x and x (cid:48) ) did not have greater rater reliability than the previously used task of rating output texts on an absolute scale.",
"This is likely due to task complexity or rater uncertainty, which motivates the need for further exploration of task design for this particular aspect of interest.",
"For content preservation, our form of human evaluation operates on texts whose style words are masked out, unlike the previous approach (no masking).",
"Our approach addresses the unintentional variable of rater-dependent style identification that could lead to noisy, less reliable ratings.",
"Identification and masking of words was made possible with a style lexicon.",
"constructed the lexicon in a way that can be done for any style dataset, as long as style labels are available (Section 4.1).",
"We acknowledge a tradeoff between filling the lexicon with more style words and being conservative in order to avoid capturing content words.",
"We justify taking a more conservative approach as content words are naturally critical to evaluations of content preservation.",
"For naturalness, we introduced a paradigm of relative scoring that uses both the output and input texts.",
"This achieved a higher inter-rater reliability than did absolute scoring, the previous approach.",
"For style transfer intensity, we proposed using a metric with EMD as the basis to acknowledge the spectrum of styles that can appear in outputs and to handle both binary and non-binary datasets.",
"The metric also accounts for direction by penalizing scores in the cases where the style distribution of the output text explicitly moves away from the target style.",
"Previous work used external classifiers, whose style distributions for x and x (cid:48) can be used to calculate direction-corrected EMD, making it a simple addition to the evaluation workflow.",
"For content preservation, WMD (based on EMD) works in a similar fashion, but with word embeddings of x and of x (cid:48) .",
"BLEU, used widely in previous work, may yield weaker correlations with human judgment in comparison as it was designed to have multiple reference texts per candidate text (Papineni et al., 2002).",
"Several reference texts, which are more common in machine translation tasks, increase the chance of n -gram (such as n 3 ) overlap with the candidate.",
"In the style transfer setting, however, the only reference text for x (cid:48) is x .",
"Having a single reference text reduces the likelihood of overlap and the overall effectiveness of BLEU.",
"For naturalness, strong agreement of adversarial classifiers with relative scores assigned by humans suggest that classifiers are suitable for automated evaluation.",
"One might assume input texts would almost always be rated as more natural by both humans and classifiers, biasing the agreement.",
"This is not the case, as we justify our rating scheme with evidence of outputs being rated as more natural across several models (Figure 2b).",
"Output texts classified as more natural indicate some success for a style transfer model, as it can produce texts with a quality like that of human-generated inputs, which are, by definition, natural.",
"Finally, with aspect tradeoff plots constructed using scores from the automated metrics, we can directly compare models with respect to multiple aspects simultaneously.",
"Points of intersection, or near intersection, for different models signify that they, at the hyperparameters that yielded those points, can achieve similar results for various aspects.",
"These parameters can be useful for understanding the impact of decisions made during model design and optimization phases.",
"As we confirmed, sentence-level perplexity of output x (cid:48) is not meaningful by itself for the automated evaluation of naturalness.",
"The idea of using both x and x (cid:48) , akin to how we train automated classifiers of naturalness (Section 4.3), can be extended to construct a perplexity-based metric that also takes into account the perplexity of input x .",
"Another avenue for future work could be evaluating on datasets with a different style or number of style classes.",
"It is worth studying the distinction between style words and content words in the vocabulary of each such dataset.",
"Given the definition of style transfer and its simplifying assump-tion in Section 1, it would be reasonable to expect naturally low content preservation scores for any given style transfer model operating on datasets with less distinction, such as those of formality.",
"This is not so much an issue as it is a dataset-specific trend that can be visualized in corresponding tradeoff plots, which would provide a holistic evaluation of model performance.",
"In any case, results from inter-rater reliability and correlation testing on these additional datasets would overall enable more consistent evaluation practices and further progress in style transfer research.",
"We would like to thank Juncen Li, Tianxiao Shen, and Junbo (Jake) Zhao for guidance in the use of their respective style transfer models.",
"These models serve as markers of major progress in the area of style transfer research, without which this work would not have been possible."
]
| [
"abstain",
"objective",
"result",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"result",
"result",
"other",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"method",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"result",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other"
]
|
[
"During the past few decades, knowledge bases (KBs) have experienced rapid growth.",
"Nevertheless, most KBs still suffer from serious incompletion.",
"Researchers proposed many tasks such as knowledge base completion and relation prediction to help build the representation of KBs.",
"However, there are some issues unsettled towards enriching the KBs.",
"Knowledge base completion and relation prediction assume that we know two elements of the fact triples and we are going to predict the missing one.",
"This assumption is too restricted in practice and prevents it from discovering new facts directly.",
"To address this issue, we propose a new task, namely, fact discovery from knowledge base.",
"This task only requires that we know the head entity and the goal is to discover facts associated with the head entity.",
"To tackle this new problem, we propose a novel framework that decomposes the discovery problem into several facet discovery components.",
"We also propose a novel auto-encoder based facet component to estimate some facets of the fact.",
"Besides, we propose a feedback learning component to share the information between each facet.",
"We evaluate our framework using a benchmark dataset and the experimental results show that our framework achieves promising results.",
"We also conduct extensive analysis of our framework in discovering different kinds of facts.",
"The source code of this paper can be obtained from https: //github.com/thunlp/FFD .",
"Recent years have witnessed the emergence and growth of many large-scale knowledge bases (KBs) such as Freebase (Bollacker et al., 2008), DBpedia (Lehmann et al., 2015), YAGO (Suchanek et al., 2007) and Wikidata (Vrandecic",
"and Krotzsch, 2014) to store facts of the real world.",
"Most KBs typically organize the complex structured information about facts in the form of triples ( head entity , relation , tail entity ), e.g., ( Bill Gates , CEOof , Microsoft Inc. ).",
"These KBs have been widely used in many AI and NLP tasks such as text analysis (Berant et al., 2013), question answering (Bordes et al., 2014a), and information retrieval (Hoffmann et al., 2011).",
"The construction of these KBs is always an ongoing process due to the endless growth of real-world facts.",
"Hence, many tasks such as knowledge base completion (KBC) and relation prediction (RP) are proposed to enrich KBs.",
"The KBC task usually assumes that one entity and the relation r are given, and another entity is missing and required to be predicted.",
"In general, we wish to predict the missing entity in ( h, r, ?) or (? , r, t ) , where h and t denote a head and tail entity respectively.",
"Similarly, the RP task predicts the missing relation given the head and tail entities and their evidence sentences, i.e. filling ( h, ? , t ) .",
"Nevertheless, the assumption of knowing two parts of the triple is too strong and is usually restricted in practice.",
"In many cases, we only know the entity of interest, and are required to predict both its attributive relations and the corresponding entities.",
"As shown in Figure 1, the task is to predict the fact triples when given only the head entity, i.e. filling ( h, ? , ?) .",
"Since any entity can serve as the head entity for identifying its possible fact triples, this task should be more practical for real-world settings.",
"This task is non-trivial since less information is provided for prediction.",
"We name the task as Fact Discovery from Knowledge Base (FDKB).",
"Some existing methods such as knowledge base representation (KBR) can be applied to tackle the FDKB task with simple modifications.",
"KBR models typically embed the semantics of both entities and relations into low-dimensional semantic space, i.e., embeddings.",
"For example, TransE (Bordes et al., 2013) learns low-dimensional and real-valued embeddings for both entities and relations by regarding the relation of each triple fact as a translation from its head entity to the tail entity.",
"TransE can thus compute the valid score for each triple by measuring how well the relation can play a translation between the head and tail entities.",
"Many methods have been proposed to extend TransE to deal with various characteristics of KBs (Ji et al., 2015, 2016; He et al., 2015; Lin et al., 2015a).",
"To solve the FDKB task using KBR, one feasible way is to exhaustively calculate the scores of all ( r, t ) combinations for the given head entity h .",
"Afterwards, the highly-scored facts are returned as results.",
"However, this idea has some drawbacks: (1) It takes all relations to calculate ranking scores for each head entity, ignoring the nature of the head entity.",
"The combination of all possible relations and tail entities will lead to huge amount of computations.",
"(2) A large set of candidate triples immerses the correct triples into a lot of noisy triples.",
"Although the probability of invalid facts getting a high score is small, with the large size of the candidate set, the total number of invalid facts with high score is non-negligible.",
"To address the above issues, we propose a new framework named as fact facet decomposition (FFD).",
"The framework follows human being's common practice to identify unknown facts: One typically firstly investigates which relation that a head may have, and then predicts the tail entity based on the predicted relation.",
"This procedure actually utilizes information from several perspectives.",
"Similarly, FFD decomposes fact discovery into several facets, i.e., head-relation facet, tail-relation facet, and tail inference facet, and model each facet respectively.",
"The candidate fact is considered to be correct when all of the facets are trustworthy.",
"We propose a novel auto-encoder based entity-relation component to discover the relatedness between entities and relations.",
"Besides, we also propose a feedback learning component to share the information between each facet.",
"We have conducted extensive experiments using a benchmark dataset to show that our framework achieves promising results.",
"We also conduct an extensive analysis of the framework in discovering different kinds of facts.",
"The contributions of this paper can be summarized as follows: (1) We introduce a new task of fact discovery from knowledge base, which is more practical.",
"(2) We propose a new framework based on the facet decomposition which achieves promising results.",
"In recent years, many tasks (Wang et al., 2017) have been proposed to help represent and enrich KBs.",
"Tasks such as knowledge base completion (KBC) (Bordes et al., 2013; Wang et al., 2014; Ji et al., 2015, 2016; Wang et al., 2017) and relation prediction (RP) (Mintz et al., 2009; Lin et al., 2015a; Xie et al., 2016) are widely studied and many models are proposed to improve the performance on these tasks.",
"However, the intention of these tasks is to test the performance of models in representing KBs and thus they cannot be used directly to discover new facts of KBs.",
"Moreover, our FDKB task is not a simple combination of the KBC and RP task since both of these two tasks require to know two of the triples while we assume we only know the head entity.",
"A common approach to solving these tasks is to build a knowledge base representation (KBR) model with different kinds of representations.",
"Typically, one element of the triples is unknown.",
"Then, all entities are iterated on the unknown element and the scores of all combinations of the triples are calculated and then sorted.",
"Many works focusing on KBR attempt to encode both entities and relations into a low-dimensional semantic space.",
"KBR models can be divided into two major categories, namely translation-based models and semantic matching models (Wang et al., 2017).",
"Translation-based models such as TransE (Bor-des et al., 2013) achieves promising performance in KBC with good computational effi-ciency.",
"TransE regards the relation in a triple as a translation between the embedding of head and tail entities.",
"It means that TransE enforces that the head entity vector plus the relation vector approximates the tail entity vector to obtain entity and relation embeddings.",
"However, TransE suffers from problems when dealing with 1-to-N, N-to-1 and N-to-N relations.",
"To address this issue, TransH (Wang et al., 2014) enables an entity to have distinct embeddings when involving in different relations.",
"TransR (Lin et al., 2015b) models entities in entity space and uses transform matrices to map entities into different relation spaces when involving different relations.",
"Then it performs translations in relation spaces.",
"In addition, many other KBR models have also been proposed to deal with various characteristics of KBs, such as TransD (Ji et al., 2015), KG2E (He et al., 2015), PTransE (Lin et al., 2015a), TranSparse (Ji et al., 2016).",
"Semantic matching models such as RESCAL (Nickel et al., 2011), DistMult(Yang et al., 2014), Complex (Trouillon et al., 2016), HolE (Nickel et al., 2016) and ANALOGY (Liu et al., 2017) model the score of triples by the semantic similarity.",
"RESCAL simply models the score as a bilinear projection of head and tail entities.",
"The bilinear projection is defined with a matrix for each relation.",
"However, the huge amount of parameters makes the model prone to overfitting.",
"To alleviate the issue of huge parameter space, DistMult is proposed to restrict the relation matrix to be diagonal.",
"However, DistMult cannot handle the asymmetric relations.",
"To tackle this problem, Complex is proposed assuming that the embeddings of entities and relations lie in the space of complex numbers.",
"This model can handle the asymmetric relations.",
"Later, Analogy is proposed by imposing restrictions on the matrix rather than building the matrix with vector.",
"It achieves the state-of-the-art performance.",
"Besides, (Bordes et al., 2011; Socher et al., 2013; Chen et al., 2013; Bor-des et al., 2014b; Dong et al., 2014; Liu et al., 2016) conduct the semantic matching with neural networks.",
"An energy function is used to jointly embed relations and entities.",
"We denote E as the set of all entities in KBs, R is the set containing all relations.",
"|E| and |R| stand for the size of each set respectively.",
"A fact is a triple ( h, r, t ) in which h, t E and r R .",
"T is the set of all true facts.",
"When a head entity set H is given, a new fact set is to be discovered based on these head entities.",
"The discovered fact set is denoted as T d = { ( h, r, t ) | h H} .",
"Our goal is to find a fact set T d that maximize the number of correct discovered facts: max T d |T d T | s.t. |T d | = K, (1) in which K is a user-specified size.",
"Problem (1) is intractable since the set T is unknown.",
"We tackle this problem by estimating a fact confidence score function c ( h, r, t ) for each fact in T d and maximize the total score.",
"The problem is then formulated as: max T d (cid:88) ( h,r,t ) T d c ( h, r, t ) s.t. |T d | = K. (2) To integrate the information from various facets of the fact, our framework, known as Fact Facet Decomposition (FFD) framework, decomposes the fact discovery problem into several facet-oriented detection tasks.",
"A fact is likely to be correct if all facets provide supportive evidence.",
"The facets are as follows:",
"1. Head-relation facet: A fact is likely true, if the head entity has a high probability of containing the relation.",
"This is denoted as f h ( r ) ; 2. Tail-relation facet: A fact is likely true, if the tail entity has a high probability of containing the relation.",
"This is denoted as f t ( r ) ; 3. Tail inference facet: A fact is likely true, if the score of the tail entity is high with respect to the given head and relation.",
"This is denoted as f h,r ( t ) .",
"Therefore, the facet confidence score can be expressed as: c ( h, r, t ) = 1 f h ( r ) + 2 f t ( r ) + 3 f h,r ( t ) , (3) where 1 , 2 , 3 are weight parameters.",
"The head-relation facet and the tail-relation facet can be both KBs Train Predict L ( y e , y e ) p ( r | e ) y e y e y 0 e y e x x e, r 1 , t 1 e, r 2 , t 2 e, r 3 , t 3 e, r 4 , t 4 e, r 5 , t 5 Figure 2: The structure of the entity-relation component.",
"modeled with an entity-relation facet component.",
"The tail inference facet can be modeled by a KBR component.",
"The entity-relation component estimates the probability of a relation given an entity.",
"The structure is shown in Figure",
"2. It is modeled as the log of the estimated conditional probability: f e ( r ) = log p ( r | e ) , (4) where e = h or t .",
"p ( r | e ) aims at measuring the probability of a relation that this entity may have.",
"In order to estimate this probability, the existing relations of a head or tail entity is used to infer other related relations.",
"For example, if a head entity has an existing fact in which the relation is BirthPlace, we may infer that this head entity may be a person and some relations such as Gen-der, Language may have a high probability of association with this head entity.",
"Therefore, the problem is transformed into a problem that estimates the relatedness between relations.",
"To infer the probability of each relation based on existing relations, we employ a denoising auto-encoder (Vincent et al., 2008) which can recover almost the same representation for partially destroyed inputs.",
"Firstly, facts related to an entity is extracted from the KBs.",
"Then, this entity is encoded by the existing relations.",
"Let y e R |R| be the 0-1 representation of relations that e has.",
"y ei indicates whether the entity e has the relation i or not.",
"During the training phase, non-zero elements in y e is randomly set to zero and the auto-encoder is trained to recover the corrupted elements.",
"The corrupted vector is denoted as y (cid:48) e .",
"Formally, our structure encoder first maps the corrupted one-hot vector y (cid:48) e to a hidden representation x R d 1 of the entity through a fully connected layer: x = tanh( W f y (cid:48) e + b f ) , (5) where W f R d 1 |R| is the translation matrix and b f R d 1 is the bias vector.",
"x is the vector representation of the entities in a hidden semantic space.",
"In this space, similar entities are close to each other while entities of different types are far from each other.",
"If some relations are missing, the fully connected layer will also map the entity into a nearby position.",
"Afterwards, x is used to recover the probability distribution for all relations through a fully connected layer and a sigmoid layer: y e = sigmoid ( W g x + b g ) , (6) where W g R |R| d 1 and b g R |R| is the weight matrix and bias vector of the reverse mapping respectively.",
"y e is the recovered probability distribution of each relation (therefore, the sum of each element in y e does not necessarily equal to 1).",
"This layer will map the entity representation in the semantic space into a probability vector over all relations.",
"Since similar entities are located in the adjacent area, they are likely to have a similar relation probability.",
"Therefore, the probability of missing relations will also be high though the relations are unknown.",
"We use the original one-hot representation of the relations and the recovered relation probability to calculate a loss function: L ( y e , y e ) = |E| (cid:88) e =1 |R| (cid:88) i =1 { y ei log( y ei ) + (1 y ei ) log(1 y ei ) } .",
"(7) The loss function forces the output y ei to be consistent with y ei which makes it capable to discover all related relations from known relations.",
"It can be optimized with an Adam (Kingma and Ba, 2015) based optimizer.",
"When predicting new facts, the one-hot representation y e is sent into the auto-encoder directly instead of using the corrupted representation.",
"The result y e is the estimated probability of each relation, i.e. p ( r = i | e ) = y ei .",
"(8) This probability will be high if relation i is closely related to the existing relations of the entity e .",
"We use a KBR component to model the tail inference facet f h,r ( t ) .",
"Three KBR models are investigated namely DistMult, Complex, and Analogy.",
"The DistMult model defines the score function as f r ( h, t ) = h T diag ( r ) t , in which h , r , t are vector representation of the head, relation and tail respectively.",
"The learning objective is to maximize the margin between true facts and false facts.",
"It can decrease the score of the wrong facts and increase the the score of the true facts at the same time.",
"The Complex model employs complex number as the KBR embedding.",
"Therefore, the score function is defined as f r ( h, t ) = Re ( h T diag ( r ) t ) , in which h , r , t are complex vectors and t stands for the conjugate of t .",
"The Analogy model does not restrict the relation matrix to be diagonal.",
"Therefore, the score function is f r ( h, t ) = h TM r t , in which M r is the matrix corresponding to the relation r .",
"Since many relations satisfy normality and commutativity requirements, the constraints can thus be set as W r W Tr = W Tr W r , r R and W r W r (cid:48) = W r (cid:48) W r , r, r (cid:48) R .",
"Solving such a problem is equivalent to optimizing the same objective function with the matrix constrained to almost-diagonal matrices(Liu et al., 2017).",
"After the score function is calculated, the tail inference facet f h,r ( t ) is modeled by a softmax function: f h,r ( t ) = log p ( t | h, r ) = log e f r ( h,t ) (cid:80) t (cid:48) E e f r ( h,t (cid:48) ) .",
"It should be noted that the normalizing step is only conducted on the tail entities since the head and relation are the input of the model.",
"We only use these three models due to the limited space.",
"Other models can be embedded into our framework easily in the same way.",
"As mentioned above, we need to calculate f h ( r ) , f t ( r ) and f h,r ( t ) .",
"f h ( r ) and f t ( r ) are computed by the entity-relation component while f h,r ( t ) is computed by the tail inference component.",
"Recall that a fact is likely to be true when all the facets exhibit strong support.",
"In other words, we can prune away the fact if one of the facets is low and stop calculating other facets.",
"Based on this strategy, we design two additional constraints on Problem (2).",
"Therefore, this method can be viewed as a shrink of the constraint space of the optimization problem.",
"The new problem can be expressed as: max T d (cid:88) ( h,r,t ) T d { 1 f h ( r ) + 2 f t ( r ) + 3 f h,r ( t ) } s.t. h H ; |T d | = K f h ( r ) > h ; (cid:88) r 1 ( f h ( r ) > h ) = n h f t ( r ) > t ; (cid:88) r 1 ( f t ( r ) > t ) = n t , (10) where 1 A ( x ) is an indicator function.",
"1 A ( x ) = 1 if x A and 1 A ( x ) = 0 otherwise.",
"n h and n t are the user-specified parameters indicating topn h or topn t relations are considered.",
"1 , 2 and 3 are fixed hyperparameters.",
"Problem (10) is actually a mixed integer linear programming problem.",
"We start to solve this problem from the constraints.",
"Since f t ( r ) is indepen-dent of the given H , it can be preprocessed and can be reused for other queries.",
"When a head entity h is given, we firstly calculate f h ( r ) and get topn h relations ranked by f h ( r ) .",
"Then, for each relation, f t ( r ) is used to get the topn t entities.",
"Afterwards, the tail inference facet f h,r ( t ) will be calculated for all remaining relations and entities and topn f triples will be cached.",
"Finally, top K facts ranked by the facet confidence score c ( h, r, t ) is returned as the new facts discovered for the entity h , where K = K/ |H| stands for the average fact number for each head entity.",
"The three facets depict the characteristics of the KBs from different perspectives.",
"For example, the head-relation facet indicates which relation the head entity may have.",
"The tail-relation facet can be interpreted in a similar manner.",
"We propose a feedback learning (FL) component for the facets to share the information in different perspective with each other.",
"FL feeds the predicted facts back to the training set to enhance the training procedure and iterates predicting and training several times.",
"In the iteration, the information from different perspectives is shared with each facet via the newly added facts.",
"Specifically, after predicting the topn h facts for each head entity, we select topn fb ( n fb < n h ) most probable facts according to the score of each triple and then feed them into the existing knowledge base for re-training the FFD model.",
"We repeat the above updating operation several rounds.",
"We evaluate our framework by re-splitting a widely used dataset FB15k (Bordes et al., 2013), which is sampled from Freebase.",
"It contains 1 , 345 relations and 14 , 951 entities.",
"In FB15k, some of the testing set's head entities do not appear in the training set as head.",
"To evaluate our framework, we construct the new dataset.",
"We re-split FB15k into training ( T train ), validation ( T valid ) and testing ( T test ) set, and make sure that there is no overlap between the three sets.",
"For all head entities in H , a relation ratio R % is used to assign the facts into training and testing set.",
"R % relations of a head entity are in the training set while the other 1 R % are in the testing set.",
"In order to evaluate the task, we require that the head entities in H is the same as the testing head entity and is a subset of the training head set, i.e. H = { h | ( h, r, t ) T test , r, t E} { h | ( h, r, t ) T train , r, t E} .",
"We set R = 50 .",
"After the splitting, the training, testing and validation set size is 509 , 339 , 41 , 861 and 41 , 013 respectively.",
"MF models firstly count the frequency of all relation-tail pairs.",
"Some low-frequency relation-tail pairs are ignored to save computational time.",
"Afterwards, we build a (head, relation-tail) co-occurrence matrix MC R |E| p , in which p is the size of the relation-tail pair set.",
"Each element M Cij in the matrix represents whether the head entity i has the relation-tail pair j or not.",
"Then, the matrix will be decomposed by the product of two matrices, i.e. MC W H, (11) in which W R |E| k , H R k p .",
"k is the hidden category number of the head and relation-tail pairs.",
"The decomposition can be achieved in several ways with different assumptions.",
"Two kinds of matrix decomposition models are used namely SVD (Halko et al., 2011) and NMF (Lee and Se-ung, 1999).",
"In the prediction stage, a new matrix is constructed by M (cid:48) C = W H .",
"For each row in M (cid:48) C , we record top K relation-tail pairs and their scores.",
"The MF models always suffer from the sparsity problem since a lot of relation-tail pairs are ignored.",
"The most straightforward method of estimating the fact confidence score c ( h, r, t ) is to use KBR model directly to evaluate each triples' score.",
"We exhaustively score all possible combinations of relations and tails and use the highly-scored facts to make up the set T d .",
"We select some state-of-the-art models including DistMult (Yang et al., 2014), Complex (Trouillon et al., 2016) and Analogy (Liu et al., 2017).",
"We denote them as DistMult+, Complex+ and Analogy+.",
"After a KBR model learns a score function f r ( h, t ) , the probability of each ( r, t ) pair with respect to a given head entity can be estimated by a softmax function: p ( r, t | h ) = e f r ( h,t ) (cid:80) r (cid:48) R (cid:80) t (cid:48) E e f r (cid:48) ( h,t (cid:48) ) .",
"There are 2,000 head entities in the testing set.",
"Therefore, we predict the corresponding relation and tail entity with respect to these 2,000 head entities.",
"In MF models, only relation-tail pairs that occur more than 3 times in the training set are considered (24,615 pairs in total).",
"For each head entity, we set K = 50 .",
"In KBR+, we also set K = 50 .",
"For our framework, we set n h = n t = 30 , n f = 10 , K = 50 , 1 = 1 .",
"0 , 2 = 1 .",
"0 , 3 = 0 .",
"5 .",
"The auto-encoder iterates for 1,000 epochs and the learning rate for Adam is 0.005.",
"For the feedback learning component, we set n fb = 20 , 000 .",
"With this setting, each model returns 100,000 facts.",
"We use four evaluation metrics, including precision, recall MAP, and F1 in relation prediction.",
"Precision is defined as the ratio of the true positive candidates' count over the number of all the retrieved candidates' count.",
"Recall is defined as the ratio of the true positive candidates' count over all the positive facts' count in the testing set.",
"MAP (Manning et al., 2008) is a common evaluation method in information retrieval tasks.",
"F1 is defined as the harmonic mean of the precision and recall.",
"the experiment result, we observe that:",
"1. FFD based model outperforms other models in all metrics.",
"It illustrates the advantage of our decomposition design.",
"Moreover, in FFD, using Analogy to predict c ( h, r, t ) outperforms Complex.",
"One reason is that the discovery algorithm harness the relatively large parameter space of Analogy and avoids some occasionally emerging wrong facts;",
"2. The relation of the head entity can be correctly predicted.",
"This is because, in training, we remove some relations and the auto-encoder is trained to learn to recover the missing relations based on the remaining relations.",
"3. The MF based models (i.e. SVD and NMF) perform not as good as KBR+ models and FFD.",
"The reason is partially due to the sparsity problem in MF models.",
"A lot of relation-10 0 10 1 10 2 10 3 Average head per tail (hpt) 0.00 0.25 0.50 0.75 1.00 P r e c i s i o n SVD Analogy+ FFD(Analogy) 10 0 10 1 10 2 Average tail per head (tph) Figure 3: Precision on each relation.",
"tail pairs have not been used as the feature and thus cannot be predicted;",
"4. Different from the traditional KBC task, Complex performs slightly better than Analogy.",
"One reason is that Analogy's constraint is looser than Complex.",
"Therefore, it may easily predict wrong facts due to error propagation;",
"5. The ablation experiment shows that the feedback learning can improve the performance effectively.",
"To illustrate the capability of handling different kinds of relations, we plot the accuracy with respect to different kinds of relations.",
"We use heads per tail (hpt) and tails per head (tph) index to represent the difficulty of each relation.",
"If the rela-tion's index is high, it means that each head of the relation may have more tails or vice versa.",
"These relations are more difficult to predict.",
"This is the similar problem of 1-N, N-1 and N-N relation in KBC task.",
"The plot is shown in Figure",
"3. From the figure, we can observe that:",
"1. FFD can be adapted to all kinds of relations with different hpt and tph;",
"2. MF, KBR+, FFD models can handle relations with relatively high hpt but fail with high tph.",
"This is because our goal is to predict relation and tail based on the head.",
"Therefore, the choice may be harder to make with high tph;",
"3. As the hpt grows, the precision of SVD model also grows.",
"The reason is that as hpt grows, the sparsity problem is alleviated.",
"Therefore, the performance of SVD grows.",
"training set.",
"We examine the training set and observe that 97.46% relation-tail pairs does not appear and 0.34% relation-tail pairs appear for only one time.",
"These pairs can hardly provide any information for the MF models either.",
"To test whether our framework is capable of dealing with the data sparsity problem.",
"We remove training facts which contains head entities in H according to a specific ratio.",
"We decrease the relation ratio R % from 50% to 10% to explore the effectiveness of our framework in discovering new facts.",
"The dataset statistics is shown in Table",
"2. We apply FFD (Analogy) on each dataset.",
"As shown in Table 3, precision, F1 and recall decrease since the data becomes more and more sparse.",
"MAP increase slightly since it is averaged on all extracted facts.",
"When the extracted facts number decreases, some facts rank at the tail with low scores are excluded.",
"We provide a case study to demonstrate the characteristics of different models and show that our FFD can utilize more information.",
"We choose the head entity Stanford Law School (Freebase ID: /m/021s9n).",
"The predicted facts of SVD, Analogy+ and FFD (Analogy) are shown in Table",
"4. From the table, we can observe that:",
"1. FFD (Analogy) can predict facts such as (Located In, Stanford) and (Mail Address City, Stanford) while other methods fail to.",
"It implies that this model can predict some relation with multiple possible tails;",
"2. Analogy+ outperforms SVD in general while fails to exceed FFD (Analogy).",
"The reason is Relation Tail InRTpair SVD Ana-log+ FFD(Anal-ogy) Located In USA Located In California Located In Stanford EducationalInstitution Stanford Law School GraduatesDegree Law Degree GraduatesDegree Juris Doctor Mail AddressState California Mail AddressCity Stanford Parent Institution StanfordUniversity Tuition Measurement US Dollar WebpageCategory WebPage Table 4: Predicted facts by SVD, FFD+ and FFD (Analogy).",
"that it fails to predict some general facts like (Located In, USA) or (Tuition Measure-ment, US Dollar).",
"This may due to the high scores given to some wrong facts;",
"3. The SVD model can only predict those facts whose relation and tail belong to the selected relation-tail pairs while Analogy+ and FFD (Analogy) can predict more facts;",
"4. SVD model prefers to predict some basic facts such as Located In and Tuition Mea-surement.",
"This is because those relations appear a lot of times in the training set and have limited possible tail entities.",
"Therefore, it is easy for SVD model to make such prediction.",
"In this paper, we introduce a new task of fact discovery from knowledge base, which is quite important for enriching KBs.",
"It is challenging due to the limited information available about the given entities for prediction.",
"We propose an effective framework for this task.",
"Experimental results on real-world datasets show that our model can effectively predict new relational facts.",
"We also demonstrate that the feedback learning approach is useful for alleviating the issue of data sparsity for the head entities with few facts.",
"Facts discovery from knowledge base is essential for enriching KBs in the real world.",
"Despite the fact that our work shows some promising results, there still remains some challenges: (1) There exists much more internal information such as relational paths and external information such as text, figures and videos on the web, which can be used to further improve the performance.",
"(2) The feedback learning approach in this paper is to simply utilize those confident predicted relational facts to enhance the model.",
"Reinforcement learning may help us dynamically select those informative and confident relational facts.",
"The work described in this paper is partially supported by grants from the Research Grant Council of the Hong Kong Special Administrative Region, China (Project Codes: 14203414) and the Direct Grant of the Faculty of Engineering, CUHK (Project Code: 4055093).",
"Liu and Lin are supported by the National Key Research and Development Program of China (No. 2018YFB1004503) and the National Natural Science Foundation of China (NSFC No. 61572273, 61661146007)."
]
| [
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"result",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"method",
"objective",
"objective",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"result",
"method",
"abstain",
"other",
"other"
]
|
[
"Abstract Current state-of-the-art neural dialogue models learn from human conversations following the data-driven paradigm.",
"As such, a reliable training corpus is the crux of building a robust and well-behaved dialogue model.",
"However, due to the open-ended nature of human conversations, the quality of user-generated training data varies greatly, and effective training samples are typically insufficient while noisy samples frequently appear.",
"This impedes the learning of those data-driven neural dialogue models.",
"Therefore, effective dialogue learning requires not only more reliable learning samples, but also fewer noisy samples.",
"In this paper, we propose a data manipulation framework to proactively reshape the data distribution towards reliable samples by augmenting and highlighting effective learning samples as well as reducing the effect of inefficient samples simultaneously.",
"In particular, the data manipulation model selectively augments the training samples and assigns an importance weight to each instance to reform the training data.",
"Note that, the proposed data manipulation framework is fully data-driven and learnable.",
"It not only manipulates training samples to optimize the dialogue generation model, but also learns to increase its manipulation skills through gradient descent with validation samples.",
"Extensive experiments show that our framework can improve the dialogue generation performance with respect to various automatic evaluation metrics and human judgments.",
"Open-domain dialogue generation, due to its potential applications, is becoming ubiquitous in the community of natural language processing.",
"Current end-to-end neural dialogue generation models (Li et al., 2016; Serban et al., 2017; Zhao et al., Work done at Data Science Lab, JD.com. i n e c i e n t c o n v e r s a t i o n s augmentation reweighting e ec t i v e c o n v e r s a t i o n s a u g m e n t e d e ec t i v e c o n v e r s a t i o n s Figure 1: Data manipulation helps the dialogue model training by augmenting and highlighting effective learning samples as well as reducing the weights of inefficient samples. 2017) are primarily built following the data-driven paradigm, that is, these models mimic the human conversations by training on the large-scale query-response pairs.",
"As such, a reliable training corpus that exhibits high-quality conversations is the crux of building a robust and well-behaved dialogue model.",
"Unfortunately, owing to the subjectivity and open-ended nature of human conversations, the quality of the collected human-generated dialogues varies greatly (Shang et al., 2018), which hampers the effectiveness of data-driven dialogue models: 1) Effective conversation samples are quite insufficient.",
"To glean some insights on the data quality of dialogue corpus, we choose the query-relatedness to take a glimpse of the data quality.",
"In dialogue corpus, some conversations are quite coherent, where the queries and responses are well-correlated, while others are not.",
"Query-relatedness measures the semantic similarities between the query and its corresponding response in the embedding space and ranges from 0 to 1.",
"When reviewing DailyDialog (Li et al., 2017), we find that only 12% conversation samples are of relatively high query-relatedness scores ( > 0 . 6 ).",
"Without adequate reliable training samples, the neural dialogue model is prone to converge to a sub-optimal point.",
"2) Meanwhile, noisy and even meaningless conversation samples frequently appear.",
"As Li et al. (2016) reported, I don't know appears in over 113K sentences in the training corpus OpenSubtitles (Lison and Tiedemann, 2016).",
"Such kind of noisy conversation data prevails in neural dialogue model training, and vitally impedes the model learning.",
"Therefore, effective dialogue learning requires not only more reliable learning samples, but also fewer noisy samples.",
"In this work, as illustrated in Figure 1, we propose a novel learnable data manipulation framework to proactively reshape the data distribution towards reliable samples by augmenting and highlighting effective learning samples as well as reducing the weights of inefficient samples simultaneously.",
"Specifically, to generate more effective data samples, the data manipulation model selectively augments the training samples in terms of both word level and sentence level, using masked language models such as BERT (Devlin et al., 2019) and back-translation (Sennrich et al., 2016) technique.",
"To reduce the weights of inefficient samples from the original training samples and the augmented samples, the data manipulation model assigns an importance weight to each sample to adapt the sample effect on dialogue model training.",
"It gives out higher importance weights to critical learning samples and lower weights to those inefficient samples.",
"Furthermore, different from most previous data augmentation or data weighting studies (Li et al., 2019; Shang et al., 2018; Csaky et al., 2019), which are unaware of the target model states during augmentation or weighting, our data manipulation framework not only manipulates training samples to optimize the dialogue generation model, but also learns to increase its manipulation skills through gradient descent with validation samples.",
"We apply the proposed data manipulation framework on several state-of-the-art generation models with two real-life open-domain conversation datasets and compare with the recent data manipulation approaches in terms of 13 automatic evaluation metrics and human judgment.",
"Experiment results show that our data manipulation framework outperforms the baseline models over most of the metrics on both datasets.",
"The proposed data manipulation framework tackles the problem of un-even quality data by inducing the",
"model learning from more effective dialogue samples and reducing effects of those inefficient samples simultaneously.",
"In particular, as illustrated in Figure 2, it manipulates and reshapes the data distribution for neural dialogue model learning in mainly three stages: First, each batch of training samples are selectively augmented to generate more variant samples; and then, all the samples, including the original samples and the augmented samples, are assigned with instance weights indicating their importance regarding current learning status; finally, the weighted samples are fed into the neural dialogue model to induce the model learning from more effective training instances.",
"Note that, although we describe the framework in three components for ease of understanding, in fact, the whole framework can be trained in an end-to-end manner.",
"As a result, the data manipulation network is capable of not only manipulating training samples to optimize the dialogue generation model, but also learning to increase its manipulation skills through gradient descent with validation samples.",
"We first introduce the augmentation and weighting strategies for data manipulation in 2.1 and 2.2, and then describe how the neural dialogue generation model learns from the manipulated samples in 2.3.",
"Parameters estimation for the data manipulation model is elaborated in 2.4.",
"To induce the neural dialogue generation model to learn from more effective samples, we develop a gated data augmentation mechanism for the manipulation framework to selectively augment the learning samples.",
"Specifically, as shown in Figure 3, given a training sample, the manipulation framework first spec-ifies whether to augment it or not through an in-Data Manipulation original batch samples Word-level Augmentation <latexit sha1_base64=\"vqpAnr4xaGqQu1ehF84ZE4exar0=\">AAACyXicjVHLSsNAFD2Nr1pfVZdugkVwVZIq6LLoRnBTwT6gLZJMpzU2L5OJWIsrf8Ct/pj4B/oX3hmnoBbRCUnOnHvPmbn3urHvpcKyXnPGzOzc/EJ+sbC0vLK6VlzfaKRRljBeZ5EfJS3XSbnvhbwuPOHzVpxwJ3B93nSHxzLevOFJ6kXhuRjFvBs4g9Dre8wRRDU6rBeJ9KJYssqWWuY0sDUoQa9aVHxBBz1EYMgQgCOEIOzDQUpPGzYsxMR1MSYuIeSpOMc9CqTNKItThkPskL4D2rU1G9JeeqZKzegUn96ElCZ2SBNRXkJYnmaqeKacJfub91h5yruN6O9qr4BYgUti/9JNMv+rk7UI9HGoavCoplgxsjqmXTLVFXlz80tVghxi4iTuUTwhzJRy0mdTaVJVu+yto+JvKlOycs90boZ3eUsasP1znNOgUSnbe+XK2X6peqRHnccWtrFL8zxAFSeooU7eV3jEE56NU+PauDXuPlONnNZs4tsyHj4A8b+RtA==</latexit> <latexit sha1_base64=\"vqpAnr4xaGqQu1ehF84ZE4exar0=\">AAACyXicjVHLSsNAFD2Nr1pfVZdugkVwVZIq6LLoRnBTwT6gLZJMpzU2L5OJWIsrf8Ct/pj4B/oX3hmnoBbRCUnOnHvPmbn3urHvpcKyXnPGzOzc/EJ+sbC0vLK6VlzfaKRRljBeZ5EfJS3XSbnvhbwuPOHzVpxwJ3B93nSHxzLevOFJ6kXhuRjFvBs4g9Dre8wRRDU6rBeJ9KJYssqWWuY0sDUoQa9aVHxBBz1EYMgQgCOEIOzDQUpPGzYsxMR1MSYuIeSpOMc9CqTNKItThkPskL4D2rU1G9JeeqZKzegUn96ElCZ2SBNRXkJYnmaqeKacJfub91h5yruN6O9qr4BYgUti/9JNMv+rk7UI9HGoavCoplgxsjqmXTLVFXlz80tVghxi4iTuUTwhzJRy0mdTaVJVu+yto+JvKlOycs90boZ3eUsasP1znNOgUSnbe+XK2X6peqRHnccWtrFL8zxAFSeooU7eV3jEE56NU+PauDXuPlONnNZs4tsyHj4A8b+RtA==</latexit> why are you deciding to go abroad choosingwantingpreferring overseasawayoutside Weighting Sentence-level Augmentation <latexit sha1_base64=\"vqpAnr4xaGqQu1ehF84ZE4exar0=\">AAACyXicjVHLSsNAFD2Nr1pfVZdugkVwVZIq6LLoRnBTwT6gLZJMpzU2L5OJWIsrf8Ct/pj4B/oX3hmnoBbRCUnOnHvPmbn3urHvpcKyXnPGzOzc/EJ+sbC0vLK6VlzfaKRRljBeZ5EfJS3XSbnvhbwuPOHzVpxwJ3B93nSHxzLevOFJ6kXhuRjFvBs4g9Dre8wRRDU6rBeJ9KJYssqWWuY0sDUoQa9aVHxBBz1EYMgQgCOEIOzDQUpPGzYsxMR1MSYuIeSpOMc9CqTNKItThkPskL4D2rU1G9JeeqZKzegUn96ElCZ2SBNRXkJYnmaqeKacJfub91h5yruN6O9qr4BYgUti/9JNMv+rk7UI9HGoavCoplgxsjqmXTLVFXlz80tVghxi4iTuUTwhzJRy0mdTaVJVu+yto+JvKlOycs90boZ3eUsasP1znNOgUSnbe+XK2X6peqRHnccWtrFL8zxAFSeooU7eV3jEE56NU+PauDXuPlONnNZs4tsyHj4A8b+RtA==</latexit> <latexit sha1_base64=\"vqpAnr4xaGqQu1ehF84ZE4exar0=\">AAACyXicjVHLSsNAFD2Nr1pfVZdugkVwVZIq6LLoRnBTwT6gLZJMpzU2L5OJWIsrf8Ct/pj4B/oX3hmnoBbRCUnOnHvPmbn3urHvpcKyXnPGzOzc/EJ+sbC0vLK6VlzfaKRRljBeZ5EfJS3XSbnvhbwuPOHzVpxwJ3B93nSHxzLevOFJ6kXhuRjFvBs4g9Dre8wRRDU6rBeJ9KJYssqWWuY0sDUoQa9aVHxBBz1EYMgQgCOEIOzDQUpPGzYsxMR1MSYuIeSpOMc9CqTNKItThkPskL4D2rU1G9JeeqZKzegUn96ElCZ2SBNRXkJYnmaqeKacJfub91h5yruN6O9qr4BYgUti/9JNMv+rk7UI9HGoavCoplgxsjqmXTLVFXlz80tVghxi4iTuUTwhzJRy0mdTaVJVu+yto+JvKlOycs90boZ3eUsasP1znNOgUSnbe+XK2X6peqRHnccWtrFL8zxAFSeooU7eV3jEE56NU+PauDXuPlONnNZs4tsyHj4A8b+RtA==</latexit> why are you deciding to go abroad why are you choosing to leave for a foreign country",
"stance filter, which can be implemented using a sigmoid gating function.",
"Then, two levels of data augmentation are introduced, word-level contextual augmentation and sentence-level data augmentation, to augment the chosen sample accordingly.",
"As the name suggests, word-level augmentation enriches the training samples by substituting the words in the original sample (Figure 3",
"(a)).",
"Here, we employ a masked language model, BERT (De-vlin et al., 2019), to implement word-level augmentation.",
"Given an original sentence, the language model first randomly masks out a few words.",
"BERT then takes in the masked sentence and predicts the corresponding masked positions with new words.",
"A fixed pre-trained BERT may not generalize well for our data manipulation framework, because BERT is unaware of the dialogue learning status.",
"To mitigate such defects, we further fine-tune BERT through backpropagation (more details in 2.4).",
"In particular, BERT is adapted to be differentiable by utilizing a gumbel-softmax approximation (Jang et al., 2017) when predicting substitution words.",
"Word-level data augmentation is quite straightforward.",
"However, such kind of rewriting is limited to only a few words.",
"In human dialogues, there exist various synonymous conversations with different sentence structures.",
"To further diversify the expressions in conversion, we introduce the sentence-level data augmentation through back-translation as in Edunov et al. (2018); Yu et al. (2018), which trains two translation models: one translation model from the source language to target language and another backward translation model from the target to the source, as shown in Figure 3",
"(b).",
"By transforming the expression styles across different languages, the augmented training samples are expected to convey similar information while with different expressions.",
"Similar to the fine-tuning strategy in word-level data augmentation, we also fine-tune the sentence-level data augmentation components to encourage the model to generate more effective samples for dialogue training.",
"The gradients are back-propagated into the translation-based augmentation model, where a differentiable gumbel-softmax is utilized when predicting sentences using the translation model.",
"Given the original training samples and the augmented samples, to deal with the problem of noisy instances, data manipulation model assigns an importance weight to each training sample regarding the learning status.",
"In particular, the sample importance weights are approximated through a softmax function over the scores of these instances.",
"A multilayer perceptron is employed to compute example scores, taking distributional representations of these instances as input.",
"Each sample is converted into its corresponding distributional representation through a transformer-based encoder.",
"Conventionally, neural dialogue generation model is optimized with a vanilla negative log-likelihood loss using the training data D with size N : L vanilla = (cid:80) Nj =1 log p ( y j | x j ) , where each sample is treated equally.",
"In our framework, we assign each sample with an importance weight and augment the original training set D = { ( x j , y j ) } Nj =1 to D (cid:48) = { ( x j , y j ) } N (cid:48) j =1 regarding the learning status.",
"To perform the weighted optimization with augmented training set D (cid:48) , we utilize a weighted negative log-likelihood loss function: L dm = N (cid:48) (cid:88) j =1 w j log p ( y j | x j ) , (1) where w j is the instance weight produced by the data manipulation network.",
"The data manipulation network not only manipulates training samples to optimize the dialogue learning process, but also learns to increase its manipulation skills through gradient descent with validation samples.",
"We formulate such joint learning process following a novel policy learning paradigm (Hu et al., 2019; Tan et al., 2019), where the manipulation framework is formulated as a learnable data-dependent reward function R ( d = { x , y }|D ) , the dialogue model p ( y | x ) is treated as a policy, the input x as the state, and the output y as the action.",
"The reward function R ( d |D ) is defined as: R ( d |D ) = w i if d is an augmented sample of d i or d = d i , d i D otherwise , (2) where denotes the parameter of data manipulation network and w i R is the importance weight associated with the i th data sample.",
"In such formulation, a sample d receives a real-valued reward when d is an augmented sample, or d matches an instance in the original training set.",
"As depicted in Algorithm 1, the parameter of the neural dialogue model and parameter of the data manipulation network are alternatively optimized.",
"Jointly optimizing the dialogue model and the manipulation network can be regarded as reward learning, where the policy p ( y | x ) receives relatively higher rewards for effective samples and Algorithm 1 Joint Learning of Dialogue Model and Data Manipulation Network Input: The dialogue model , data manipulation network , training set D and validation set D v 1: Initialize dialogue model parameter and data manipulation model parameter 2: repeat 3: Optimize on D enriched with data manipulation.",
"lower rewards for those inefficient samples.",
"More concretely, to optimize the neural dialogue model, at each iteration, mini-batch instances are sampled from the training set, and are then enriched through augmentation and weighting.",
"The parameter of the neural dialogue model is then updated with a weighted negative log-likelihood loss function in",
"Eq.(1): (cid:48) = L dm ( , ) , (3) where L dm ( , ) is the gradient of with respect to the loss L dm , and is the step size.",
"The parameter of the data manipulation network is learned by taking a meta gradient descent step on validation samples (Ren et al., 2018).",
"Equation (3) shows that (cid:48) depends on .",
"Therefore, the manipulation model (i.e. the reward function R ( d |D ) ) can be optimized by directly backpropagating the gradient through (cid:48) to .",
"Data We conduct experiments on two English conversation datasets: (1) DailyDialog (Li et al., 2017), a collection of real-world dialogues widely used in open-domain dialogue generation.",
"This is a multi-turn dataset, and we treat each turn as a training pair in this work.",
"The overlapping pairs are removed from the data set.",
"(2) OpenSubtitles (Lison and Tiedemann, 2016), a group of human-human conversations converted from movie transcripts.",
"80,000 instances are sampled from the original corpus and the data proportion for train/valid/test set is set to 8/1/1, respectively.",
"The dataset statistics are listed in Table 1.",
"Experimental Models To ascertain the effectiveness and applicability of our method, we implement the proposed data manipulation framework on following representative models:",
"(i) SEQ2SEQ : a RNN-based sequence-to-sequence model with attention mechanisms (Bahdanau et al., 2015);",
"(ii) CVAE : a latent variable model using conditional variational auto-encoder, trained with KL-annealing and a BoW loss as in Zhao et al. (2017);",
"(iii) Transformer : an encoder-decoder architecture relying solely on the attention mechanisms (Vaswani et al., 2017).",
"Comparison Models We also compare our approach with previous data augmentation or instance weighting methods:",
"(i) CVAE-GAN (Li et al., 2019): a model that combines CVAE and GAN for augmenting the training data to generate more diversified expressions.",
"(ii) Calibration (Shang et al., 2018): a calibration network measures the quality of data samples and enables weighted training for dialogue generation.",
"(iii) Clustering (Csaky et al., 2019): it clusters high-entropy samples as noises and filters them out.",
"We adopt several widely used metrics (Liu et al., 2016; Li et al., 2016; Serban et al., 2017; Gu et al., 2019) to measure the performance of dialogue generation models, including BLEU, embedding-based metrics, entropy-based metrics and distinct metrics.",
"In particular, BLEU measures how much a generated response contains n-gram overlaps with the reference.",
"We compute BLEU scores for n < 4 using smoothing techniques 1 .",
"Embedding-based metric computes the cosine similarity of bag-of-words embeddings between the hypothesis and the reference.",
"We employ the following three embedding metrics to assess the response quality: (1) Embedding Average ( Avg ): cosine similarity between two utterances, in which the sentence embedding is computed by taking the average word embedding weighted by the smooth inverse frequency sent emb ( e ) = 1 | e | (cid:80) e 0 .",
"001 0 .",
"001+ p ( ) emb ( ) of words as in Arora et al. (2017).",
"where emb ( ) 1 https://www.nltk.org/_modules/nltk/ translate/bleu_score.html and p ( ) are the embedding and the probability 2 of word respectively.",
"(2) Embedding Greedy ( Gre ): greedily matching words in two utterances based on the cosine similarities between their embeddings, and averaging the obtained scores, (3) Embedding Extrema ( Ext ): cosine similarity between the largest extreme values among the word embeddings in the two utterances.",
"We use Glove vectors as the word embeddings.",
"Regarding entropy-based metrics, we compute the n-gram entropy Ent-n = 1 | r | (cid:80) r log 2 p ( ) of responses to measure their non-genericness, where the probabilities p ( ) of n-grams (n=1,2,3) are calculated based on the maximum likelihood estimation on the training data (Serban et al., 2017).",
"Distinct computes the diversity of the generated responses.",
"Dist-n is defined as the ratio of unique n-grams (n=1,2,3) over all n-grams in the generated responses.",
"Following Gu et al. (2019), we also report Intra { 1,2,3 } metrics which are computed as the average of distinct values within each sampled response.",
"For word-level dialogue augmentation, we employ the pre-trained BERT-base language model with the uncased version of tokenizer.",
"We follow the hyper-parameters and settings suggested in Devlin et al. (2019).",
"The replacement probability is set to 15%.",
"For back-translation in sentence-level dialogue augmentation, we use the Transformer model (Vaswani et al., 2017) trained on En-De and En-Ru WMT'19 news translation tasks (Ng et al., 2019).",
"German and Russian sentences were to-kenized with the Moses tokenizer (Koehn et al., 2007).",
"The same hyper-parameters are used for the translation tasks, i.e., word representations of size 1024, dropout with 0.8 keep probability, feed-forward layers with dimension 4096, 6 blocks in the encoder and decoder with 16 attention heads.",
"Models are optimized with Adam (Kingma and Ba, 2015) optimizer using initial learning rate 7e-4.",
"Regarding dialogue models implementation, we adopt a 2-layer bidirectional LSTM as the encoder and a unidirectional one as the decoder for both the SEQ2SEQ and CVAE.",
"The hidden size is set to 256, and the latent size used in CVAE is set to 64.",
"The transformer model for dialogue generation is configured with 512 hidden size, 8 attention heads and 6 blocks in both the encoder and decoder.",
"The 2 Probability is computed based on the maximum likelihood estimation on the training data.",
"hyper-parameters in the baseline models are set following the original papers (Li et al., 2019; Shang et al., 2018; Csaky et al., 2019).",
"To investigate the effectiveness and general applicability of the proposed framework, we instantiate our data manipulation framework on several state-of-the-art models for dialogue generation.",
"The automatic evaluation results of our proposed learning framework and the corresponding vanilla models are listed in Table 2.",
"Compared with the vanilla training procedure, the proposed data manipulation framework brings solid improvements for all the three architectures regarding almost all the evaluation metrics.",
"Such improvements are consistent across both two conversation datasets, affirming the superiority and general applicability of our proposed framework.",
"We further compare our model with existing related methods.",
"Not surprisingly, as shown in Table 3, our data manipulation framework outperforms the baseline methods on most of metrics.",
"In particular, the improvement on Distinct metrics of our model is much greater, which implies that data manipulation effectively induce the neural dialogue model generating more diverse responses.",
"We use the DailyDialog as the evaluation corpus since it is more similar to our daily conversations and easier for annotators to make the judgement.",
"Three graduate students are recruited to conduct manual evaluations.",
"100 test messages are randomly sampled.",
"We present the input messages and the corresponding responses generated by our model and the comparison model to the annotators.",
"The annotators are then required to compare the quality of these two responses (response 1 , response 2 ), taking the following criteria into consideration: coherence, language consistency, fluency and informativeness, and evaluate among win (response 1 is better), loss (response 2 is better) and tie (they are equally good or bad).",
"Note that cases with different evaluation results are labeled as tie.",
"Table 4 summarizes Dist-1 Dist-2 Dist-3 Intra-1 Intra-2 Intra-3 Ent-1 Ent-2 Ent-3 BLEU Avg Ext Gre Baseline 0.8570 4.0123 7.9559 88.509 94.727 96.844 6.7783 10.394 11.719 0.2146 65.200 46.355 67.344 w/ word-level augmentation 1.2205 6.0622 12.2620 89.916 95.265 96.627 6.9457 10.920 12.334 0.2657 65.315 46.821 68.025 w/ sentence-level augmentation 1.4702 6.7803 13.0910 91.309 95.772 97.397 7.0260 10.952 12.517 0.2721 66.788 47.464 67.911 Table 5: Ablation test (%) for word-level and sentence-level augmentations.",
"human evaluation results.",
"The kappa scores indicate that the annotators came to a fair agreement in the judgement.",
"Compared with the baseline methods, our data manipulation approach brings about more informative and coherent replies.",
"Learning Efficiency Figure 4 presents validation results along iterations when training the SEQ2SEQ model on DailyDialog.",
"We observe that when training SEQ2SEQ using our framework, the initial learning speed is a bit slower than the standard vanilla training.",
"However, our framework surpasses the vanilla training on the final stage.",
"One reason is that, at the early stage, the data manipulation model takes some time to improve its manipulation skills.",
"This may slow down the neural dialogue model learning.",
"Once the manipulation skills are effective enough, the neural dialogue model may benefit from learning more effective samples instead of those inefficient instances, and achieves better performance.",
"Examples with Different Augmentation Frequencies The data manipulation model selectively chooses samples to conduct data augmentation.",
"To further glean the insights regarding which samples are favored by the augmentation model, we list examples with different augmentation frequencies in Figure 5.",
"We notice that samples frequently augmented by the manipulation model are more reliable than those seldom augmented ones.",
"Therefore, the dialogue model is able to learn from those effective instances and their synonymous variants.",
"Word-level vs. Sentence-level Augmentation In our framework, we implement two kinds of augmentation mechanisms.",
"Word-level augmentation enriches the given samples by substituting words, while sentence-level augmentation paraphrases the original samples through back-translation.",
"We evaluate their performances and report results in Table 5.",
"Both augmentation mechanisms improve the performance over the vanilla SEQ2SEQ baseline, while sentence-level augmentation performs slightly better than word-level augmentation on most evaluation metrics.",
"One possible reason is that sentence-level augmentation captures more paraphrasing phenomenon.",
"Ablation Study Table 6 presents the results of model variants, by ablating specific parts of the data manipulation model.",
"Among different variants, without data augmentation, the performance degrades rapidly.",
"Meanwhile, without weighting or instance filter also decreases the performance.",
"This implies that the neural dialogue generation model not only benefits from more training samples but also reaps greater advantages from those effective rather than inefficient instances.",
"Impact of Training Data Scale We explore the impact of training data scale on the data manipulation framework by comparing a model trained on half amount of the training data in DailyDialog.",
"As presented in Table 7, with only 50% amount of training data, our model achieves a greater performance boost, which affirms the effectiveness and robustness of the proposed approach.",
"Existing approaches to improving neural dialogue generation models mainly target on building more powerful learning systems, using extra information such as conversation topics (Xing et al., 2017), persona profile (Song et al., 2019), user emotions (Zhou et al., 2018), or out-sourcing knowledge (Liu et al., 2018).",
"Another popular framework for dialogue generation is variational autoen-coder (Kingma and Welling, 2014; Zhao et al., 2017; Shen et al., 2017), in which a latent variable is introduced to benefit the dialogue model with more diverse response generation.",
"Contrasted with previous researches, we investigate to improve the dialogue model from a different angle, i.e., adapting the training examples using data manipulation techniques.",
"Data augmentation is an effective way to improve the performance of neural models.",
"To name a few, Kurata et al. (2016) propose to generate more utterances by introducing noise to the decoding process.",
"Kobayashi (2018); Wu et al. (2019) demonstrate that contextual augmentation using label-conditional language models helps to improve the neural networks classifier on text classi-fication tasks.",
"Sennrich et al. (2016) boost neural machine translation models using back-translation.",
"Xie et al. (2017); Andreas (2020) design manually-specified strategies for data augmentation.",
"Hou et al. (2018) utilize a sequence-to-sequence model to produce diverse utterances for language understanding.",
"Li et al. (2019); Niu and Bansal (2019) propose to generate sentences for dialogue augmentation.",
"Compared with previous augmentation approaches for dialogue generation, augmented sentences in our framework are selectively generated using the pretrained models and the augmentation process is additionally fine-tuned jointly with the training of dialogue generation.",
"Regarding data weighting, past methods (Jiang and Zhai, 2007; Rebbapragada and Brodley, 2007; Wang et al., 2017; Ren et al., 2018; Hu et al., 2019) have been proposed to manage the problem of training set biases or label noises.",
"Lison and Bibauw (2017) propose to enhance the retrieval-based dialog system with a weighting model.",
"Shang et al. (2018) likewise design a matching network to calibrate the dialogue model training through instance weighting.",
"Cai et al. (2020) investigate curriculum learning to adapt the instance effect on dialogue model training according to the sample complexity.",
"Whereas our proposed framework learns to reweight not only the original training examples but also the augmented examples.",
"Another difference is that, we directly derive data weights based on their gradient directions on a validation set, instead of separately training a external weighting model.",
"Csaky et al. (2019) claim that high-entropy utterances in the training set lead to those boring generated responses and thus propose to ameliorate such issue by simply removing training instances with high entropy.",
"Although data filtering is a straightforward approach to alleviate the problem of noisy data, the informative training samples remain untouched and insufficient.",
"Whereas our method holds the promise of generating more valid training data and alleviating the negative noises in the meantime.",
"Note that either data augmentation or instance reweighting can be considered band-aid solution: simply augmenting all training data risks introducing more noisy conversations as such low-quality examples prevail in human-generated dialogues, whilst adapting the sample effect merely by instance reweighting is also suboptimal since effective training samples remain insufficient.",
"The proposed learning-to-manipulate framework organically integrates these two schemes, which collectively fulfill the entire goal.",
"In this work, we consider the automated data manipulation for open-domain dialogue systems.",
"To induce the model learning from effective instances, we propose a learnable data manipulation model to augment effective training samples and reduce the weights of inefficient samples.",
"The resulting data manipulation model is fully end-to-end and can be trained jointly with the dialogue generation model.",
"Experiments conducted on two public conversation datasets show that our proposed framework is able to boost the performance of existing dialogue systems.",
"Our learning-to-manipulate framework for neural dialogue generation is not limited to the elaborately designed manipulation skills in this paper.",
"Future work will investigate other data manipulation techniques (e.g., data synthesis), which can be further integrated to improve the performance.",
"We would like to thank all the reviewers for their insightful and valuable comments and suggestions.",
"This work is supported by the National Natural Science Foundation of China-Joint Fund for Ba-sic Research of General Technology under Grant U1836111 and U1736106.",
"Hongshen Chen and Xiaofang Zhao are the corresponding authors."
]
| [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"result",
"abstain",
"abstain",
"other",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"objective",
"method",
"other",
"other",
"method",
"other",
"other",
"method",
"objective",
"abstain",
"objective",
"method",
"abstain",
"other",
"other",
"other"
]
|
[
"In order to deeply understand the capability of pretrained language models in text generation and conduct a diagnostic evaluation, we propose TGEA 1 , an error-annotated dataset with multiple benchmark tasks for text generation from pretrained language models (PLMs).",
"We use carefully selected prompt words to guide GPT-2 to generate candidate sentences, from which we select 47K for error annotation.",
"Crowdsourced workers manually check each of these sentences and detect 12k erroneous sentences.",
"We create an error taxonomy to cover 24 types of errors occurring in these erroneous sentences according to the na-ture of errors with respect to linguistics and knowledge (e.g., common sense).",
"For each erroneous span in PLM-generated sentences, we also detect another span that is closely associated with it.",
"Each error is hence manually labeled with comprehensive annotations, including the span of the error, the associated span, minimal correction to the error, the type of the error, and rationale behind the error.",
"Apart from the fully annotated dataset, we also present a detailed description of the data collection procedure, statistics and analysis of the dataset.",
"This is the first dataset with comprehensive annotations for PLM-generated texts, which facilitates the diagnostic evaluation of PLM-based text generation.",
"Furthermore, we use TGEA as a benchmark dataset and propose a series of automatic diagnosis tasks, including error detection, error type classification, associated span detection, error rationale generation, to further promote future study on the automatic error detection and correction on texts generated by pretrained language models.",
"Pretrained language models (Devlin et al., 2019; Liu et al., 2019; Raffel et al., 2020; Brown et al., 2020), which are trained on a huge amount of data via self-supervised learning, have made remarkable progress on both natural language understanding (NLU) (Wang et al., 2018, 2019) and natural language generation (NLG) (Liu and Lapata, 2019; Weng et al., 2020; Cao et al., 2020).",
"On several NLU datasets, PLM-based neural models have gradually achieved human-level performance in terms of automatic evaluation metrics (e.g., accuracy, F 1 ) (He et al., 2020; Zhang et al., 2021).",
"In order to deeply understand and analyze the capability of PLMs on NLU, a variety of more challenging NLU datasets have been proposed (Warstadt et al., 2020; Cui et al., 2020a; Jain et al., 2020; Talmor et al., 2020).",
"These datasets can be used not only to obtain knowledge on how PLM-based models work and what they learn, but also to define new NLU tasks and to serve as a benchmark for future progress.",
"For example, evaluating and analyzing PLM-based models on learning document structures with a carefully created benchmark test suite (Chen et al., 2019), helps to develop new methods to enhance the capability of these models on discourse modeling (Iter et al., 2020).",
"Knowing the weakness of current PLM-based models in commonsense reasoning (Zhou et al., 2020) has inspired people to develop various reasoning datasets (Cui et al., 2020a; Zhang et al., 2020b).",
"On the other hand, state-of-the-art PLMs are able to generate texts that are even not distinguishable from human-written texts by human evaluators (Radford et al., 2019; Brown et al., 2020).",
"This makes us curious about the capability of PLMs on text generation.",
"Are they really reaching human-level performance on text generation?",
"In contrast to the studies of PLMs on NLU, research on the capability of PLMs on NLG is quite limited, especially in dataset building and diagnostic evaluation of text generation errors.",
"In this paper, in order to recognize the perimeter of text generation capability of PLMs, we propose TGEA, an error-annotated dataset with multiple benchmark tasks for text generation from pretrained language models.",
"The original raw data are collected from texts generated by a Chinese GPT-2 model.",
"The entire data collection and annotation procedure is visualized in Figure",
"1. The goals and contributions of building TGEA are as follows.",
"TGEA, to the best of our knowledge, is the first dataset built on machine-generated texts from state-of-the-art pretrained language models with rich annotations.",
"The key interest of this dataset is detecting and annotating text generation errors from PLMs.",
"Therefore it is different from conventional text generation datasets (e.g., Multi-News (Fabbri et al., 2019), TextCaps (Sidorov et al., 2020)) that are constructed to train models to learn text generation (e.g., generating texts from images or long documents).",
"It is also different from grammatical error correction (GEC) datasets (Zhao et al., 2018; Flachs et al., 2020) that are built from human-written texts usually by second language learners.",
"TGEA provides rich semantic information for text generation errors, including error types, associated text spans, error corrections and rationals behind errors, as shown in Figure",
"1. Marking text spans that are closely related to erroneous words allows us to detect long-distance dependencies of errors or reasoning chains related to errors.",
"Rationales behind errors directly explain why errors are annotated.",
"All these error-centered manual annotations not only increase the interpretability of our dataset, but also facilitate a comprehensive diagnostic evaluation of pretrained language models on text generation.",
"We created an error taxonomy for TGEA, which covers 24 error types in a two-level hierarchy.",
"With this error taxonomy, we not only obtain a high agreement on manual error annotation but also recognize the strengths and weaknesses of GPT-2 on text generation by estimating a distribution over these 24 error types.",
"Comparing our dataset with GEC datasets, we find that humans and GPT-2 have a very different error distribution, especially on errors related to commonsense reasoning.",
"(Figure 1: the five-stage annotation pipeline over Chinese GPT-2 outputs: correct/incorrect judgment, erroneous and associated span detection, error correction, error type classification, and rationale generation.)",
"TGEA not only exhibits text generation errors from pretrained language models, but also can serve as a dataset to train various models to automatically detect and correct these errors, like GEC datasets for training models to automatically correct human errors.",
"We define 5 benchmark tasks over our dataset, i.e., erroneous sentence detection, erroneous span and associated span detection, error type classification, error correction and error rationale generation.",
"For all these tasks, we provide experimental results using state-of-the-art models as baselines.",
"Our work is related to GEC datasets in error annotation and correction (machine vs. human errors).",
"It is also partially related to commonsense reasoning datasets that have been proposed recently in that our dataset includes commonsense reasoning errors and rationales behind these errors.",
"Our dataset is not related to conventional text generation datasets (Vougiouklis et al., 2017; Wiseman et al., 2017; Parikh et al., 2020) for training text generation models.",
"A comprehensive comparison to GEC datasets and commonsense reasoning datasets is shown in Table 1.",
"Table 1: Comparison between our dataset and other datasets.
| Dataset | Task | Commonsense Reasoning | Rationales | Machine-Generated Texts | Domain | #Sentences | Language |
|---|---|---|---|---|---|---|---|
| FCE | GEC | ✗ | ✗ | ✗ | Essay | 34K | EN |
| AESW | GEC | ✗ | ✗ | ✗ | Journal articles | 1.2M | EN |
| JFLEG | GEC | ✗ | ✗ | ✗ | TOEFL exam | 1,511 | EN |
| CMEG | GEC | ✗ | ✗ | ✗ | Web doc/Essay | 8K | EN |
| CWEB | GEC | ✗ | ✗ | ✗ | Web doc | 13K | EN |
| CGEC | GEC | ✗ | ✗ | ✗ | Essay | 0.71M | ZH |
| WSC | Coreference resolution | ✓ | ✗ | ✗ | Open | 273 | EN |
| HellaSwag | Plausible inference | ✓ | ✗ | – | WikiHow articles | 70K | EN |
| Social IQA | Question answering | ✓ | ✗ | ✗ | Social situations | 38K | EN |
| CosmosQA | Reading comprehension | ✓ | ✗ | ✗ | Narratives | 35K | EN |
| PIQA | Plausible inference | ✓ | ✗ | ✗ | Physical situations | 21K | EN |
| Abductive NLI | Plausible inference | ✓ | ✗ | ✗ | ROCStories | 200K | EN |
| WinoWhy | Reason explanation | ✓ | ✓ | ✗ | Open | 2,865 | EN |
| TGEA (ours) | Multiple tasks | ✓ | ✓ | ✓ | Open | 47K | ZH |",
"FCE (Yannakoudakis et al., 2011) is an early large-scale English grammatical error correction dataset, where raw texts are produced by English learners taking the First Certificate in English exams.",
"AESW (Daudaravicius et al., 2016) is a GEC dataset from a professional editing company.",
"In addition to common grammatical errors, AESW covers style issues as it contains texts mainly from scholarly papers.",
"JFLEG (Napoles et al., 2017) is a GEC dataset built from TOEFL exams, which does not force annotators to make minimal edits, preferring holistic fluency rewrites.",
"CMEG (Napoles et al., 2019) differs from general grammatical error correction datasets, whose texts come from second language learners.",
"It uses articles or blogs (e.g., Wiki, Yahoo) written by native English speakers to explore grammatical error phenomena in different domains.",
"CWEB (Flachs et al., 2020) also uses website texts in English, such as blogs.",
"The difference between CWEB and CMEG is that the percentage of erroneous tokens in the former is smaller than the latter as the purpose of CWEB is to study grammatical error correction in low error density domains.",
"CGEC (Zhao et al., 2018) is a large-scale Chinese grammatical error correction dataset, derived from wrong sentences written by Chinese learners in the process of learning Chinese as a second language.",
"In addition to the difference in text sources (i.e., human-written vs. machine-generated), other significant differences between our dataset and existing GEC datasets are that our dataset contains commonsense reasoning errors and provides associated text span annotations and rationales for errors, as shown in Table 1.",
"2.2 Commonsense Datasets",
"A variety of commonsense datasets have been proposed.",
"Roemmele et al. (2011) introduce COPA that focuses on commonsense causal reasoning.",
"Levesque et al. (2012) present Winograd Scheme Challenge (WSC), a dataset testing commonsense reasoning in the form of anaphora resolution.",
"Winogrande, a larger version of WSC containing 44,000 examples, is introduced by Sakaguchi et al. (2020).",
"Winowhy (Zhang et al., 2020a) asks annotators to provide reasons for their decisions on WSC questions.",
"In this aspect, the differences of our dataset from Winowhy are twofold.",
"First, we provide reasons for errors rather than correct decisions to anaphora.",
"Second, we provide reasons for all text generation errors, rather than only errors related to commonsense reasoning.",
"In addition to COPA and WSC-style datasets, many large crowdsourced datasets have also been proposed recently.",
"CommonsenseQA (Talmor et al., 2019), a commonsense question answering dataset, has been constructed from ConceptNet.",
"HellaSwag (Zellers et al., 2019b) and Abductive NLI (Bhagavatula et al., 2020) evaluate commonsense reasoning in the form of natural language inference.",
"CosmosQA (Huang et al., 2019) is a dataset with multi-choice questions that require commonsense reading comprehension.",
"Beyond datasets for evaluating commonsense reasoning, there are other datasets providing commonsense knowledge.",
"PIQA (Bisk et al., 2020) focuses on physical commonsense knowledge while SocialIQA (Sap et al., 2019) on social commonsense knowledge.",
"Commonsense datasets in multiple languages or languages other than English have also been created recently.",
"XCOPA (Ponti et al., 2020) is a multilingual dataset for causal commonsense reasoning in 11 typologically different languages.",
"(Table 2: examples of the level-1 error types.)",
"Chinese commonsense datasets, such as Mandarinograd (Bernard and Han, 2020), consisting of 154 Chinese Winograd schema examples, and CLUEWSC2020 (Xu et al., 2020), containing 1,838 Winograd schema examples, have also been proposed.",
"In the aspect of commonsense reasoning, our dataset differs from the aforementioned commonsense datasets in that we detect and annotate errors in machine-generated texts that violate common sense, rather than creating examples to examine the commonsense reasoning ability of machines.",
"Before crowdsourced workers manually annotate errors in machine-generated texts, we need to create an error taxonomy for such error coding.",
"Three principles are used to guide the design of the error taxonomy: coverage, exclusiveness and easiness.",
"The coverage rule requires that the taxonomy can cover almost all types of errors in machine-generated texts.",
"The exclusiveness requirement indicates that each error type is not overlapping with other error types in the taxonomy.",
"The final easiness principle means that the error coding system is easy for annotators to use.",
"With these three principles and aid from a linguist, we created an error taxonomy in a two-level hierarchy, which was revised in our pre-annotation stage.",
"The first level of the error taxonomy includes 5 error types.",
"Inappropriate Combination. Words or phrases that should not co-occur are combined in a sentence.",
"Such errors include not only lexical collocation errors but also long-distance syntactic constituency combination errors (e.g., inappropriate subject-object combination).",
"This error type is similar to the replacing error in some GEC datasets (e.g., CWEB (Flachs et al., 2020)), as one element of an inappropriate combination should usually be replaced with another expression.",
"As we want to find text spans associated with erroneous words/phrases, we term this error type inappropriate combination.",
"We further divide this error type into five subtypes at the second level.",
"Missing. Grammatical constituents or words are missing; 5 subtypes are defined under this error type.",
"Redundancy. Words or phrases are unnecessary; 5 subtypes are also defined.",
"Discourse Error. This error type covers inter-sentential cohesion/coherence errors (e.g., coreference errors, incorrect discourse connectives).",
"Commonsense Error. This error code is for errors related to commonsense reasoning.",
"We divide this error type into 8 subtypes according to the type of commonsense knowledge required (e.g., time, spatial, number).",
"All other errors that cannot be categorized into the aforementioned error types are grouped into Other.",
"Table 2 displays examples for the above defined error types.",
"The 24 error subtypes are displayed in Figure 2, and examples of these subtypes are shown in the Appendix.",
"Raw texts in our dataset are collected from a pretrained Chinese GPT-2 (NEZHA-Gen) 2 , which generates texts according to a system prompt.",
"NEZHA-Gen has 12 layers and 12 attention heads and is trained on Chinese Wikipedia and news data (see Appendix for more details on the hyperparameters of NEZHA-Gen).",
"As NEZHA-Gen easily generates high-quality texts from high-frequency prompt words, we create a list of prompt words according to their frequency to guarantee that the collected raw texts contain sufficient erroneous sentences.",
"By doing so, we have found that GPT-2 has a better chance of generating erroneous sentences with such prompts.",
"Specifically, we have randomly sampled 2M sentences from the data used to train NEZHA-Gen.",
"The sampled sentences are then word-segmented and POS-tagged by Baidu LAC tool 3 (Jiao et al., 2018).",
"We then select and sort nouns in a descending order according to their frequencies in the sampled corpus.",
"Nouns ranking in the range of top [40%, 60%] are selected as prompts.",
"We further filter out noisy texts from texts generated with these selected prompts.",
"Noisy texts are either texts containing no more than 15 characters or texts where Chinese characters account for less than 70% of all characters.",
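The prompt-selection and noise-filtering steps above are simple to express in code. The Python sketch below is illustrative only: the input formats and the noun POS tag "n" are assumptions, not details taken from the paper.

```python
from collections import Counter

def select_prompts(tagged_sentences, lo=0.4, hi=0.6):
    """Select nouns in the top [40%, 60%] frequency band as prompts.

    tagged_sentences: list of (word, pos) lists, e.g. output of a Chinese
    segmenter/POS-tagger such as Baidu LAC ("n" as the noun tag is assumed).
    """
    noun_counts = Counter(
        word for sent in tagged_sentences for word, pos in sent if pos == "n"
    )
    ranked = [w for w, _ in noun_counts.most_common()]  # descending frequency
    return ranked[int(len(ranked) * lo):int(len(ranked) * hi)]

def is_noisy(text, min_chars=15, min_zh_ratio=0.7):
    """Filter generated texts that are too short or insufficiently Chinese."""
    if len(text) <= min_chars:
        return True
    n_zh = sum("\u4e00" <= ch <= "\u9fff" for ch in text)
    return n_zh / len(text) < min_zh_ratio
```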
"There are 5 stages in error annotation, as shown in Figure 1.",
"We introduce each of them in this subsection.",
"(1) Erroneous text detection.",
"Texts generated by NEZHA-Gen with prompt words are presented to annotators one by one.",
"The first stage of annotation is hence to detect erroneous texts for subsequent annotations.",
"Each manually checked text is tagged as correct or erroneous accordingly.",
"(2) Erroneous and associated span detection.",
"The next task for annotators is to detect erroneous and associated text spans in detected erroneous texts.",
"For erroneous span detection, a text may contain several spans that can be edited, or the text can be corrected in different ways; which span is regarded as erroneous is therefore closely tied to the way we correct the text.",
"Therefore, the basic principle that guides the annotation of erroneous spans is also the rule that we use for error correction: making minimal edits, which is also used in GEC datasets (Flachs et al., 2020; Napoles et al., 2017).",
"(Footnote 2: github.com/huawei-noah/Pretrained-Language-Model/tree/master/NEZHA-Gen-TensorFlow; footnote 3: github.com/baidu/lac)",
"In addition to the minimal edit principle, we also provide the following specific rules for annotators: If annotators feel that a text is ambiguous and that it is difficult to correct the text, the text can be discarded without any further annotations.",
"If there are several spans that can be edited, the first erroneous span is preferred to be edited.",
"If the number of errors to be corrected in a text is larger than 4, the text is removed.",
"Following these rules, annotators have removed 4,291 texts, which account for only 8.36% of all detected erroneous texts in the first stage.",
"In addition to erroneous span annotation, unlike GEC datasets (Daudaravicius et al., 2016; Zhao et al., 2018), we also detect a text span that is closely related to the already detected erroneous span with respect to the error, and term this span as associated span.",
"In Table 2, we show examples with annotated erroneous and associated text spans.",
"For an inappropriate combination, the associated span is usually a span that should not co-occur with the erroneous span.",
"(3) Error correction.",
"After detecting erroneous spans in a given text, annotators are required to make corrections following the minimal edit principle.",
"Annotators are also required to use common words for error correction to make the corrected text as fluent as possible.",
"(4) Error type classification.",
"Once annotators have detected both erroneous and associated spans and provided corrections, they are quite familiar with these errors.",
"Hence, we now ask them to categorize the annotated errors into error types defined in our error taxonomy.",
"First, they select the primary type from the level-1 error types.",
"Then, if there are level-2 error subtypes, annotators continue to select a subtype.",
"We observe that errors annotated as Other account for only 5.70%, suggesting that our error taxonomy has good coverage.",
"(5) Rationale generation.",
"Partially inspired by previous datasets that provide explanations together with corresponding annotations, e.g., e-SNLI (Camburu et al., 2018), Winowhy (Zhang et al., 2020a) and R4C (Inoue et al., 2020), we ask annotators to give a reason for each error to justify their annotations.",
"Table 3: Inter-annotator agreement results.
| Task | IAA (%) | Kappa (%) |
|---|---|---|
| Erroneous text detection | 87.5 | 62.1 |
| Erroneous and associated span detection | 51.2 | – |
| Error type classification | 73.3 | 55.7 |",
"To the best of our knowledge, no GEC datasets provide explanations for error corrections.",
"We believe that annotated rationales can be used to improve the interpretability of neural models trained on our dataset.",
"In order to ensure the quality of error annotations, we have adopted a very strict quality control protocol during annotation.",
"First, we train two reviewers with 1K machine-generated texts.",
"The annotation consistency of the two reviewers on the 1K texts is very high, with an average IAA of 92.3% and Cohen's Kappa (McHugh, 2012) of 82.6% across the annotation tasks (1), (2) and (4).",
"For the texts annotated by the two reviewers, we have conducted an evaluation.",
"The average accuracies of the two reviewers across all tasks are 96.3% and 97.4%, respectively.",
"Second, 200 candidate workers participate in a pre-annotation stage.",
"The two reviewers review annotations from these participants to judge whether each annotation is correct.",
"Only participants who reach an accuracy of > 90% on every task can join the next stage.",
"As a result, 20 participants have passed the training in the pre-annotation stage.",
"We then divide them into two groups and ask them to annotate the same 500 texts.",
"The IAA and Cohen's Kappa are shown in Table 3, which suggests that the 20 annotators are ready for the final annotation.",
"Third, in order to further ensure annotation quality, we have carried out iterative verification and amendment.",
"The two reviewers will review each annotated text.",
"If an annotation is found to be wrong, the unqualified data are returned for amendment until they are qualified.",
"Following this strict quality control protocol, we complete the annotation on 47K selected machine-generated texts.",
"We randomly sample 1K annotated texts.",
"The accuracies on the three tasks (i.e., (1), (2) and (4)) are 89.6%, 88.5% and 84.3%, respectively.",
"Overall statistics.",
"We reshuffle all annotated texts and divide them into the training/dev/test sets with a proportion of 8:1:1.",
"As shown in Table 4, the training set contains 27,096 correct texts and 9,740 erroneous texts.",
"Both the development and test set contain 4,706 texts, among which 1,218 texts are erroneous.",
"Not surprisingly, most erroneous texts contain only one error.",
"After Chinese word segmentation via Jieba 4 , there are 1,208,719 tokens in total.",
"On average, there are 25.68 tokens in each text.",
"Annotation statistics.",
"As shown in Table 4, each erroneous text span contains 2.94 tokens while each associated span is composed of 4.27 tokens.",
"The average distance from an erroneous text span to its associated span is 7.03 tokens, which is about 1/3 of the average text length.",
"We further show the percentages of both level-1 and level-2 error types in Figure 2.",
"We observe that only 5.7% of cases cannot be categorized into our defined error types.",
"Inappropriate combination, missing and redundancy errors, which are the main error types in GEC datasets, account for 64.85% of errors in our dataset.",
"In addition to these errors, we see 18.96% commonsense errors and 10.48% discourse errors, which are usually not very common in GEC datasets.",
"The high percentages of these two error types in our dataset suggest that pretrained language models can be further improved on both commonsense reasoning and discourse modeling.",
"We use our dataset as a benchmark and propose 5 tasks that are defined for errors in texts generated by PLMs.",
"We provide baseline results for these tasks in this section.",
"We employ three BERT-style Chinese PLMs as baselines in our experiments, namely BERT-wwm-ext, RoBERTa-wwm-ext-large developed by Cui et al. (2020b) 5 and ALBERT-Chinese-large 6 .",
"For notational simplicity, we denote them as BERT_zh, RoBERTa_zh and ALBERT_zh, respectively.",
"Please refer to the Appendix for the model hyperparameter settings of each task.",
"Task definition.",
"This is a text classification task to judge whether a given text is erroneous.",
"In order to avoid data imbalance, we use the same number of correct and erroneous texts for training.",
"Model.",
"The three Chinese PLMs are used with standard text-classification fine-tuning.",
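As a rough illustration of this baseline, the sketch below fine-tunes a public Chinese RoBERTa checkpoint for binary classification with Hugging Face transformers; the checkpoint name, learning rate and toy batch are assumptions, not the paper's exact configuration.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Public checkpoint corresponding to RoBERTa-wwm-ext-large (Cui et al., 2020b).
NAME = "hfl/chinese-roberta-wwm-ext-large"
tok = AutoTokenizer.from_pretrained(NAME)
model = AutoModelForSequenceClassification.from_pretrained(NAME, num_labels=2)
optim = torch.optim.AdamW(model.parameters(), lr=2e-5)

# A balanced toy batch: label 0 = correct text, 1 = erroneous text.
texts = ["他昨天去了图书馆。", "他昨天去了去了图书馆馆。"]
labels = torch.tensor([0, 1])

batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
loss = model(**batch, labels=labels).loss  # standard classification loss
loss.backward()
optim.step()
```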
"Results.",
"All models perform just < 14% better than chance (random guessing), as shown in Table 5.",
"We also provide human performance on this task.",
"The best model, RoBERTa_zh, is worse than human performance by 26 points.",
"This suggests that automatically detecting erroneous texts generated by pretrained language models is very challenging even in the balanced classification scenario.",
"Task definition.",
"We define the detection of the two types of spans as a joint task as they are closely related to each other.",
"The joint task is similar to named entity recognition (NER) (a sequence labeling task) and requires recognizing the erroneous and associated text spans simultaneously.",
"NER-style word-level tags are hence annotated for each erroneous text.",
"Model.",
"The three Chinese PLMs with NER-like fine-tuning are evaluated for this task.",
"Since this is a 3-class token classification task, we report class-F1 on the erroneous and associated spans.",
"The class-F1 on class X is calculated like a normal F1 for a binary classification task, by treating the target class X as positive and all other classes as negative.",
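A minimal sketch of this class-F1, assuming token-level tags (0 = outside, 1 = erroneous span, 2 = associated span); whether scoring is done at the token or span level is not stated in the paper, so token-level scoring is an assumption here.

```python
def class_f1(pred_tags, gold_tags, target):
    """F1 for one class of a multi-class token-labeling task: the target
    class is treated as positive and all other classes as negative."""
    pairs = list(zip(pred_tags, gold_tags))
    tp = sum(p == target and g == target for p, g in pairs)
    fp = sum(p == target and g != target for p, g in pairs)
    fn = sum(p != target and g == target for p, g in pairs)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

pred = [0, 1, 1, 0, 2, 2, 0]
gold = [0, 1, 0, 0, 2, 2, 2]
print(class_f1(pred, gold, 1), class_f1(pred, gold, 2))  # per-class F1 scores
```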
"Results.",
"As shown in Table 5, all models are very poor in this task, indicating the difficulty of automatically detecting erroneous and associated spans.",
"However, we have found that models can benefit much from the joint detection over the detection of a single type of span (either erroneous or associated span).",
"Our preliminary experiments on detecting only the erroneous span show that the best model achieves just 26.42% erroneous class-F1 on the test set, while the joint task achieves 27.66%.",
"Task definition.",
"Again this is a text classification task.",
"We only perform classification over level-1 error types in the form of 5-way classification.",
"Model.",
"We use models similar to the first task.",
"Results.",
"The overall accuracy and MacroF 1 (shown in Table 5) are very low.",
"However, we find some error types are easier than others.",
"The accuracy on the classification of redundancy errors is 53.91%, the highest among all error types.",
"Task definition.",
"This task is the same as GEC, which transforms an erroneous text into a correct sequence.",
"Model.",
"We use the state-of-the-art BERT-GEC model (Kaneko et al., 2020) as the baseline for this task, which is an encoder-decoder model using representations learned by PLMs as additional inputs.",
"Following Wang et al. (2020), we feed representations learned by BERT_zh and RoBERTa_zh into the BERT-GEC model.",
"Table 5: Baseline results on the benchmark tasks.
Erroneous text detection (accuracy, %):
| Model | Dev | Test |
|---|---|---|
| Random | 50.00 | 50.00 |
| ALBERT_zh | 63.59 | 63.30 |
| BERT_zh | 65.15 | 64.94 |
| RoBERTa_zh | 66.67 | 66.79 |
| Human | 92.35 | 93.57 |
Erroneous and associated span detection (class-F1, %):
| Model | Dev erroneous | Dev associated | Test erroneous | Test associated |
|---|---|---|---|---|
| Random | 1.71 | 4.23 | 1.74 | 4.22 |
| ALBERT_zh | 27.36 | 27.44 | 28.10 | 26.24 |
| BERT_zh | 27.85 | 26.93 | 27.66 | 25.30 |
| RoBERTa_zh | 28.17 | 27.08 | 27.75 | 27.12 |
Error type classification:
| Model | Dev accuracy (%) | Dev Macro-F1 (%) | Test accuracy (%) | Test Macro-F1 (%) |
|---|---|---|---|---|
| Random | 24.25 | 20.00 | 24.25 | 20.00 |
| ALBERT_zh | 34.76 | 21.04 | 34.38 | 20.56 |
| BERT_zh | 44.35 | 33.01 | 41.31 | 31.05 |
| RoBERTa_zh | 44.44 | 36.10 | 44.16 | 37.20 |",
"Results.",
"We report precision, recall and F0.5 scores using the official MaxMatch tool (Dahlmeier and Ng, 2012).",
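For reference, F0.5 is the standard F-beta measure with beta = 0.5, which weights precision twice as heavily as recall:

$$F_{0.5} = \frac{(1 + 0.5^2) \cdot P \cdot R}{0.5^2 \cdot P + R}$$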
"As shown in Table 5, the best RoBERTa_zh GEC model achieves a very low F0.5 of 0.93% and 0.98% on the development and test set, respectively.",
"We speculate that the reasons for this are twofold.",
"First, comparing with GEC data on human-written texts, our dataset is relatively small.",
"Second, our dataset contains error types that are very different from those in previous GEC datasets (Zhao et al., 2018; Flachs et al., 2020).",
"Punctuation, spelling and other word/character-level errors, which are easy to correct, are rare in TGEA although they are quite common in GEC datasets.",
"In contrast, TGEA contains more complicated errors that can only be corrected with knowledge of common sense, long-distance or inter-sentential dependencies, etc.",
"5.5 Rationale Generation",
"Task definition.",
"This task is to generate the rationale behind the text generation errors from an erroneous text.",
"Model.",
"We use NEZHA-Gen as the baseline for this task.",
"We restructure annotated texts in our dataset in the form of {T, S, R}, where T is an erroneous sentence, S is the separator prompt 'The reason behind the errors in this sentence is:', and R is the error rationale provided by annotators.",
"We then fine-tune NEZHA-Gen on the reformatted training set and evaluate the fine-tuned model on the reformatted development and test set.",
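A minimal sketch of this reformatting; the Chinese wording of the separator prompt is an assumption, since the paper gives only its English gloss.

```python
SEP = "这句话中出现错误的原因是："  # "The reason behind the errors in this sentence is:"

def format_example(erroneous_text, rationale=None):
    """Turn a {T, S, R} triple into one string: at training time the gold
    rationale R is appended; at inference time the model continues the prompt."""
    prompt = erroneous_text + SEP
    return prompt + rationale if rationale is not None else prompt

train_str = format_example("他在水里点燃了火柴。", "火柴在水中无法点燃。")  # toy example
```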
"We report BLEU (Papineni et al., 2002), Rouge-L (Lin, 2004) and BERT Score (Zhang et al., 2020c).",
"Results.",
"As expected, results on these metrics are very low due to the high difficulty of this task.",
"We analyze generated texts from the baseline and find that generated rationales are usually much longer than reference rationales provided by human annotators.",
"This could result in the low BLEU score since long hypotheses are penalized in BLEU computation.",
"We also experiment with zero-shot generation on the test set.",
"The results are {BLEU = 0.04%, Rouge-L = 6.83%, BERTScore = 54.27%}, indicating that fine-tuning on the annotated training set can improve this task.",
"We suggest that this generation task could be reformulated as a multi-choice question answering task by providing alternative rationales as distractors, similar to VCR (Zellers et al., 2019a).",
"We leave this to our future work.",
"Since we use machine-generated texts for error annotation, the hyperparameters of models (e.g., sampling strategies, model size), model types (e.g., GPT-2, GPT-3 or other PLMs for text generation), genres of texts used to train PLMs, etc., all have impacts on the generated texts and hence on error types and error distribution.",
"A straightforward way to mitigate this issue is to collect raw texts from multiple models with different hyperparameters, neural architectures and text genres.",
"This will lead to an expanded dataset with a much larger number of instances to be manually annotated, which is expensive and time-consuming.",
"Yet another issue is that it may result in a heterogeneous dataset, due to inconsistency across different models and the difficulty of setting the proportion of each data source.",
"Instead, we focus on consistently annotating errors for texts generated from a single source.",
"In order to make TGEA as general and representative as possible, we use GPT-2 that is not only currently state of the art in text generation but also easily available.",
"We also adopt standard and widely-used hyperparameters (see Appendix for more details) for NEZHA-Gen to generate texts.",
"Additionally, we use a random sampling strategy with top-k = 30.",
"To set k, we analyzed 500 examples with different values of k and found that adjusting k has a noticeable impact on the percentage of redundancy errors.",
"Except for the extreme case of k = 1 , the types of errors and the distribution of them do not change significantly.",
"Take commonsense errors as an example, which show the biggest difference from human-written texts.",
"When k varies in the range {5, 10, 20, 30, 50}, the percentage of commonsense errors is 18.6% ± 5.8%.",
"Redundancy errors account for > 95% when k = 1 (while commonsense errors account for 0.8%), but sharply drop to 37.4% at k = 5, and the form of repetition changes from same-word repetition to a mix of synonymous and same-word repetition, suggesting that a simple repetition penalty may not be sufficient to deal with semantic redundancy.",
"When k ∈ {10, 20, 30, 50}, the percentage of redundancy errors is very close to the result reported in Figure 2.",
"When k > 30, many generated sentences are completely incomprehensible.",
"A larger k will also reduce the generation efficiency.",
"Therefore, we chose k = 30 as a trade-off between text quality and generation efficiency.",
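For concreteness, top-k sampling itself can be sketched in a few lines; this is a generic implementation, not NEZHA-Gen's internal code, and the vocabulary size is illustrative.

```python
import torch

def top_k_sample(logits, k=30):
    """Sample the next token id from the k most probable tokens only."""
    topk_vals, topk_idx = torch.topk(logits, k)
    probs = torch.softmax(topk_vals, dim=-1)
    return topk_idx[torch.multinomial(probs, num_samples=1)]

next_id = top_k_sample(torch.randn(21128), k=30)  # 21128: a typical Chinese vocab size
```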
"In this paper, we have presented TGEA, the first dataset with a variety of manual annotations of errors occurring in texts generated by pretrained language models.",
"For each erroneous text generated by a Chinese GPT-2 model, our crowdsourced annotators detect erroneous text spans with their associated text spans and provide error types defined in a two-level hierarchical taxonomy as well as rationales behind detected errors.",
"We elaborate on the 5 annotation stages for building TGEA under a strict annotation quality control protocol.",
"We also report baseline results of the 5 benchmark tasks on TGEA.",
"The low results suggest that our dataset is a challenging testbed for future work on automatic detection of erroneous spans and types as well as producing error corrections and rationales for texts generated by PLMs.",
"TGEA is featured with wide error type coverage, rich semantic annotation and functional diversity, which can not only be used for deep diagnostic analysis on the text generation capability of pretrained language models, but also facilitate and promote the research of automatic and interpretable error correction for PLM-generated texts.",
"The present research was supported by Huawei.",
"We would like to thank the anonymous reviewers for their insightful comments.",
"We also want to thank MindSpore 7 , a new deep learning computing framework, for its partial support of this work.",
"The corresponding author is Deyi Xiong ([email protected])."
]
| [
"objective",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"abstain",
"objective",
"abstain",
"method",
"result",
"result",
"objective",
"objective",
"method",
"result",
"abstain",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"abstain",
"objective",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"result",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"method",
"method",
"result",
"result",
"abstain",
"other",
"other",
"other",
"other"
]
|
[
"Abstractive summarization for long-document or multi-document remains challenging for the Seq2Seq architecture, as Seq2Seq is not good at analyzing long-distance relations in text.",
"In this paper, we present BASS, a novel framework for Boosting Abstractive Summarization based on a unified Semantic graph, which aggregates co-referent phrases distributing across a long range of context and conveys rich relations between phrases.",
"Further, a graph-based encoder-decoder model is proposed to improve both the document representation and summary generation process by leveraging the graph structure.",
"Specifically, several graph augmentation methods are designed to encode both the explicit and implicit relations in the text while the graph-propagation attention mechanism is developed in the decoder to select salient content into the summary.",
"Empirical results show that the proposed architecture brings substantial improvements for both long-document and multi-document summarization tasks.",
"Nowadays, sequence-to-sequence (Seq2Seq) summarization models have gained unprecedented popularity (Rush et al., 2015; See et al., 2017; Lewis et al., 2020).",
"However, complex summarization scenarios such as long-document or multi-document summarization (MDS), still bring great challenges to Seq2Seq models (Cohan et al., 2018; Liu et al., 2018).",
"In a long document, numerous details and salient content may be distributed evenly (Sharma et al., 2019), while multiple documents may contain repeated, redundant or contradictory information (Radev, 2000).",
"These problems make Seq2Seq models struggle with content selection and organization, which mainly depend on the long source sequence (Shao et al., 2017).",
"(Footnote: This work was done during an internship at Baidu Inc.)",
"Thus, how to exploit deep semantic structure in the complex text input is a key to further promote summarization performance.",
"Compared with sequence, graph can aggregate relevant disjoint context by uniformly representing them as nodes and their relations as edges.",
"This greatly benefits global structure learning and long-distance relation modeling.",
"Several previous works have attempted to leverage sentence-relation graph to improve long sequence summarization, where nodes are sentences and edges are similarity or discourse relations between sentences (Li et al., 2020).",
"However, the sentence-relation graph is not flexible for fine-grained (such as entity-level) information aggregation and relation modeling.",
"Some other works also proposed to construct local knowledge graph by OpenIE to improve Seq2Seq models (Fan et al., 2019; Huang et al., 2020).",
"However, the OpenIE-based graph only contains sparse relations between partially extracted phrases, which cannot reflect the global structure and rich relations of the overall sequence.",
"For better modeling the long-distance relations and global structure of a long sequence, we propose to apply a phrase-level unified semantic graph to facilitate content selection and organization.",
"Based on fine-grained phrases extracted from dependency parsing, our graph is suitable for information aggregation with the help of coreference resolution that substantially compresses the input and benefits content selection.",
"Furthermore, relations between phrases play an important role in organizing the salient content when generating summaries.",
"For example, in Figure 1 the phrases Albert Einstein, the great prize and explanation of the photoelectric effect, which are distributed across different sentences, are easily aggregated through their semantic relations to compose the final summary sentence.",
"We further propose a graph-based encoder-decoder model based on the unified semantic graph.",
"The graph-encoder effectively encodes long sequences by explicitly modeling the relations between phrases and capturing the global structure based on the semantic graph.",
"Besides, several graph augmentation methods are also applied during graph encoding to tap the potential semantic relations.",
"For the decoding procedure, the graph decoder incorporates the graph structure by graph propagate attention to guide the summary generation process, which can help select salient content and organize them into a coherent summary.",
"We conduct extensive experiments on both the long-document summarization dataset BIGPATENT and MDS dataset WikiSUM to validate the effectiveness of our model.",
"Experiment results demonstrate that our graph-based model significantly improves the performance of both long-document and multi-document summarization over several strong baselines.",
"Our main contributions are summarized as follows: We present the unified semantic graph which aggregates co-referent phrases distributed in context for better modeling the long-distance relations and global structure in long-document summarization and MDS.",
"We propose a graph-based encoder-decoder model to improve both the document representation and summary generation process of the Seq2Seq architecture by leveraging the graph structure.",
"Automatic and human evaluation on both long-document summarization and MDS outperform several strong baselines and validate the effectiveness of our graph-based model.",
"Abstractive summarization aims to generate a fluent and concise summary for the given input document (Rush et al., 2015).",
"Most works apply Seq2Seq architecture to implicitly learn the summarization procedure (See et al., 2017; Gehrmann et al., 2018; Paulus et al., 2017; Celikyilmaz et al., 2018).",
"More recently, significant improvements have been achieved by applying pre-trained language models as encoder (Liu and Lapata, 2019b; Rothe et al., 2020) or pre-training the generation process leveraging a large-scale of unlabeled corpus (Dong et al., 2019; Lewis et al., 2020; Qi et al., 2020; Zhang et al., 2020a).",
"In MDS, most of the previous models apply extractive methods (Erkan and Radev, 2004; Cho et al., 2019).",
"Due to the lack of large-scale datasets, some attempts at abstractive methods transfer single-document summarization (SDS) models to MDS (Lebanoff et al., 2018; Yang et al., 2019) or use unsupervised methods based on auto-encoders (Chu and Liu, 2019; Brazinskas et al., 2020; Amplayo and Lapata, 2020).",
"After the release of several large MDS datasets (Liu et al., 2018; Fabbri et al., 2019), some supervised abstractive models for MDS appear (Liu and Lapata, 2019a; Li et al., 2020).",
"Their works also emphasize the importance of modeling cross-document relations in MDS.",
"Explicit structures play an important role in recent deep learning-based extractive and abstractive summarization methods (Li et al., 2018a,b; Liu et al., 2019a).",
"Different structures benefit summarization models from different aspects.",
"Constituency parsing greatly benefits content selection and compression for extractive models.",
"Table 1: Illustration of how the average number of nodes and edges in the graph changes when the input sequence becomes longer on WikiSUM.
| Input length | 800 | 1600 | 2400 | 3000 |
|---|---|---|---|---|
| #Nodes | 140 | 291 | 467 | 579 |
| #Edges | 154 | 332 | 568 | 703 |",
"Cao et al. (2015) propose to extract salient sentences based on their constituency parsing trees.",
"Xu and Durrett (2019) and Desai et al. (2020) jointly select and compress salient content based on syntax structure and syntax rules.",
"Dependency parsing helps summarization models in semantic understanding.",
"Jin et al. (2020) incorporate semantic dependency graphs of input sentences to help summarization models generate sentences with better semantic relevance.",
"Besides sentence-level structures, document-level structures also attract a lot of attention.",
"Fernandes et al. (2019) build a simple graph consisting of sentences, tokens and POS for summary generation.",
"By incorporating RST trees, Xu et al. (2020) propose a discourse-aware model to extract sentences.",
"Similarly, structures from semantic analysis also help.",
"Liu et al. (2015) and Liao et al. (2018) propose to guide summarization with Abstract Meaning Representation (AMR) for a better comprehension of the input context.",
"Li and Zhuge (2019) propose semantic-link-network-based MDS, but without graph neural networks.",
"Recently, the local knowledge graph by OpenIE attracts great attention.",
"Leveraging OpenIE extracted tuples, Fan et al. (2019) compress and reduce redundancy in multi-document inputs in MDS.",
"Their work mainly focuses on efficiency in processing long sequences.",
"Huang et al. (2020) utilize OpenIE-based graph for boosting the faithfulness of the generated summaries.",
"Compared with their work, our phrase-level semantic graph focuses on modeling long-distance relations and semantic structures.",
"In this section, we introduce the definition and construction of the unified semantic graph.",
"The unified semantic graph is a heterogeneous graph defined as G = ( V, E ) , where V and E are the set of nodes and edges.",
"Every node in V represents a concept merged from co-referent phrases.",
"For example, in Figure 1 the node Albert Einstein is merged from phases Albert Einstein and his which indicate the same person by coreference resolution.",
"As a heterogeneous graph, every node v ∈ V and every edge e_ij ∈ E in G belongs to a type of phrase and a type of dependency relation, respectively.",
"Determined by the types of the phrases they are merged from, nodes are categorized into three different types: noun phrase (N), verb phrase (V) and other phrase (O).",
"We neglect dependency relations in edges as they mainly indicate sentence syntax.",
"Instead, the meta-paths (Sun et al., 2011) in the unified semantic graph convey various semantic relations.",
"Notice that most O nodes, such as adjective and adverb phrases, function as modifiers, so the meta-path O-N indicates a modification relation.",
"The meta-path N-N between noun phrases represents an appositive (appositional) relation.",
"Furthermore, two-hop meta-path represents more complex semantic relations in graph.",
"For example, N-V-N like [Albert Einstein]-[won]-[the physics Nobel Prize] indicates an SVO (subject-verb-object) relation.",
"It is essential to effectively model the two-hop meta-path for complex semantic relation modeling.",
"To construct the semantic graph, we extract phrases and their relations from sentences by first merging tokens into phrases and then merging co-referent phrases into nodes.",
"We employ CoreNLP (Man-ning et al., 2014) to obtain coreference chains of the input sequence and the dependency parsing tree of each sentence.",
"Based on the dependency parsing tree, we merge consecutive tokens that form a complete semantic unit into a phrase.",
"Afterwards, we merge the same phrases from different positions and phrases in the same coreference chain to form the nodes in the semantic graph.",
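The merging procedure can be sketched as follows; the input formats (phrase occurrences, dependency edges between them, and coreference chains as index lists) are illustrative assumptions standing in for converted CoreNLP output.

```python
def build_unified_graph(phrases, dep_edges, coref_chains):
    """phrases: [(text, type), ...] for every phrase occurrence (types N/V/O);
    dep_edges: (i, j) pairs between phrase occurrences;
    coref_chains: lists of occurrence indices that refer to one concept."""
    # Occurrences in the same coreference chain share a canonical key;
    # otherwise occurrences with the same surface form are merged.
    key = {}
    for chain in coref_chains:
        for occ in chain:
            key[occ] = ("chain", min(chain))
    for i, (text, _) in enumerate(phrases):
        key.setdefault(i, ("text", text))

    node_id, nodes = {}, []
    for i, (text, ptype) in enumerate(phrases):
        if key[i] not in node_id:
            node_id[key[i]] = len(nodes)
            nodes.append((text, ptype))

    edges = {(node_id[key[i]], node_id[key[j]])
             for i, j in dep_edges if node_id[key[i]] != node_id[key[j]]}
    return nodes, edges

nodes, edges = build_unified_graph(
    [("Albert Einstein", "N"), ("won", "V"), ("the prize", "N"), ("He", "N")],
    [(0, 1), (1, 2)],
    [[0, 3]],  # "He" corefers with "Albert Einstein"
)
```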
"The final statistics of the unified semantic graph on WikiSUM are shown in Table 1, which indicates that the scale of the graph grows moderately as the input lengthens.",
"This also demonstrates how the unified semantic graph compresses long-text information.",
"In this section, we introduce our graph-based abstractive summarization model, which mainly consists of a graph encoder and a graph decoder, as shown in Figure 2.",
"In the encoding stage, our model takes a document or the concatenation of a set of documents as text input (represented as x = {x_k}), and encodes it with a text encoder to obtain a sequence of local token representations.",
"The graph encoder further takes the unified semantic graph as graph input (represented as G = (V, E) in Section 3.1), and explicitly models the semantic relations in the graph to obtain global graph representations.",
"Based on several novel graph-augmentation methods, the graph encoder also effectively taps the implicit semantic relations across the text input.",
"In the decoding stage, the graph decoder leverages the graph structure to guide the summary generation process by a novel graph-propagate attention, which facilitates salient content selection and organization for generating more informative and coherent summaries.",
"To better represent local features in sequence, we apply the pre-trained language model RoBERTa (Liu et al., 2019b) as our text encoder.",
"As the maximum positional embedding length of RoBERTa is 512, we extend the positional embedding length and randomly initialize the extended part.",
"To be specific, in every graph encoding layer, the representation of each node is updated only from its neighbors via self-attention.",
"After we obtain token representations by the text encoder, we further model the graph structure to obtain node representations.",
"We initialize node representations in the graph based on token representations and the token-to-node alignment information from graph construction.",
"After initialization, we apply graph encoding layers to model the explicit semantic relations features and additionally apply several graph augmentation methods to learn the implicit structure conveyed by the graph.",
"Node Initialization. Similar to graph construction in Section 3.2, we initialize node representations following the two merging levels: token merging and phrase merging.",
"The token merging compresses and abstracts local token features into higher-level phrase representations.",
"The phrase merging aggregates co-referent phrases in a wide context, which captures long-distance and cross-document relations.",
"For simplicity, both merging steps are implemented by average pooling.",
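A minimal PyTorch sketch of this initialization; for brevity the two pooling steps are collapsed into a single average over all token positions of a node, which differs slightly from averaging phrase means when phrase lengths vary.

```python
import torch

def init_nodes(token_reprs, node_token_ids):
    """token_reprs: [seq_len, d] token vectors from the text encoder;
    node_token_ids: for each node, the token positions of all phrase
    occurrences merged into that node."""
    return torch.stack([token_reprs[ids].mean(dim=0) for ids in node_token_ids])

h = torch.randn(8, 768)  # toy encoder output
node_reprs = init_nodes(h, [torch.tensor([0, 1]), torch.tensor([3, 4, 6])])
```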
"Graph Encoding Layer. Following previous works in graph-to-sequence learning (Koncel-Kedziorski et al., 2019; Yao et al., 2020), we apply Transformer layers for graph modeling, using the graph adjacency matrix as the self-attention mask.",
"Graph Augmentation. Following previous works (Bastings et al., 2017; Koncel-Kedziorski et al., 2019), we add reverse edges and self-loop edges to the graph, as the original directed edges alone are not enough for learning backward information.",
"To better utilize the properties of the unified semantic graph, we further propose two novel graph augmentation methods.",
"Supernode. As the graph becomes larger, noise introduced by imperfect graph construction also increases, which may cause disconnected sub-graphs.",
"To strengthen the robustness of graph modeling and learn better global representations, we add a special supernode connected with every other node in the graph to increase the connectivity.",
"Shortcut Edges. As indicated by previous works, graph neural networks are weak at modeling multi-hop relations (Abu-El-Haija et al., 2019).",
"However, as mentioned in section 3.1, the meta-paths of length two represent rich semantic structures that require further modeling the two-hop relations between nodes.",
"As illustrated in Figure 2, in a N-V-N meta-path [ Albert Einstein ]-[ was ]-[ a theoretical physicist ], the relations [ Albert Einstein ]-[ was ] and [ was ]-[ a theoretical physicist ] are obviously less important than the two-hop relation [ Albert Einstein ][ a theoretical physicist ].",
"Therefore we add shortcut edges between every node and its two-hop relation neighbors, represented as blue edges in Figure 2.",
"We have also attempted other complex methods such as MixHop (Abu-El-Haija et al., 2019), but we find shortcut edges are more efficient and effective.",
"The effectiveness of these graph augmentation methods has also been validated in section 6.2.",
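Both augmentations are easy to express on an adjacency matrix; the sketch below shows the shortcut-edge construction via a matrix product (the supernode would simply append a fully connected row and column).

```python
import torch

def add_shortcut_edges(adj):
    """Connect every node to the neighbors of its neighbors, e.g. the two
    N nodes in an N-V-N meta-path."""
    a = adj.bool()
    two_hop = (adj @ adj) > 0  # nodes reachable in exactly two steps
    return (a | two_hop).float()

adj = torch.tensor([[0., 1., 0.],  # a toy N -> V -> N chain
                    [0., 0., 1.],
                    [0., 0., 0.]])
print(add_shortcut_edges(adj))  # now also connects node 0 to node 2
```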
"Token and node representations benefit summary generation in different aspects.",
"Token representations are better at capturing local features while graph representations provide global and abstract features.",
"For leveraging both representations, we apply a stack of Transformer-based graph decoding layers as the decoder which attends to both representations and fuse them for generating summaries.",
"Let $y_t^{l-1}$ denote the representation of the $t$-th summary token output by the $(l-1)$-th graph decoding layer.",
"For the graph attention, we apply multi-head attention using $y_t^{l-1}$ as query and the node representations $V = \{v_j\}$ as keys and values:
$$\alpha_{t,j} = \frac{(y_t^{l-1} W_Q)(v_j W_K)^\top}{\sqrt{d_{head}}} \quad (1)$$
where $W_Q, W_K \in \mathbb{R}^{d \times d}$ are parameter weights and $\alpha_{t,j}$ denotes the salience score of node $j$ for $y_t^{l-1}$.",
"We then calculate the global graph vector $g_t$ as a weighted sum over the node values: $g_t = \sum_j \mathrm{Softmax}(\alpha_{t,j})(v_j W_V)$, where $W_V \in \mathbb{R}^{d \times d}$ is a learnable parameter.",
"We also obtain a contextualized text vector $c_t$ by a similar procedure, calculating multi-head attention between $y_t^{l-1}$ and the token representations.",
"Afterwards, we use a graph fusion layer, a feed-forward network that fuses the concatenation of the two features: $d_t^l = W_d^\top([g_t, c_t])$, where $W_d \in \mathbb{R}^{2d \times d}$ is the linear transformation parameter and $d_t^l$ is the hybrid representation of tokens and graph.",
"After a layer-norm and a feed-forward layer, the $l$-th graph decoding layer output $y_t^l$ is used as the input of the next layer, and in the final layer it is used for generating the $t$-th token.",
"Graph-propagate Attention. When applying multi-head attention to the graph, the decoder only attends to node representations linearly, neglecting the graph structure.",
"Inspired by Klicpera et al. (2019), we propose the graph-propagate attention to leverage the graph structure to guide the summary generation process.",
"By further utilizing semantic structure, the decoder is more efficient in selecting and organizing salient content.",
"Without extra parameters, the graph-propagation attention can be conveniently applied to the conventional multi-head attention for structure-aware learning.",
"Graph-propagate attention consists of two steps: salient score prediction and score propagation.",
"In the first step, we predict the salient score for every node linearly.",
"We use the attention scores $\alpha_t \in \mathbb{R}^{|v| \times C}$ from Equation 1 as salience scores, where $|v|$ is the number of nodes in the graph and $C$ is the number of attention heads.",
"$C$ can be regarded as $C$ digits or channels of the salience score for every node.",
"We then make the salient score structure-aware through score propagation.",
"Though PageRank can propagate salience scores over the entire graph, it leads to over-smoothed scores, as in every decoding step only part of the content is salient.",
"Therefore, for each node we only propagate its salience score $p$ times in the graph, aggregating at most $p$-hop relations.",
"Let $\alpha_t^0 = \alpha_t$ denote the initial salience score predicted in the previous step; the salience score after the $p$-th propagation is:
$$\alpha_t^p = \lambda \tilde{A} \alpha_t^{p-1} + (1 - \lambda)\,\alpha_t^0 \quad (2)$$
where $\tilde{A} = A D^{-1}$ is the degree-normalized adjacency matrix of the graph (the adjacency matrix $A$ contains self-loop and reverse edges), and $\lambda \in (0, 1]$ is the teleport probability: a salience score propagates towards neighbor nodes with probability $\lambda$ and restarts from the initial score with probability $1 - \lambda$.",
"The graph-propagation procedure can also be formulated in closed form as:
$$\alpha_t^p = \Big(\lambda^p \tilde{A}^p + (1 - \lambda) \sum_{i=0}^{p-1} \lambda^i \tilde{A}^i\Big)\,\alpha_t \quad (3)$$
After $p$ steps of salience score propagation, the graph vector is calculated as a weighted sum of the node values:
$$g'_t = \sum_j \mathrm{Softmax}(\alpha_{t,j}^p)(v_j W_V) \quad (4)$$
where, for convenience of presentation, the concatenation of multiple heads is omitted.",
"The output of fusing $g'_t$ and $c_t$ is then used to generate the $t$-th summary token, as described before.",
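The following single-head PyTorch sketch instantiates Equations 1-4; multi-head logic, the text attention and the graph fusion layer are omitted, and the scaling and variable names are simplifications of the paper's formulation.

```python
import torch
import torch.nn.functional as F

def graph_propagate_attention(y, v, A_hat, W_q, W_k, W_v, p=2, lam=0.9):
    """y: [d] decoder state; v: [n, d] node representations;
    A_hat: [n, n] degree-normalized adjacency with self-loop/reverse edges;
    lam: teleport probability (0.9 in the paper); p: propagation steps."""
    d = y.shape[-1]
    scores = (v @ W_k) @ (y @ W_q) / d ** 0.5        # Eq. 1: [n] salience scores
    alpha = scores
    for _ in range(p):                               # Eq. 2: score propagation
        alpha = lam * (A_hat @ alpha) + (1 - lam) * scores
    return F.softmax(alpha, dim=-1) @ (v @ W_v)      # Eq. 4: graph vector g'_t

n, d = 5, 16
W_q, W_k, W_v = (torch.randn(d, d) for _ in range(3))
g = graph_propagate_attention(torch.randn(d), torch.randn(n, d),
                              torch.eye(n), W_q, W_k, W_v)
```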
"In this section, we describe the datasets of our experiments and various implementation details.",
"We evaluate our model on an SDS dataset and an MDS dataset, namely BIGPATENT (Sharma et al., 2019) and WikiSUM (Liu et al., 2018).",
"BIGPATENT is a large-scale patent document summarization dataset with an average input of 3572.8 words and a reference with average length of 116.5 words.",
"BIGPATENT is a highly abstractive summarization dataset with salient content evenly distributed in the input.",
"We follow the standard splits of Sharma et al. (2019) for training, validation, and testing (1,207,222/67,068/67,072).",
"WikiSUM is a large-scale MDS dataset.",
"Following Liu and Lapata (2019a), we treat the generation of lead Wikipedia sections as an MDS task.",
"To be specific, we directly utilize the preprocessed results from Liu and Lapata (2019a), which split source documents into multiple paragraphs and rank the paragraphs based on their titles to select top-40 paragraphs as source input.",
"The average length of each paragraph and the target summary are 70.1 tokens and 139.4 tokens, respectively.",
"We concatenate all the paragraphs as the input sequence.",
"We use the standard splits of Liu and Lapata (2019a) for training, validation, and testing (1,579,360/38,144/38,205).",
"We train all the abstractive models by max likelihood estimation with label smoothing (label smoothing factor 0.1).",
"As we fine-tune the pretrained language model RoBERTa as the text encoder, we apply two different Adam optimizers (Kingma and Ba, 2015) with $\beta_1 = 0.9$ and $\beta_2 = 0.998$ to train the pretrained part and the other parts of the model (Liu and Lapata, 2019b).",
"The learning rate and warmup steps are 2e-3 and 20,000 for the pretrained part and 0.1 and 10,000 for other parts.",
"As observed in experiments, when the learning rate is high, graph-based models suffer from unstable training caused by gradient explosion in the text encoder.",
"Gradient clipping with a very small maximum gradient norm (0.2 in our work) solves this problem.",
"All the models are trained for 300,000 steps on BIGPATENT and WikiSUM with 8 GPUs (NVIDIA Tesla V100).",
"We apply dropout (with the probability of 0.1) before all linear layers.",
"In our model, the numbers of graph-encoder layers and graph-decoder layers are set to 2 and 6, respectively.",
"The hidden size of both graph encoding and graph decoding layers is 768 in alignment with RoBERTa, and the feed-forward size is 2048 for parameter efficiency.",
"For graph-propagation attention, the teleport probability $\lambda$ is 0.9 and the number of propagation steps $p$ is 2.",
"During decoding, we apply beam search with beam size 5 and length penalty with factor 0.9.",
"Trigram blocking is used to reduce repetitions.",
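Trigram blocking is typically implemented as a check applied to each beam hypothesis before extending it; a common version (in the spirit of Paulus et al., 2017) looks like the sketch below.

```python
def violates_trigram_blocking(tokens, candidate):
    """Return True if appending `candidate` to `tokens` would repeat a
    trigram already present in the hypothesis; such beams are pruned."""
    if len(tokens) < 2:
        return False
    new_trigram = tuple(tokens[-2:] + [candidate])
    seen = {tuple(tokens[i:i + 3]) for i in range(len(tokens) - 2)}
    return new_trigram in seen

print(violates_trigram_blocking(["the", "cat", "sat", "the", "cat"], "sat"))  # True
```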
"We evaluate the quality of generated summaries using ROUGE F1 (Lin, 2004) and BERTScore (Zhang et al., 2020b).",
"For ROUGE, we report unigram and bigram overlap between system summaries and reference summaries (ROUGE-1, ROUGE-2).",
"We report sentence-level ROUGE-L for the BIGPATENT dataset and summary-level ROUGE-L for the WikiSUM for a fair comparison with previous works.",
"We also report BERTScore F1 (footnote 2), a better metric for evaluating the semantic similarity between system summaries and reference summaries.",
"Results on MDS Table 2 summarizes the evaluation results on the WikiSUM dataset.",
"We compare our model with several strong abstractive and extractive baselines.",
"As listed in the top block, Lead and LexRank (Erkan and Radev, 2004) are two classic extractive methods.",
"The second block shows the results of several different abstractive methods.",
"TransS2S is the Transformer-based encoder-decoder model.",
"By replacing the Transformer encoder in TransS2S with BERT (Devlin et al., 2019) or RoBERTa and training with two optimizers (Liu and Lapata, 2019b), we obtain two strong baselines BERTS2S and RoBERTaS2S.",
"T-DMCA is the best model presented by Liu et al. (2018) for summarizing long sequence.",
"HT is the best model presented by Liu and Lapata (2019a) with the hierarchical Transformer encoder and a flat Transformer decoder.",
"GraphSum, presented by Li et al. (2020), leverages paragraph-level explicit graph by the graph encoder and decoder, which gives the current best performance on WikiSUM.",
"We report the best results of GraphSum with RoBERTa; the input length is about 2,400 tokens.",
"(Footnote 2: We apply the roberta-large_L17_no-idf version as the metric model and rescale with the baseline setting, following the suggestions at https://github.com/Tiiiger/bert_score.)",
"The last block reports the results of our model BASS with the input lengths of 2400 and 3000.",
"Compared with all the baselines, our model BASS achieves great improvements on all the four metrics.",
"The results demonstrate the effectiveness of our phrase-level semantic graph compared with other RoBERTa-based models: RoBERTaS2S (without graph) and GraphSum (sentence-relation graph).",
"Furthermore, the phrase-level semantic graph improves the semantic relevance between generated summaries and references, as the BERTScore improvements of BASS are substantial.",
"Results on SDS Table 3 shows our experiment results along with other SDS baselines.",
"Similar to WikiSUM, we also report LexRank, TransS2S, and RoBERTaS2S.",
"Besides, we report the performance of several other baselines.",
"ORACLE is the upper bound of current extractive models.",
"Seq2seq is an LSTM encoder-decoder with an attention mechanism (Bahdanau et al., 2015).",
"Pointer and Pointer+cov are pointer-generation (See et al., 2017) with and without coverage mechanism, respectively.",
"FastAbs (Chen and Bansal, 2018) is an abstractive method by jointly training sentence extraction and compression.",
"TLM (Pilault et al., 2020) is a recent long-document summarization method based on language model.",
"We also report the performance of the recent pretraining-based SOTA text generation models BART (large) and Pegasus (base) on BIGPATENT, which both have a parameter size of 406M.",
"The last block shows the results of our model, which has a parameter size of 201M.",
"The results show that BASS consistently outperforms RoBERTaS2S and is comparable with current large SOTA models at only half the parameter size.",
"This further demonstrates the effectiveness of our graph-augmented model on long-document summarization.",
"For a thorough understanding of BASS, we conduct several experiments on the WikiSUM test set, including the effects of the graph structure and input length.",
"We also validate the effectiveness of the graph-augmentation methods in graph encoder and the graph-propagation attention in graph decoder by ablation studies.",
"Graph Structure Analysis To analyze how the unified semantic graph benefits summarization learning, we conduct ablation studies on the graph structures.",
"Illustrated in Table 4, after removing explicit relations between phrases by fully connecting all the nodes, the R-1 metric drops obviously which indicates the relations between phrases improve the informativeness of generated summaries.",
"After further removing phrase merging, we observe a performance decrease in all the metrics, which indicates the long-distance relations benefit both the informativeness and fluency of summary.",
"Ablation Study The experimental results of removing supernode and shortcut edges from the unified semantic graph prove the effectiveness of graph augmentation methods in the graph encoder.",
"Experimental results without the gaph-propagation attention confirms that the structure of the unified semantic graph is also beneficial for decoding.",
"Overall, the performance of the model drops the most when removing shortcut edges which indicates the rich potential information is beneficial for summarization.",
"Finally, after removing all the graph-relevant components, performance dramatically drops on all the metrics.",
"Length Comparison According to Liu et al. (2018), input length affects the summarization performance seriously for Seq2Seq models as most of them are not efficient at handling longer sequences.",
"The basic TransS2S achieves its best performance at the input length of 800, while longer input hurts performance.",
"Several previous models achieve bet-0.8k 1.6k 2.4k 3k 41.0 41.5 42.0 42.5 43.0 43.5 44.0 R-1 HTGSumBASS 0.8k 1.6k 2.4k 3k 26.5 27.0 27.5 28.0 28.5 R-2 0.8k 1.6k 2.4k 3k 35.0 35.5 36.0 36.5 37.0 37.5 38.0 R-L Figure 3: Comparison of HT, GraphSum (GSum in fig-ure), BASS under various length of input tokens.",
"ter performance when utilizing longer sequences.",
"As illustrated in Figure 3, the performance of HT remains stable when the input length is longer than 800.",
"Leveraging the power of sentence-level graph, GraphSum achieves the best performance at 2,400 but its performance begins to decrease when the input length reaches 3000.",
"Unlike previous methods, ROUGE-1 of BASS significantly increased in 3000 indicates that the unified semantic graph benefits salient information selection even though the input length is extreme.",
"Abastractiveness Analysis We also study the abstractiveness of BASS and other summarization systems on WikiSUM.",
"We calculate the average novel n-grams to the source input, which reflects the abstractiveness of a summarization system (See et al., 2017).",
"Illustrated in Figure 4, BASS generates more abstract summaries comparing to recent models, GraphSum, HT, and weaker than RoBERTaS2S.",
"Summarized from observation, we draw to a conclusion that RoBERTaS2S usually generates context irrelevant contents due to the strong pretrained RoBERTa encoder but a randomly initialized decoder that relays on the long-text input poorly.",
"Graph-based decoders of BASS and GraphSum alleviate this phenomenon.",
"In addition to the above automatic evaluations, we also conduct human evaluations to assess the performance of systems.",
"Because the patent dataset BIGPATENT contains lots of terminologies and requires professional background knowledge for annotators, we select WikiSUM as the dataset for evaluations.",
"As Wikipedia entries can be summarized in many different aspects, annotators will naturally favor systems with longer outputs.",
"Thus we first filter instances that the summaries of different systems are significantly different in lengths 1 2 3 N-gram 0 10 20 30 40 50 60 70 80 % n o v e l n g r a m HT TransformerS2S GraphSum BASS RoBERTaS2S Reference Figure 4: Illustration of novel n-grams in generated summaries form different systems.",
"and then randomly select 100 instances.",
"We invite 2 annotators to assess the summaries of different models independently.",
"Annotators evaluate the overall quality of summaries by ranking them taking into account the following criterias: (1) Informativeness : whether the summary conveys important and faithful facts of the input?",
"(2) Fluency : whether the summary is fluent, grammatical, and coherent?",
"(3) Succinctness : whether the summary is concise and dose not describe too many details?",
"Summaries with the same quality get the same order.",
"All systems get score 2,1,-1,2 for ranking 1,2,3,4 respectively.",
"The rating of each system is averaged by the scores of all test instances.",
"The results of our system and the other three strong baselines are shown in Table 6.",
"The percentage of rankings and the overall scores are both reported.",
"Summarized from the results, our model BASS is able to generate higher quality summaries.",
"Some examples are also shown in the appendix.",
"Specifically, BASS generates fluent and concise summaries containing more salient content compared with other systems.",
"The human evaluation results further validate the effectiveness of our semantic graph-based model.",
"In this paper, we propose to leverage the unified semantic graph to improve the performance of neural abstractive models for long-document summarization and MDS.",
"We further present a graph-based encoder-decoder model to improve both the document representation and summary generation process by leveraging the graph structure.",
"Experiments Model 1 2 3 4 Rating TransS2S 0.32 0.14 0.09 0.45 0 .",
"on both long-document summarization and MDS show that our model outperforms several strong baselines, which demonstrates the effectiveness of our graph-based model and the superiority of the unified semantic graph for long-input abstractive summarization.",
"Though remarkable achievements have been made by neural network-based summarization systems, they still do not actually understand languages and semantics.",
"Incorporating language structures in deep neural networks as prior knowledge is a straightforward and effective way to help summarization systems, as proved by this work and previous works.",
"This work was partially supported by National Key R&D Program of China (No. 2020YFB1406701) and National Natural Science Foundation of China (No. 61876009)."
]
| [
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"objective",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"other"
]
|
[
"Hierarchical text classification is an essential yet challenging subtask of multi-label text classification with a taxonomic hierarchy.",
"Existing methods have difficulties in modeling the hierarchical label structure in a global view.",
"Furthermore, they cannot make full use of the mutual interactions between the text feature space and the label space.",
"In this paper, we formulate the hierarchy as a directed graph and introduce hierarchy-aware structure encoders for modeling label dependencies.",
"Based on the hierarchy encoder, we propose a novel end-to-end hierarchy-aware global model (Hi-AGM) with two variants.",
"A multi-label attention variant (HiAGM-LA) learns hierarchy-aware label embeddings through the hierarchy encoder and conducts inductive fusion of label-aware text features.",
"A text feature propagation model (HiAGM-TP) is proposed as the deductive variant that directly feeds text features into hierarchy encoders.",
"Compared with previous works, both HiAGM-LA and HiAGM-TP achieve significant and consistent improvements on three benchmark datasets.",
"Text classification is widely used in Natural Language Processing (NLP) applications, such as sentimental analysis (Pang and Lee, 2007), information retrieval (Liu et al., 2015), and document categorization (Yang et al., 2016).",
"Hierarchical text classification (HTC) is a particular multi-label text classification (MLC) problem, where the classification result corresponds to one or more nodes of a taxonomic hierarchy.",
"The taxonomic hierarchy is commonly modeled as a tree or a directed acyclic graph, as depicted in Figure",
"1. Existing approaches for HTC could be categorized into two groups: local approach and global This work was done during intern at Alibaba Group.",
"approach.",
"The first group tends to constructs multiple classification models and then traverse the hierarchy in a top-down manner.",
"Previous local studies (Wehrmann et al., 2018; Shimura et al., 2018; Banerjee et al., 2019) propose to overcome the data imbalance on child nodes by learning from parent one.",
"However, these models contain a large number of parameters and easily lead to exposure bias for the lack of holistic structural information.",
"The global approach treats HTC problem as a flat MLC problem, and uses one single classifier for all classes.",
"Recent global methods introduce various strategies to utilize structural information of top-down paths, such as recursive regularization (Gopal and Yang, 2013), reinforcement learning (Mao et al., 2019) and meta-learning (Wu et al., 2019).",
"There is so far no global method that encodes the holistic label structure for label correlation features.",
"Moreover, these methods still exploit the hierarchy in a shallow manner, thus ignoring the fine-grained label correlation information that has proved to be more fruitful in our work.",
"In this paper, we formulate the hierarchy as a directed graph and utilize prior probabilities of label dependencies to aggregate node information.",
"A hierarchy-aware global model (HiAGM) is proposed to enhance textual information with the label structural features.",
"It comprises a traditional text encoder for extracting textual information and a hierarchy-aware structure encoder for modeling hierarchical label relations.",
"The hierarchy-aware structure encoder could be either a TreeLSTM or a hierarchy-GCN where hierarchical prior knowledge is integrated.",
"Moreover, these two structure encoders are bidirectionally calculated, allowing them to capture label correlation information in both top-down and bottom-up manners.",
"As a result, HiAGM is more robust than previous top-down models and is able to alleviate the problems caused by exposure bias and imbalanced data.",
"To aggregate text features and label structural features, we present two variants of HiAGM, a multi-label attention model HiAGM-LA and a text feature propagation model HiAGM-TP.",
"Both variants extract hierarchy-aware text features based on the structure encoders.",
"HiAGM-LA extracts the inductive label-wise text features while HiAGM-TP generates hybrid information in a deductive manner.",
"Specifically, HiAGM-LA updates the label embedding across the holistic hierarchy and then employs node outputs as the hierarchy-aware label representations.",
"Finally, it conducts multi-label attention for label-aware text features.",
"On the other hand, HiAGM-TP directly utilizes text features as the input of the structure encoder in a serial dataflow.",
"Hence it propagates textual information throughout the overall hierarchy.",
"The hidden state of each node in the entire hierarchy represents the class-specific textual information.",
"The major contributions of this paper are: With the prior hierarchy knowledge, we adopt typical structure encoders for modeling label dependencies in both top-down and bottom-up manners, which has not been investigated for hierarchical text classification.",
"We propose a novel end-to-end hierarchy-aware global model (HiAGM).",
"We further present two variants for label-wise text features, a hierarchy-aware multi-label attention model (HiAGM-LA) and a hierarchy-aware text feature propagation model (HiAGM-TP).",
"We empirically demonstrate that both variants of HiAGM achieve consistent improvements on various datasets when using different structure encoders.",
"Our best model outperforms the state-of-the-art model by 3.25% of Macro-F1 and 0.66% of Micro-F1 on RCV1-V2.",
"Existing works for HTC could be categorized into local and global approaches.",
"Local approaches could be subdivided into local classifier per node (LCN) (Banerjee et al., 2019), local classifier per parent node (LCPN) (Dumais and Chen, 2000), and local classifier per level (LCL)(Shimura et al., 2018; Wehrmann et al., 2018; Kowsari et al., 2017).",
"Banerjee et al. (2019) transfers parameters of the parent model for child models as LCN.",
"Wehrmann et al. (2018) alleviates exposure bias problem by the hybrid of LCL and global optimizations.",
"Peng et al. (2018) decomposes the hierarchy into subgraphs and conducts Text-GCN on n-gram tokens.",
"The global approach improves flat MLC models with the hierarchy information.",
"Cai and Hofmann (2004) modifies SVM to Hierarchical-SVM by decomposition.",
"Gopal and Yang (2013) proposes a simple recursive regularization of parameters among adjacent classes.",
"Deep learning architectures are also employed in global models, such as sequence-to-sequence (Yang et al., 2018), meta-learning (Wu et al., 2019), reinforcement learning (Mao et al., 2019), and capsule network (Peng et al., 2019).",
"Those models mainly focus on improving decoders based on the constraint of hierarchical paths.",
"In contrast, we propose an effective hierarchy-aware global model, HiAGM, that extracts label-wise text features with hierarchy encoders based on prior hierarchy information.",
"Moreover, the attention mechanism is introduced in MLC by Mullenbach et al. (2018) for ICD coding.",
"Rios and Kavuluru (2018) trains label representation through basic GraphCNN and conducts mutli-label attention with residual shortcuts.",
"At-tentionXML (You et al., 2019) converts MLC to a multi-label attention LCL model by label clusters.",
"Huang et al. (2019) improves HMCN (Wehrmann et al., 2018) with label attention per level.",
"Our HiAGM-LA, however, employs multi-label attention in a single model with a simplified structure encoder, reducing the computational complexity.",
"Recent works, in semantic analysis (Chen et al., 2017b), semantic role labeling (He et al., 2018) and machine translation (Chen et al., 2017a), shows the improvement on sentence representation of syntax 1 https://github.com/Alibaba-NLP/HiAGM Figure 2: Example of the taxonomic hierarchy.",
"encoder, such as Tree-Based RNN (Tai et al., 2015; Chen et al., 2017a) and GraphCNN (Marcheggiani and Titov, 2017).",
"We modify those structure encoders for HTC with fine-grained prior knowledge in both top-down and bottom-up manners.",
"Hierarchical text classification (HTC), a subtask of text classification, organizes the label space with a predefined taxonomic hierarchy.",
"The hierarchy is predefined based on holistic corpus.",
"The hierarchy groups label subsets according to class relations.",
"The taxonomic hierarchy mainly contains the treelike structure and the directed acyclic graph (DAG) structure.",
"Note that DAG can be converted into a tree-like structure by distinguishing each label node as a single-path node.",
"Thus, the taxonomic hierarchy can be simplified as a tree-like structure.",
"As illustrated in Figure 2, we formulate a taxonomic hierarchy as a directed graph G = ( V, E , E ) where V refers to the set of label nodes V = { v 1 , v 2 , . . . , v C } and C denotes the number of label nodes.",
"E = { ( v i , v j ) | i V, j child ( i ) } is the top-down hierarchy path and E = { ( v j , v i ) | i V, j child ( i ) } is the bottom-up hierarchy path.",
"Formally, we define HTC as H = ( X, L ) with a sequence of text objects X = ( x 1 , x 2 , . . . , x N ) and an aligned sequence of supervised label sets L = ( l 1 , l 2 , . . . , l N ) .",
"As depicted in Figure 1, each sample x i corresponds to a label set l i that includes multiple classes.",
"Those corresponding classes belong to either one or more sub-paths in the hierarchy.",
"Note that the sample belongs to the parent node v i in the condition pertaining to the child node v j child ( i ) .",
"As depicted in Figure 3, we propose a H ierarchy-A ware G lobal M odel (HiAGM) that leverages the fine-grained hierarchy information and then aggregates label-wise text features.",
"HiAGM consists of a traditional text encoder for textual information and a hierarchy-aware structure encoder for hierarchical label correlation features.",
"We present two variants of HiAGM for hybrid information aggregation, a multi-label attention model (HiAGM-LA) and a text feature propagation model (HiAGM-TP).",
"HiAGM-LA updates label representations with the structure encoder and generates label-aware text features with multi-label attention mechanism.",
"HiAGM-TP propagates text representations throughout the holistic hierarchy, thus obtaining label-wise text features with the fusion of label correlations.",
"The taxonomic hierarchy describes the hierarchical relations among labels.",
"The major bottleneck of HTC is how to make full use of this established structure.",
"Previous studies directly utilize this hierarchy path in a static method based on a pipeline framework, hierarchical model or label assignment model.",
"In contrast, based on Bayesian statistical inference, HiAGM leverages the prior knowledge of label correlations regarding the predefined hierarchy and corpus.",
"We exploit the prior probability of label dependencies as prior hierarchy knowledge.",
"Suppose that there is a hierarchy path e i,j between the parent node v i and child node v j .",
"This edge feature f ( e i,j ) is represented by the prior probability P ( U j | U i ) and P ( U i | U j ) as: P ( U j | U i ) = P ( U j U i ) P ( U i ) = P ( U j ) P ( U i ) = N j N i , P ( U i | U j ) = P ( U i U j ) P ( U j ) = P ( U j ) P ( U j ) = 1 .",
"0 , (1) where U k means the occurrence of v k and P ( U j | U i ) is the conditional probability of v j given that v i occurs.",
"P ( U j U i ) is the probability of { v j , v i } occurring simultaneously.",
"N k refers to the number of U k in the training subset.",
"Note that the hierarchy ensures U k given that v child ( k ) occurs.",
"We rescale and normalize the prior probabilities of child nodes v child ( k ) to sum total to",
"1. Figure 3: The overall structure of our hierarchy-aware global model.",
"Tree-LSTM and graph convolutional neural networks (GCN) are widely used as structure encoders for aggregating node information in NLP (Tai et al., 2015; Chen et al., 2017a; He et al., 2018; Rios and Kavuluru, 2018).",
"As depicted in Figure 3, HiAGM models fine-grained hierarchy information based on the hierarchy-aware structure encoder.",
"Based on the prior hierarchy information, we improve typical structure encoders for the directed hierarchy graph.",
"Specifically, the top-down dataflow employs the prior hierarchy information as f c ( e i,j ) = N j N i while the bottom-up one adopts f p ( e i,j ) = 1 .",
"0 .",
"Bidirectional Tree-LSTM Tree-LSTM could be utilized as our structure encoder.",
"The implementation of Tree-LSTM is similar to syntax en-coders(Tai et al., 2015; Zhang et al., 2016; Li et al., 2018).",
"The predefined hierarchy is identical to all samples, which allows the mini-batch training method for this recursive computational module.",
"The node transformation is as follows: i k = ( W ( i ) v k + U ( i ) (cid:101) h k + b ( i ) ) , f k,j = ( W ( f ) v k + U ( f ) h j + b ( f ) ) , o k = ( W ( o ) v k + U ( o ) (cid:101) h k + b ( o ) ) , u k = tanh ( W ( u ) v k + U ( u ) (cid:101) h k + b ( u ) ) , c k = i k (cid:12) u k + (cid:88) j f k,j (cid:12) c j , h k = o k (cid:12) tanh ( c k ) , (2) where h k and c k represent the hidden state and memory cell state of node k respectively.",
"To induce label correlations, HiAGM employs a bidirectional Tree-LSTM by the fusion of a child-sum and a top-down module: (cid:101) h k = (cid:88) j child ( k ) f p ( e k,j ) h j , (cid:101) h k = f c ( e k,p ) h p , h bik = h k h k , (3) where h k and h k are separately calculated in the bottom-up and top-down manner as h k = TreeLSTM ( (cid:102) h k ) .",
"indicates the concatenation of hidden states.",
"The final hidden state of node k is the hierarchical node representation h bik .",
"Hierarchy-GCN GCN (Kipf and Welling, 2017) is proposed to enhance node representations based on the local graph structural information.",
"Some NLP studies have improved Text-GCNs for rich word representations upon the syntactic structure and word correlation(Marcheggiani and Titov, 2017; Vashishth et al., 2019; Yao et al., 2019; Peng et al., 2018).",
"We introduce a simple hierarchy-GCN for the hierarchy structure, thus gaining our aforementioned fine-grained hierarchy information.",
"Hierarchy-GCN aggregates dataflows within the top-down, bottom-up, and self-loop edges.",
"In the hierarchy graph, each directed edge represents a pair-wise label correlation feature.",
"Thus, those dataflows should conduct node transformations with edge-wise linear transformations.",
"However, edge-wise transformations shall lead to over-parameterized edge-wise weight matrixes.",
"Our Hierarchy-GCN simplifies this transformation with a weighted adjacent matrix.",
"This weighted adjacent matrix represents the hierarchical prior probability.",
"Formally, Hierarchy-GCN encodes the hidden state of node k based on its associated neighbourhood N ( k ) = { n k , child ( k ) , parent ( k ) } as: u k,j = a k,j v j + b kl , g k,j = ( W d ( j,k ) g v k + b kg ) , h k = ReLU( (cid:88) j N ( k ) g k,j (cid:12) u k,j ) , (4) where W d ( k,j ) g R dim , b l RN dim , and b g RN .",
"d ( j, k ) indicates the hierarchical direction from node j to node k , including top-down, bottom-up, and self-loop edges.",
"Note that a k,j R denotes the hierarchy probability f d ( k,j ) ( e kj ) , where the self-loop edge employs a k,k = 1 , top-down edges use f c ( e j,k ) = N k N j , and bottom-up edges use f p ( e j,k ) = 1 .",
"The holistic edge feature matrix F = { a 0 , 0 , a 0 , 1 , . . . , a C 1 ,C 1 } indicates the weighted adjacent matrix of the directed hierarchy graph.",
"Finally, the output hidden state h k of node k denotes its label representation corresponding to the hierarchy structural information.",
"Previous global models classify labels upon the original textual information and improve the decoder with predefined hierarchy paths.",
"In contrast, we construct a novel end-to-end hierarchy-aware global model (HiAGM) for the mutual interaction of text features and label correlations.",
"It combines a traditional text classification model with a hierarchy encoder, thus obtaining label-wise text features.",
"HiAGM is extended to two variants, a parallel model for an inductive fusion (HiAGM-LA) and a serial model for a deductive fusion (HiAGM-TP).",
"Given a document x = ( w 1 , w 2 , . . . , w s ) , the sequence of token embedding is firstly fed into a bidirectional GRU layer to extract text contextual feature.",
"Then, multiple CNNs are used for generating n-gram features.",
"The concatenation of n-gram features is filtered by a top-k max-pooling layer to extract key information.",
"Finally, by reshaping, we can obtain the continuous text representation S = ( s 1 , . . . , s n ) where s i R d c and d c indicates the output dimension of the CNN layer.",
"n = n k n c refers to the multiplication of top-k number and the number of CNNs.",
"Hierarchy-Aware Multi-Label Attention The first variant of HiAGM is proposed based on multi-label attention, called as HiAGM-LA.",
"Attention mechanism is usually utilized as the memory unit in text classification (Yang et al., 2016; Du et al., 2019).",
"Recent LCL studies (Huang et al., 2019; You et al., 2019) construct one multi-label attention-based model per level so as to avoid optimizing label embedding among different levels.",
"Our HiAGM-LA is similar to those baselines but simplifies multi-label attention LCL models to a global model.",
"Based on our hierarchy encoders, HiAGM-LA could overcome the problem of convergence for label embedding across various levels.",
"Label representations are enhanced with bidirectional hierarchical information.",
"This local structural information makes it feasible to learn label features across different levels in a single model.",
"Formally, suppose that the trainable label embedding of node k is randomly initialized as L k R d l .",
"The initial label embedding L k is directly fed into structure encoders as the input vector of aligned label node x k .",
"Then, the output hidden state h RC d c represents as the hierarchy-aware label features.",
"Given text representation S R n d c , HiAGM-LA calculates the label-wise attention value ki as: kj = e s j h Tk (cid:80) nj =1 e s j h Tk , v k = n (cid:88) i =1 ki s i , (5) Note that ki indicates how informative the i th text feature vector is for the k -th label.",
"We can get the inductive label-aligned text features V RC d c based on multi-label attention.",
"Then it would be fed into the classifier for prediction.",
"Furthermore, we could directly use the hidden state of hierarchy encoders as the pretrained label representations so that HiAGM-LA could be even lighter in the inference process.",
"Hierarchical text feature propagation Graph neural networks are capable of message passing (Gilmer et al., 2017; Duvenaud et al., 2015), learning both local node correlations and overall graph structure.",
"To avoid the noise from heterogeneous fusion, the second variant obtains label-wise text features based on a deductive method.",
"It directly takes text features S as the node inputs and updates textual information through the hierarchy-aware structure encoder.",
"This variant mainly conducts the propagation of text features, called as HiAGM-TP.",
"Formally, node inputs V are reshaped from text features by a single linear transformation: V = M S , (6) where the trainable weight matrix M R ( n d c ) ( C d v ) transforms text features S R n d c to node inputs V RC d v .",
"Given the predefined structure, each sample would update its textual information throughout the same holistic taxonomic hierarchy.",
"In a mini-batch learning manner, the initial node representation V is fed into the hierarchy encoder.",
"The output hidden state h denotes deductive hierarchy-aware text features as the input of the final classifier.",
"Compared with HiAGM-LA, the transformation of HiAGM-TP is conducted on textual information without the fusion of label embedding.",
"Thus, the structure encoder would be activated in both training and inference procedures for passing textual messages across the hierarchy.",
"It could converge much easier but has slightly higher computational complexity than HiAGM-LA.",
"We flatten the hierarchy by taking all nodes as leaf nodes for multi-label classification, no matter it is a leaf node or an internal node.",
"The final hierarchy-aware features are fed into a fully connected layer for prediction.",
"HiAGM is complementary with recursive regularization(Gopal and Yang, 2013) as L r = (cid:80) i C (cid:80) j child ( i ) 12 || w i w j || 2 for the parameters of the final fully connected layer.",
"For multi-label classification, HiAGM uses a binary cross-entropy loss function: L c = (cid:80) Ni =1 (cid:80) Cj =1 [ y ij log ( y (cid:48) ij )+(1 y ij ) log (1 y (cid:48) ij )] where y ij and y (cid:48) ij are the ground truth and sigmoid score for the j-th label of the i-th sample.",
"Thus, the final loss function is L m = L c + L r .",
"In this section, we introduce our experiments with datasets, evaluation metrics, implementation details, comparison, ablation study, and analysis of experimental results.",
"We experiment our proposed architecture on RCV1-V2, Web-of-Science (WOS) and NYTimes (NYT) datasets for comparison and ablation study.",
"Datasets RCV1-V2 (Lewis et al., 2004) and NYT (Sandhaus, 2008) are both news categorization corpora while WOS (Kowsari et al., 2017) includes abstracts of published papers from Web of Science.",
"Those typical text classification datasets Dataset | L | Depth Avg( | L i | ) Train Val Test RCV1 103 4 3.24 20,833 2,316 781,265 WOS 141 2 2.0 30,070 7,518 9,397 NYT 166 8 7.6 23,345 5,834 7,292 Table 1: Data Statistics: | L | is the number of classes.",
"are all annotated with the ground truth of hierarchical taxonomic labels.",
"We use the benchmark split of RCV1-V2 and select a small partial training subset for validation.",
"WOS dataset is randomly splitted into training, validation and test subsets.",
"In NYT, we randomly select and split subsets from original raw data.",
"We also remove samples with no label or only a single one-level label.",
"Note that WOS is for single-path HTC while NYT and RCV1-V2 include multi-path taxonomic tags.",
"The statistics of datasets is shown in Table",
"1. Evaluation Metrics We measure the experimental results with standard evaluation metrics (Gopal and Yang, 2013), including Micro-F1 and Macro-F1.",
"Micro-F1 takes the overall precision and recall of all the instances into account while Macro-F1 equals to the average F1-score of labels.",
"So Micro-F1 gives more weight to frequent labels, while Macro-F1 equally weights all labels.",
"Implementation Details We use a one-layer bi-GRU with 64 hidden units and 3 parallel CNN layers with filter region size of { 2 , 3 , 4 } .",
"The vocabulary is created by the most frequent words with the maximum size of 60,000.",
"We use 300-dimensional pretrained word embedding from GloVe 2 (Penning-ton et al., 2014) and randomly initialize the out-of-vocabulary words above the minimum count of",
"2. The key information pertaining to text classification could be extracted from the beginning statements.",
"Thus, we set the maximum length of token inputs as 256.",
"The fixed threshold for tagging is chosen as 0.5.",
"Dropout is employed in the embedding layer and MLP layer with the rate of 0.5 while in the bi-GRU layer and node transformation with the rate of 0.1 and 0.05 respectively.",
"Additionally, for HiAGM-LA, the label embedding is initialized by Kaiming uniform (He et al., 2015) while the other model parameters are initialized by Xavier uniform (Glorot and Bengio, 2010).",
"We use the Adam optimizer in a mini-batch size of 64 with learning rate 2 https://nlp.stanford.edu/projects/ glove Model Micro Macro Local Models HR-DGCNN-3 (Peng et al., 2018) 76.18 43.34 HMCN (Mao et al., 2019) 80.80 54.60 HFT(M) (Shimura et al., 2018) 80.29 51.40 Htrans (Banerjee et al., 2019) 80.51 58.49 Global Models SGM 4 (Yang et al., 2018) 77.30 47.49 HE-AGCRCNN (Peng et al., 2019) 77.80 51.30 HiLAP-RL (Mao et al., 2019) 83.30 60.10 Baselines TextRCNN 81.57 59.25 TextRCNN+LabelAttention 81.88 59.85 HiAGM-LA TreeLSTM 82.54 61.90 GCN 82.21 61.65 GCN w/o Rec 82.26 61.85 HiAGM-TP TreeLSTM 83.20 62.32 GCN 83.96 63.35 GCN w/o Rec 83.95 63.23 Table 2: Comparison to previous models on RCV1-V2.",
"= 1 10 4 , momentum parameters 1 = 0 .",
"9 , 2 = 0 .",
"999 and (cid:15) = 1 10 6 .",
"The penalty coeffi-cient of recursive regularization is set as 1 10 6 .",
"Our model evaluates the test subset with the best model on the validation subset.",
"In Table 2, we compare the performance of HiAGM to traditional MLC models and the state-of-the-art HTC studies on RCV1-V2.",
"With the recursive regularization for the last MLP layer, those conventional text classification models also obtain competitive performance.",
"As for our proposed architecture, both HiAGM-LA and HiAGM-TP outperform most state-of-the-art results of global and local studies, esspecially in Macro-F1.",
"It shows the strong advancement of our hierarchy encoders on HTC.",
"HiAGM-LA achieves the performance of 61.90% Macro-F1 score and 82.54% Micro-F1 score while HiAGM-TP obtains the best performance of 63.35% Macro-F1 score and 83.96% Micro-F1 score.",
"4 The result is reproduced with benchmark split upon the released project of SGM.",
"HiAGM, we also experiment without recursive regularization.",
"Compared with the state-of-the-art recent work (HiLAP) (Mao et al., 2019), our HiAGM-LA and HiAGM-TP without recursive regularization also achieve competitive improvement by 1.75% and 3.13% in terms of Macro-F1.",
"It demonstrates that the recursive regularization is complementary but not necessary with our proposed architecture.",
"According to Table 4, HiAGM achieves consistent improvement on the performance of HTC among RCV1-V2, WOS and NYT datasets.",
"It indicates the strong improvement of the label-wise text feature on HTC task.",
"The results present that our proposed global model HiAGM has the advanced capability of enhancing text features for HTC.",
"All in all, HiAGM strongly improves the performance on the benchmark dataset RCV1-V2 and the other two classical text classification datasets.",
"Especially, it obtains better results on Macro-F1 score.",
"It indicates that HiAGM has a strong ability to tackle data-sparse classes deep in the hierarchy.",
"Hybrid Information Aggregation According to Table 2, both variants outperform the baseline models and previous studies.",
"It denotes that the enhanced text feature is beneficial for HTC.",
"We clarify the ablation study of two variants and structure encoders in Table",
"3. Both HiAGM-LA and HiAGM-TP are trained with fixed prior probability.",
"With the help of the recursive computation process, bidirectional Tree-LSTM achieves better performance on learning hierarchy-aware label embedding.",
"However, it additionally leads to lower computational efficiency when compared to Hierarchy-GCN.",
"Regarding HiAGM-TP, hierarchy-GCN shows its better performance and efficiency than bidirectional Tree-LSTM.",
"These two variants have various advantages, respectively.",
"To be specific, HiAGM-TP has better performance than HiAGM-LA in both Bi-Model RCV1-V2 RCV1-V2-R WOS NYT Micro-F1 Macro-F1 Micro-F1 Macro-F1 Micro-F1 Macro-F1 Micro-F1 Macro-F1 Global Text Classification Baseline TextRNN 81.10 51.09 87.78 70.42 77.94 69.65 70.29 53.06 TextCNN 79.37 55.45 84.97 68.06 82.00 76.18 70.11 56.84 TextRCNN 81.57 59.25 88.32 72.23 83.55 76.99 70.83 56.18 HiAGM-LA GCN 82.21 61.65 88.49 73.14 84.61 79.37 72.35 58.67 TreeLSTM 82.54 61.90 88.47 72.81 84.82 79.51 72.50 58.86 HiAGM-TP GCN 83.96 63.35 88.64 74.00 85.82 80.28 74.97 60.83 TreeLSTM 83.20 62.32 88.86 74.16 85.18 79.95 74.43 60.76 Table 4: Experimental results of our proposed HiAGM-LA and HiAGM-TP on various datasets.",
"TreeLSTM and Hierarchy-GCN encoders.",
"The multi-label attention variant, HiAGM-LA, would somehow induce noises from the randomly initialized label embedding.",
"Otherwise, HiAGM-TP aggregates the fusion of local structural information and text feature maps, without the negative impact of label embedding.",
"As for efficiency, HiAGM-LA is more computationally efficient than HiAGM-TP, especially in the inference process.",
"The label representation from hierarchy encoders could be utilized as pretrained label embedding for multi-label attention during inference.",
"Thus, HiAGM-LA omits the hierarchy-aware structure encoder module after training.",
"We recommend HiAGM-TP for high performance while we also suggest HiAGM-LA for empirically good performance and faster inference.",
"GCN Layers The impact of GCN layers is also an important issue for HiAGM.",
"As illustrated in Figure 4, the one-layer structure encoder consistently performs best in both HiAGM-LA and HiAGM-TP.",
"It indicates that the correlation between non-adjacent nodes is not essential for HTC but somehow noisy for hierarchical information aggregation.",
"This empirical conclusion is consistent with the implementation of recursive regularization (Peng et al., 2018; Gopal and Yang, 2013)and transfer learning (Banerjee et al., 2019; Shimura et al., 2018) between adjacent labels or levels.",
"Prior Probability According to the aforementioned comparisons, our simplified structure encoders with prior probabilities is undoubtedly beneficial for HTC.",
"We also investigate different choices of prior probabilities with hierarchy-GCN encoder on the HiAGM-TP variant, clarified as Table 5.",
"Note that the weighted adjacent matrix is initialized by prior probabilities.",
"The simple weighted adjacent matrix performs better than the complex edge-wise weight matrix for node transformation.",
"The fixed weighted adjacent matrix also achieves better results than the original unweighted adjacent matrix and the trainable randomly initialized one.",
"It demonstrates that the prior probability of the hierarchy is capable of representing hierarchical label dependencies.",
"Furthermore, the best result is obtained by the setting that obeys the calculating direction of prior probability.",
"When comparing the results of the fixed adjacent matrix and trainable one, we can find that the weighted adjacent matrix could be finetuned for higher flexibility and better performance.",
"interac-Figure 4: Ablation study on the depth of GCN.",
"tions perform worse than the others that allow propagation throughout the hierarchy paths.",
"As analyzed on GCN layers, the interaction between non-adjacent nodes would lead to negative impact on the HTC.",
"We also validate this conclusion based on the ablation study of prior probability.",
"Performance Study We analyze the improvement on performance by dividing labels based on their levels.",
"We compute level-based Micro-F1 scores of NYT on baseline, HiAGM-LA, and HiAGM-TP.",
"Figure 5 shows that our models retain a better performance than the baseline on all levels, especially among deep levels.",
"In this paper, we propose a novel end-to-end hierarchy-aware global model that extracts the label structural information for aggregating label-wise text features.",
"We present a bidirectional TreeLSTM and a hierarchy-GCN as the hierarchy-aware structure encoder.",
"Furthermore, our framework is extended into a parallel variant based on multi-label attention and a serial variant of text feature propagation.",
"Our approaches empirically achieve significant and consistent improvement on three distinct datasets, especially on the low-frequency labels.",
"Specifically, both variants outperform the state-of-the-art model on the RCV1-V2 benchmark dataset.",
"And our best model obtains a Macro-F1 score of 63.35% and a Micro-F1 score of 83.96%.",
"We thank all the anonymous reviewers for their valuable suggestions.",
"This research work was supported by the National Natural Science Foundation of China (Grant No.61772337, U1736207)."
]
| [
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"objective",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"method",
"objective",
"result",
"abstain",
"result",
"other",
"other"
]
|
[
"We recast dependency parsing as a sequence labeling problem, exploring several encodings of dependency trees as labels.",
"While dependency parsing by means of sequence labeling had been attempted in existing work, results suggested that the technique was impractical.",
"We show instead that with a conventional BILSTM -based model it is possible to obtain fast and accurate parsers.",
"These parsers are conceptually simple, not needing traditional parsing algorithms or auxiliary structures.",
"However, experiments on the PTB and a sample of UD treebanks show that they provide a good speed-accuracy tradeoff, with results competitive with more complex approaches.",
"The application of neural architectures to syntactic parsing, and especially the ability of long short-term memories (LSTMs) to obtain context-aware feature representations (Hochreiter and Schmid-huber, 1997), has made it possible to parse natural language with conceptually simpler models than before.",
"For example, in dependency parsing, the rich feature models with dozens of features used in transition-based approaches (Zhang and Nivre, 2011) can be simplified when using feedforward neural networks (Chen and Manning, 2014), and even more with BiLSTM architectures (Kiper-wasser and Goldberg, 2016), where in fact two positional features can suffice (Shi et al., 2017).",
"Similarly, in graph-based approaches, Dozat and Manning (2017) have shown that an arc-factored model can achieve state-of-the-art accuracy, without the need for the higher-order features used in systems like (Koo and Collins, 2010).",
"In the same way, neural feature representations have made it possible to relax the need for structured representations.",
"This is the case of sequence-to-sequence models that translate sentences into linearized trees, which were first applied to constituent (Vinyals et al., 2015) and later to dependency parsing (Wiseman and Rush, 2016; Zhang et al., 2017b; Li et al., 2018).",
"Recently, Gomez-Rodrguez and Vilares (2018) have shown that sequence labeling models, where each word is associated with a label (thus simpler than sequence to sequence, where the mapping from input to output is not one to one) can learn constituent parsing.",
"Contribution We show that sequence labeling is useful for dependency parsing, in contrast to previous work (Spoustova and Spousta, 2010; Li et al., 2018).",
"We explore four different encodings to represent dependency trees for a sentence of length n as a set of n labels associated with its words.",
"We then use these representations to perform dependency parsing with an off-the-shelf sequence labeling model.",
"The results show that we produce models with an excellent speed-accuracy tradeoff, without requiring any explicit parsing algorithm or auxiliary structure (e.g. stack or buffer).",
"The source code is available at https://github.",
"com/mstrise/dep2label 2 Parsing as sequence labeling Sequence labeling is a structured prediction problem where a single output label is generated for every input token.",
"This is the case of tasks such as PoS tagging, chunking or named-entity recognition, for which different approaches obtain accurate results (Brill, 1995; Ramshaw and Marcus, 1999; Reimers and Gurevych, 2017).",
"On the contrary, previous work on dependency parsing as sequence labeling is vague and reports results that are significantly lower than those provided by transition-, graph-based or sequence-to-sequence models (Dyer et al., 2015; Kiperwasser and Goldberg, 2016; Dozat and Manning, 2017; Zhang et al., 2017a).",
"Spoustova and Spousta (2010) encoded dependency trees using a relative PoS-based scheme to represent the head of a node, to then train an averaged perceptron.",
"They did not provide comparable results, but claimed that the accuracy was between 5-10% below the state of the art in the pre-deep learning era.",
"Recently, Li et al. (2018) used a relative positional encoding of head indexes with respect to the target token.",
"This is used to train Bidirectional LSTM-CRF sequence-to-sequence models (Huang et al., 2015), that make use of sub-root decomposition.",
"They compared their performance against an equivalent BiLSTM-CRF labeling model.",
"The reported UAS for the sequence labeling model was 87.6% on the Penn Treebank, more than 8 points below the current best model (Ma et al., 2018), concluding that sequence-to-sequence models are required to obtain competitive results.",
"Given a sentence w 1 . . . w n , we associate the words with nodes { 0 , 1 , . . . , n } , where the extra node 0 is used as a dummy root for the sentence.",
"A dependency parser will find a set of labeled relations encoded as edges of the form ( h, d, l ) , where h { 0 , 1 , . . . , n } is the head, d { 1 , . . . , n } the dependent, and l a dependency label.",
"The resulting dependency graph must be acyclic and such that each node in { 1 , . . . , n } has exactly one head, so it will be a directed tree rooted at node 0 .",
"Thus, to encode a dependency tree, it suffices to encode the unique head position and dependency label associated with each word of w 1 . . . w n .",
"To do so, we will give each word w i a discrete label of the form ( x i , l i ) , where l i is the dependency label and x i encodes the position of the head in one of the following four ways (see also Figure 1): 1. Naive positional encoding: x i directly stores the position of the head, i.e., a label ( x i , l i ) encodes an edge ( x i , i, l i ) .",
"This is the encoding used in the CoNLL file format.",
"2. Relative positional encoding: x i stores the difference between the head index minus that of the dependent, i.e., ( x i , l i ) encodes an edge ( i + x i , i, l i ) .",
"This was the encoding used for the sequence-to-sequence and sequence labeling models in (Li et al., 2018), as well as for the sequence-to-sequence model in (Kiper-wasser and Ballesteros, 2018).",
"3. Relative PoS-based encoding: x i is a tuple p i , o i .",
"If o i > 0 , the head of w i is the o i th closest among the words to the right of w i that have PoS tag p i .",
"If o i < 0 , the head of w i is the o i th closest among the words to the left of w i that have PoS tag p i .",
"For example, ( V, 2) means the second verb to the left of w i .",
"This scheme is closer to the notion of valency, and was used by Spoustova and Spousta (2010).",
"4. Bracketing-based encoding: based on (Yli-Jyra, 2012; Yli-Jyra and Gomez-Rodrguez, 2017).",
"In each label ( x i , l i ) , the component x i is a string following the regular expression",
"(<)?((\\)*|(/)*)(>)?",
"where the presence of character < means that w i 1 has an incoming arc from the right, k copies of character \\ mean that w i has k outgoing arcs towards the left, k copies of / mean that w i 1 has k outgoing arcs towards the right, and the presence of > means that w i has an incoming arc from the left.",
"Thus, each right dependency from a word i to j is encoded by a ( / , > ) pair in the label components x i +1 and x j , and each left dependency from j to i by a ( < , \\ ) pair in the label components x i +1 and x j .",
"Note that the intuition that explains why information related to a word is encoded in a neighboring node is that each x i corresponds to a fencepost position (i.e., x i represents the space between w i 1 and w i ), and the character pair associated to an arc is encoded in the most external fencepost positions covered by that arc.",
"These pairs act as pairs of matching brackets, which can be decoded using a stack to reconstruct the dependencies.",
"The first three encodings can represent any dependency tree, as they encode any valid head position for each node, while the bracketing encoding only supports projective trees, as it assumes that brackets are properly nested.",
"All the encodings are total and injective, but they are not surjective: head indexes can be out of range in the first three encodings, brackets can be unbalanced in encoding 4, and all the encodings can generate graphs with cycles.",
"We will deal with ill-formed trees later.",
"We use a standard encoder-decoder network, to show that dependency parsing as sequence label<",
"Encoder We use bidirectional LSTMs (Hochre-iter and Schmidhuber, 1997; Schuster and Pali-wal, 1997).",
"Let LSTM ( x ) be an abstraction of a long short-term memory network that processes the sequence of vectors x = [ x 1 , ..., x | x | ] , then output for x i is defined as h i = BiLSTM ( x , i ) = LSTM l ( x [1: i ] ) LSTM r ( x [ | x | : i ] ) .",
"We consider stacked BiLSTMs, where the output h mi of the m th BiLSTM layer is fed as input to the m +1th layer.",
"Unless otherwise specified, the input token at a given time step is the concatenation of a word, PoS tag, and another word embedding learned through a character LSTM.",
"Decoder We use a feed-forward network, which is fed the output of the last BiLSTM.",
"The output is computed as P ( y i | h i ) = softmax ( W h i + b ) .",
"Well-formedness",
"(i) Each token must be assigned a head (one must be the dummy root), and",
"(ii) the graph must be acyclic.",
"If no token is the real root (no head is the dummy root), we search for candidates by relying on the three most likely labels for each token.",
"1 If none is found, we assign it to the first token of the sentence.",
"The single-head constraint is ensured by the nature of the encodings themselves, but some of the predicted head indexes might be out of bounds.",
"If so, we attach those tokens to the real root.",
"If a cycle exists, we do the same for the leftmost token in the cycle.",
"We use the English Penn Treebank (PTB) (Marcus et al., 1993) and its splits for parsing.",
"We transform it into Stanford Dependencies (De Marn-effe et al., 2006) and obtain the predicted PoS tags using Stanford tagger (Toutanova et al., 2003).",
"We also select a sample of UDv2.2 treebanks (Nivre et al., 2018): Ancient-Greek PROIEL , Czech PDT , Chinese GSD , English EWT , Finnish TDT , 1 If single-rooted trees are a prerequisite, the most probable node will be selected among multiple root nodes.",
"Hebrew HTB , Kazakh KTB and Tamil TTB , as a representative sample, following (de Lhoneux et al., 2017).",
"As evaluation metrics, we use Labeled (LAS) and Unlabeled Attachment Score (UAS).",
"We measure speed in sentences/second, both on a single core of a CPU 2 and on a GPU 3 .",
"Setup We use NCRFpp as our sequence labeling framework (Yang and Zhang, 2018).",
"For PTB, we use the embeddings by Ling et al. (2015), for comparison to BIST parser (Kiperwasser and Goldberg, 2016), which uses a similar architecture, but also needs a parsing algorithm and auxiliary structures.",
"For UD, we follow an end-to-end setup and run UDPipe 4 (Straka and Strakova, 2017) for tok-enization and tagging.",
"We use the pretrained word embeddings by Ginter et al. (2017).",
"Appendix A contains additional hyperparameters.",
"We first examine the four encodings on the PTB dev set.",
"Table 1 shows the results and also compares them against Li et al. (2018), who proposed seq2seq and sequence labeling models that use a relative positional encoding.",
"As the relative PoS-based encoding and bracketing-based encoding provide the best results, we will conduct the rest of our experiments with these two encodings.",
"Furthermore, we perform a small hyperparameter search involving encoding, number of hidden layers, their dimension and presence of character embeddings, as these parameters influence speed and accuracy.",
"From now on, we write P zx , y for a PoS-based encoding model and B zx , y for a bracketing-based encoding 2 Intel Core i7-7700 CPU 4.2 GHz.",
"model, where z indicates whether character representation was used in the model, x the number of BiLSTM layers, and y the word hidden vector dimension.",
"We take as starting points (1) the hyperparameters used by the BIST parser (Kiperwasser and Goldberg, 2016), as it uses a BiLSTM architecture analogous to ours, with the difference that it employs a transition-based algorithm that uses a stack data structure instead of plain sequence labeling without explicit representation of structure, and (2) the best hyperparameters used by Gomez-Rodrguez and Vilares (2018) for constituent parsing as sequence labeling, as it is an analogous task for a different parsing formalism.",
"From there, we explore different combinations of parameters and evaluate 20 models on the PTB development set, with respect to accuracy (UAS) and speed (sentences/second on a single CPU core), obtaining the Pareto front in Figure 2. The two starting models based on previous literature ( P 2 , 250 and PC 2 , 800 , respectively) happen to be in the Pareto front, confirming that they are reasonable hyperparameter choices also for this setting.",
"In addition, we select two more models from the Pareto front (models P C2 , 400 and B 2 , 250 ) for our test set experiments on PTB, as they also provide a good balance between speed and accuracy.",
"Table 2 compares the chosen models, on the PTB test set, against state-of-the-art mod-5",
"based on the test set.",
"7 Tamil was run on gold segmented and tokenized inputs, as there is no pretrained UDpipe model.",
"We did not use pretrained word embeddings either.",
"els.",
"Contrary to previous dependency-parsing-as-sequence-labeling attempts, we are competitive and provide a good speed-accuracy tradeoff.",
"For instance, the P C2 , 800 model runs faster than the BIST parser (Kiperwasser and Goldberg, 2016) while being almost as accurate (-0.18 LAS).",
"This comes in spite of its simplicity.",
"While our BiLSTM architecture is similar to that of BIST, the sequence labeling approach does not need a stack, a specific transition system or a dynamic oracle.",
"Using the BIST hyperparameters for our model ( P 2 , 250 ) yields further increases in speed, at some cost to accuracy: 3.34x faster and -0.04 LAS score than the graph-based model, and 3.51x faster and Treebank P C2 , 800 KG (transition-based) PoS type (sent/s) UAS LAS (sent/s) UAS LAS CPU CPU Ancient Greek XPOS 123 1 75.31 70.87 116 4 69.43 64.41 Chinese UPOS 105 0 63.20 59.12 73 1 64.69 60.45 Czech UPOS 125 1 89.10 86.68 94 3 89.25 86.11 English UPOS 139 1 81.48 78.64 120 2 82.22 79.00 Finnish UPOS 168 0 80.12 76.22 127 3 80.99 76.63 Hebrew equal PoS 120 0 63.04 58.66 70 1 63.56 58.80 Kazakh XPOS 283 3 32.93 17.07 178 5 23.09 12.73 Tamil 7 UPOS 150 2 71.59 64.00 127 3 75.41 68.58 Table 4: Comparison on UD-CoNLL18 test sets.",
"We now extend our experiments to the sample of UD-CoNLL18 treebanks.",
"To this end, we focus on the PC 2 , 800 model and since our PoS tag-based encoding can be influenced by the specific PoS tags used, we first conduct an experiment on the development sets to determine what tag set (UPoS, the universal PoS tag set, common to all languages, or XPoS, extended language-specific PoS tags) produces the best results for each dataset.",
"Table 3 shows how the number of unique UPoS and XPoS tags found in the training set differs in various languages.",
"The results suggest that the performance of our system can be influenced by the size of the tag set.",
"It appears that a very large tag set (for instance the XPoS tag set for Czech and Tamil) can hurt the performance of the model and significantly slow down the system, as it results into a large number of distinct labels for the sequence labeling model, increasing sparsity and making the classification harder.",
"In case of Ancient Greek and Kazakh, the best performance is achieved with the XPoS-based encoding.",
"In these corpora, the tag set is slightly bigger than the UPoS tag set.",
"One can argue that the XPoS tags in this case were possibly more fine-grained and hence provided additional useful information to the system facilitating a correct label prediction, without being so large as to produce excessive sparsity.",
"Table 4 shows experiments on the UD test sets, with the chosen PoS tag set for each corpus.",
"P C2 , 800 outperforms transition-based BIST in LAS in 3 out of 8 treebanks, 8 and is clearly faster in all analyzed 8 For Ancient Greek, this may be related to the large languages.",
"We believe that the variations between languages in terms of LAS difference with respect to BIST can be largely due to differences in the accuracy and granularity of predicted PoS tags, since our chosen encoding relies on them to encode arcs.",
"The bracketing-based encoding, which does not use PoS tags, may be more robust to this.",
"On the other hand, finding the optimal granularity of PoS tags for the PoS-based encoding can be an interesting avenue for future work.",
"In this work, we have also examined the impact of the training data size on the performance of our system compared to the performance of BIST parser.",
"The results in Figure 3 suggest that our model requires more data during the training than BIST parser in order to achieve similar performance.",
"The performance is slightly worse when little training data is available, but later on our model reduces the gap when increasing the training data size.",
"This paper has explored fast and accurate dependency parsing as sequence labeling.",
"We tested four different encodings, training a standard BiLSTM-based architecture.",
"In contrast to previous work, our results on the PTB and a subset of UD treebanks show that this paradigm can obtain competitive results, despite not using any parsing algorithm nor external structures to parse sentences.",
"This work has received funding from the European Research Council (ERC), under the European Union's Horizon 2020 research and innovation programme (FASTPARSE, grant agreement No 714150), from the TELEPARES-UDC project (FFI2014-51978-C2-2-R) and the ANSWER-ASAP project (TIN2017-85160-C2-1-R) from MINECO, and from Xunta de Galicia (ED431B 2017/01).",
"We gratefully acknowledge NVIDIA Corporation for the donation of a GTX Titan X GPU.",
"amount of non-projectivity (BIST is a projective parser).",
"For extra comparison, a non-projective variant of BIST (Smith et al., 2018) obtains 71.58 LAS with mono-treebank training, but from better segmentation and morphology than used here.",
"UDpipe (Straka and Strakov a, 2017) obtains 67.57 LAS.",
"Czech and Kazakh have a medium amount of non-projectivity."
]
| [
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"method",
"objective",
"other",
"other",
"other",
"other",
"other",
"other"
]
|
[
"State-of-the-art deep neural networks require large-scale labeled training data that is often expensive to obtain or not available for many tasks.",
"Weak supervision in the form of domain-specific rules has been shown to be useful in such settings to automatically generate weakly labeled training data.",
"However, learning with weak rules is challenging due to their inherent heuristic and noisy nature.",
"An additional challenge is rule coverage and overlap, where prior work on weak supervision only considers instances that are covered by weak rules, thus leaving valuable unlabeled data behind.",
"In this work, we develop a weak supervision framework (ASTRA 1 ) that leverages all the available data for a given task.",
"To this end, we leverage task-specific unlabeled data through self-training with a model (student) that considers contextualized representations and predicts pseudo-labels for instances that may not be covered by weak rules.",
"We further develop a rule attention network (teacher) that learns how to aggregate student pseudo-labels with weak rule labels, conditioned on their fidelity and the underlying context of an instance.",
"Finally, we construct a semi-supervised learning objective for end-to-end training with unlabeled data, domain-specific rules, and a small amount of labeled data.",
"Extensive experiments on six benchmark datasets for text classification demonstrate the effectiveness of our approach with significant improvements over state-of-the-art baselines.",
"The success of state-of-the-art neural networks crucially hinges on the availability of large amounts of annotated training data.",
"While recent advances on language model pre-training (Peters et al., 2018; Most of the work was done while the first author was an intern at Microsoft Research.",
"1 ASTRA: we A kly-supervised S elfTRA ining.",
"Our code is publicly available at https://github.com/ microsoft/ASTRA .",
"Devlin et al., 2019; Radford et al., 2019) reduce the annotation bottleneck, they still require large amounts of labeled data for obtaining state-of-the-art performances on downstream tasks.",
"However, it is prohibitively expensive to obtain large-scale labeled data for every new task, therefore posing a significant challenge for supervised learning.",
"In order to mitigate labeled data scarcity, recent works have tapped into weak or noisy sources of supervision, such as regular expression patterns (Augenstein et al., 2016), class-indicative keywords (Ren et al., 2018b; Karamanolakis et al., 2019), alignment rules over existing knowledge bases (Mintz et al., 2009; Xu et al., 2013) or heuristic labeling functions (Ratner et al., 2017; Bach et al., 2019; Badene et al., 2019; Awasthi et al., 2020).",
"These different types of sources can be used as weak rules for heuristically annotating large amounts of unlabeled data.",
"For instance, consider the question type classification task from the TREC dataset with regular expression patterns such as: label all questions containing the token when as numeric (e.g., When was Shakespeare born?\").",
"Approaches relying on such weak rules typically suffer from the following challenges.",
"(i) Noise.",
"Rules by their heuristic nature rely on shallow patterns and may predict wrong labels for many instances.",
"For example, the question When would such a rule be justified?\" refers to circumstances rather than numeric expressions.",
"(ii) Coverage.",
"Rules generally have a low coverage as they assign labels to only specific subsets of instances.",
"(iii) Conflicts.",
"Different rules may generate conflicting predictions for the same instance, making it challenging to train a robust classifier.",
"To address the challenges with conflicting and noisy rules, existing approaches learn weights indicating how much to trust individual rules.",
"In the absence of large-scale manual annotations, the rule weights are usually learned via mutual agreement and disagreement of rules over unlabeled data (Ratner et al., 2017; Platanios et al., 2017; Sachan et al., 2018; Bach et al., 2019; Ratner et al., 2019; Awasthi et al., 2020).",
"For instance, such techniques would up-weight rules that agree with each other (as they are more likely to be correct), and down-weight such rules otherwise.",
"An important drawback of these approaches is low coverage since rules assign weak labels to only a subset of the data, thus leading to low rule overlap to compute rule agreement.",
"For instance, in our experiments on six real-world datasets, we observe that 66% of the instances are covered by fewer than 2 rules and 40% of the instances are not covered by any rule at all.",
"Rule sparsity limits the effectiveness of previous approaches, thus leading to strong assumptions, such as, that each rule has the same weight across all instances (Ratner et al., 2017; Bach et al., 2019; Ratner et al., 2019), or that additional supervision is available in the form of labeled exemplars used to create such rules in the first place (Awasthi et al., 2020).",
"Most importantly, all these works ignore (as a data pre-processing step) unlabeled instances that are not covered by any of the rules, thus leaving potentially valuable data behind.",
"Overview of our method.",
"In this work, we present a weak supervision framework, namely ASTRA, that considers all task-specific unlabeled instances and domain-specific rules without strong assumptions about the nature or source of the rules.",
"ASTRA makes effective use of a small amount of labeled data, lots of task-specific unlabeled data, and domain-specific rules through iterative teacher-student co-training (see Figure 1).",
"A student model based on contextualized representations provides pseudo-labels for all instances, thereby, allowing us to leverage all unlabeled data including instances that are not covered by any heuristic rules.",
"To deal with the noisy nature of heuristic rules and pseudo-labels from the student, we develop a rule attention (teacher) network that learns to predict the fidelity of these rules and pseudo-labels conditioned on the context of the instances to which they apply.",
"We develop a semi-supervised learning objective based on minimum entropy regularization to learn all of the above tasks jointly without the requirement of additional rule-exemplar supervision.",
"Overall, we make the following contributions: We propose an iterative self-training mechanism for training deep neural networks with weak supervision by making effective use of task-specific unlabeled data and domain-specific heuristic rules.",
"The self-trained student model predictions augment the weak supervision framework with instances that are not covered by rules.",
"We propose a rule attention teacher network (RAN) for combining multiple rules and student model predictions with instance-specific weights conditioned on the corresponding contexts.",
"Furthermore, we construct a semi-supervised learning objective for training RAN without strong assumptions about the structure or nature of the weak rules.",
"We demonstrate the effectiveness of our approach on several benchmark datasets for text classification where our method significantly outperforms state-of-the-art weak supervision methods.",
"We now present our approach, ASTRA, that leverages a small amount of labeled data, a large amount of unlabeled data, and domain-specific heuristic rules.",
"Our architecture has two main components: the base student model (Section 2.1) and the rule attention teacher network (Section 2.2), which are iteratively co-trained in a self-training framework.",
"Formally, let X denote the instance space and Y = { 1 , . . . , K } denote the label space for a K class classification task.",
"We consider a small set of manually-labeled examples DL = { ( x l , y l ) } , where x l X and y l Y and a large set of unlabeled examples DU = { x i } .",
"We also consider a set of pre-defined heuristic rules R = { r j } , where each rule r j has the general form of a labeling function that considers as input an instance x i X (and potentially additional side information), and RAN (Teacher) (3) Semi-supervised RAN training Few Labeled data (1) Self-training (2) Rule attention network Rule Embeddings Rule Attention Rule weights Instance x i q ji ji Rules r j Rule labels q i RAN prediction Unlabeled data Student PseudoLabeled data Few Labeled data Figure 2: Our ASTRA framework for self-training with weak supervision.",
"either assigns a weak label q ji { 0 , 1 } K (one-hot encoding) or does not apply, i.e., does not assign a label for x i .",
"Our goal is to leverage DL , DU , and R to train a classifier that, given an unseen test instance x (cid:48) X , predicts a label y (cid:48) Y .",
"In the rest of this section, we present our ASTRA framework for addressing this problem.",
"Our self-training framework starts with a base model trained on the available small labeled set DL",
"The model is then applied to unlabeled data DU to obtain pseudo-labeled instances.",
"In classic self-training (Riloff, 1996; Nigam and Ghani, 2000), the student model's pseudo-labeled instances are directly used to augment the training dataset and iteratively re-train the student.",
"In our setting, we augment the self-training process with weak labels drawn from our teacher model that also considers rules in R (described in the next section).",
"The overall self-training process can be formulated as: min E x l ,y l DL [ log p ( y l | x l )]+ E x DUE y q ( y | x ) [ log p ( y | x )] (1) where, p ( y | x ) is the conditional distribution under student's parameters ; R is a hyper-parameter controlling the relative importance of the two terms; and q ( y | x ) is the conditional distribution under the teacher's parameters from the last iteration that is fixed in the current iteration.",
"Our Rule Attention Teacher Network (RAN) aggregates multiple weak sources of supervision with trainable weights and computes a soft weak label",
"q i for an unlabeled instance x i .",
"One of the potential drawbacks of relying only on heuristic rules is that a lot of data get left behind.",
"Heuristic rules by nature (e.g., regular expression patterns, keywords) apply to only a subset of the data.",
"Therefore, a substantial number of instances are not covered by any rules and thus are not considered in prior weakly supervised learning approaches (Ratner et al., 2017; Awasthi et al., 2020).",
"To address this challenge and leverage contextual information from all available task-specific unlabeled data, we leverage the corresponding pseudo-labels predicted by the base student model (from Section 2.1).",
"To this end, we apply the student to the unlabeled data x DU and obtain pseudo-label predictions as p ( y | x ) .",
"These predictions are used to augment the set of already available weak rule labels to increase rule coverage.",
"Let R i R be the set of all heuristic rules that apply to instance x i .",
"The objective of RAN is to aggregate the weak labels predicted by all rules r j R i and the student pseudo-label p ( y | x i ) to compute a soft label q i for every instance x i from the unlabeled set DU .",
"In other words, RAN considers the student as an additional source of weak rule.",
"Aggregating all rule labels into a single label q i via simple majority voting (i.e., predicting the label assigned by the majority of rules) may not be effective as it treats all rules equally, while in practice, certain rules are more accurate than others.",
"RAN predicts pseudo-labels q i by aggregating rules with trainable weights a ( ) i [0 , 1] that capture their fidelity towards an instance x i as: q i = 1 Z i (cid:18) (cid:88) j : r j R i a ji q ji + a Si p ( y | x i )+ a ui u (cid:19) , (2) where a ji and a Si are the fidelity weights for the heuristic rule labels q ji and the student assigned pseudo-label p ( y | x i ) for an instance x i , respectively; u is a uniform rule distribution that assigns equal probabilities for all the K classes as u = [ 1 K , . . . , 1 K ] ; a ui is the weight assigned to the uniform rule for x i , which is computed as a function of the rest of the rule weights: a ui = ( | R i | +1 (cid:80) j : r j R i a ji a Si ) ; and Z i is a normalization coefficient to ensure that q i is a valid probability distribution.",
"u acts as a uniform smoothing factor that prevents overfitting for sparse settings, for instance, when a single weak rule applies to an instance.",
"The above coupling between rules and instances via their corresponding embeddings e j and h i alIn [26]: p1=np",
".array ([1,0]) p2=np",
".array ([1,0]) plot_loss_curve (p1,p2,scipy_entr ) p1=[1 0] p2=[0 1] Create PDF in your applications with the Pdfcrowd HTML to PDF API PDFCROWD 1 2 q 1 q 2",
"According to Eq.",
"(2), a rule r j with higher fidelity weight a ji contributes more to the computation of q i .",
"If a ji = 1 r j { R i p } , then RAN reduces to majority voting.",
"If a ji = 0 r j { R i p } , then RAN ignores all rules and predicts q i = u .",
"Note the distinction of our setting to recent works like Snorkel (Ratner et al., 2017), that learns global rule-weights a ji = a j x i by ignoring the instance-specific rule fidelity.",
"Our proposed approach is flexible but at the same time challenging as we do not assume prior knowledge of the internal structure of the labeling functions r j R .",
"In order to effectively compute rule fidelities, RAN considers instance embeddings that capture the context of instances beyond the shallow patterns considered by rules.",
"In particular, we model the weight a ji of rule r j as a function of the context of the instance x i and r j through an attention-based mechanism.",
"Consider h i R d (cid:48) to be the hidden state representation of x i from the base student model.",
"Also, consider the (trainable) embedding of each rule r j as e j = g ( r j ) R d .",
"We use e j as a query vector with sigmoid attention to compute instance-specific rule attention weights as: a j i = ( f ( h i ) T e j ) [0 , 1] , (3) where f is a multi-layer perceptron that projects h i to R d and ( ) is the sigmoid function.",
"Rule embedding allows us to exploit the similarity between different rules in terms of instances to which they apply, and further leverage their semantics for modeling agreement.",
"RAN computes the student's weight a Si using the same procedure as for computing the rule weights a ji .",
"Note that the rule predictions q ji are considered fixed, while we estimate their attention weights.",
"In [179]: p1=np.array([1,0]) p2=np.array([0,1]) plot_loss_curve(p1,p2,scipy_entr) p1=np.array([1,0]) p2=np.array([1,0]) plot_loss_curve(p1,p2,scipy_entr) p1=np.array([0,1]) p1=[1 0] p2=[1 0] Create PDF in your applications with the Pdfcrowd HTML to PDF API PDFCROWD 1 2 q 1 = q 2",
"Figure 3: Variation in unsupervised entropy loss with instance-specific rule predictions and attention weights encouraging rule agreement.",
"Consider this illustration with two rules for a given instance.",
"When rule predictions disagree ( q 1 (cid:54) = q 2 ), minimum loss is achieved for attention weights a 1 =0, a 2 =1 or a 1 =1, a 2 =0.",
"When rule predictions agree ( q 1 = q 2 ), minimum loss is achieved for attention weights a 1 = a 2 =1.",
"For instances covered by three rules, if q 1 = q 2 (cid:54) = q 3 , the minimum loss is achieved for a 1 = a 2 =1 and a 3 =0.",
"lows us to obtain representations where similar rules apply to similar contexts, and model their agreements via the attention weights a ji .",
"To this end, the trainable parameters of RAN ( f and g ) are shared across all rules and instances.",
"Next, we describe how to train RAN.",
"Learning to predict instance-specific weights a ( ) i for the weak sources (including rules and student pseudo-labels) is challenging due to the absence of any explicit knowledge about the source quality and limited amount of labeled training data.",
"We thus treat the weights a ( ) i as latent variables and propose a semi-supervised objective for training RAN with supervision on the coarser level of q i : LRAN = (cid:88) ( x i ,y i ) DL y i log q i (cid:88) x i DU q i log q i .",
"Given task-specific labeled data DL , the first term in Eq.",
"(4) minimizes the cross-entropy loss between the teacher's label q i and the corresponding clean label y i for the instance x i .",
"This term penalizes weak sources that assign labels q ( ) i that contradict with the ground-truth label y i by assigning a low instance-specific fidelity weight a ( ) i .",
"The second term in Eq.",
"(4) minimizes the entropy of the aggregated pseudo-label q i on unlabeled data DU .",
"Minimum entropy regularization is effective in settings with small amounts of labeled TREC SMS YouTube CENSUS MIT-R Spouse Labeled Training Data ( | DL | ) 68 69 100 83 1842 100 Unlabeled Training Data ( | DU | ) 5K 5K 2K 10K 65K 22K Test Data 500 500 250 16K 14K 3K #Classes 6 2 2 2 9 2 #Rules 68 73 10 83 15 9 Rule Accuracy (Majority Voting) 60.9% 48.4% 82.2% 80.1% 40.9% 44.2% Rule Coverage (instances in DU covered by 1 rule) 95% 40% 87% 100% 14% 25% Rule Overlap (instances in DU covered by 2 rules) 46% 9% 48% 94% 1% 8% Table 1: Dataset statistics.",
"data by leveraging unlabeled data (Grandvalet and Bengio, 2005), and is highly beneficial in our setting because it encourages RAN to predict weights that maximize rule agreement.",
"Since the teacher label q i is obtained by aggregating weak labels q ( ) i , entropy minimization encourages RAN to predict higher instance-specific weights a ( ) i to sources that agree in their labels for x i , and lower weights when there are disagreements between weak sources aggregated across all the unlabeled instances.",
"Figure 3 plots the minimum entropy loss over unlabeled data over two scenarios where two rules agree or disagree with each other for a given instance.",
"The optimal instance-specific fidelity weights a ( ) i are 1 when rules agree with each other, thereby, assigning credits to both rules, and only one of them when they disagree.",
"We use this unsupervised entropy loss in conjunction with cross-entropy loss over labeled data to ensure grounding.",
"End-to-end Learning: Algorithm 1 presents an overview of our learning mechanism.",
"We first use the small amount of labeled data to train a base student model that generates pseudo-labels and augments heuristic rules over unlabeled data.",
"Our RAN network computes fidelity weights to combine these different weak labels via minimum entropy regularization to obtain an aggregated pseudo-label for every unlabeled instance.",
"This is used to re-train the student model with the above student-teacher training repeated till convergence.",
"Datasets.",
"We evaluate our framework on the following six benchmark datasets for weak supervision from Ratner et al. (2017) and Awasthi et al. (2020).",
"(1) Question classification from TREC-6 into 6 categories (Abbreviation, Entity, Description, Human, Location, Numeric-value); (2) Spam classification of SMS messages; (3) Spam classification of YouTube comments; (4) Income classification on the CENSUS dataset on whether a person earns more than $50K or not; (5) Slot-filling in sentences on restaurant search queries in the MIT-R dataset: each token is classified into 9 classes (Location, Hours, Amenity, Price, Cuisine, Dish, Restaurant Name, Rating, Other); (6) Relation classification in the Spouse dataset, whether pairs of people mentioned in a sentence are/were married or not.",
"Table 1 shows the dataset statistics along with the amount of labeled, unlabeled data and domain-specific rules for each dataset.",
"For a fair comparison, we use exactly the same set of rules as in the previous work for the benchmark datasets.",
"These rules include regular expression patterns, lexicons, and knowledge bases for weak supervision.",
"Most of these rules were constructed manually, except for the CENSUS dataset, where rules have been automatically extracted with a coverage of 100%.",
"On average across all the datasets, 66% of the instances are covered by fewer than 2 rules, whereas 40% are not covered by any rule at all demonstrating the sparsity in our setting.",
"We also report the accuracy of the rules in terms of majority voting Method Learning to Weight Unlabeled Rules Instances (no rules) Majority -Snorkel (Ratner et al., 2017) (cid:88) -PosteriorReg (Hu et al., 2016) (cid:88) -L2R (Ren et al., 2018a) -(cid:88) ImplyLoss (Awasthi et al., 2020) (cid:88) (cid:88) Self-train -(cid:88) ASTRA (cid:88) (cid:88) (cid:88) Table 2: ASTRA learns rule-specific and instance-specific attention weights and leverages task-specific unlabeled data where no rules apply.",
"on the task-specific unlabeled datasets.",
"Additional details on the dataset and examples of rules are presented in the Appendix.",
"Evaluation.",
"We train ASTRA five times for five different random splits of the labeled training data and evaluate on held-out test data.",
"We report the average performance as well as the standard deviation across multiple runs.",
"We report the same evaluation metrics as used in prior works (Ratner et al., 2017; Awasthi et al., 2020) for a fair comparison.",
"Model configuration.",
"Our student model consists of embeddings from pre-trained language models like ELMO (Peters et al., 2018) or BERT (Devlin et al., 2019) for generating contextualized representations for an instance, followed by a softmax classification layer.",
"The RAN teacher model considers a rule embedding layer and a multilayer perceptron for mapping the contextualized representation for an instance to the rule embedding space.",
"Refer to the Appendix for more details.",
"Baselines.",
"We compare our method with the following methods:",
"(a) Majority predicts the majority vote of the rules with ties resolved by predicting a random class.",
"(b) LabeledOnly trains classifiers using only labeled data (fully supervised baseline).",
"(c) Self-train (Nigam and Ghani, 2000; Lee, 2013) leverages both labeled and unlabeled data for iterative self-training on pseudo-labeled predictions over task-specific unlabeled data.",
"This baseline ignores domain-specific rules.",
"(e) Snorkel+Labeled (Ratner et al., 2017) trains classifiers using weakly-labeled data with a generative model.",
"The model is trained on unlabeled data for computing rule weights in an unsupervised fashion, and learns a single weight per rule across all instances.",
"It is further fine-tuned on labeled data.",
"(f) L2R (Ren et al., 2018b) learns to re-weight noisy or weak labels from domain-specific rules via meta-learning.",
"It learns instance-specific but not rule-specific weights.",
"(g) PosteriorReg (Hu et al., 2016) trains classifiers using rules as soft constraints via posterior regularization (Ganchev et al., 2010).",
"(h) ImplyLoss (Awasthi et al., 2020) leverages exemplar -based supervision as additional knowledge for learning instance-specific and rule-specific weights by minimizing an implication loss over unlabeled data.",
"This requires maintaining a record of all instances used to create the weak rules in the first place.",
"Table 2 shows a summary of the different methods contrasting them on how they learn the weights (rule-specific or instance-specific) and if they leverage task-specific unlabeled data that are not covered by any rules.",
"Overall results.",
"Table 3 summarizes the main results across all datasets.",
"Among all the semi-supervised methods that leverage weak supervision from domain-specific rules, ASTRA outperforms Snorkel by 6 .",
"1% in average accuracy across all datasets by learning instance-specific rule weights in conjunction with self-training over unlabeled instances where weak rules do not apply.",
"Similarly, ASTRA also improves over a recent work and the best performing baseline ImplyLoss by 3 .",
"1% on average.",
"Notably, our method does not require additional supervision at the level of exemplars used to create rules in contrast to ImplyLoss.",
"Self-training over unlabeled data.",
"Recent works for tasks like image classification (Li et al., 2019; Xie et al., 2020; Zoph et al., 2020), neural sequence generation (Zhang and Zong, 2016; He et al., 2019) and few-shot text classification (Mukherjee and Awadallah, 2020; Wang et al., 2020) show the effectiveness of self-training methods in exploiting task-specific unlabeled data with stochastic regularization techniques like dropouts and data augmentation.",
"We also make similar observations for our weakly supervised tasks, where classic self-train methods (Self-train) leveraging only a few task-specific labeled examples and lots of unlabeled data outperform weakly supervised methods like Snorkel and PosteriorReg that have additional access to domain-specific rules.",
"Self-training with weak supervision.",
"Our framework ASTRA provides an efficient method to incorporate weak supervision from domain-specific rules to augment the self-training framework and improves by 6% over classic self-training.",
"compared to classic self-training, consider Figure 4, which depicts the gradual performance improvement over iterations.",
"The student models in classic self-training and ASTRA have exactly the same architecture.",
"However, the latter is guided by a better teacher (RAN) that learns to aggregate noisy rules and pseudo-labels over unlabeled data.",
"Impact of rule sparsity and coverage for weak supervision.",
"In this experiment, we compare the performance of various methods by varying the proportion of available domain-specific rules.",
"To this end, we randomly choose a subset of the rules (varying the proportion from 10% to 100% ) and train various weak supervision methods.",
"For each setting, we repeat experiments with multiple rule splits and report aggregated results in Figure 5.",
"We observe that ASTRA is effective across all settings with the most impact at high levels of rule sparsity.",
"For instance, with 10% of domain-specific rules available, ASTRA outperforms ImplyLoss by 12% and Snorkel+Labeled by 19% .",
"This performance improvement is made possible by incorporating self-training in our framework to obtain pseudo-labels for task-specific unlabeled instances, and further re-weighting them with other domain-specific rules via the rule attention network.",
"Correspondingly, Table 4 shows the increase in data coverage for every task given by the proportion of unlabeled instances that are now covered by at least two weak sources (from multiple rules and pseudo-labels) in contrast to just considering the rules.",
"ASTRA teacher marginally outperforms the student model on an aggregate having access to domain-specific rules.",
"ASTRA student that is self-trained over task-specific unlabeled data and guided by an efficient teacher model significantly outper-% Overlap TREC YTube SMS MITR CEN.",
"Through minimum entropy regularization in our semi-supervised learning objective (Eq.",
"(4)), ASTRA leverages the agreement between various weak sources (including rules and pseudo-labels) over task-specific unlabeled data.",
"Removing this component results in an accuracy drop of 1 .",
"4% on an aggregate demonstrating its usefulness.",
"Fine-tuning the student on labeled data is important for effective self-training: ignoring DL in the step 2.3 in Algorithm 1, leads to 1.6% lower accuracy than ASTRA.",
"There is significant performance drop on removing the student's pseudo-labels ( p ( ) ) from the rule attention network in Eq.",
"(2).",
"This significantly limits the coverage of the teacher ignoring unlabeled instances where weak rules do not apply, thereby, degrading the overall performance by 3 .",
"2% .",
"Table 6 shows a question in the TREC-6 dataset that was correctly classified by the ASTRA teacher as an Entity type (ENTY).",
"Note that the majority voting of the four weak rules that apply to this instance (Rule 8, 24, 42, and 61) leads to an incorrect prediction of Human (HUM) type.",
"The ASTRA teacher aggregates all the heuristic rule labels and the student pseudo-label with their (computed) fidelity weights for the correct prediction.",
"Refer to Table 7 for more illustrative examples on how ASTRA aggregates various weak supervision sources with corresponding attention weights shown in parantheses.",
"In Example 1 where no rules apply, the student leverages the context of the sentence (e.g., semantics of president) to predict the HUM label.",
"While in Example 2, the teacher down-weights the incorrect student (as well as conflicting rules) and upweights the appropriate rule to predict the correct ENTY label.",
"In example 3, ASTRA predicts the correct label ENTY relying only on the student as both rules report noisy labels.",
"In this section, we discuss related work on self-training and learning with noisy labels or rules.",
"Refer to Hedderich et al. (2021) for a thorough survey of approaches addressing low-resource scenarios.",
"Self-Training.",
"Self-training (Yarowsky, 1995; Nigam and Ghani, 2000; Lee, 2013) as one of the earliest semi-supervised learning approaches (Chapelle et al., 2009) trains a base model (student) on a small amount of labeled data; applies it to pseudo-label (task-specific) unlabeled data; uses pseudo-labels to augment the labeled data; and re-trains the student in an iterative manner.",
"Self-training has recently been shown to obtain state-of-the-art performance for tasks like image classification (Li et al., 2019; Xie et al., 2020; Zoph et al., 2020), few-shot text classification (Mukher-jee and Awadallah, 2020; Wang et al., 2020), and neural machine translation (Zhang and Zong, 2016; He et al., 2019) and has shown complementary advantages to unsupervised pre-training (Zoph et al., 2020).",
"A typical issue in self-training is error propagation from noisy pseudo-labels.",
"This is addressed in ASTRA via rule attention network that computes the fidelity of pseudo-labels instead of directly using them to re-train the student.",
"Learning with Noisy Labels.",
"Classification under label noise from a single source has been an active research topic (Frnay and Verleysen, 2013).",
"A major line of research focuses on correcting noisy labels by learning label corruption matrices (Patrini et al., 2017; Hendrycks et al., 2018; Zheng et al., 2021).",
"More related to our work are the instance re-weighting approaches (Ren et al., 2018b; Shu et al., 2019), which learn to up-weight and down-weight instances with cleaner and noisy labels respectively.",
"However, these operate only at instance-level and do not consider rule-specific importance.",
"Our approach learns both instanceand rule-specific fidelity weights and substantially outperforms Ren Text What was President Lyndon Johnson 's reform program called ?",
"Learning with Multiple Rules.",
"To address the challenges with multiple noisy rules, existing approaches learn rule weights based on mutual rule agreements with some strong assumptions.",
"For instance, Meng et al. (2018); Karamanolakis et al. (2019); Mekala and Shang (2020) denoise seed words using vector representations of their semantics.",
"However it is difficult to generalize these approaches from seed words to more general labeling functions that only predict heuristic labels (as in our datasets).",
"Ratner et al. (2017); Sachan et al. (2018); Ratner et al. (2019) assume each rule to be equally accurate across all the instances that it covers.",
"Awasthi et al. (2020) learn rule-specific and instance-specific weights but assume access to labeled exemplars that were used to create the rule in the first place.",
"Most importantly, all these works ignore unlabeled instances that are not covered by any of the rules, while our approach leverages all unlabeled instances via self-training.",
"We developed a weak supervision framework, ASTRA, that efficiently trains classifiers by integrating task-specific unlabeled data, few labeled",
"data, and domain-specific knowledge expressed as rules.",
"Our framework improves data coverage by employing self-training with a student model.",
"This considers contextualized representations of instances and predicts pseudo-labels for all instances, including those that are not covered by heuristic rules.",
"Additionally, we developed a rule attention network, RAN, to aggregate various weak sources of supervision (heuristic rules and student pseudo-labels) with instance-specific weights, and employed a semi-supervised objective for training RAN without strong assumptions about the nature or structure of the weak sources.",
"Extensive experiments on several benchmark datasets demonstrate our effectiveness, particularly at high levels of rule sparsity.",
"In future work, we plan to extend our framework to support a broader range of natural language understanding tasks and explore alternative techniques for rule embedding.",
"In this work, we introduce a framework for training of neural network models with few labeled examples and domain-specific knowledge.",
"This work is likely to increase the progress of NLP applications for domains with limited annotated resources but access to domain-specific knowledge.",
"While it is not only expensive to acquire large amounts of labeled data for every task and language, in many cases, we cannot perform large-scale labeling due to access constraints from privacy and compliance concerns.",
"To this end, our framework can be used for applications in finance, legal, healthcare, retail and other domains where adoption of deep neural network may have been hindered due to lack of large-scale manual annotations on sensitive data.",
"While our framework accelerates the progress of NLP, it also suffers from associated societal implications of automation ranging from job losses for workers who provide annotations as a service.",
"Additionally, it involves deep neural models that are compute intensive and has a negative impact on the environment in terms of carbon footprint.",
"The latter concern is partly alleviated in our work by leveraging pre-trained language models and not training from scratch, thereby, leading to efficient and faster compute."
]
| [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method"
]
|
[
"To increase trust in artificial intelligence systems, a promising research direction consists of designing neural models capable of generating natural language explanations for their predictions.",
"In this work, we show that such models are nonetheless prone to generating mutually inconsistent explanations, such as Because there is a dog in the image. and Because there is no dog in the [same] image. , exposing flaws in either the decision-making process of the model or in the generation of the explanations.",
"We introduce a simple yet effective adversarial framework for sanity checking models against the generation of inconsistent natural language explanations.",
"Moreover, as part of the framework, we address the problem of adversarial attacks with full target sequences, a scenario that was not previously addressed in sequence-to-sequence attacks.",
"Finally, we apply our framework on a state-of-the-art neural natural language inference model that provides natural language explanations for its predictions.",
"Our framework shows that this model is capable of generating a significant number of inconsistent explanations.",
"In order to explain the predictions produced by accurate yet black-box neural models, a growing number of works propose extending these models with natural language explanation generation modules, thus obtaining models that explain themselves in human language (Hendricks et al., 2016; Camburu et al., 2018; Park et al., 2018; Kim et al., 2018; Ling et al., 2017).",
"In this work, we first draw attention to the fact that such models, while appealing, are nonetheless prone to generating inconsistent explanations.",
"We define two explanations to be inconsistent if they provide contradictory arguments about the instances and predictions that they aim to explain.",
"For example, consider a visual question answering (VQA) task (Park et al., 2018) and two instances where the image is the same but the questions are different, say Is there an animal in the image? and Can you see a Husky in the image? .",
"If for the first instance a model predicts Yes. and generates the explanation Because there is a dog in the image. , while for the second instance the same model predicts No. and generates the explanation Because there is no dog in the image. , then the model is producing inconsistent explanations.",
"Inconsistent explanations reveal at least one of the following undesired behaviors: (i ) at least one of the explanations is not faithfully describing the decision mechanism of the model, or (ii ) the model relied on a faulty decision mechanism for at least one of the instances.",
"Note that, for a pair of inconsistent explanations, further investigation would be needed to conclude which of these two behaviors is the actual one (and might vary for each instance).",
"Indeed, a pair of inconsistent explanations does not necessarily imply at least one unfaithful explanation.",
"In our previous example, if the image contains a dog, it is possible that the model identifies the dog when it processes the image together with the first question, and that the model does not identify the dog when it processes the image together with the second question, hence both explanations would faithfully reflect the decision mechanism of the model even if they are inconsistent.",
"Similarly, a pair of inconsistent explanations does not necessarily imply that the model relies on a faulty decision mechanism, because the explanations may not faithfully describe the decision mechanism of the model.",
"We here will not investigate the problem of identifying which of the two undesired behaviors is true for a pair of inconsistent explanations.",
"checking if models are robust against generating inconsistent natural language explanations.",
"Given a model m that produces natural language explanations for its predictions, and an instance x , our framework aims to generate inputs x that cause the model to produce explanations that are inconsistent with the explanation produced for x .",
"Thus, our framework falls under the category of adversarial methods , i.e., searching for inputs that cause a model to produce undesired answers (Biggio et al., 2013; Szegedy et al., 2014).",
"As part of our framework, we address the problem of adversarial attacks with full target sequences, a scenario that has not been previously addressed in sequence-to-sequence attacks, and which can be useful for other areas, such as dialog systems.",
"Finally, we apply our framework on a state-of-the-art neural natural language inference model that generates natural language explanations for its decisions (Camburu et al., 2018).",
"We show that this model can generate a significant number of inconsistent explanations.",
"Given a model m that can jointly produce predictions and natural language explanations, we propose a framework that, for any given instance x , attempts to generate new instances for which the model produces explanations that are inconsistent with the explanation produced for x ; we refer to the latter as e m ( x ) .",
"We approach the problem in two high-level steps.",
"Given an instance x , (A ) we create a list of explanations that are inconsistent with the explanation generated by the model on x , and (B ) given an inconsistent explanation from the list created in A, we find an input that causes the model to generate this precise inconsistent explanation.",
"Setup.",
"Our setup has three desired properties that make it different from commonly researched adversarial settings in natural language processing: At step (B), the model has to generate a full target sequence : the goal is to generate the exact explanation that was identified at step (A) as inconsistent with the explanation e m ( x ) .",
"Adversarial inputs do not have to be a paraphrase or a small perturbation of the original input, since our objective is to generate inconsistent explanations rather than incorrect predictions these can eventually happen as a byproduct.",
"To our knowledge, this work is the first to tackle this problem setting, especially due to the challenging requirement of generating a full target sequence see Section 4 for comparison with existing works.",
"Context-dependent inconsistencies.",
"In certain tasks, instances consist of a context (such as an image or a paragraph), and some assessment to be made about the context (such as a question or a hypothesis).",
"Since explanations may refer (some-times implicitly) to the context, the assessment of whether two explanations are inconsistent may also depend on it.",
"For example, in VQA, the inconsistency of the two explanations Because there is a dog in the image. and Because there is no dog in the image. depends on the image.",
"However, if the image is the same, the two explanations are inconsistent regardless of which questions were asked on that image.",
"For such a reason, given an instance x , we differentiate between parts of the instance that will remain fixed in our method (referred to as context parts and denoted as x c ) and parts of the instance that our method will vary in order to obtain inconsistencies (referred to as variable parts and denoted as x v ).",
"Hence, x = ( x c , x v ) .",
"In our VQA example, x c is the image, and x v is the question.",
"Stand-alone inconsistencies.",
"Furthermore, we note that there are cases for which explanations are inconsistent regardless of the input.",
"For example, explanations formed purely of background knowledge such as A woman is a person. and A woman is not a person. 1 are always inconsistent (and sometimes outrageous), regardless of the instances that lead to them.",
"For these cases, our method can treat the whole input as variable, i.e., x c = and x v = x .",
"1. Reverse the explanation generator module of model m by training a REVEXPL model to map from the generated explanation and the context part of the input to the variable part of the input, i.e., REVEXPL ( x c , e m ( x )) = x v .",
"(a) Create a list of statements that are inconsistent with e , we call it I e .",
"(b) Query REVEXPL on each e I e and the context x c .",
"Get the new variable part x v = REVEXPL ( x c , e ) of a reverse input x = ( x c , x v ) , which may cause the m to produce inconsistent explanations.",
"(c) Query m on each reverse input to get a reverse explanation e m ( x ) .",
"(d) Check if each reverse explanation e m ( x ) is indeed inconsistent with e by checking if e m ( x ) I e .",
"To execute step (2a), note that explanations are by nature logical sentences.",
"Hence, for any task, one may define a set of logical rules to transform an explanation into an inconsistent counterpart, such as negation or replacement of task-essential tokens with task-specific antonyms.",
"For example, in explanations for self-driving cars (Kim et al., 2018), one can replace green light with red light , or the 1 Which was generated by the model in our experiments. road is empty with the road is crowded (which are task-specific antonyms), to get inconsistent (and hazardous) explanations such as The car accelerates because there is a red light. .",
"Another strategy to obtain inconsistent explanations consists of swapping explanations from mutually exclusive labels.",
"For example, assume a recommender system predicts that movie X is a bad recommendation for user Y because X is a horror movie. , implying that user Y does not like horror movies.",
"If it also predicts that movie Z is a good recommendation to the same user Y because Z is a horror movie. , then we have an inconsistency, as the latter would imply that user Y likes horror movies.",
"While this step requires a degree of specific adjustment to the task at hand, we consider it a small price to pay to ensure that the deployed system is coherent.",
"Also, note that this step can eventually be automated, for example, by training a neural network to generate task-specific inconsistencies after crowd-sourcing a dataset of inconsistent explanations for a task at hand we leave this as future work.",
"Finally, to execute step (2d), our framework currently checks for an exact string match between a reverse explanation and any of the inconsistent explanations created at step (2a).",
"Alternatively, one can train a model to identify if a pair of explanations forms an inconsistency, which we also leave as future work.",
"We consider the task of natural language inference (NLI) (Bowman et al., 2015), which consists of detecting whether a pair of sentences, called premise and hypothesis , are in a relation of: entailment , if the premise entails the hypothesis; contradiction , if the premise contradicts the hypothesis; or neutral , if neither entailment nor contradiction holds.",
"For example, a pair with premise Two doctors perform surgery on patient. and hypothesis Two doctors are performing surgery on a man. constitutes a neutral pair.",
"The SNLI corpus (Bowman et al., 2015) of 570 K such human-written instances enabled a plethora of works on this task (Rocktaschel et al., 2015; Munkhdalai and Yu, 2016; Liu et al., 2016).",
"Recently, Camburu et al. (2018) augmented SNLI with crowd-sourced free-form explanations of the ground-truth label, called e-SNLI.",
"An explanation from e-SNLI for the neutral pair above is Not every patient is a man. .",
"Their best model for generating explanations, called EXPLAINTHENPREDICTATTENTION (here-after called ETPA), is a sequence-to-sequence attention model that uses two bidirectional LSTM networks (Hochreiter and Schmidhuber, 1997) for encoding the premise and hypothesis, and an LSTM decoder for generating the explanation while separately attending over the tokens of the premise and hypothesis.",
"Subsequently, they predict the label solely based on the explanation via a separately trained network, which maps an explanation to a label.",
"We show that our framework is able to make ETPA 2 generate a significant number of inconsistent explanations.",
"We highlight that our final goal is not a label attack, even if, for this particular model in which the label is predicted solely from the explanation, we implicitly also have a label attack with high probability.",
"3 In our experiments, we set x c as the premise (as this represents the given context in this task) and x v as the hypothesis.",
"However, note that due to the nature of SNLI for which decisions are based mostly on commonsense knowledge, the explanations are most of the time independent of the premise, such as A dog is an animal. hence, it would be possible to also reverse the premise and not just the hypothesis; we leave this as future work.",
"For the REVEXPL model, we use the same neural architecture and hyperparameters used by Camburu et al. (2018) for ETPA.",
"REVEXPL takes as input a premise-explanation pair, and produce a hypothesis.",
"Our trained REVEXPL model is able to reconstruct exactly the same (according to string matching) hypothesis with 32 .",
"78% test accuracy.",
"Creating I e .",
"To execute step (2a), we employ negation and swapping explanations.",
"For negation, we simply remove the tokens not and n't if they are present.",
"If these tokens appear more than once in an explanation, we create multiple inconsistencies by removing only one occurrence at a time.",
"We do not attempt to add negation tokens, as this may result in grammatically incorrect sentences.",
"For swapping explanations, we note that the explanations in e-SNLI largely follow a set of label-2 We use the pretrained model from https://github.",
"specific templates.",
"This is a natural consequence of the task and the SNLI dataset and not a requirement in the collection of the e-SNLI.",
"For example, annotators often used One cannot X and Y simultaneously. to explain a contradiction, Just because X, doesn't mean Y. for neutral, or X implies Y. for entailment.",
"Since any two labels are mutually exclusive, transforming an explanation from one template to a template of another label should automatically create an inconsistency.",
"For example, for the explanation of the contradiction One cannot eat and sleep simultaneously. , we match X to eat and Y to sleep , and create the inconsistent explanation Eat implies sleep. using the entailment template X implies Y. .",
"Thus, for each label, we created a list of the most used templates that we manually identified among e-SNLI, which can be found in Appendix A. A running example of creating inconsistent explanations by swapping is given in Appendix A.1.",
"If there is no negation and no template match, we discarded the instance.",
"In our experiments, we only discarded 2 .",
"6% of the SNLI test set.",
"We note that this procedure may result in grammatically or semantically incorrect inconsistent explanations.",
"However, as we will see below, our REVEXPL performed well in generating correct and relevant reverse hypotheses even when its input explanations were not correct.",
"This is not surprising, because REVEXPL has been trained to output ground-truth hypotheses.",
"Results and discussion.",
"We identified a total of 1044 pairs of inconsistent explanations starting from the SNLI test set, which contains 9824 instances.",
"First, we noticed that there are, on average, 1 .",
"93 1 .",
"77 distinct reverse hypotheses giving rise to a pair of inconsistent explanation.",
"Since the hypotheses are distinct, each of these instances is a separate valid adversarial inputs.",
"However, if one is strictly interested in the number of distinct pairs of inconsistent explanations, then, after eliminating duplications, we obtain 540 pairs of such inconsistencies.",
"Secondly, since the generation of natural language is always best evaluated by humans, we manually annotated 100 random distinct pairs.",
"We found that 82% of the reverse hypotheses form realistic instances together with the premise.",
"We also found that the majority of the unrealistic instances are due to a repetition of a token in the hypothesis.",
"For example, A kid is riding a helmet with a helmet on training. is a generated reverse hypothesis which is just one token away from a perfectly valid hypothesis.",
"Given our estimation of 82% to be inconsistencies caused by realistic reverse hypotheses, we obtained a total of 443 distinct pairs of inconsistent explanations.",
"While this means that our procedure only has a success rate of 4 .",
"51% , it is nonetheless alarming that this very simple and under-optimized adversarial framework detects a significant number of inconsistencies on a model trained on 570 K examples.",
"In Table 1, we see three examples of detected inconsistencies.",
"More examples can be found in Appendix B. Manual scanning.",
"We were curious to what extent one can find inconsistencies via a brute-force manual scanning.",
"We performed three such experiments, with no success.",
"On the contrary, we noticed a good level of robustness against inconsistencies when scanning through the generic adversarial hypotheses introduced by Carmona et al. (2018).",
"The details are in Appendix C. 4 Related Work An increasing amount of work focuses on providing natural language, free-form explanations (Camburu et al., 2018; Kim et al., 2018; Park et al., 2018; Hendricks et al., 2016) as a more comprehensive and user-friendly alternative to other forms of explainability, such as feature-based explanations (Ribeiro et al., 2016; Lundberg and Lee, 2017).",
"In this work, we bring awareness to the risk of generating inconsistent explanations.",
"Similarly, Hendricks et al. (2017) identify the risk of mentioning attributes from a strong class prior without any evidence being present in the input.",
"Generating adversarial examples.",
"Generating adversarial examples is an active research area in natural language processing (Zhang et al., 2019; Wang et al., 2019).",
"However, most works build on the requirement that the adversarial input should be a small perturbation of an original input (Be-linkov and Bisk, 2017; Hosseini et al., 2017; Cheng et al., 2018), or should be preserving the semantics of the original input (Iyyer et al., 2018).",
"Our setup does not have this requirement, and any pair of task-realistic inputs that causes the model to produce inconsistent explanations suffices.",
"Most importantly, to our knowledge, no previous adversarial attack for sequence-to-sequence models generates full target sequences.",
"For instance, Cheng et al. (2018) require the presence of pre-defined tokens anywhere in the target sequence: they only test with up to 3 required tokens, and their success rate dramatically drops from 99% for 1 token to 37% for 3 tokens for the task of summarization.",
"Similarly, Zhao et al. (2018) proposed an adversarial framework for adding and removing tokens in the target sequence for the task of machine translation.",
"Our scenario would require as many tokens as the desired adversarial explanation, and we also additionally need them to be in a given order, thus tackling a much challenging task.",
"Finally, Minervini and Riedel (2018) attempted to find inputs where a model trained on SNLI violates a set of logical constraints.",
"However, their method needs to enumerate and evaluate a potentially very large set of perturbations of the inputs.",
"Besides the computational overhead, it also may easily generating ungrammatical inputs.",
"Moreover, their scenario does not address the question of automatically producing undesired (inconsistent) sequences.",
"We drew attention that models generating natural language explanations are prone to producing inconsistent explanations.",
"This concern is general and can have a large practical impact.",
"For example, users would likely not accept a self-driving car if its explanation module is prone to state that The car accelerates because there are people crossing the intersection. .",
"We introduced a generic framework for identifying such inconsistencies and showed that the best existing model on e-SNLI can generate a significant number of inconsistencies.",
"Future work will focus on developing more advanced procedures for detecting inconsistencies, and on building robust models that do not generate inconsistencies.",
"Acknowledgments.",
"This work was supported by a JP Morgan PhD Fellowship, the Alan Turing Institute under the EPSRC grant EP/N510129/1, the EPSRC grant EP/R013667/1, the AXA Research Fund, and the EU Horizon 2020 Research and Innovation Programme under the grant 875160."
]
| [
"abstain",
"result",
"abstain",
"abstain",
"method",
"result",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"objective",
"method",
"abstain",
"method",
"result",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
]
|
[
"Chinese pre-trained language models usually exploit contextual character information to learn representations, while ignoring the linguistics knowledge, e.g., word and sentence information.",
"Hence, we propose a task-free enhancement module termed as H eterogeneous L inguistics G raph ( HLG ) to enhance Chinese pre-trained language models by integrating linguistics knowledge.",
"Specifically, we construct a hierarchical heterogeneous graph to model the characteristics linguistics structure of Chinese language, and conduct a graph-based method to summarize and concretize information on different granularities of Chinese linguistics hierarchies.",
"Experimental results demonstrate our model has the ability to improve the performance of vanilla BERT, BERTwwm and ERNIE 1.0 on 6 natural language processing tasks with 10 benchmark datasets.",
"Further, the detailed experimental analyses have proven that this kind of mod-elization achieves more improvements compared with previous strong baseline MWA.",
"Meanwhile, our model introduces far fewer parameters (about half of MWA) and the training/inference speed is about 7x faster than MWA.",
"Our code and processed datasets are available at https://github.com/ lsvih/HLG .",
"Pre-trained Language Models (PLM) (Peters et al., 2018; Devlin et al., 2019; Radford et al., 2018; Yang et al., 2019) have recently demonstrated the effectiveness on a variety of natural language processing (NLP) tasks, such as machine translation and text summarization.",
"For a specific downstream task, the parameters of PLMs can be fine-tuned Both authors contributed equally to this work Corresponding Author with accurately labeled instances or weakly labeled instances of the task to achieve better performance.",
"In recent, there are a series of studies on adapting PLMs for Chinese (Meng et al., 2019; Sun et al., 2019; Cui et al., 2019a; Sun et al., 2020; Wei et al., 2019; Diao et al., 2020; Lai et al., 2021).",
"Many researchers introduce the Chinese-specific linguistics knowledge such as word information into PLMs by conducting elaborate self-supervised tasks to pretrain Chinese PLMs from scratch.",
"Nevertheless, pre-training a PLM is computationally expensive and time-consuming since it needs large-scale Chinese corpus and heavy computational resources.",
"The high cost makes it difficult for researchers to pre-train a PLM from scratch.",
"An alternative way is to integrate the Chinese-specific linguistics knowledge into pre-trained PLMs in the fine-tuning stage in downstream tasks directly.",
"Following this idea, the task-free enhancement module is widely used in the fine-tuning stage by adding an additional adapter in PLMs to integrate external knowledge (Li et al., 2020).",
"As shown in Figure 1, the enhancement module is inserted between PLMs and task-specific module, and its inputs are the hidden representations of PLMs and embeddings of external knowledge.",
"To achieve the goal of integrating external knowledge into PLMs in the fine-tuning stage, the enhancement module should have the following characteristics.",
"First, as a plug-in adapter module in fine-tuning stage, it should maintain consistent output formulation with PLM.",
"Second, it should not introduce unacceptable time or space complexity for training and inference.",
"Third, it should improve the performance of downstream tasks universally.",
"With the core idea of the enhancement module, Li et al. (2020) proposed a multi-source word-aligned model (MWA) to enhance PLMs by integrating Chinese Word Segmentation (CWS) bound-1986 Pre-trained Language Model Task-SpecificModel Fine-tuning Pre-trained Language Model Task-SpecificModel Fine-tuning Enhancement Module ExternalKnowledge Figure 1: The diagram of Enhancement Module framework.",
"aries information implicitly.",
"It first exploits various CWS tools to generate multiple word sequences and then utilizes word-aligned attention with a mixed pooling to integrate the word information into characters.",
"Experimental results show that MWA has the ability to utilize CWS segmentation information to enhance Chinese PLMs to achieve SOTA performance in many downstream NLP tasks.",
"However, MWA has two weaknesses:",
"1) Efficiency Degradation : The model structure of MWA is naturally non-parallel and cannot benefit from GPU acceleration (detailed in 4.3.3), which results in time inefficiencies in both training and inference processes.",
"2) Linguistic Information Loss : MWA utilizes a pooling-based mechanism to perform interaction between characters and words.",
"Such a heuristic method could not make full use of information, resulting in sub-optimal results.",
"To tackle the aforementioned limitations, we propose H eterogeneous L inguistics G raph ( HLG ), which is Graph Neural Network (GNN) based method to integrate CWS information to enhance PLMs.",
"Specifically, the hierarchical CWS information is first conducted by a heterogeneous graph, which contains character nodes, word nodes and sentence nodes.",
"The edge between nodes indicates the inclusion relationship of the grammatical structure between the linguistic hierarchies.",
"Then, a simple but effective multi-step information propagation (MSIP) is proposed to incorporate the linguistics knowledge of heterogeneous graph to enhance Chinese PLMs inductively.",
"In this way, we can obtain adequate information interaction among characters, words and sentences.",
"Furthermore, the internal implementation of HLG is highly parallelized, which is conducive to GPU accelerate and raises the operating efficiency.",
"In summary, we abstract out an adapter component named enhancement module for PLMs to integrate external knowledge during the fine-tuning stage.",
"In this paradigm, we further introduce HLG to integrate CWS information delicately and model it via an effective MSIP.",
"Extensive experiments conducted on 10 benchmark datasets of 6 NLP tasks demonstrate that our model outperforms the BERT, BERTwwm and ERNIE 1.0 significantly and steadily.",
"Comparing with MWA, a strong baseline that also incorporates CWS information to enhance PLMs, our model achieves a steady improvement with the same information.",
"Meanwhile, compared with previous work, MWA, our proposed HLG introduces only half additional parameters and the training/inference speed is about 7x faster.",
"As mentioned in 1, the pre-trained language models (PLMs) have achieved great success in many NLP applications with the 2-stage paradigm of pretraining and fine-tuning.",
"The PLMs usually perform pre-training on large-scale unlabeled corpus in virtue of self-supervised reconstruction tasks.",
"For example, BERT (Bidirectional Encoder Representations from Transformers) (Devlin et al., 2019) is a typical well-known PLM, which conducts masked language modeling and next sentence prediction as pre-training tasks.",
"After completing the pre-training, the PLMs learn substantial contextualized text representations, and then adapt fine-tuning on specific downstream tasks.",
"In Chinese NLP, PLMs are generally character-based models (Li et al., 2019; Cui et al., 2019a).",
"Specifically, given a character sequence: S = [ c 1 , c 2 , ..., c n ] (1) the outputs of Chinese PLMs can be treated as the character-level representations H R n d , where the d is the dimension of representation.",
"As the same as most East-Asian languages, Chinese language is written without explicit word delimiters and the character is the smallest morpheme unit in Chinese linguistic (Cai and Zhao, 2016).",
"Although character-based models could achieve good performance (Li et al., 2019), Li et al. (2020) point out that introducing Chinese Word Segmentation 1987 [CLS] [SEP] Characters Words Sentences mountain forest park S2 west S3 westernmountain forestalpark [SEP] Beijing [CLS] S1 Figure 2: Overview of HLG structure.",
"We give a formality definition of segmenter and its partition strategy .",
"Given a sentence consisting of a sequence of characters as Eq.",
"1, a segmenter is defined as: SEGMENTER : S S (cid:48) where is a partition strategy of sentence.",
"Specifi-cally, partition and group the character sequence S into the word sequence S (cid:48) : ( S ) = S (cid:48) = [ w 1 , w 2 , ..., w m ] (2) where m n and w i = [ c s , c s +1 , ..., c s + l 1 ] is the i -th segmented word with a length of l and s is the index of w i 's first character in S .",
"Namely, the word w i is a sequence of characters { c s , c s +1 , ..., c s + l 1 } , and the sentence S (cid:48) is a sequence of words { w 1 , w 2 , ..., w m } .",
"Li et al. (2020) carried out researches on integrating CWS information into Chinese PLMs.",
"The authors brought an architecture named Multi-source Word-aligned Attention (MWA) to incorporate multi-granularity segmentation via pooling attention weights among characters within the word.",
"Formally, given a character sequence S as Eq.",
"1 and its partition strategy as Eq.",
"2.",
"The character-based representation H could be gained via PLM, MWA conducted self-attention between characters: A = softmax (cid:16) ( KW k )( QW q ) T d (cid:17) where Q and K are both H , d is defined in 2.1, and A represents the attention score matrix.",
"We decompose A over columns as [ a 1 , a 2 , ..., a n ] , and then perform partition on it: ( A ) = [ { a 1 , a 2 } , { a 3 } , ... { a sc , ..., a s + l 1 c } ..., { a n 1 , a n } ] where s and l are defined in 2.2.",
"Pooling each group of partitioned columns: a iw = MixPooling ( { a sc , ..., a s + l 1 c } ) in which MixPooling is defined in Yu et al. (2014).",
"The gained A w = [ a 1 w , a 2 w , ..., a mw ] R n m is the character-to-word attention weight matrix.",
"After performing alignment-wise multiply (Li et al., 2020) between character-to-word attention weight matrix A w and the character-based representation H , the enhanced character-based representation which integrates CWS information can be obtained.",
"In essence, the MWA conducts interaction between characters and words via character-to-word attention weight matrix A w , implicitly summary the information from characters, and performs MixPooling to aggregate the word-based segmentation information and concrete the character-level representation.",
"This section introduces the components of our model HLG which instantiates the enhancement module by exploiting the CWS information.",
"We first briefly explain the graph convolutional network as our base encoder, and then describe the graph construction of HLG.",
"Finally, we give the details of the multi-step information propagation (MSIP) to integrate the CWS information into PLMs.",
"Graph Convolutional Network (GCN) (Bruna et al., 2014; Kipf and Welling, 2017; Defferrard et al., 2016) is a powerful tool to extend the convolution operation from the grid data to irregular graph data.",
"The basic idea of GCN is to aggregate the representations of neighbors to obtain better representation expression of nodes in the graph.",
"For instance, consider a homogeneous graph G = ( V , E ) constructed by nodes set V and edges set E .",
"A R |V||V| is a binary adjacency matrix where each element A ij denotes whether node i has an edge with node j in the edge set E .",
"Formally, a standard GCN layer can be abstracted as: H out = ( AH in W ) , A = Norm ( A ) (3) where H in denotes the input representation matrix, H out is the updated representation matrix, Norm ( ) means row normalizing function, A is the normalized adjacency matrix, ( ) is the ReLU function and W is a parameter matrix.",
"After such convolution operation, the representation H in were aggregated rely on edge connections defined by A , and transformed into H out by linear multiplication and active function.",
"We build a heterogeneous graph G = ( C , W , S , E ) to model the structure of Chinese linguistic, where C , W , S , E denote the character nodeset, word nodeset, sentence nodeset and edge set, respectively.",
"Besides, different from homogeneous graph, HLG models relationship between three granularities of linguistic in a hierarchical way.",
"As presented in Figure 2, G is composed of three hierarchies including characters, words and sentences.",
"In this case, we employed three different CWS tools, and got three different segmentation results, which resulted in three sentences with slightly different semantics.",
"Note that the same word segmentation results in the same position obtained by different CWS tools will be regarded as the same word node to enhance the interaction (e.g., Beijing and park in Figure 2).",
"This purpose is to denoise the mistake word nodes brought by segmenter error.",
"If a word is segmented by multiple segmenters at the same time, the corresponding word node will have a higher vertex degree.",
"Such nodes with higher betweenness centrality will lead to a stronger influence on the followed information propagation and achieve the effect of denoising intuitively, like the vote-based multi-model ensemble.",
"In HLG, only one adjacency matrix A is not enough to describe the hierarchical relationships between characters, words and sentences.",
"Hence, we \" \"#$ \" $#% \" %#$ \" $#\" !",
"conduct two interaction matrices A c 2 w R |W||C| and A w 2 s R |S||W| to indicate aforementioned relationships.",
"To be specific, we take the A c 2 w as an example (the one for A w 2 s is analogous), the element A c 2 w ij denotes whether word i has an edge with character j in the edge set E .",
"Similar to Eq.",
"3, we also denote normalized interaction matrices as A c 2 w and A w 2 s .",
"To model the granularities hierarchical relationships in G , we devise a multi-step information propagation to learn the linguistics knowledge.",
"In CWS, the partition and group processes could be considered as the partition of semantic representation and the aggregation of separated semantic respectively (detailed in 2.2).",
"Inspired by CWS processes, we introduce two operations into MSIP to simulate such processes and named as summarization and concretization.",
"Figure 3 shows the information propagation procedure of MSIP.",
"Summarization.",
"The summarization operation focuses on generalizing hierarchical word and sentence representations (e.g., from character-level to word-level).",
"Specifically, given a heterogeneous graph G and corresponding character representations H c from PLM, the summarization operation can be formulated as follows: H w = ( A c 2 w H c W c 2 w ) , H s = ( A w 2 s H w W w 2 s ) , (4) where the W c 2 w , W w 2 s are parameter matrices, H w , H s are the interim representations of words and sentences.",
"Concretization.",
"Concretization is the inverse operation of summarization, it is used to repartition the semantics from high-level to low-level (e.g. from sentence-level to word-level).",
"To do so, we first calculate the normalized interaction matrices A s 2 w and A w 2 c , which can be simply obtained by first transposed then normalized the predefined interaction matrices A w 2 s and A c 2 w , respectively.",
"Thus, we have: A s 2 w = Norm (cid:0) ( A w 2 s ) (cid:62) (cid:1) , A w 2 c = Norm (cid:0) ( A c 2 w ) (cid:62) (cid:1) , (5) where ( ) (cid:62) is the transpose function.",
"Afterward, the concretization operation is defined as follows: H w (cid:48) = ( A s 2 w H s W s 2 w ) , H c (cid:48) = ( A w 2 c H w (cid:48) W w 2 c ) , (6) where W s 2 w and W w 2 c are parameter matrices, H w (cid:48) and H c (cid:48) are also interim word and character representations, H w (cid:48) denote the final word representations defined in Eq.",
"7.",
"Skip Connection.",
"Intuitively, it is difficult to generate satisfied low-level representations from the high-level representations directly.",
"For example, it is easy to learn a few sentence representations from dozens of word representations, but hard to generate dozens of word representations from a few sentence representations.",
"To mitigate this problem, in this paper, we introduce the skip connection to enhance the MSIP, which is to simulate the self-loop in vanilla GCN.",
"As shown in Figure 3, we add skip connections between the summarization representations and the concretization representations directly.",
"Formally, the skip connection can be simply expressed as: H w (cid:48) = H w (cid:48) + ( H w W w ) , H c (cid:48) = H c (cid:48) + ( H c W c ) , (7) where W w and W c are parameter matrices.",
"Furthermore, H c (cid:48) denote the final representations for characters, which incorporates the fine-grained linguistics knowledge in G .",
"For a fair comparison with MWA, which also gives an enhancement module by incorporating CWS information.",
"We conduct the same experiments on five NLP tasks with various benchmark datasets.",
"Three frequently-used Chinese PLMs: BERT (De-vlin et al., 2019), ERNIE 1.0 (Sun et al., 2019) and BERTwwm (Cui et al., 2019a) are employed as the basic PLM to enhance.",
"Three CWS tools: thulac (Sun et al., 2016a), ictclas (Zhang et al., 2003) and hanlp (He, 2014) are employed to gain the segmentation information.",
"The time of pre-processing including applying CWS tools is ignored in the experimental report.",
"In the production, preprocessing and inference can be asynchronously executed in parallel (while inference a batch of data, the subsequence data can be preprocessed with multiprocess) (Cheng et al., 2019), all three of the CWS tools we've introduced are fast enough to achieve this effect.",
"According to rough estimates and technical reports, the processing speed of these tools are thulac 1221KB/s, ictclas 769KB/s, hanlp 1375KB/s, respectively.",
"Specifically, we instantiate the enhancement module as HLG and incorporate with downstream task-specific model.",
"To verify the effectiveness of HLG, we execute 5 times fine-tuning on 10 benchmark datasets of 6 NLP tasks and report the average score.",
"The tasks include Sentiment Classification (SC), Document Classification (DC), Named Entity Recognition (NER), Sentence Pair Matching (SPM), Natural Language Inference (NLI) and Machine Reading Comprehension (MRC).",
"Specifi-cally, the following benchmark datasets are chosen to evaluate the performance:",
"1) SC : ChnSenti 1 and weibo100k 2 sentiment datasets are used for evaluating the capacity of short text classification.",
"2) DC : THUCNews (Sun et al., 2016b) dataset contains 10 types of news for performing long text classification.",
"3) NER : Ontonotes 4.0 (Weischedel et al., 2011) and MSRA-NER (Levow, 2006a) are used for testing model in sequence tagging task.",
"4) SPM : LCQMC (Liu et al., 2018) and BQ (Chen et al., 2018) are used to evaluate the text matching ability of model.",
"5) NLI : We conduct experiments on the Chinese part of XNLI (Conneau et al., 2018) dataset, and adopt the same pre-processing strategy as ERNIE (Sun et al., 2019).",
"6) MRC : Commonly used datasets DRCD (Shao et al., 2018) and CMRC2018 (Cui et al., 2019b) are tested.",
"CMRC2018 is only evaluated on dev set as same as (Wei et al., 2019; Sun et al., 2020).",
"We implement the presented approach in Py-Torch and fine-tune the downstream tasks on multiple Nvidia Tesla V100 GPUs.",
"The basic architecture of PLMs and pre-trained parameters are provided by Huggingface (Wolf et al., 2020).",
"The initial learning rate and other hyper-parameters refer to the previous works reported (Cui et al., 2019a; Li et al., 2020; Sun et al., 2020).",
"Since the parameters of PLMs have been optimized, while the parameters of HLG and the downstream tasks are untrained.",
"Hence, the learning rate of HLG part is larger than PLM part, we manually tuned the learning rates of PLM and HLG separately.",
"The experimental results are shown in Table 1.",
"Overall, we can observe that both HLG and MWA outperform baseline models (BERT, BERTwwm and ERNIE 1.0).",
"Comparing with WMA, HLG achieves further improvement and significantly outperforms baseline models on 10 tasks.",
"In detail, HLG outperforms MWA on ChnSent, weibo100k, MSRA-NER, ontonotes, LCQMC, BQ, THUCNews and XNLI tasks, and obtains comparable results on DRCD and CMRC2018 datasets.",
"For the text classification tasks, namely SC and DC , HLG respectively achieves 0.88% and 0.84% average improvement on ChnSenti and weibo100k dataset, while MWA gains 0.53% and 0.82%.",
"Meanwhile, HLG obtains 0.35% improvement on the long text multi-classification benchmark THUCNews, and MWA gets 0.31% points.",
"Comparing with text classification tasks, the improvements over NER tasks are more obvious.",
"The main reason may be that CWS explicitly provides the word boundaries, which are important to recognize entities accurately.",
"On the ontonotes dataset, the promotion of HLG (1.28% averagely) is distinctly higher than that of MWA (0.92% av-eragely).",
"Compared to the strong baseline models, the F1 scores of MSRA-NER have improved average 0.13% and 0.10% by HLG and MWA, respectively.",
"HLG achieves the best results on the text matching tasks ( SPM ) and its variant NLI , which brings 1.28% average improvement to LCQMC, 0.26% average improvement to BQ, and 0.78% average improvement to XNLI.",
"The improvements of HLG are much higher than that of MWA (0.3%, 0.23% and 0.55%).",
"As described in Chen et al. (2020) and 1991 No.",
"Lyu et al. (2021), text matching tasks can benefit from the interaction between the paired sentences.",
"HLG follows them to construct graphs over sentence pairs collectively, which naturally obtains advantages in text matching tasks.",
"For MRC task, HLG and MWA achieve comparable results on those datasets.",
"HLG gets an average improvement of 1.41 in EM and 0.85 in F1 score, while MWA gets 1.4 EM and 0.86 F1 score.",
"However, HLG has dominant advantage in training speed and inference speed.",
"Detail analysis of time efficiency is in 4.3.3.",
"We conduct ablation experiments to explore the effectiveness of the number of CWS tools.",
"The ablation experiments are organized on sentiment classification task, ChnSenti and weibo100k dev set.",
"As shown in Table 2, 5 popular CWS tools are added into our model successively according to the order, and we also show the total number of word nodes in our HLG.",
"Meanwhile, the information from multiple word segmentation tools can be integrated at the same time without increasing parameter size in HLG (only the A is changed).",
"Figure 4 shows the performance of BERT+HLG with different numbers of CWS tools on ChnSenti and weibo100k dev sets.",
"Experimental results demonstrate the effectiveness of introducing word segmentation information.",
"We can observe that when the number of CWS tools is larger, the number of generated word nodes gradually increasing to converge, and the performance of the model slightly is not always increasing as the word count.",
"The more CWS tools introduced will bring more diversity but also bring noise caused by segmenter error.",
"In practice, we find using 4 or more CWS tools can slightly increase the performance but take much longer preprocessing time, hence we select the elbow of the curve as the number of CWS tools.",
"That is, using 3 as the number of CWS tools might be a balance between the performance of model and the cost of preprocessing.",
"This number also coincides with the configuration in MWA.",
"In general, the enhancement module should be able to bring performance improvements without unacceptable space complexity.",
"Therefore, we conduct a comparative experiment on XNLI dev set to explore the performance improvement and the space overhead between MWA and HLG.",
"To be specific, the number of parameters in MWA depends on the dimension of PLM's representation and the number of CWS tools K .",
"Concretely, MWA contains K transformer layers and 1 aggregation layer.",
"Nevertheless, our HLG only depends on the dimension of PLM's representation and simply contains 4 basic GCN layers and 2 skip connections.",
"Thus, the number of parameters of 1992 Model Params.",
"size ( MWA ) = K (4 d 2 ) Transformer + d size ( HLG ) = (4 d 2 ) GCN + 2 d 2",
"As discussed before, we employ 3 CWS tools in both MWA and HLG.",
"Table 3 reports the performance of BERT, BERTwwm, and ERNIE 1.0 on the XNLI dev set.",
"Obviously, HLG can get a greater performance improvement with only half additional parameters.",
"It shows that as an enhancement module, HLG is superior to MWA in terms of parameter utilization efficiency.",
"In addition, to verify the impact of the additional parameters , we also conduct an ablation experiment on XNLI dev set that utilizes the random tokenizer, the single-character tokenizer, and sole segmenter to obtain the different word segmentation results, and send those results to HLG to eliminate the additional benefit from the change of neural network structure and the increase of parameters.",
"The results are shown in Table 4, which indicates that the increment of parameters can slightly affect character-based model performance, and the CWS information is significantly useful to promote the performance of character-based PLM.",
"Time efficiency is an important indicator in the real-world production.",
"Less training time and inference time means lower costs.",
"In order to analyze the additional time cost of different enhancement modules, we conduct comparative experiments among BERT, BERT+MWA, and BERT+HLG on ChnSenti, LCQMC and XNLI datasets.",
"For the fair comparison, we remain other hyper-parameters consistent for the three models.",
"As shown in Figure 5, we compare time cost during training and inference between vanilla BERT, BERT+MWA and BERT+HLG.",
"We can observe that the training time and inference time of Figure 5: The training time, inference time of vanilla BERT, BERT+MWA and BERT+HLG on ChnSenti, LCQMC and XNLI benchmarks.",
"BERT+HLG are basically consistent with vanilla BERT.",
"However, when MWA is introduced, the average training time increases by 7 times, and the average inference time increases by 7.6 times.",
"This is because MWA must calculate aligned attention weights token by token, and it cannot benefit from CUDNN parallelization, resulting in terrible operating efficiency.",
"On the contrary, HLG is composed of GCNs, and its internal implementation is basically the simplest non-linear transformation.",
"Therefore, HLG could be maximally accelerated through the optimized matrix operation of CUDNN primitive, which only produces a negligible impact on time efficiency.",
"Pre-training language models, such as ELMo (Pe-ters et al., 2018), BERT (Devlin et al., 2019), XL-NET (Yang et al., 2019) and GPT (Radford et al., 2018), have shown their powerful performances on various natural language processing tasks and have been applied in many applications.",
"In recent past, there are studies adapting PLMs for Chinese with Chinese-specific features such as word information.",
"Glyce (Meng et al., 2019) proposed to use the glyph information of Chinese characters to enhance PLMs.",
"ERNIE 1.0/2.0 (Sun et al., 2019, 2020) and BERTwwm (Cui et al., 2019a) used the whole word mask to learn the structure of words or entities in the pre-training stage and conducted more and better pre-training tasks to perceive large-scale data.",
"NEZHA (Wei et al., 2019) used a series of methods such as functional relative 1993 positional encoding and whole word masking to improve the pre-training tasks, which had brought improvement.",
"ZEN (Diao et al., 2020) adopted n-gram masking to enhance pre-trained encoder and obtained outstanding performance.",
"Lattice-BERT (Lai et al., 2021) introduced word lattice information (Zhang and Yang, 2018) into pre-training framework via lattice position attention.",
"As a fundamental feature of Chinese, word segmentation information is flexibility, granularity, and easy-to-get (Sproat and Emerson, 2003; Levow, 2006b).",
"Further, Zhang et al. (2018); Li et al. (2019, 2020) conducted detailed research and experiments on the application of CWS in deep learning, and found that CWS information can effectively improve the performance of Chinese character-based PLMs.",
"Recently, a lot of works have been proposed to prompt NLP applications by constructing graph on text and modeling with graph neural networks.",
"Yao et al. (2019) first constructed word co-occurrence graph between documents and introduced GCN to modeling and aggregating document representation for text classification.",
"Chen et al. (2020); Lyu et al. (2021) constructed lattice graph to maintain multi-granularity information and external knowledge in Chinese short text matching task.",
"Nguyen and Grishman (2018) proposed performing GCN over dependency trees to extract event trigger.",
"Sui et al. (2019) conducted a character-word interaction graph and performed graph attention network on it to recognize Chinese named entities.",
"Shu et al. (2020) introduced a bipartite-graph based transformer PLM for integrating hierarchical semantic information.",
"In this paper, we propose HLG which acts as the enhancement module to enhance Chinese PLMs with CWS information.",
"The HLG firstly constructs heterogeneous graph based on multiple word segmentations to model the hierarchy of Chinese.",
"Then, the MSIP is proposed to model the fine-grained linguistics knowledge of the heterogeneous graph.",
"Experimental results on 6 NLP tasks with 10 benchmark datasets demonstrate that the performance of our model outperforms previous work, MWA.",
"Besides the performance improvements, HLG introduces only half the additional parameters of MWA and its training/inference speed is 7x faster than MWA.",
"At present, the experimental results of HLG are lagging behind SOTA, and we will try to migrate it to some of the latest PLMs.",
"Besides, HLG has the expansibility to introduce the representation layer of the CWS model directly, or introduce some other information sources such as the knowledge graph, etc.",
"We leave these further improvements to the future.",
"This work is supported by the National Key Research and Development Program of China (grant No.2021YFB3100600 and 2020YFB2103803), the Strategic Priority Research Program of Chinese Academy of Sciences (grant No.XDC02040400), the Youth Innovation Promotion Association of CAS (Grant No. 2021153) and National Natural Science Foundation of China (Grant No.61902394).",
"This work performed while the first author was at IIE, CAS."
]
| [
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"other",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"other",
"other"
]
|
[
"We propose a novel attention network for document annotation with user-generated tags.",
"The network is designed according to the human reading and annotation behaviour.",
"Usually, users try to digest the title and obtain a rough idea about the topic first, and then read the content of the document.",
"Present research shows that the title metadata could largely affect the social annotation.",
"To better utilise this information, we design a framework that separates the title from the content of a document and apply a title-guided attention mechanism over each sentence in the content.",
"We also propose two semantic-based loss regularisers that enforce the output of the network to conform to label semantics, i.e. similarity and subsumption.",
"We analyse each part of the proposed system with two real-world open datasets on publication and question annotation.",
"The integrated approach, Joint Multi-label Attention Network (JMAN), significantly outperformed the Bidirectional Gated Recurrent Unit (Bi-GRU) by around 13%-26% and the Hierarchical Attention Network (HAN) by around 4%-12% on both datasets, with around 10%-30% reduction of training time.",
"Social annotation, or tagging, is a popular functionality allowing users to assign keywords to online resources for better semantic search and recommendation (Vander Wal, 2007; Singer et al., 2014; Gedikli and Jannach, 2014).",
"Common socially annotated textual resources include questions, papers, (micro-)blogs, product reviews, etc.",
"In practice, however, only a limited number of resources is annotated with tags.",
"Annotating a large number of documents requires much cognitive effort and can be time-consuming.",
"This has driven research on document annotation based on existing tag sets (Belem et al., 2017; Nie et al., 2014).",
"Recent studies formalise the automated social annotation task as a multi-label classification problem (Gibaja and Ventura, 2015) and apply deep learning approaches (Li et al., 2016; Huang et al., 2016; Hassan et al., 2018).",
"A strong baseline is the use of Bi-directional RNN (Schuster and Paliwal, 1997) with GRU (Cho et al., 2014) or LSTM (Hochreiter and Schmidhuber, 1997).",
"Another more recent improvement is achieved through Hierarchical Attention Network (HAN) (Yang et al., 2016) which discriminates important words and sentences from others, as adapted in (Hassan et al., 2018) for annotation.",
"These models, however, suffer from two issues:",
"(i) simply scanning over the words and sentences, the models do not fully mimic the way users read and annotate documents, and",
"(ii) semantic relations, similarity and subsumption, among the labels are not considered.",
"Our model focuses on simulating users' reading and annotation behaviour with attention mechanisms.",
"The title of a document is highly abstract while informative about the topics and has a direct impact on users' annotation choice (Lipczak and Milios, 2010), showing high descriptive capacity and effectiveness for annotation (Figueiredo et al., 2013); the content provides complementary information for annotation.",
"Usually, users firstly read the title, and based on their understanding of the title, proceed to the content of the document.",
"To simulate this behaviour, we propose an attention network with separated inputs (title and content) and parallelled attention layers at both the word-level and the sentence-level.",
"One major distinction to previous approaches is to represent the content with a title-guided attention mechanism; this enables the network to discriminate among sentences based on its understanding of the title.",
"various semantic forms and granularities (Peters, 2009; Heymann and Garcia-Molina, 2006).",
"One challenging issue is how to exploit the relations among labels (user-generated tags) (Zhang and Zhou, 2014; Gibaja and Ventura, 2015) to improve the learning performance.",
"Among neural network based methods, a recent attempt is to initialise weights for dedicated neurons in the last layer to memorise the label relations (Kurata et al., 2016; Baker and Korhonen, 2017), however, the limitation is the large number of neurons to be assigned, making it inefficient (or inapplicable) for systems with large number of labels.",
"To incorporate the label semantics inferred from the data or from external knowledge bases into the network, we design two loss regularisers, for similarity and subsumption relations, respectively.",
"The regularisers enforce the output layer of the network to satisfy the semantic constraints of the labels.",
"We propose a parallelled two-layered attention network that simulates users' reading and annotation behaviour for document annotation.",
"The proposed Joint Multi-label Attention Network (J-MAN) approach is depicted in Figure 1.",
"The model inputs the title and content separately into two Bidirectional-RNNs with word-level attention and sentence-level attention mechanisms to capture the important words and sentences.",
"Each target is a multi-hot (as opposed to an one-hot) representation of the labels in the label set y d { 0 , 1 } | T | , where T is a list all labels, 1 indicates that a label appears in the label set of the document d , 0 otherwise.",
"In Figure 1, attention mechanisms are indicated with dotted edges.",
"One key distinction from the HAN model (Yang et al., 2016) is the title-guided sentence-level attention that models the reading order for annotation (the dotted edges linking c t and c ta ).",
"The output layer s d = ( W c c d + b c ) , activated with the sigmoid function , is further constraint by two loss regularisers, emphasising two types of label relations, similarity and subsumption, respectively.",
"For the RNN encoder, we apply the Gated Recurrent Unit (GRU) which can capture long ter-m dependencies and is usually more time-efficient than LSTM (Hochreiter and Schmidhuber, 1997) in training.",
"The Bidirectional-GRU (Bi-GRU) encoder (Cho et al., 2014) concatenates the hidden states generated from two GRUs, one reading the Figure 1: The Proposed Joint Multi-label Attention Network (JMAN) for Social Text Annotation words (or sentences) forward and the other reading them backwards.",
"This helps form a more complete understanding of the current word (or sentence).",
"Hierarchical Attention captures the structure of a document by a word-level attention on each word's hidden state to create a sentence representation, then a sentence-level attention to form a content representation (Yang et al., 2016).",
"The attention coefficients are computed based on the dot product between a non-linearly transformed weight vector of the hidden state and an informative vector, which encodes what is the most informative word (or sentence) in the sequence.",
"This informative vector is commonly treated as a sequence of weights (Yang et al., 2016; Kumar et al., 2018; Hassan et al., 2018), trained along with other weights in the network.",
"We applied parallelled word-level attention on the title and each sentence in the content.",
"The attention coefficient and the final representation of a sequence is calculated as (taking words in title as an example): c t = (cid:88) i i h i = (cid:88) i exp( v wt v i ) (cid:80) j exp( v wt v j ) h i (1) where v i = tanh( W t h i + b t ) is the output of a fully-connected layer of the hidden state h i for each word in the title, v wt is the informative vector for titles, and c t is the resulting title representation.",
"We can compute each sentence representation c s and the content representation c a in a similar manner (see Figure 1).",
"The attention mechanisms above do not capture the interaction between the title and content of the document.",
"Title represents highly abstract while important information about the topics of a document.",
"Selection of the important sentences in the content should conform to the document's general topic, e.g. title.",
"We can thus model the title-guided sentence-level attention as: c ta = (cid:88) r r h r = (cid:88) r exp( c t v r ) (cid:80) k exp( c t v k ) h r (2) where v r = tanh( W s h r + b s ) is a fully connected layer with the hidden state of the r th sentence h r as input and c t is the title representation obtained from Equation 1.",
"Guiding sentence reading through title representation facilitates content understanding, but may lead to an overemphasis on the title in the annotation.",
"In fact, the content itself, carrying more terms, conveys detailed information not covered by the title and may help suggest further tags for annotation (Figueiredo et al., 2013).",
"We thus concatenate the title guided content representation c ta and the content representation c a from the original sentence-label attention, to form a more comprehensive representation of the content.",
"The final content representation is then concatenated with the title representation c d = [ c t , c ta , c a ] .",
"In the experiment, we will show the effectiveness of this design against several variations of the model.",
"Users tend to annotate documents collectively with semantically related tags.",
"Two major semantic relations in user-generated tags are similarity and subsumption (Stock, 2010; Peters, 2009).",
"To deal with this label correlation issue, we propose two loss regularisers jointly learned with the binary cross entropy loss function.",
"The intuition is that the output values of the neural network s d , having the dimensions as the label space | T | , should satisfy semantic relations among labels.",
"Such relations can be inferred from the label sets or observed in external knowledge bases.",
"The whole joint loss is defined as L = LCE + 1 L sim + 2 L sub .",
"LCE is the binary cross entropy loss adopted for multilabel text classification (Nam et al., 2014).",
"L sim and L sub are defined as: L sim = 1 2 (cid:88) d (cid:88) ( j,k ) | T j ,T k y d Sim jk | s dj s dk | 2 L sub = 1 2 (cid:88) d (cid:88) ( j,k ) | T j ,T k y d Sub jk R ( s dj )(1 R ( s dk )) (3) where y d is the label set (annotated tags) of the document d .",
"T is a list of all labels, where j and k are the indices of the list T , corresponding to the indices of nodes s dj and s dk in the output layer s d .",
"R () is the rounding function for binary prediction, R ( s dj ) = 0 if S dj < 0 .",
"5 , otherwise R ( s dj ) = 1 .",
"The similarity matrix Sim (0 , 1) | T || T | indicates pairwise similarity between labels, the larger the value of Sim jk , the more similar the labels T j and T k are.",
"Each element Sub jk in the subsumption matrix Sub { 0 , 1 } | T || T | indicates whether the label T j is a child label of T k .",
"Both the Sim and Sub matrix can be inferred from the training data or from external knowledge bases before training.",
"In implementation, Sim (if thresh-olded) and Sub can be treated as sparse matrix to reduce computational complexity.",
"We also used an adapted version of the loss regularisers in mini-batch training (the same set of label pairs that co-occurred within all documents in the same batch) to further to reduce computational complexity.",
"The rationale is that the less the difference of the two outputs of the similar labels is, the lower the L sim .",
"On the contrary, for output values not re-flecting the label similarity, i.e. large | s dj s dk | 2 when Sim jk is close to 1, the error will be penalised with higher L sim .",
"Given a document and a subsumption pair of labels, if the child label is used for annotation, its parent label has a relatively higher chance being used as well.",
"In L sub , if a subsumption relation < T j T k > presents in the label set y d , the case that the parent label T k is predicted as false, i.e. R ( s dk ) = 0 , when its child label T j is predicted as true, i.e. R ( s dj ) = 1 , will be penalised.",
"Such a case will result in a positive penalty, while the penalty will be 0 in all other cases.",
"Thus, L sim constrains similar labels to have similar outputs, while L sub reinforces each co-occurring subsumption pair to satisfy the dependency of the parent label on the child label.",
"We evaluate our proposed approach for automated social annotation on two representative open datasets in social tagging, Bibsonomy 1 (academ-ic publication annotation) and Zhihu 2 (general do-main social question annotation).",
"For Bibsonomy, we used the cleaned dataset from (Dong et al., 1 https://www.kde.cs.uni-kassel.de/bibsonomy/dumps 2 https://biendata.com/competition/zhihu/ Bibsonomy Precision Recall F 1 Score Time/Fold Bi-GRU .522 .020 .217 .016 .306 .019 1480 92s HAN .572 .008 .246 .012 .344 .013 1164 52s JMAN-s-tg .591 .010 .269 .006 .370 .007 1075 87s JMAN-s-att .586 .009 .269 .005 .369 .006 968 81s JMAN-s .586 .004 .282 .005 .380 .005 894 55s JMAN .592 .009 .284 .006 .384 .007 1044 73s Paired t-tests at 95 percent significance level against the JMAN model. Table 1: Comparison Results on the Bibsonomy dataset Zhihu Precision Recall F 1 Score Time/Fold Bi-GRU .238 .011 .154 .009 .187 .010 1455 69s HAN .257 .012 .167 .010 .203 .011 1387 78s JMAN-s-tg .257 .005 .175 .003 .208 .006 1220 81s JMAN-s-att .254 .007 .174 .005 .207 .005 1275 99s JMAN-s .257 .008 .177 .005 .210 .007 1147 44s JMAN .260 .006 .179 .003 .212 .004 1135 52s Paired t-tests at 95 percent significance level against the JMAN model. Paired t-tests at 90 percent significance level against the JMAN model. Table 2: Comparison Results on the Zhihu dataset 2017) and further selected the tags related to Computer Sciences according to the ACM Computing Classification System 3 and selected the document that have both title and abstract (content); for Zhihu, we randomly sampled around 100,000 questions from the original data dump.",
"The cleaned Bibsonomy dataset has 12,101 documents, 17,619 vocabularies and 5,196 labels; the average number of labels per document is 11.59.",
"The sample Zhihu dataset has 108,168 documents (questions), 62,519 vocabularies and 1,999 labels; the average number of labels per document is 2.45.",
"To calculate Sim , we used cosine similarity, normalised to between 0 and 1, of self-trained skip-gram embedding (Mikolov et al., 2013) on all label sets in each dataset.",
"To obtain Sub , about subsumption relations, for Bibsonomy, we resorted to an external knowledge source Microsoft Concept Graph 4 for label mapping and semantic grounding; for Zhihu, we used the provided crowd-sourced label subsumption relations.",
"We tuned the 1 and 2 in L based on 10-fold cross-validation 5 .",
"We implemented the proposed Joint Multi-label Attention Network (JMAN) model in Figure 1 3 https://www.acm.org/publications/class-2012 4 https://concept.research.microsoft.com/Home 5 1 , 2 were tuned to 1e-4, 1e-1 for Bibsonomy and 1e-3, 1e-1 for Zhihu, respectively.",
"on Tensorflow (Abadi et al., 2016) along with the baselines 6 based on brightmart's implementation 7 of TextRNN and HAN under the MIT license.",
"Two strong baselines were chosen Bi-GRU (Schuster and Paliwal, 1997; Cho et al., 2014) and HAN (Yang et al., 2016; Hassan et al., 2018).",
"Several variations of JMAN were also considered:",
"(i) JMAN-s , the proposed model without semantic-based loss regularisers;",
"(ii) JMAN-s-tg , the proposed model without semantic-based regularisers and title guided sentence-level attention, c d = [ c t , c a ] ;",
"(iii) JMAN-s-att , the proposed model without semantic-based regularisers and the original sentence-level attention, c d = [ c t , c ta ] .",
"We optimised the joint loss L using the Adam optimiser (Kingma and Ba, 2014) and set the number of hidden units as 100, learning rate as 0.01 and dropout rate as 0.5 (Srivastava et al., 2014) for all models.",
"The batch sizes for Bibsonomy and Zhihu were set as 128 and 1,024, respectively.",
"The sequence lengths of the title (also the length of each sentence) and the content were padded to 30 and 300 for Bibsonomy and 25 and 100 for Zhihu.",
"Non-static input embedding for the title and the sentences were initialised as 100-dimension self-trained skip-gram embedding (Mikolov et al., 6 Our code and datasets are available at https://github. com/acadTags/Automated-Social-Annotation . 7 https://github.com/brightmart/text_ classification 2013).",
"We decayed the learning rate by half when the loss on validation set increased and set an ear-ly stopping point when learning rate is below 2e-5.",
"All experiments were run on a GPU server, N-VIDIA GeForce GTX 1080 Ti.",
"We report the mean and the standard deviation of the testing results on models trained with 10-fold cross-validation.",
"The cleaned user-generated tags, i.e. labels, for each dataset were taken as the ground truth and the widely used example-based metrics, Precision, Recall and F 1 score (God-bole and Sarawagi, 2004; Tsoumakas et al., 2010; Zhang and Zhou, 2014), were adopted.",
"The average training time per fold was also recorded.",
"The results with respect to the two datasets are presented in the Table 1 and 2 respectively.",
"Our proposed JMAN model significantly outperforms Bi-GRU and HAN.",
"In terms of F 1 , with the Bibsonomy dataset, the proposed JMAN model provides a 7.8% absolute increase (by 25.5%) over Bi-GRU and 4.0% (by 11.6%) over HAN; on the Zhihu dataset, our model is 2.5% absolutely (by 13.4%) better than Bi-GRU and 0.9% (by 4.4%) than HAN.",
"This is mostly attributed to the boost of recall through modeling the title metadata and the title-guided attention mechanism.",
"The JMAN model also converges (understands) much faster than HAN with around 10.3% (for Bibsonomy) and 18.2% (for Zhihu) less training time per fold and converges even faster than Bi-GRU (by 29.5% and 22.0% for the Bibsonomy and Zhihu dataset in terms of training time, respectively).",
"Recall and F 1 score drop significantly, with training time increased, when the title-guided or the original sentence-level attention is removed.",
"Adding semantic-based loss regularisers further boosts the precision, recall and F 1 of the model.",
"We also noticed that, compared to the results on the Bibsonomy dataset, the improvement on the Zhihu dataset with the proposed model is less sig-nificant.",
"This may be related to the characterstics of the dataset: Zhihu has shorter texts (padded to 1/3 of the Bibsonomy dataset), more vocabularies (over 3 folds), less number of labels (about 40%) and less average number of labels per document (about 1/5) than the Bibsonomy dataset.",
"This would warrant further study on the datasets and on validating the model with datasets from other social media platforms.",
"We proposed a parallelled two-layer attention network for text annotation based on user-generated tags.",
"It models the behaviour how human users read and understand document with the title-guided attention mechanism and leverages label semantics through two loss regularisers to constrain the network outputs.",
"Experimental results show the effectiveness of this method with superi-or performance and training speed.",
"This system can be applied to various types of social media platforms to support document organisation.",
"Future studies will explore the possibility of applying the title-guided attention mechanism to other large datasets on major social media platforms.",
"It is also interesting to see whether the semantic-based loss regularisers can be adapted to improve the performance of the recent pre-trained transferable deep learning models, such as the Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2018).",
"We thank all the anonymous reviewers for their constructive feedback.",
"The implementation is based on brightmart's TextRNN and Hierarchical Attention Network under the MIT license 8 .",
"This research is funded by the Research Development Fund at Xi'an Jiaotong-Liverpool University, contract number RDF-14-01-10 and partially supported by the following: The National Natural Science Foundation of China under no. 61876155; The Natural Science Foundation of the Jiangsu Higher Education Institutions of China under no. 17KJD520010; Suzhou Science and Technology Program under no.",
"SYG201712, SZS201613; Natural Science Foundation of Jiangsu Province BK20181189; Key Program Special Fund in XJTLU under no.",
"KSF-A-01, KSF-P-02, and KSF-E-26."
]
| [
"objective",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other"
]
|
[
"Neural language models (LMs) such as GPT-2 estimate the probability distribution over the next word by a softmax over the vocabulary.",
"The softmax layer produces the distribution based on the dot products of a single hidden state and the embeddings of words in the vocabulary.",
"However, we discover that this single hidden state cannot produce all probability distributions regardless of the LM size or training data size because the single hidden state embedding cannot be close to the embeddings of all the possible next words simultaneously when there are other interfering word embeddings between them.",
"In this work, we demonstrate the importance of this limitation both theoretically and practically.",
"Our work not only deepens our understanding of softmax bottleneck and mixture of softmax (MoS) but also inspires us to propose multi-facet softmax (MFS) to address the limitations of MoS.",
"Extensive empirical analyses confirm our find-ings and show that against MoS, the proposed MFS achieves two-fold improvements in the perplexity of GPT-2 and BERT.",
"Recently, researchers have found that transformer-based language models (LMs), such as GPT-2, can predict the next/masked word distribution better as their sizes grow (Radford et al., 2019; Brown et al., 2020; Kaplan et al., 2020).",
"Compared to greedily outputting the most probable next word, sampling the next word from the predicted distribution allows a LM to generate more diverse and high-quality text sequences (Holtzman et al., 2020).",
"By autoregressively sampling the next word according to its predicted probability, large LMs can be used to assist creative writing (Akoury et al., 2020), reduce the cost of building datasets (West et al., 2021; Liu et al., 2022), generate codes (Li et al., 2022), solve math problems (Cobbe et al., 2021), etc.",
"As a result, one natural question arises: Do modern language modeling architectures still have restrictions in their ability to represent the appropriate distribution over next words or masked words?",
"In this paper, we discover that, when predicting the next word probabilities given an ambiguous context, GPT-2 is often incapable of assigning the highest probabilities to the appropriate non-synonym candidates.",
"For example, given the input prompt After debating whether to bow to the woman or the king first, the jester decided on the [MASK] , we would expect the distribution over the [MASK] fillers to put high probabilities on woman or king or their synonyms.",
"However, GPT-2 might incorrectly assign the second-highest probability to queen as in Figure 1. In the final softmax layer of GPT-2, the log probabilities of the woman and king are computed based on the dot product between a single hidden state embedding and the global word embeddings of the woman and king , respectively.",
"To have the highest but similar dot products for the two options, the transformer encoder in GPT-2 wants to output the hidden state that is close to the average of the woman embedding and the king embedding.",
"However, the words queen , king , woman , and man tend to form a parallelogram in the embedding space (Mikolov et al., 2013; Ethayarajh et al., 2019; Wang et al., 2019) 1 , which means the man and queen also have a similar average.",
"Therefore, GPT-2 is forced to also output man or queen when it wants to output woman or king .",
"The problem not only happens to GPT-2 or the words whose embeddings form a parallelogram shape.",
"Even though the hidden state embeddings of LMs are contextualized, the embedding of each 1 Section 2.1 provides more background knowledge about the parallelogram shape and the softmax bottleneck.",
"word in the softmax layer is global and static during the inference time.",
"Globally dissimilar words could all become the suitable next word in a context while other interfering words might be between them, which makes the ideal next word embedding distribution have multiple modes and cannot be modeled by the single embedding representation.",
"In this work, we propose theorems showing that given any LM using the output softmax layer, when there are more than N word embeddings in a N 1 dimensional subspace/hyperplane (e.g., four embeddings in a two-dimensional plane), we can always find a set of possible next words (e.g., woman and king ) such that there are some other interfering words between them (e.g., man or queen ).",
"That is, the multimodal next word distribution must exist if a few word embeddings are linearly dependent.",
"Recently, mixture of softmax (MoS) (Yang et al., 2018) regains attention as one of the few effective architecture modifications for transformer LM (Narang et al., 2021; Anonymous, 2021).",
"In the meanwhile, Parthiban et al. (2021) show that the softmax bottleneck (Yang et al., 2018) theory is not sufficient to explain the improvement of MoS.",
"As a remedy, our theorems not only provide geometrical intuitions of why and when the multiple embedding representation such as MoS would do better but also suggest that the softmax bottleneck might not be completely solved even if we adopt a very large hidden state size.",
"For example, no matter how large the hidden state size is, as long as queen king = woman man in the embedding space, the LMs cannot output a pair of words in the longer diagonal of the parallelogram as the top two output words.",
"After better understanding why mixture of softmax (MoS) works well, we propose two enhancements over MoS.",
"The first enhancement considers the hidden states of multiple positions and multiple transformer layers when determining the probability in each softmax; the second enhancement uses different contextualized embeddings to compute the probabilities of different subsets of words in the vocabulary.",
"The resulting method, multi-facet softmax (MFS), significantly outperforms the MoS and the softmax layer in GPT-2 on the perplexity for predicting the next word, especially in ambiguous context and non-English text in OpenWebText (Rad-ford et al., 2019).",
"Finally, we also show that MFS could improve the performance of GPT-2 on ProtoQA (Boratko et al., 2020), a commonsense question answering dataset where each question has multiple acceptable answers.",
"and empirical contributions as follows.",
"Theory : We show the softmax layer using a single embedding is sometimes not able to output an appropriate rank of probabilities on a set of words with linearly dependent embeddings.",
"Method : Addressing two weaknesses in MoS (Yang et al., 2018), we propose multi-facet softmax (MFS), a new alternative to the output softmax layer.",
"MFS can replace the softmax in pre-trained LMs to better handle ambiguous contexts without re-training the LMs from scratch.",
"Analysis : Our comprehensive empirical analyses discover and explain several phenomena, such as",
"a) why using multiple embeddings is usually better than the single embedding with the non linearity,",
"b) why the improvement is larger in ambiguous contexts, less common languages, or GPT-2 compared to BERT, and",
"c) why a LM often confuses with similar words.",
"In this section, we first review the softmax layer of GPT-2 formally and explain why queen king = woman man still tends to hold in contextualized LMs.",
"Next, we present our theoretical analyses, which generalize the woman and king example by showing that the candidate words in a low dimensional subspace would induce the impossibility of ranking some candidates on top of other candidates.",
"The LMs typically use a softmax layer to predict PS ( x | c t ) , the probability of the next word x given the context at the t th position c t :",
"where h c t is the t th hidden state in the context c , and w x is the output word embedding for the word x (i.e., the linear weights that project the hidden state to the logit of the word x ).",
"2 Yang et al. (2018) point out that the log probability distribution over all the words in the vocabulary V is log ( PS ( x | c t )) | x V = h Tc t w x log (cid:0)(cid:80) x (cid:48) exp( h Tc t w x (cid:48) ) (cid:1) | x V .",
"The distribution is a linear projection from the hidden state h c t with dimension D , so the degree of freedom in the distribution is only D (i.e., there cannot be more than D linearly independent log distributions).",
"We call this restriction softmax bottleneck theory.",
"2 Notice that some LMs such as BERT add a bias term for each word before the softmax layer.",
"For simplicity, our theoretical analyses focus on the LMs without the bias term such as GPT-2.",
"During training, the ideal output word embedding w x should be close to the hidden states of the contexts h c t that co-occur with the word x while far away from the other hidden states.",
"This objective is similar to the objective function of Word2Vec (Mikolov et al., 2013) except that the context embeddings are contextualized (Kong et al., 2020; Li et al., 2020).",
"If a context c t has a higher chance to co-occur with queen compared to king , the context also has a higher chance to co-occur with woman compared to man to a similar degree.",
"This is the main reason that makes queen king = woman man in the Word2Vec space (Ethayarajh et al., 2019).",
"Therefore, the same linear relations tend to hold in the output word embedding space of GPT-2 as well (Wang et al., 2019).",
"In addition to words satisfying the analogy relations, the following theorems imply that any linear dependency among the words causes the difficulties of LM in ranking the words in an arbitrary order according to their logits (i.e., dot products between the hidden state and the word embedding).",
"For example, woman + king = queen + man makes a LM unable to assign the highest positive logits to woman and king and output them as the top two words in Figure 1. Theorem 1. If the nonzero output embeddings of N words in a set W are linearly dependent and on one side of a hyperplane through the origin, the single embedding representation cannot produce positive logits for a subset of the words in W that are higher than all the logits of the other words in W .",
"Here, we provide an intuitive justification: if N embeddings are in a subspace whose dimension is smaller than N 1 (e.g., three points in a one-dimensional line), the N embeddings are going to be linearly dependent and some set of words cannot have the top dot products due to the limited degree of freedom in the subspace.",
"In Appendix D, we formally prove the theorem by identifying the sets of words that cannot be ranked top by the single embedding representation.",
"In practice, linear dependency holds approximately instead of exactly.",
"For example, woman = queen + man king + .",
"In this practical condition, the following theorem states that the logits of the 8050 Input Hidden States (#I) Sec. 3.2 dot product",
"interfering words (i.e., man and queen ) cannot be much smaller than the logits of the candidate words (i.e., woman and king ).",
"Theorem 2. Let the output word embeddings in the set W = { w i (cid:54) = 0 | i = 1 ...N } satisfy w 1 = a 2 w 2 + ... + a N w N + , where the constant a 2 , ..., a N are neither all zero nor all negative and || || < (cid:15) .",
"Then, there must be a nontrivial partition P = { G, S } of W such that there is no hidden state || h || r and a threshold r(cid:15) that make min w g G h T w g (1+ ) and max w s S h T w s < , where = 2 1+ (cid:80) i =2 ...N | a i | .",
"In the king-woman example, (1+ ) = (1+ 24 ) = 1 .",
"5 .",
"Assuming || || < (cid:15) = 0 .",
"01 and || h || r = 20 , we can get h T 0 .",
"01 20 = 0 .",
"2 .",
"Then, we cannot find a hidden state h such that h T w king 1 .",
"5 0 .",
"01 20 = 0 .",
"3 and h T w woman 0 .",
"3 but h T w queen < 0 .",
"2 and h T w man < 0 .",
"2 because h T w king + h T w woman = h T w queen + h T w man + h T .",
"The formal proof of Theorem 2 can be found in Appendix D and Appendix B.1 estimates (cid:15) in several language models.",
"Even though, theoretically speaking, outputting woman and king as the top two words is possible due to the appearance of , LMs may not successfully learn to output the optimal h and the optimal hidden state for these four words could lead to the wrong probabilities of the other words.",
"Consequently, LMs sometimes still rank queen or man higher than woman or king in practice.",
"Using multiple embeddings is a natural solution for modeling a multimodal distribution (Bishop, 1994).",
"For instance, we can use three embeddings to capture the high probability on the woman and king but low probability on the man and queen in Figure 1. Inspired by our geometric analysis on the limitation of the single embedding, we improve the state-of-the-art multiple embedding solution, mixture of softmax (MoS) (Yang et al., 2018) by two enhancements: multiple input hidden states and multiple partitions on the vocabulary.",
"Yang et al. (2018) propose mixture of softmax (MoS) to allow a LSTM-based (Hochreiter and Schmidhuber, 1997) LM to produce more linearly independent log probability distributions of the output words given different contexts.",
"As in Figure 2",
"(c), the MoS first uses multiple linear layers L fk to project a hidden state h c t into multiple facet embeddings f c t ,k = L fk ( h c t ) .",
"3 The multiple facets f c t ,k and softmaxes would lead to multiple probability distributions, and output probability is the weighted average of the distributions: P MoS ( x | c t ) = K (cid:88) k =1 c t ,k exp( f T c t ,k w x ) (cid:80) x (cid:48) exp( f Tc t ,k w x (cid:48) ) .",
"The prior weights c t ,k = exp( L k ( h ct )) (cid:80) k (cid:48) exp( L k (cid:48) ( h ct )) , where L k is another linear projection for dynamically generating the weights and the projection goes through a softmax to ensure (cid:80) Kk =1 c t ,k = 1 .",
"To model the multimodal distribution, the facets (i.e., the embeddings for different softmaxes) should be able to move freely.",
"For example, in Figure 1, we have three facets but only have two modes, so the two embeddings are very close to the word king .",
"However, when we want to output three dissimilar top words such as the king , woman , and knight , one of the facets should be moved to be near to the embedding of the knight .",
"Therefore, we want our solution to satisfy two properties:",
"a) the linear transformation matrix in L fk should have a full rank to avoid limiting the degree of freedom in each facet, and",
"b) the relative location of the facets should be context-dependent.",
"MoS cannot satisfy both properties.",
"If the first one is satisfied, the input hidden state is uniquely determined by a facet (e.g., h c t = ( L f 1 ) 1 ( f c t , 1 ) ).",
"Then, there exists a global transformation between two facets (e.g., f c t , 2 = L f 2 (cid:16) ( L f 1 ) 1 ( f c t , 1 ) (cid:17) ), which violates the second property.",
"That is, assuming LM can move every facet freely (i.e., the facet's degree of freedom is the same as the dimension of the hidden state), LM cannot make the first two facets close to woman and king in one context but make the two facets close to woman and knight in another context.",
"In other words, since the facet embeddings are the projection of a single hidden state, the total degree of freedom in all facet embeddings cannot exceed the dimension of the hidden state.",
"Our solution to this issue is using more input hidden states to construct the facets.",
"As the orange box in Figure 2, we first concatenate a W H block of input hidden states into i =0 ...W 1 ,m =0 ...H 1 h M m c t i , where M m is the transformer layer index and t i is the index of the i th to the last word in the context.",
"The W H is fixed as 3 3 in this paper.",
"We make its dimension the same as the original hidden state h Mc t using a linear layer L h plus a GELU activation function (Hendrycks and Gimpel, 2016).",
"Then, we concatenate it with the original hidden state to form a new input hidden state q c t = h Mc t GELU (cid:16) L h ( i,m h M m c t i ) (cid:17) .",
"The new input hidden state is passed through the linear transformation L fk to compute the facets f c t ,k = L fk ( q c t ) and our prior weights c t ,k = exp ( L k ( q ct )) (cid:80) k (cid:48) exp ( L k (cid:48) ( q ct )) .",
"Since the dimension of q c t is larger than the dimension of f c t ,k , the inverse function ( L fk ) 1 no longer exists.",
"The next word distribution could have many modes.",
"However, using many softmaxes significantly increases our computational burden because we need to compute the dot product between each facet and all the word embeddings in our vocabulary.",
"Inspired by our analysis, we propose to split all the words in the vocabulary into multiple partitions 4 and use different facets for different partitions.",
"For example, if we can put any word from { queen , man , woman , king } into one partition and the rest of the words into another partition, we no longer have queen king = woman man in either of the partitions.",
"In this method, each word only belongs to one partition, so we only need to compute one dot product for each word.",
"Thus, the extra computational cost only comes from the extra linear projections for preparing the facets.",
"In many contexts c t , the distribution of the next word has only a single mode and the global similarity between words may be useful.",
"Using the multiple partitions alone might lose the similarity information between words in different partitions.",
"Therefore, we propose to only replace the first softmax layer in MoS with the multiple partition method to learn the global similarity of words in different partitions using the other softmaxes.",
"The architecture is illustrated in Figure 2",
"(d).",
"Formally, we compute the probability using PMP ( x | c t ) = c t , 1 exp(( f j x c t , 1 ) T w x ) (cid:80) x (cid:48) exp(( f j x (cid:48) c t , 1 ) T w x (cid:48) ) + K (cid:88) k =2 c t ,k exp( f Tc t ,k w x ) (cid:80) x (cid:48) exp( f Tc t ,k w x (cid:48) ) , (4) where j x is the partition index that the word x belongs to and f j x c t , 1 is the facet for the j x th partition.",
"4 In this work, we simply put the J n + j th word into j th partition (e.g., when the number of partitions J = 4 , the first partition includes the words with indexes 0 , 4 , 8 , ... ).",
"This simple global partitioning method reduces the chance of putting all the interfering words and candidates in the same partition, while minimizing the extra computational cost in our PyTorch implementation because PyTorch supports strided index slicing without copying the variable.",
"We evaluate different LM architectures by comparing their capability of predicting the next word in Wikipedia 2021 and a subset of OpenWebText (Radford et al., 2019).",
"In addition to perplexity, we also compare their mean reciprocal ranks (MRR) in Appendix C.1.",
"The size of the training, validation, and testing set are 96%, 2%, and 2% of the whole corpus, respectively.",
"After loading the pre-trained GPT-2 models, we train the GPT-2 Small for 1 epoch and GPT-2 Medium for 0.4 epochs.",
"We also test our methods on BERT in Appendix B.2.",
"Please see Appendix G for more details of our experiment setup.",
"We set different numbers of softmaxes, input hidden states, and partitions in our MFS framework to construct our baselines.",
"The configuration of different baselines could be seen in Table 1. Softmax (GPT-2) : Using a single softmax, input hidden state, and partition as in Figure 2",
"(a) and Equation 1. The baseline is the same as the original GPT-2 except that we add one more linear layer that converts the hidden state h Mc t to the facet embedding f c t , 1 as in other methods.",
"SigSoftmax (Kanai et al., 2018) : The same as Softmax except when predicting the next word, Kanai et al. (2018) add some non-linearity into the softmax layer by multiplying the exponent and sigmoid of the logits.",
"Softmax + Multi-input : Letting Softmax access multiple input hidden states as in Figure 2",
"(b) and Equation 3. The method is similar to Tenney et al. (2019); Fan et al. (2020), and Tay et al. (2021).",
"MoS (Yang et al., 2018) : MoS (3) is the mixture of softmax with 3 facets/softmaxes, whose probability comes from Equation 2. We also run the MoS with 4 softmaxes in GPT-2 Small and call the model MoS (4) .",
"DOC (Takase et al., 2018) : Similar to our enhancement using multiple input hidden states, direct output connection (DOC) makes each of their facets coming from a different input hidden state.",
"Other configurations include Softmax + Multi-partition , which adds four partitions into the softmax, MFS w/o Multi-partition , which uses only one partition in MFS and could also be viewed as MoS + Multi-input , and MFS w/o Multi-input , which uses only one input hidden state to generate all facets.",
"Table 1 shows that applying MFS to GPT-2 Small achieves more than 15% of the perplexity improvement between GPT-2 Small and GPT-2 Medium , while only increasing 5% of their size differences.",
"Except for Softmax + Multi-partition , adding multiple input hidden states or partitions in different configurations significantly boost the performances.",
"In Appendix B.3, we further show that the improvement of MFS over Softmax could even become 3-5 times larger in the top 5-10% of the most ambiguous contexts compared to the rest of the contexts, which suggests that some improvements indeed come from successfully modeling multimodal distribution.",
"MFS usually doubles the perplexity improvements between MoS (3) and Softmax but the running time of MFS remains similar to MoS (3) because MFS only needs a few more linear layers, which is more efficient than adding one more softmax as in MoS (4) .",
"DOC is worse than MoS (3) .",
"This may be due to a starvation problem: the facet from the last hidden state h Mc t has the prior probability close to 1 and receives most of the gradients.",
"Finally, compared with Softmax , the mixed results in SigSoftmax suggest that adding non-linearity into the softmax layer without modeling the multimodal distribution might not always improve the models (Parthiban et al., 2021).",
"OpenWebText is mostly composed of English text, but some non-English text in the corpus allows us to compare the capability of different models in a multi-lingual setting.",
"Table 2 shows that multiple embeddings improve the perplexity of the non-English text more than the perplexity of the English text.",
"We hypothesize that the distribution of the next non-English word is more likely to be multi-mode because GPT-2 learns the global token embeddings mostly in the English contexts, which could make the embeddings of similar tokens in non-English contexts far away.",
"In Table 3, we present three contexts from the validation set of different datasets and compare the top three predictions of MFS and Softmax on GPT-2 Small .",
"In OpenWebText and Wikipedia 2021, we can see that Softmax misses the correct answer in its top three predictions.",
"We synthesize a dataset using templates (Ribeiro et al., 2020) to verify whether the softmax layer in the original GPT-2 really has difficulty in learning to output the bimodal distribution in Figure 1 and whether the multiple embedding methods could overcome the problem.",
"First, we collect the four words with semantic analogy relations in Google analogy dataset (Mikolov et al., 2013).",
"Next, we insert two out of the four words into our manually written templates to form the contexts and the templates we used could be found in Appendix G.3.",
"For example, given the context I went to Paris and Germany before, and I love one of the places more, which is , the GPT-2 learns to predict either Paris or Germany .",
"The two words can be either the diagonal words (e.g., king and woman ) or the edge word (e.g., king and queen ) in the parallelogram.",
"Finally, we create a dataset with 122k training contexts, 250k validation contexts, and 122k testing contexts, where the word pairs in the testing set are unseen in the training set to see whether the model could learn to output the bimodal distribution in a general way.",
"5 5 The setting is realistic because any related words could become the next word in some ambiguous contexts and all We load the models pre-trained on OpenWebText and continue fine-tuning the models on the last word of each sentence for 10 epochs.",
"We report the testing performances of the best model selected by the validation loss.",
"Since the sets of the word pairs in the training and testing set are disjoint, updating the output word embedding would make GPT-2 solve the task by memorizing/overfitting the training set quickly and lead to much worse testing performances.",
"Thus, we freeze the output word embedding during the training.",
"We visualize the predictions of the Paris-Germany example in the last column of Table 3. We can see two of the softmaxes are close to Paris and the remaining one is close to German , while Softmax overestimates the probability of Paris and ranks France higher than the German .",
"The result verifies that the correct probability distribution of the words in some ambiguous context is hard to learn using Softmax .",
"Quantitatively, Table 4 indicates that when the possible next words are the diagonal words, the Softmax model performs much worse compared to other multiple embedding alternatives.",
"In the edge word dataset, the multiple embedding solutions are still better but have a much smaller gap.",
"MFS w/o Multi-partition slightly improves MoS .",
"We hypothesize the reason is that multiple input hidden states could help the facets to be moved more freely.",
"Finally, multiple partitions seem to cause slight overfitting in this bimodal distribution prediction task.",
"ProtoQA (Boratko et al., 2020) is a question-answering dataset built for evaluating the commonsense reasoning ability of language models.",
"Each question in ProtoQA is ambiguous and leads to a distribution of possible answers.",
"For instance, the answer to Name something that people usually do before they leave for work? is Shower 0.43, Breakfast 0.30, ... .",
"The paper discovers that by reformulating the question-answering task as a context (e.g., One thing people usually do before they leave for work is ... ), GPT-2 could generate the possible answers by sampling the next words from its word prediction distribution.",
"The dataset gives us a chance to directly compare the quality of the distributions generated by different LMs in Table 5.",
"After pretraining GPT-2 Medium on the OpenWebText, we fine-tune them using the training data in ProtoQA for 2 epochs.",
"We repeat the fine-tuning 5 times and compare their average perplexity in our validation set.",
"Next, we generate 150 sentences starting from each context and compare the generated answers with the ground truth distribution.",
"For each fine-tuned model, we repeat the generation evaluation 3 times and report the average accuracy of the resulting 15 trials.",
"We can see that the multiple softmaxes, input hidden states, and partitions usually improve the quality of prediction distribution, and the proposed MFS , which combines all modifications, achieves the best performances.",
"Yang et al. (2018) propose the concept of softmax bottleneck , which points out that the dot product in the softmax layer restricts the representation power of outputting arbitrary conditional probabilities.",
"It also proposes MoS to break the softmax bottleneck in an RNN-based LM.",
"Kanai et al. (2018) and Ganea et al. (2019) add nonlinearities into the softmax layer to break the bottleneck more effi-8055 ciently, but the approaches gain less improvement compared to MoS .",
"A limitation of the aforementioned previous work is that they do not tell us which kinds of sentences would be affected by the bottleneck more and whether the order of the top few next words would be affected, which are the main research questions of our work.",
"Contrary to the previous belief that a large hidden state dimension would eliminate the softmax bottleneck, our theorems suggest that some words in a low dimensional subspace could still make the single embedding in the softmax layer become a bottleneck of arbitrarily ranking the output words.",
"Furthermore, our geometric analyses provide an intuitive explanation about why breaking the bottleneck using multiple embeddings leads to better performances compared to only adding the non-linearity.",
"Demeter et al. (2020) also analyze the structural weakness of the softmax layer from a geometric perspective.",
"They discover that the words with high prior frequencies could stop the LMs from assigning the high probabilities to rare words, which can be viewed as a special case of our theory (See Appendix E).",
"For instance, our work shows that the softmax layer could still prevent the LMs from outputting some top words even if all the possible next words have the same prior frequency.",
"Our theory is deeply connected to the mathematical work that counts the number of possible rankings of points in an embedding space (Cover, 1967; Good and Tideman, 1977).",
"Compared to the studies, our work focuses more on analyzing the multimodal distribution in the word embedding space and its implication to language models.",
"An alternative to model the multimodal distribution is to use multiple embeddings to represent each output word (Athiwaratkun and Wilson, 2017; Miao et al., 2019).",
"Compared to MoS or our approach that use multiple embeddings to represent each hidden state of the context, their method requires many extra parameters to store different senses of each output word.",
"Another type of related model (Shazeer et al., 2017; Fedus et al., 2021) dynamically routes the signals to different experts (i.e., feed-forward networks) and Zhang et al. (2022); Mittal et al. (2022) use multiple embeddings in the attention layers.",
"The methods are similar to MoS and our approach, but they add the multiple embeddings inside each layer of the transformer encoder while the proposed MFS is an alternative to the output softmax layer.",
"When the ideal distribution in the output word embedding space has multiple modes, GPT-2 cannot learn to correctly rank the words in all the modes as the top next words.",
"This shows that the single embedding in the softmax layer, which is used nearly universally by current LMs, constitutes a performance upper bound of predicting the next/masked word.",
"To address the systematic failure caused by these structural weaknesses, we propose multi-facet softmax (MFS).",
"In our experiments, we confirm that the MFS significantly outperforms the standard softmax layer and alleviates the softmax bottleneck in the transformer-based LMs such as GPT-2 better than mixture of softmax (MoS).",
"We thank Michael Boratko, Jay Yoon Lee, Sabrina J. Mielke, Steve Cheng-Xian Li, and the anonymous reviewers for their constructive feedback.",
"This work was supported in part by the Center for Data Science and the Center for Intelligent Information Retrieval, in part by the Chan Zuckerberg Initiative under the project Scientific Knowledge Base Construction, in part by the IBM Research AI through the AI Horizons Network, in part using high performance computing equipment obtained under a grant from the Collaborative R&D Fund managed by the Massachusetts Technology Collaborative, in part by the National Science Foundation (NSF) grant numbers DMR-1534431, IIS-1763618, and IIS-1955567, and in part by the Office of Naval Research (ONR) via Contract No.",
"N660011924032 under Subaward No. 123875727 from the University of Southern California.",
"Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsor.",
"This work studies a general limitation of LMs and proposes solutions.",
"The proposed theory can help us to understand that some types of hallucinations, mistakes, or biases of LMs could come from softmax bottleneck and their incapability of modeling the correct distribution.",
"For example, there are 60% of male characters and 40% of female characters in our training corpus.",
"The language generation model might be forced to assign more than 60% 8056 probability to male characters as being much more likely to output king than woman in Figure 1. Recently, Narang et al. (2021); Anonymous (2021) show that MoS is one of the few architecture modifications of transformer-based LM that can provide consistent improvements in downstream applications.",
"Hyung Won Chung, Thibault Fvry, Henry Tsai, Melvin Johnson, and Sebastian Ruder.",
"2021.",
"Rethinking embedding coupling in pre-trained language models.",
"In ICLR .",
"Thomas M Cover.",
"1967.",
"The number of linearly inducible orderings of points in d-space.",
"SIAM Journal on Applied Mathematics , 15(2):434439.",
"Angela Fan, Thibaut Lavril, Edouard Grave, Armand Joulin, and Sainbayar Sukhbaatar.",
"2020.",
"Addressing some limitations of transformers with feedback memory.",
"arXiv preprint arXiv:2002.09402 .",
"William Fedus, Barret Zoph, and Noam Shazeer.",
"2021.",
"Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity.",
"arXiv preprint arXiv:2101.03961 .",
"In Proceedings of the 36th International 8057 Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA , volume 97 of Proceedings of Machine Learning Research , pages 20732082.",
"Our work provides a fundamental reason why the multiple embedding representation is better, which could inspire more future studies that propose a better multiple-embedding architecture to improve LMs (e.g., multi-lingual BERT) or downstream applications.",
"As examples, we list several possible future directions in Appendix H. Finally, a better LM could lead to both positive and negative societal impacts, but they are not the focus of this paper.",
"Generally speaking, this paper deepens our understanding of the weaknesses of modern LMs and we believe the knowledge can help us to design a better LM that increases the positive impacts and reduces the negative impacts in the future."
]
| [
"abstain",
"abstain",
"method",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"objective",
"result",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"objective",
"objective",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"other",
"other",
"other",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"method",
"method",
"other",
"method",
"other",
"objective",
"abstain",
"abstain",
"objective",
"result",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method"
]
|
[
"Attention has been proven successful in many natural language processing (NLP) tasks.",
"Recently, many researchers started to investigate the interpretability of attention on NLP tasks.",
"Many existing approaches focused on examining whether the local attention weights could reflect the importance of input representations.",
"In this work, we present a study on understanding the internal mechanism of attention by looking into the gradient update process, checking its behavior when approaching a local minimum during training.",
"We propose to analyze for each word token the following two quantities: its polarity score and its attention score , where the latter is a global assessment on the token's significance.",
"We discuss conditions under which the attention mechanism may become more (or less) interpretable, and show how the interplay between the two quantities may impact the model performance.",
"1 1 Introduction Attention mechanism (Bahdanau et al., 2015) has been used as an important component across a wide range of NLP models.",
"Typically, an attention layer produces a distribution over input representations to be attended to.",
"Such a distribution is then used for constructing a weighted combination of the inputs, which will then be employed by certain downstream modules.",
"Recently, several research efforts on investigating the interpretability of attention on tasks such as text classification, question answering, and natural language inference (Jain and Wallace, 2019; Wiegreffe and Pinter, 2019; Arras et al., 2019) have been conducted.",
"One of their important arguments was whether the attention distribution could adequately reflect the significance of inputs.",
"To answer this question, they designed a series of metrics and 1 Supplementary material and code at https:// github.com/richardsun-voyager/UAFTC conducted corresponding experiments.",
"In their approaches, they were mainly observing how the attention may impact the outputs on the pre-trained models by changing some elements in the inputs.",
"While such approaches have resulted in interesting findings, the attention mechanism itself remains a black box to us it is still largely unclear what are the underlying factors that may have an impact on the attention mechanism.",
"When analyzing the results of a typical model with attention on the text classification tasks, we noticed that in some instances, many of the word tokens with large attention weights were adjectives or adverbs which conveyed explicit signals on the underlying class label.",
"On the other hand, in some other instances, we also noticed that such useful words may not always be able to receive significant attention weights, especially under certain config-urations of hyperparameters, making the attention mechanism less interpretable.",
"Such observations lead to several important questions.",
"First, the attention weight for a word token appears to be the relative measurement to its significance, and is largely local and instance specific.",
"Would there be an instance-independent quantity to assess the corpus-level importance of a word token?",
"And if so, what role would such a quantity play in terms of interpreting the overall attention mechanism?",
"Second, when the attention mechanism appears to be less interpretable, how would the underlying model be affected in terms of performance?",
"In this work, we focus on answering the above questions.",
"We argue that the attention scores (rather than attention weights ) are able to capture the global , absolute importance of word tokens within a corpus.",
"We present a study to figure out the underlying factors that may influence such attention scores under a simple neural classification model.",
"Inspired by Qian (1999), we analyzed the gradients as well as the updates of intermediate variables in the process of gradient descent, and found that there exist some implicit trends on the intermediate variables related to attention: the degree of association between a word token and the class label may impact their attention scores.",
"We argue that when certain hyperparameters are properly set, tokens with strong polarity high degree of association with specific labels, would likely end up with large attention scores, making them more likely to receive large attention weights in a particular sentence.",
"While in such scenarios, the attention mechanism would appear to be more interpretable, we also discuss scenarios where the attention weights may become less interpretable, and show how the polarity scores , another important token-level quantity, will play their roles in the overall model in terms of contributing towards the model performance.",
"Research on interpretability of neural models has received significant attention recently.",
"One approach was using visualization to explore patterns that exist in the intermediate representations of neural networks.",
"Simonyan et al. (2013) visualized the image-specific class saliency on image classification tasks using learnt ConvNets, and displayed the features captured by the neural networks.",
"Li et al. (2016a,b) proposed visualization methods to look into the neural representations of the embeddings from the local composition, concessive sentences, clause composition, as well as the saliency of phrases and sentences, and illustrated patterns based on the visualizations.",
"An erasure method was also adopted to validate the importance of different dimensions and words.",
"Vig and Belinkov (2019) analyzed the attention structure on the Transformer (Vaswani et al., 2017) language model as well as GPT-2 (Radford et al., 2019) pre-trained model.",
"Another approach to understanding neural approaches is to conduct theoretical analysis to investigate the underlying explanations of neural models.",
"One example is the work of Levy and Goldberg (2014), which regarded the word embedding learning task as an optimization problem, and found that the training process of the skip-gram model (Mikolov et al., 2013a,b) can be explained as implicit factorization of a shifted positive PMI (point-wise mutual information) matrix.",
"Recently, several research efforts have focused on the interpretability of the attention mechanism.",
"Jain and Wallace (2019) raised the question on the explainability of feature importance as captured by the attention mechanism.",
"They found the attention weights may not always be consistent with Attention Linear Sigmoid ... ... h 1 h 2 h j h n h = h j j j s = h WT h n 1 Output Figure 1: Classification architecture with attention the feature importance from the human perspective in tasks such as text classification and question answering.",
"Serrano and Smith (2019) also carried out an analysis on the interpretability of the attention mechanism, with a focus on the text classification task.",
"They conducted their study in a cautious way with respect to defining interpretability and the research scope.",
"The paper concluded that the attention weights are noisy predictors of importance, but should not be regarded as justifi-cation for decisions.",
"Wiegreffe and Pinter (2019) suggested that the notion of explanation needs to be clearly defined, and the study of the explanation requires taking all components of a model into account.",
"Their results indicated that prior work could not disprove the usefulness of attention mechanisms with respect to explainability.",
"Moreover, Michel et al. (2019) and Voita et al. (2019) examined the multi-head self-attention mechanism on Transformer-based models, particularly the roles played by the heads.",
"Our work and findings are largely consistent with such findings reported in the literature.",
"We believe there are many factors involved when understanding the attention mechanism.",
"Inspired by Qian (1999), which investigated the internal mechanism of gradient descent, in this work we focus on understanding attention's internal mechanism.",
"We consider the task of text classification, with a specific focus on binary classification.",
"2 The architecture of the model is depicted in Figure",
"1. There are various attention mechanisms introduced in the field (Luong et al., 2015).",
"Two commonly used mechanisms are the additive attention (Bahdanau et al., 2015) and scaled dot-product attention (Vaswani et al., 2017).",
"In this work, we will largely focus our analysis on the latter approach (but we will also touch the former approach later).",
"2 Extending to multi-class classification is possible.",
"See the supplementary material for detailed analysis and discussion.",
"Consider an input token sequence of length n : x = e 1 , e 2 , . . . , e n , where e j is the j -th input token whose representation before the attention layer is h j R d .",
"The attention score for the j -th token is: a j = h (cid:62) j V , (1) where the hyperparameter is the scaling factor (typically set to a large value, e.g., d is often used in the literature (Vaswani et al., 2017)), and V R d is the context vector that can be viewed as a fixed query asking for the most informative word from the input sequence (Yang et al., 2016).",
"The token representation h j can be the word embedding, or the output of an encoder.",
"The corresponding attention weight would be: j = exp( a j ) (cid:80) j (cid:48) exp (cid:0) a j (cid:48) (cid:1) .",
"The complete input sequence is represented as:",
"and the output of the linear layer is:",
"which we call instance-level polarity score of the input sequence.",
"Here, W R d is the weight vector for the linear layer.",
"When we make predictions, if the resulting polarity score s is positive, the corresponding input sequence will be classified as positive (i.e., y = +1 , where y is the output label).",
"Otherwise, it will be classified as negative (i.e., y = 1 ).",
"During training, assume we have a training set D = { ( x (1) , y (1) ) , ( x (2) , y (2) ) , . . . , ( x ( m ) , y ( m ) ) } with m labeled instances.",
"Our overall loss is: (cid:96) = 1 m m (cid:88) t =1 (cid:96) ( t ) = 1 m m (cid:88) t =1 log (cid:16) ( y ( t ) s ( t ) ) (cid:17) .",
"where y ( t ) and s ( t ) are the gold output label and the instance-level polarity score for the t -th instance respectively, and is the sigmoid function.",
"The instance-level polarity score s can also be written as: s = (cid:88) j j h (cid:62) j W = (cid:88) j j s j .",
"Here, we have introduced the token-level polarity score s j for the input token representation h j : s j = h (cid:62) j W .",
"(7) From here we can observe that the instance-level polarity score of the input sequence can be interpreted as the weighted sum of the token-level polarity scores, where the weights are given by the attention weights ( j for h j ).",
"Such attention weights measure the relative importance of the token within a specific input sequence.",
"On the other hand, the attention score a j captures the absolute importance of the token.",
"We believe such absolute measurements to the significance of words may be playing a more crucial role (than attention weights) when understanding the attention mechanism.",
"Thus, unlike many previous research efforts, we will instead focus on the understanding of attention scores in this work.",
"In this paper, we will mainly investigate a simple neural model where h j = e j .",
"Here e j is the word embedding for the j -th input token.",
"In other words, we assume the word embeddings are used as the inputs to the attention layer.",
"Detailed discussions on other assumptions on h j can be found in the supplementary material.",
"We conduct some analysis in this section to understand how the attention mechanism works for the task of text classification.",
"First, let us consider the following 3 different types of tokens: positive tokens : tokens that frequently appear in positive training instances only, negative tokens : tokens that frequently appear in negative training instances only, and neutral tokens : tokens that appear evenly across both positive and negative training instances.",
"We also call the first two types of tokens polarity tokens .",
"For the ease of analysis and discussion, we assume each token belongs to either of these 3 types, and we assume the dataset is balanced and symmetric 3 .",
"While some of these assumptions may seem strong, having them would significantly simplify our analysis.",
"As we will see later in experiments, even though some of the above assumptions do not hold in some real datasets, our findings are still valid in practice.",
"3 In other words, if we flip the signs of the y labels for all documents in the training set, we arrive at exactly the same training set (under a particular mapping between tokens).",
"the gradient flow equation using Euler's Method (Scieur et al., 2017; Qian, 1999), written as: d z ( ) d = (cid:96) ( z ( )) , z (0) = z 0 , (8) where z is the parameter vector, and z 0 is its initialization, and is the time step.",
"We assume that all parameters have initializations, and will omit such initializations in the subsequent differential equations.",
"We will not seek to solve the differential equations directly but to find out whether there exist some trends and patterns for certain variables during training.",
"Consider the token e in the vocabulary whose vector representation is e .",
"Let us have an analysis on the polarity score s e for the token e .",
"This token may appear somewhere in the training set.",
"We write e ( t ) j e if and only if this token e appears as the j -th token in the t -th instance.",
"Gradient update iteration will be represented as: ds e ( ) d = ( d e ( ) d ) (cid:62) W ( ) + e (cid:62) ( ) d W ( ) d , (9) where W ( ) is the linear layer weight vector at the time .",
"Its update can be represented by another ordinary differential equation: d W ( ) d = (cid:96) W ( ) , (10) Similarly, we have: d e ( ) d = (cid:96) e ( ) .",
"For simplicity, we will omit the time step in the equations.",
"The derivative of the token level polarity score will be written as: ds e d = (cid:18) (cid:96) e (cid:19) (cid:62) W (cid:124) (cid:123)(cid:122) (cid:125) s (cid:48) e + (cid:18) e (cid:62) (cid:96) W (cid:19) (cid:124) (cid:123)(cid:122) (cid:125) s (cid:48)(cid:48) e .",
"The two partial derivatives can be calculated as 4 : (cid:96) e = 1 m (cid:88) ( t,j ): e ( t ) j e y ( t ) ( t ) ( t ) j (cid:34) V ( e h ( t ) ) (cid:62) + I (cid:35) W , (13) (cid:96) W = 1 m m (cid:88) t =1 y ( t ) ( t ) h ( t ) , (14) 4 See the supplementary material for details.",
"where ( t, j ) : e ( t ) j e means we are selecting such tokens from the t -th instance at the j -th position that are exactly e , and ( t ) j is the attention weight for that j -th token in the selected t -th instance.",
"The vector h ( t ) is the representation of the t -th instance, and ( t ) is defined as ( t ) = 1 ( y ( t ) s ( t ) ) .",
"s (cid:48) e = 1 m (cid:88) ( t,j ): e ( t ) j e y ( t ) ( t ) ( t ) j (cid:0) s e s ( t ) (cid:1) V (cid:62) W + 1 m || W || 22 (cid:88) ( t,j ): e ( t ) j e y ( t ) ( t ) ( t ) j .",
"(15)",
"The sign of the second term above depends on: ( e ) = (cid:88) ( t,j ): e ( t ) j e y ( t ) ( t ) ( t ) j .",
"s (cid:48)(cid:48) e = 1 m m (cid:88) t =1 y ( t ) ( t ) e (cid:62) h ( t ) = 1 m m (cid:88) t =1 y ( t ) ( t ) (cid:88) j ( t ) j e (cid:62) e ( t ) j = 1 m (cid:88) ( t,j ) y ( t ) ( t ) ( t ) j e (cid:62) e ( t ) j .",
"(17)",
"Equation 17 involves dot-products between embeddings.",
"During training, certain trends and patterns will be developed for such dot-products.",
"Near a local minimum, we can show that it is desirable to have e (cid:62) i e j > 0 when e i and e j are both positive tokens or both negative tokens, and e (cid:62) i e j < 0 when one is a positive token and the other is a negative token.",
"More details and analysis on the desirability of these properties can be found in the supplementary material.",
"Now let us look at the last term in Equation 17.",
"This term can be re-written as: 1 m (cid:88) ( t,j ): y ( t ) =+1 ( t ) ( t ) j (cid:16) e (cid:62) e ( t ) j (cid:17) + 1 m (cid:88) ( t,j ): y ( t ) = 1 ( t ) ( t ) j (cid:16) e (cid:62) e ( t ) j (cid:17) .",
"(18) where we split the term into two based on the polarity of the training instances.",
"In the first term, each e j token would be either a positive or a neutral token; in the second term, each e j would be either a negative or a neutral token, and again under the assumption on the dataset, all the terms involving neutral e j tokens would roughly sum to a value close to 0 (regardless of e ).",
"So we may assume there are no neutral e j tokens.",
"Now, if e is a positive token, we can see it is desirable for both terms to be positive.",
"If e is negative, it is desirable for both terms to be negative.",
"If e is neutral, likely this term is close to",
"0. Overall, the update of s e is: ds e d = 1 m (cid:16) V (cid:62) W / (cid:17) ( e ) (cid:124) (cid:123)(cid:122) (cid:125) ( A ) + 1 m || W || 22 ( e ) (cid:124) (cid:123)(cid:122) (cid:125) ( B ) + 1 m (cid:88) ( t,j ) y ( t ) ( t ) ( t ) j e (cid:62) e ( t ) j (cid:124) (cid:123)(cid:122) (cid:125) ( C ) , (19) where ( e ) = (cid:88) ( t,j ): e ( t ) j e y ( t ) ( t ) ( t ) j (cid:16) s e s ( t ) (cid:17) .",
"Under the assumption that V (cid:62) W / is reasonably small (for example, we may set to an appropriate value, which is reasonably large), we have A 0 .",
"We then have the following results: For positive tokens, we have B > 0 and C > 0 .",
"The corresponding polarity scores will likely in-crease after each update when approaching the local minimum, and may end up with relatively large positive polarity scores eventually.",
"For negative tokens, we have B < 0 and C < 0 .",
"The corresponding polarity scores will likely decrease after each update when approaching the local minimum, and may end up with relatively large negative polarity scores eventually.",
"For neutral tokens, we have B 0 and C 0 .",
"Their polarity scores will likely not change significantly after each update when approaching the local minimum, and may end up with polarity scores that are neither significantly positive nor significantly negative eventually.",
"Based on the above results, we can also quickly note that ( e ) has the following property: it is positive if e is a polarity token, and close to zero if e is neutral.",
"These results are desirable as the token-level polarity scores will be used for defining the instance-level polarity scores, which are in term useful for prediction of the final polarity of the sentence containing such tokens.",
"However, we note that the above results depend on the assumption that term A is small.",
"As we mentioned above, we may assume is large to achieve this.",
"When V (cid:62) W / is not small enough, the term A may lead to a gap in the polarity scores between the positive and negative tokens, depending on the sign of V (cid:62) W a term that will appear again in the next section when examining the attention scores.",
"Now let us have an analysis on the attention score for each token.",
"Again given a token e , the corresponding attention score is a e = e (cid:62) V .",
"Note that this is a global score that is independent of any instance.",
"The update of a e is: da e ( ) d = 1 ( d e ( ) d ) (cid:62) V ( ) + 1 e (cid:62) ( ) d V ( ) d .",
"(21)",
"Similarly, let us rewrite the equation as: da e d = 1 (cid:18) (cid:96) e (cid:19) (cid:62) V (cid:124) (cid:123)(cid:122) (cid:125) a (cid:48) e + (cid:18) 1 e (cid:62) (cid:96) V (cid:19) (cid:124) (cid:123)(cid:122) (cid:125) a (cid:48)(cid:48) e .",
"(22)",
"We have (cid:96) V = 1 m (cid:88) ( t,j ) y ( t ) ( t ) ( t ) j e ( t ) j (cid:16) s ( t ) j s ( t ) (cid:17) .",
"(23)",
"The first term can be calculated as: a (cid:48) e = 1 m 2 || V || 22 (cid:88) ( t,j ): e ( t ) j e y ( t ) ( t ) ( t ) j (cid:16) s e s ( t ) (cid:17) + 1 m (cid:88) ( t,j ): e ( t ) j e y ( t ) ( t ) ( t ) j W (cid:62) V .",
"(24)",
"The second term is: a (cid:48)(cid:48) e = 1 m 2 (cid:88) ( t,j ) y ( t ) ( t ) ( t ) j e (cid:62) e ( t ) j (cid:16) s ( t ) j s ( t ) (cid:17) .",
"(25)",
"Similarly, this can be re-written as: 1 m 2 (cid:88) ( t,j ): y ( t ) =+1 ( t ) ( t ) j (cid:16) s ( t ) j s ( t ) (cid:17) e (cid:62) e ( t ) j + 1 m 2 (cid:88) ( t,j ): y ( t ) = 1 ( t ) ( t ) j (cid:16) s ( t ) s ( t ) j (cid:17) e (cid:62) e ( t ) j .",
"(26)",
"This term shall be close to zero initially, regardless of e .",
"However, this term may become positive for a polarity token e as learning progresses.",
"5 The update of a e is (note that W (cid:62) V = V (cid:62) W ): da e d = 1 m 2 (cid:16) V (cid:62) W (cid:17) ( e ) (cid:124) (cid:123)(cid:122) (cid:125) ( D ) + 1 m 2 || V || 22 ( e ) (cid:124) (cid:123)(cid:122) (cid:125) ( E ) (27) + 1 m 2 (cid:88) ( t,j ) y ( t ) ( t ) ( t ) j e (cid:62) e ( t ) j (cid:16) s ( t ) j s ( t ) (cid:17) (cid:124) (cid:123)(cid:122) (cid:125) ( F ) .",
"Let us now understand the influence of these terms respectively: Term D .",
"When V (cid:62) W > 0 , the positive tokens will receive a positive update whereas the negative tokens will receive a negative update from this term after each step.",
"When V (cid:62) W < 0 , the influence is the other way around.",
"It does not influence the attention scores of the neutral tokens much as the corresponding ( e ) is approximately zero.",
"When it is not close to zero, this term can lead to a gap between the final attention scores of the positive tokens and negative tokens.",
"Terms E and F .",
"Based on our analysis, E > 0 , and F 0 for polarity tokens, and E 0 and F 0 for neutral tokens.",
"This means for the positive tokens and negative tokens, their attention scores will likely receive a positive value from this term after each update when approaching a local minimum.",
"Their corresponding attention scores may end up with large positive scores eventually.",
"For the neutral tokens, this term does not have much influence on their attention scores.",
"From here we can observe that when V (cid:62) W is small, the polarity tokens will likely end up with larger attention scores than the neutral tokens.",
"This is actually a desirable situation polarity tokens are likely more representative when used for predicting the underlying class labels, and therefore shall receive more attention in general.",
"However, we note that if the scaling factor is too large, the term D may be significant.",
"This means the sign of V (cid:62) W will then play a crucial role when it is non-zero and when is very large, positive tokens and negative tokens will likely have 5 See the supplementary material for more details.",
"attention scores of opposite signs.",
"This may not be a very desirable situation as the attention scores would be less interpretable in that case.",
"On the other hand, as we have discussed in the previous section, the scaling factor should not be too small too.",
"Otherwise term A in Equation 19 would not be close to 0 as a result the conclusions on the polarity scores for the tokens stated at end of Sec 4.1 may not hold.",
"In conclusion, if we would like to observe the desirable behavior as discussed for the attention mechanism, it is important for us to choose an appropriate value or we shall possibly find ways to control the value of V (cid:62) W 6 .",
"We will conduct experiments on real datasets to verify our findings.",
"Besides the above analysis, we have also analyzed polarity scores and attention scores from the model with additive attention, the model with an affine input layer and the model for multi-class classification respectively.",
"There are terms that have similar effects on polarity and attention scores during training.",
"Due to space limitations, we provide such details in the supplementary material.",
"We conducted experiments on four text classification datasets 7 .",
"The statistics of the datasets are shown in Table",
"1. We followed the work of Jain and Wallace (2019) for pre-processing of the datasets 8 , and lower-cased all the tokens.",
"Stanford Sentiment Treebank (SST) (Socher et al., 2013).",
"The original dataset that consists of 10,662 instances with labels ranging from 1 (most negative) to 5 (most positive).",
"Similar to the work of Jain and Wallace (2019), we removed neutral instances (with label 3), and regarded instances with label 4 or 5 as positive and instances with the label 1 or 2 as negative.",
"IMDB (Maas et al., 2011).",
"The original dataset 6 We have further discussions on V (cid:62) W in the supplementary material.",
"that consists of 50,000 movie reviews with positive or negative labels.",
"20Newsgroup I (20News I).",
"The original dataset that consists of around 20,000 newsgroup correspondences.",
"Similar to the work of Jain and Wallace (2019), we selected the instances from these two categories: rec.sport.hockey and rec.sport.baseball , and regarded the former as positive instances and the latter negative.",
"20Newsgroup II (20News II).",
"This is a dataset for 3-class classification.",
"We selected instances from these three categories: rec.motorcycles , sci.med and talk.politics.guns .",
"Our analysis focused on the ideal case (e.g., positive tokens only appear in positive documents).",
"To be as consistent as possible with our analysis, we only examined the tokens of strong association with specific labels and the tokens that could be seen almost evenly across different types of instances based on their frequencies (note that we only selected these tokens for examination after training, but no tokens were excluded during the training process).",
"We defined a metric e to measure the association between the token e and instance labels 9 : e = f + e f e f + e + f e , (28) where f + e and f e refer to the frequencies in the positive and in the negative instances respectively.",
"If e (0 . 5 , 1) and f + e > 5 , the token will be regarded as a positive token.",
"If e ( 1 , 0 . 5) 9 For multi-class classification, we determined the polarity of each token based on the relative frequency of each token with respect to each label.",
"For each token, we calculated the frequency distribution across the labels that they appear in.",
"If the largest element of the distribution is above a given threshold, we will regard the token as a polarity one.",
"5 , the token will be regarded as a neutral token.",
"10 We ran the experiments using different scaling factors on the models with the scaled dot-product attention (DP) and additive attention (AD) respectively.",
"For the former, we also investigated the performances on the models with a LSTM (DP-L) or an affine transformation layer (DP-A) as the input encoder.",
"11 The Adagrad optimizer (Duchi et al., 2011) was used for gradient descent.",
"Dropout (Sri-vastava et al., 2014) was adopted to prevent overfit-ting.",
"All the parameters were learned from scratch to avoid the influence of prior information.",
"For the same reason, while we may be able to use pre-trained word embeddings, we chose to initialize word embeddings with a uniform distribution from -0.1 to 0.1 with a dimension d = 100 .",
"The results are shown in Table",
"2. For the scaled dot-product attention, which is our focus in this work, it can be observed that when the scaling factor is small (1 or 0.001), the test set results appear to be worse than the case when is set to a larger value.",
"The optimal results may be obtained when is set to a proper value.",
"However, setting to a very large value does not seem to have a significant impact on the performance in this case, from Equations 1 and 2 we can see that the attention weights will be close to each other for all input tokens, leading to an effect similar to mean pooling.",
"Results on using LSTM or the affine transformation layer as the input encoder are similar setting a proper value for appears to be crucial.",
"Figure 2 shows the results for polarity scores and attention scores for the first 3 datasets, when is set to a moderate value of 10 (i.e., d ).",
"These results are consistent with our analysis.",
"It can be observed that generally positive tokens have positive polarity scores while negative tokens have negative polarity scores.",
"Neutral tokens typically have polarity scores around zero.",
"It can also be observed that both the positive and negative tokens generally have larger attention scores than the neutral tokens.",
"We also examined whether there would be an obvious gap between the attention scores of the polarity tokens when is large.",
"As we can see from Figure 3b, when is set to 100, the resulting attention scores for the positive tokens are smaller than those of the neutral (and negative) tokens.",
"In 10 Example selected tokens from these datasets can be found in the supplementary material.",
"11 More results from these models can be found in the supplementary material.",
"For each model, we only reported one set of the results with a random initialization as we found the patterns were similar with different initializations.",
"this case, the resulting attention scores appear to be less interpretable.",
"However, as we discussed above, when is very large, the attention mechanism will effectively become mean pooling (we can also see from Figure 3b that attentions scores of all tokens are now much smaller), and the overall model would be relying on the average polarity scores of the word tokens in the sentence for making prediction.",
"Interestingly, on the other hand, as we discussed before at the end of Section 4.1, when is large, the polarity tokens will likely end up with polarity scores of large magnitudes a fact that can also be empirically observed in Figure 3a.",
"It is because of such healthy polarity scores acquired, the model is still able to yield good performance in this case even though the attention scores do not appear to be very interpretable.",
"We also tried to set a constraint on V (cid:62) W by introducing a regularization term to minimize it in the learning process.",
"We found doing so will generally encourage the attention model to produce more interpretable attention scores for example, even when was large, both the positive and negative tokens ended up with positive attention scores that were generally larger than those of the neutral tokens.",
"However, empirically we did not observe a significant improvement in test performance.",
"See the supplementary material for details.",
"We examined the attention scores on the 20News II dataset which consists of 3 labels.",
"As shown in Figure 3c, polarity tokens that are strongly associated with specific labels are still likely to have larger attention scores than those of neutral tokens.",
"To understand whether there are similar patterns for the polarity and attention scores when using the additive attention models, we replaced the scaled dot-product attention layer with the additive attention layer and ran experiments on the SST dataset.",
"The results are shown in Figure 4, which are similar to those of our scaled dot-product attention model.",
"Furthermore, we analyzed the relationship between the global attention scores and the local attention weights.",
"We collected all the attention weights on the test set of SST for the positive, negative and 0 2000 4000 6000 8000 Token Index 4 2 0 2 4 P o l a r i t y S c o r e PosNegNeutral 0 2000 4000 6000 8000 Token Index 2 1 0 1 2 3 4 A tt e n t i o n S c o r e PosNegNeutral Figure 4: Polarity and attention scores when additive attention is used (on SST, = 10 ).",
"neutral tokens, and calculated the average weight for each token.",
"Next we plot in Figure 5 the distribution of such average attention weights for tokens of these three types separately.",
"As we can observe, generally, the polarity tokens are more likely to have larger attention weights than the neutral tokens.",
"However, the positive tokens seemed to receive lower scores than the negative tokens in terms of the attention weights.",
"This is consistent with the attention scores shown in Figure 2d: the attention scores of the positive tokens were generally lower than those of the negative tokens.",
"Meanwhile, we could see that there were some outliers of large weights for the neutral tokens (circles that appear outside the boxes are outliers).",
"We looked into the case, it was due to that all of the three tokens in the short instance is this progress had negative attention scores, and the last token progress somehow had a relatively larger one, making its corresponding attention weight the largest amongst the three.",
"This can be explained by the fact that attention weights only capture relative significance of tokens within a local context.",
"These empirical results support our analysis as well as our belief on the significance of the attention scores.",
"When certain hyperparameters are properly set, the attention mechanism tends to assign larger attention scores to those tokens which have strong association with instances of a specific label.",
"Meanwhile, the polarity scores for such tokens tend to yield large absolute values (of possibly different signs, depending on the polarity of the tokens), which will be helpful when predicting instance labels.",
"By contrast, neutral tokens that appeared evenly across instances of different labels are likely assigned small attention scores and polarity scores, making them relatively less influential.",
"In this work, we focused on understanding the underlying factors that may influence the attention mechanism, and proposed to examine attention scores a global measurement of significance of word tokens.",
"We focused on binary classification models with dot-product attention, and analyzed through a gradient descent based learning framework the behavior of attention scores and polarity scores another quantity that we defined and proposed to examine.",
"Through the analysis we found that both quantities play important roles in the learning and prediction process and examining both of them in an integrated manner allows us to better understand the underlying workings of an attention based model.",
"Our analysis also revealed factors that may impact the interpretability of the attention mechanism, providing understandings on why the model may still be robust even under scenarios where the attention scores appear to be less interpretable.",
"The empirical results of experiments on various real datasets supported our analysis.",
"We also extended to and empirically examined additive attention, multi-label classification and models with an affine input layer, and observed similar behaviors.",
"There are some future directions that are worth exploring.",
"Specifically, we can further examine the influence of using pre-trained word embeddings whether similar words can help each other boost their polarity and attention scores.",
"Moreover, we can also examine the influence of using deep contextualized input encoders such as ELMo (Peters et al., 2018) or BERT (Devlin et al., 2018).",
"We would like to thank the anonymous reviewers for their thoughtful and constructive comments.",
"We also thank Rui Qiao for his help on proofreading.",
"This research is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG-RP-2019-012), and Ministry of Education, Singapore, under its Academic Research Fund (AcRF) Tier 2 Programme (MOE AcRF Tier 2 Award No: MOE2017-T2-1-156).",
"Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not reflect the views of National Research Foundation, Singapore and AI Singapore or the views of the Ministry of Education, Singapore."
]
| [
"abstain",
"abstain",
"abstain",
"method",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"result",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"result",
"result",
"objective",
"abstain",
"method",
"method",
"other",
"other",
"other",
"other"
]
|
[
"Abstract Transformer-based pre-trained language models have significantly improved the performance of various natural language processing (NLP) tasks in the recent years.",
"While effective and prevalent, these models are usually prohibitively large for resource-limited deployment scenarios.",
"A thread of research has thus been working on applying network pruning techniques under the pretrain-then-finetune paradigm widely adopted in NLP.",
"However, the existing pruning results on benchmark transformers, such as BERT, are not as remarkable as the pruning results in the literature of convolutional neural networks (CNNs).",
"In particular, common wisdom in pruning CNN states that sparse pruning technique compresses a model more than that obtained by reducing number of channels and layers (Elsen et al., 2020; Zhu and Gupta, 2017), while existing works on sparse pruning of BERT yields inferior results than its small-dense counterparts such as TinyBERT (Jiao et al., 2020).",
"In this work, we aim to fill this gap by studying how knowledge are transferred and lost during the pre-train, fine-tune, and pruning process, and proposing a knowledge-aware sparse pruning process that achieves significantly superior results than existing literature.",
"We show for the first time that sparse pruning compresses a BERT model significantly more than reducing its number of channels and layers.",
"Experiments on multiple data sets of GLUE benchmark show that our method outperforms the leading competitors with a 20-times weight/FLOPs compression and neglectable loss in prediction accuracy 1 .",
"of natural language processing (NLP) tasks.",
"These models are pre-trained in a self-supervised fashion and then fine-tuned for supervised downstream tasks.",
"However, these models suffer from the heavy model size, making them impractical for resource-limited deployment scenarios and incurring cost concerns (Strubell et al., 2019).",
"In parallel, an emerging subfield has studied the redundancy in deep neural network models (Zhu and Gupta, 2017; Gale et al., 2019) and proposed to prune networks without sacrificing performance, such as the lottery ticket hypothesis (Frankle and Carbin, 2019).",
"Common wisdom in CNN literature shows that sparse pruning leads to more compression rate than structural pruning.",
"For example, for the same number of parameters (0.46M), the sparse MobileNets improve by 11.2% accuracy over the dense ones (Zhu and Gupta, 2017).",
"However, similar conclusions are not observed for pre-trained language models.",
"The main question this paper attempts to answer is: how to perform sparse pruning under the pre-train and fine-tune paradigm?",
"Answering this question correctly is challenging.",
"First, these models adopt pre-training and fine-tuning procedures, during which the general-purpose language knowledge and the task-specific knowledge are learned respectively.",
"Thus, it is desirable and challenging to keep the weights that are important to both knowledge during pruning.",
"Second, unlike CNNs, pre-trained language models have a complex architecture consisting of embedding, self-attention, and feed-forward layers.",
"To address these challenges, we propose SparseBERT, a knowledge-aware sparse pruning method for pre-trained language models, with a special focus on the widely used BERT model.",
"SparseBERT is executed in the fine-tuning stage.",
"It preserves both general-purpose and task-specific language knowledge while pruning.",
"To preserve the general-purpose knowledge learned during pre-training, !",
"\" , $ \" & & ' L Pre-Training ! % , $ % & ' & ' ) D Fine-Tuning ! * , $ * L+, + D Testing Genera. Error ! % , $ % & ' )",
"L # L L !\"",
"L L # L # !\" %& '())(* )+,L !\" # L # L # !\" L # L !\"",
"(a) is the general pre-training and fine-tuning procedure (Section 3.1).",
"g is an encoder.",
"g L and g LD are the encoders well-trained on the pre-training and fine-tuning datasets respectively.",
"L and D are the general-purpose language knowledge and the task-specific knowledge respectively.",
"There is a domain error between pre-training and testing, and a generalization error between fine-tuning and testing.",
"(b) and",
"(c) are two basic pruning strategies (Section 3.2.1).",
"Both LD and L pr are subsets of knowledge L .",
"LD is related to the downstream task.",
"L pr is preserved in a pruned encoder g L pr .",
"(d) is the proposed pruning strategy (Sections 3.2.2-3.2.3).",
"( L pr )",
"D refers to the knowledge obtained by first pruning and then fine-tuning.",
"( LD ) pr corresponds to first fine-tuning and then pruning while distilling.",
"L # !\" / 0 ,2 0 3 3 5 L Pre-Training / 6 ,2 6 3 5 3 5 7 D Fine-Tuning / 8 ,2 8 L # !\" + D Testing Genera.",
"Error / 6 ,2 6 Teacher = 3 5 7 Student = 3 5 3 5 7 9: D Distilling Domain Error",
"L # !\" / 0 ,2 0 3 3 5 L Pre-Training / 6 ,2 6 3 5 3 5 7 D Fine-Tuning / 8 ,2 8 L # !\" + D Testing Genera.",
"Error / 6 ,2 6 Teacher = 3 5 7 Student = 3 5 3 5 7 9: D Distilling Domain Error",
"L # L # !\" L # L !\"",
"We summarize different types of BERT pruning approaches in Figure 1 (see Section 3.2 for detailed discussion) Experimental results on the GLUE benchmark demonstrate that SparseBERT outperforms all the leading competitors and achieves 1.4% averaged loss with down to only 5% remaining weights compared to BERT-base.",
"SparseBERT uses the pre-trained BERT without fine-tuning as the initialized model and prunes the linear transformations in self-attention and feedforward layers, which is inspired by the recent findings that self-attention and feed-forward layers are overparameterized (Michel et al., 2019; Voita et al., 2019) and are also the most computation consumption parts (Ganesh et al., 2020).",
"To learn the task-specific task knowledge during pruning while preserving the general-purpose knowledge at the same time, we apply knowledge distillation (Hinton et al., 2015).",
"We adopt the task-specific fine-tuned BERT as the teacher network and the pre-trained BERT that is being pruned as the student.",
"We feed the downstream task data into the teacher-student framework to train the student to reproduce the behaviors of the teacher.",
"A lot of efforts have been made on studying network redundancy and pruning networks without accuracy loss (Gale et al., 2019; Renda et al., 2020).",
"For example, the work on lottery ticket hypothesis (Frankle and Carbin, 2019) showed that there exist sparse smaller subnetworks capable of training to full accuracy in CNNs.",
"Common wisdom in CNN literature shows that spare pruning leads to much more compression rate than structural pruning (Gale et al., 2019; Elsen et al., 2020).",
"For example, for the same number of parameters (0.46M), the sparse MobileNets achieve 61.8% accuracy while the dense ones achieve 50.6% (Zhu and Gupta, 2017).",
"However, similar observations are not observed in existing approaches for pretrained language models (Fan et al., 2019; Michel et al., 2019; Chen et al., 2020; McCarley et al., 2020; Jiao et al., 2020).",
"Our method aims to fill the gap and summarize these pruning strategies.",
"There are other compression approaches for pre-trained language models, such as quantization (Zafrir et al., 2019) and weight factorization (Wang et al., 2019), which are out of the scope of this work.",
"We first formalize the knowledge transfer involved in fine-tuning pre-trained language models.",
"Then, we introduce our SparseBERT.",
"The practice of fine-tuning pre-trained language models has become prevalent in various NLP tasks.",
"The two-stage procedure is illustrated in Figure",
"1(a).",
"The language model is denoted by f = g h , where g is a text encoder and h is a task predictor head.",
"Text encoders, like Transformers in BERT, are used to map input sentences to hidden representations and task predictors further map the representations to the label space.",
"The pre-trained model is trained on a large amount of data examples ( x p , y p ) from the pre-training task domain via different tasks that resemble language modeling.",
"During pre-training, the general-purpose language knowledge, denoted by L , is learned based on ( x p , y p ) .",
"L contains a subset that is related to the downstream task, denoted by LD , and the amount of L is far greater than that of LD (see Figure",
"2(a)).",
"To transfer knowledge L (especially LD ) from pre-training domain to downstream domain, the well-trained encoder g L is used to initialize the downstream encoder.",
"In fine-tuning, downstream encoder is trained based on the task-specific knowledge D preserved in a small amount of data examples ( x d , y d ) from downstream domain.",
"Finally, the well-trained downstream encoder g LD is evaluated on test data.",
"Intuitively, there are two pruning strategies.",
"One is that pruning is applied to the downstream encoder g L during fine-tuning (see Figure",
"1(b)).",
"However, because the loss to update the weights during fine-tuning is exclusively based on the data examples ( x d , y d ) from the downstream task domain, this pruning strategy might destruct the knowledge LD , which is learned based on ( x p , y p ) and encoded in the initialization of g L .",
"The other strategy is that pruning is executed during pre-training (see Figure",
"1(c)).",
"The generated pruned network preserves a subset of knowledge L , denoted by L pr .",
"Unfortunately, because this strategy ignores the downstream task information and the amount of L is extremely large, i.e., L (cid:29) L pr , the knowledge L pr could be much different from LD that we hope to preserve (see Figure",
"As shown in Figure",
"1(d), SparseBERT executes pruning at the distilling stage.",
"It prunes the pretrained encoder without fine-tuning, g L , while fine-tuning the pruned encoder based on the downstream dataset ( x d , y d ) .",
"Recent findings indicate that self-attention and feed-forward layers are overparameterized and are the most computation consumption parts (Michel et al., 2019; Voita et al., 2019; Ganesh et al., 2020).",
"Thus, SparseBERT applies network pruning to the linear transformations matrices in self-attention and feed-forward layers (see Figure 3).",
"The choice of pruning approach is flexible.",
"We choose magnitude weight pruning (Han et al., 2015) in this paper, mainly because it is one of the most effective and popular pruning methods.",
"More details about the pruning strategy used in SparseBERT can be found in the codes.",
"To mitigate the loss of LD , we propose to utilize knowledge distillation while pruning.",
"We use the task-specific fine-tuned BERT as the teacher network and the pre-trained BERT that is being pruned as the student (see Figure",
"1(d) and Figure 3).",
"The motivation is that the task-specific fine-tuned BERT preserves LD .",
"By feeding downstream task data ( x d , y d ) into the teacher-student framework, we help the student reproduce the behaviors of the teacher to learn both L d and L as much as possible.",
"We design the distillation loss as L distil = L emb + L att + L hid + L prd .",
"L emb = MSE( ES , ET ) is the difference between the embedding layers of student and teacher.",
"L att = (cid:80) MSE( A Si , A Ti ) is the difference between attention matrices and i is the layer index.",
"L hid = (cid:80) MSE( H Si , H Ti ) is the difference between hidden representations.",
"L prd = -softmax( z T ) log_softmax( z S / temp) is the soft cross-entropy loss between the logits of student and teacher.",
"temp represents the temperature value.",
"The proposed distillation loss is inspired by (Jiao et al., 2020) and it helps the student imitate the teacher's behavior as much as possible.",
"In addition, we perform the same data augmentation as (Jiao et al., 2020) does to generate more task-specific data for teacher-student Self-Attention Add LayerNorm Feedforward Feedforward Gelu Add LayerNorm Embedding Layer Output Layer Output Input Knowledge Distillation Block (light purple) repeats multiple times Pruned Layers Teacher Network: finetuned BERT Student Network: pretrained BERT Self-Attention Add LayerNorm Feedforward Feedforward Gelu Add LayerNorm Embedding Layer Output Layer Output Input Knowledge Distillation Knowledge Distillation Knowledge Distillation Figure 3: Illustration of the proposed knowledge-aware compression.",
"learning.",
"Notably, the choices of distillation loss and data augmentation method are flexible and we found the ones we adopted worked well in general.",
"We evaluate SparseBERT on four data sets from the GLUE benchmark (Wang et al., 2018).",
"To test if SparseBERT is applicable across tasks, we include the tasks of both single sentence and sentence-pair classification.",
"We report the results on dev sets.",
"We run 3, 20, 20, 50 epochs for QNLI, MRPC, RTE, CoLA separately.",
"The baselines include BERT-base, ELMo (Peters et al., 2018), BERT-PKD (Sun et al., 2019), Bert-of-Theseus (Xu et al., 2020), DistilBERT (Sanh et al., 2019), MiniLM (Wang et al., 2020), TinyBERT (Jiao et al., 2020), BERT-Tickets (Chen et al., 2020), CompressBERT (Gor-don et al., 2020), and RPP (Guo et al., 2019).",
"The results are shown in Table 1.",
"Compared to BERT-base, SparseBERT achieves 1.4% averaged performance loss with down to 5% weights.",
"In addition, SparseBERT outperforms all leading competitors with the highest sparsity.",
"We compare SparseBERT with the pruning described in Figure",
"1(b) on the question answer tasks of SQuAD v1.1 and v2.0 (Rajpurkar et al., 2016, 2018).",
"Given a question and a passage containing Method Remain.",
"the answer, the two tasks are to predict the answer text span in the passage.",
"The difference between them is that SQuAD v2.0 allows for the possibility that no short answer exists in the passage.",
"We follow the general setting of SparseBERT, except that we only apply the logit distillation, i.e., L distil = L prd , and do not perform data augmentation, which are the most common distillation strategies.",
"The results are shown in Figure 4. It is observed that SparseBERT consistently outperforms the baseline method, especially at high sparsity.",
"The performance gain of SparseBERT decreases on SQuAD v2.0 mainly because SQuAD v2.0 is more challenging than SQuAD v1.1.",
"These observations demonstrate advantage of SparseBERT compared to pruning at downstream.",
"To get more insights about the advantage of SparseBERT over the pruning described in Figure",
"1(c), we compare their fitting abilities.",
"Specifically, we use TinyBERT as an example of the baseline pruning method.",
"We compare SparseBERT with TinyBERT with 4 layers and 312 hidden dimensions, which has a similar number of parameters as SparseBERT (sparsity=95%).",
"SparseBERT only distills knowledge from the same layers as TinyBERT does.",
"We vary the number of pruning epochs and report the results (loss on training set and accuracy on dev set) on RTE in Figure 5. It is observed that SparseBERT consistently shows smaller training loss while higher evaluation performance, which demonstrates that SparseBERT has a better fitting ability when pruning compared to the baseline.",
"Sparse networks were not hardware-friendly in the past.",
"However, hardware platforms with sparse tensor operation support have been rising up.",
"For example, the latest release of Nvidia high-end GPU A100 has native support of sparse tensor operation up to 2x compression rate, while startup company such as Moffett AI has developed computing platform with sparse tensor operation acceleration up to 32x compression rate.",
"Here we deployed SparseBERT of different sparse compression ratios (1, 2, 4, 8, 16, 20) on Moffett AI's latest hardware platform ANTOM to measure the real inference speedup induced by sparse compression, where 4' indicates the model is compressed by a factor of 4, with 75% of the parameters being zeros.",
"As shown in Figure 6, the sparse compression has almost linear speedup up to 4x and leads to more than 10x speedup when compression rate is 20x.",
"We studied the reduction of parameters and FLOPS.",
"For example, on the MRPC dataset, BERT-base (backbone) vs SparseBERT (backbone) = 85.53 vs 4.84 (#parameters, M) and BERT-base vs SparseBERT = 10.87 vs 0.54 (GFLOPS).",
"We studied the time and convergence speed.",
"For example, to get the reported 20x pruned result (Ta-ble 1), it needed 12 epochs of fine-tuning on MRPC and each epoch took 1.5 h (two RTX 2080 Ti).",
"The inference time was around 20 s.",
"We introduce SparseBERT, a knowledge-aware sparse pruning method for pre-trained language models, with a focus on BERT.",
"We summarize different types of BERT pruning approaches and compare SparseBERT with leading competitors.",
"Experimental results on GLUE and SQuAD benchmarks demonstrate the superiority of SparseBERT.",
"We thank Xiaoqi Jiao for his valuable discussion and feedback on this work."
]
| [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"other"
]
|
[
"The Word Embedding Association Test shows that GloVe and word2vec word embeddings exhibit human-like implicit biases based on gender, race, and other social constructs (Caliskan et al., 2017).",
"Meanwhile, research on learning reusable text representations has begun to explore sentence-level texts, with some sentence encoders seeing enthusiastic adoption.",
"Accordingly, we extend the Word Embedding Association Test to measure bias in sentence encoders.",
"We then test several sentence encoders, including state-of-the-art methods such as ELMo and BERT, for the social biases studied in prior work and two important biases that are difficult or impossible to test at the word level.",
"We observe mixed results including suspicious patterns of sensitivity that suggest the test's assumptions may not hold in general.",
"We conclude by proposing directions for future work on measuring bias in sentence encoders.",
"Word embeddings quickly achieved wide adoption in natural language processing (NLP), precipitating the development of efficient, word-level neural models of human language.",
"However, prominent word embeddings such as word2vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) encode systematic biases against women and black people (Bolukbasi et al., 2016; Garg et al., 2018, i.a.), implicating many NLP systems in scaling up social injustice.",
"We investigate whether sentence encoders, which extend the word embedding approach to sentences, are similarly biased.",
"1 The previously developed Word Embedding Association Test (WEAT; Caliskan et al., 2017) measures bias in word embeddings by comparing two sets of target-concept words to two sets of attribute words.",
"We propose a simple generaliza-1 While encoder training data may contain perspectives from outside the U.S., we focus on biases in U.S. contexts.",
"tion of WEAT to phrases and sentences: the Sentence Encoder Association Test (SEAT).",
"We apply SEAT to sentences generated by inserting individual words from Caliskan et",
"al.'s tests into simple templates such as This is a[n] < word > .",
"To demonstrate the new potential of a sentence-level approach and advance the discourse on bias in NLP, we also introduce tests of two biases that are less amenable to word-level representation: the angry black woman stereotype (Collins, 2004; Madison, 2009; Harris-Perry, 2011; hooks, 2015; Gillespie, 2016) and a double bind on women in professional settings (Heilman et al., 2004).",
"The use of sentence-level contexts also facilitates testing the impact of different experimental designs.",
"For example, several of Caliskan et",
"al.'s tests rely on given names associated with European American and African American people or rely on terms referring to women and men as groups (such as woman and man).",
"We explore the effect of using given names versus group terms by creating alternate versions of several bias tests that swap the two.",
"This is not generally feasible with WEAT, as categories like African Americans lack common single-word group terms.",
"We find varying evidence of human-like bias in sentence encoders using SEAT.",
"Sentence-to-vector encoders largely exhibit the angry black woman stereotype and Caliskan biases, and to a lesser degree the double bind biases.",
"Recent sentence encoders such as BERT (Devlin et al., 2018) display limited evidence of the tested biases.",
"However, while SEAT can confirm the existence of bias, negative results do not indicate the model is bias-free.",
"Furthermore, discrepancies in the results suggest that the confirmed biases may not generalize beyond the specific words and sentences in our test data, and in particular that cosine similarity may not be a suitable measure of representational similarity in recent models, indicating a need for alternate bias detection techniques.",
"The Word Embedding Association Test WEAT imitates the human implicit association test (Greenwald et al., 1998) for word embeddings, measuring the association between two sets of target concepts and two sets of attributes.",
"Let X and Y be equal-size sets of target concept embeddings and let A and B be sets of attribute embeddings.",
"The test statistic is a difference between sums over the respective target concepts, s ( X, Y, A, B ) = (cid:2)(cid:80) x X s ( x, A, B ) (cid:80) y Y s ( y, A, B ) (cid:3) , where each addend is the difference between mean cosine similarities of the respective attributes, s ( w, A, B ) = (cid:2) mean a A cos( w, a ) mean b B cos( w, b ) (cid:3) A permutation test on s ( X, Y, A, B ) is used to compute the significance of the association between ( A, B ) and ( X, Y ) , p = Pr [ s ( X i , Y i , A, B ) > s ( X, Y, A, B )] , where the probability is computed over the space of partitions ( X i , Y i ) of X Y such that X i and Y i are of equal size, and a normalized difference of means of s ( w, A, B ) is used to measure the magnitude of the association (the effect size; Caliskan et al., 2017), d = mean x X s ( x, A, B ) mean y Y s ( y, A, B ) std dev w X Y s ( w, A, B ) .",
"Controlling for significance, a larger effect size re-flects a more severe bias.",
"We detail our implementations in the supplement.",
"The Sentence Encoder Association Test SEAT compares sets of sentences, rather than sets of words, by applying WEAT to the vector representation of a sentence.",
"Because SEAT operates on fixed-sized vectors and some encoders produce variable-length vector sequences, we use pooling as needed to aggregate outputs into a fixed-sized vector.",
"We can view WEAT as a special case of SEAT in which the sentence is a single word.",
"In fact, the original WEAT tests have been run on the Universal Sentence Encoder (Cer et al., 2018).",
"To extend a word-level test to sentence contexts, we slot each word into each of several semantically bleached sentence templates such as This is < word > ., < word > is here., This will < word > ., and < word > are things..",
"These templates make heavy use of deixis and are designed to convey little specific meaning beyond that of the terms inserted into them.",
"2 For example, the word version of Caliskan Test 3 is illustrated in Table 1 and the sentence version is illustrated in Table 2.",
"We choose this design to focus on the associations a sentence encoder makes with a given term rather than those it happens to make with the contexts of that term that are prevalent in the training data; a similar design was used in a recent sentiment analysis evaluation corpus stratified by race and gender (Kiritchenko and Mohammad, 2018).",
"To facilitate future work, we publicly release code for SEAT and all of our experiments.",
"3 3 Biases Tested Caliskan Tests We first test whether the sentence encoders reproduce the same biases that word embedding models exhibited in Caliskan et al. (2017).",
"These biases correspond to past social psychology studies of implicit associations in human subjects.",
"4 We apply both the original 2 See the supplement for further details and examples.",
"3 http://github.com/W4ngatang/sent-bias 4 See Greenwald et al. (2009) for a review of this work.",
"Angry Black Woman Stereotype In the Sapphire or angry black woman (ABW) stereotype, black women are portrayed as loud, angry, and imposing (Collins, 2004; Madison, 2009; Harris-Perry, 2011; hooks, 2015; Gillespie, 2016).",
"This stereotype contradicts common associations made with the ostensibly race-neutral (unmarked) category of women (Bem, 1974), suggesting that that category is implicitly white.",
"Intersectionality reveals that experiences considered common to women are not necessarily shared by black women, who are marginalized both among women and among black people (Crenshaw, 1989).",
"Recently, intersectionality has been demonstrated in English Wikipedia using distributional semantic word representations (Herbelot et al., 2012), and in the disparate error rates of machine learning technologies like face recognition (Buolamwini and Gebru, 2018).",
"To measure sentence encoders' reproduction of the angry black woman stereotype, we create a test whose target concepts are black-identifying and white-identifying female given names from Sweeney (2013, Table 1) and whose attributes are adjectives used in the discussion of the stereotype in Collins (2004, pp. 87-90) and their antonyms.",
"We also produce a version of the test with attributes consisting of terms describing black women and white women as groups, as well as sentence versions in which attribute and target concept terms are inserted in sentence templates.",
"Double Binds Women face many double binds , contradictory or unsatisfiable expectations of femininity and masculinity (Stone and Lovejoy, 2004; Harris-Perry, 2011; Mitchell, 2012).",
"If women clearly succeed in a male gender-typed job, they are perceived less likable and more hostile than men in similar positions; if success is ambiguous, they are perceived less competent and achievement-oriented than men.",
"Both outcomes can interfere in performance evaluations (Heilman et al., 2004), contributing to the glass ceiling impeding women's career advancement.",
"5 We test this double bind in sentence encoders by translating Heilman et",
"get concepts by names of women and men, respectively, in the single sentence template < word > is an engineer with superior technical skills.; the attributes are likable and non-hostile terms, based on Heilman et",
"al.'s design, in the sentence template The engineer is < word > .",
"In the second, we use the shortened target concept sentence template < word > is an engineer and fill the attribute templates from before with competent and achievement-oriented terms based on Heilman et",
"al.'s design.",
"6 We refer to these tests as semantically unbleached because the context contains important information about the bias.",
"We produce two variations of these tests: word-level tests in which target concepts are names in isolation and attributes are adjectives in isolation, as well as corresponding semantically bleached sentence-level tests.",
"These control conditions allow us to probe the extent to which observed associations are attributable to gender independent of context.",
"We apply SEAT to seven sentence encoders (listed in Table",
"3) including simple bag-of-words encoders, sentence-to-vector models, and state-of-the-art sequence models.",
"7 For all models, we use publicly available pretrained parameters.",
"Table 4 shows effect size and significance at 0.01 before and after applying the Holm-Bonferroni multiple testing correction (Holm, 1979) for a subset of tests and models; complete results are provided in the supplement.",
"8 6 We consider other formulations in the supplement.",
"7 We provide further details and explore variations on these model configurations in the supplement.",
"8 We use the full set of tests and models when comput-Test Context CBoW InferSent GenSen USE ELMo GPT BERT C1: Flowers/Insects word 1 .",
"Specifically, we select Caliskan Test 1 associating flowers/insects with pleasant/unpleasant, Test 3 associating European/African American names with pleasant/unpleasant, and Test 6 associating male/female names with career/family, as well as the angry black woman stereotype and the competent and likable double bind tests.",
"We observe that tests based on given names more often find a significant association than those based on group terms; we only show the given-name results here.",
"We find varying evidence of bias in sentence encoders according to these tests.",
"Bleached sentence-level tests tend to elicit more significant associations than word-level tests, while the latter tend to have larger effect sizes.",
"We find stronger evidence for the Caliskan and ABW stereotype tests than for the double bind.",
"After the multiple testing correction, we only find evidence of the double bind in bleached, sentence-level competent control tests; that is, we find women are associated with incompetence independent of context.",
"9 Some patterns in the results cast doubt on the reasonableness of SEAT as an evaluation.",
"For instance, Caliskan Test 7 (association between math/art and male/female ) and Test 8 ( sci-ence/art and male/female ) elicit counterintuitive results from several models.",
"These tests have the same sizes of target concept and attribute sets.",
"For CBoW on the word versions of those tests, we see p -values of 0.016 and 10 2 , respectively.",
"ing the multiple testing correction, including those only presented in the supplement.",
"9 However, the double bind results differ across models; we show no significant associations for ELMo or GPT and only one each for USE and BERT.",
"On the sentence versions, we see p -values of 10 5 for both tests.",
"Observing similar p -values agrees with intuition: The math/art association should be similar to the science/art association because they instantiate a disciplinary dichotomy between math/science and arts/language (Nosek et al., 2002).",
"However, for BERT on the sentence version, we see discrepant p -values of 10 5 and 0.14; for GenSen, 0.12 and 10 3 ; and for GPT, 0.89 and 10 4 .",
"Caliskan Tests 3, 4, and 5 elicit even more counterintuitive results from ELMo.",
"These tests measure the association between European Amer-ican/African American and pleasant/unpleasant .",
"Test 3 has larger attribute sets than Test 4, which has larger target concept sets than Test 5.",
"Intuitively, we expect increasing p -values across Tests 3, 4, and 5, as well-designed target concepts and attributes of larger sizes should yield higher-power tests.",
"Indeed, for CBoW, we find increasing p values of 10 5 , 10 5 , and 10 4 on the word versions of the tests and 10 5 , 10 5 , and 10 2 on the sentence versions, respectively.",
"10 However, for ELMo, we find decreasing p -values of 0 .",
"95 , 0 .",
"45 , and 0 .",
"08 on the word versions of the tests and 1 , 0 .",
"97 , and 10 4 on the sentence versions.",
"We interpret these results as ELMo producing substantially different representations for conceptually similar words.",
"Thus, SEAT's assumption that the sentence representations of each target concept and attribute instantiate a coherent concept appears invalid.",
"At face value, our results suggest recent sentence encoders exhibit less bias than previous models do, at least when bias is considered from a U.S. perspective and measured using the specific tests we have designed.",
"However, we strongly caution against interpreting the number of significant associations or the average significant effect size as an absolute measure of bias.",
"Like WEAT, SEAT only has positive predictive ability: It can detect presence of bias, but not its absence.",
"Considering that these representations are trained without explicit bias control mechanisms on naturally occurring text, we argue against interpreting a lack of evidence of bias as a lack of bias.",
"Moreover, the counterintuitive sensitivity of SEAT on some models and biases suggests that biases revealed by SEAT may not generalize beyond the specific words and sentences in our test data.",
"That is, our results invalidate the assumption that each set of words or sentences in our tests represents a coherent concept/attribute (like African American or pleasant ) to the sentence encoders; hence, we do not assume the encoders will exhibit similar behavior on other potential elements of those concepts/attributes (other words or sentences representing, for example, African American or pleasant ).",
"One possible explanation of the observed sensitivity at the sentence level is that, from the sentence encoders' view, our sentence templates are not as semantically bleached as we expect; small variations in their relative frequencies and interactions with the terms inserted into them may be undermining the coherence of the concepts/attributes they implement.",
"Another possible explanation that also accounts for the sensitivity observed in the word-level tests is that cosine similarity is an inadequate measure of text similarity for sentence encoders.",
"If this is the case, the biases revealed by SEAT may not translate to biases in downstream applications.",
"Future work could measure bias at the application level instead, following Bailey and Deery (2018)'s recommendation based on the tension between descriptive and normative correctness in representations.",
"The angry black woman stereotype represents an intersectional bias, a phenomenon not well anticipated by an additive model of racism and sexism (Crenshaw, 1989).",
"Previous work has modeled biases at the intersection of race and gender in distributional semantic word representations (Her-belot et al., 2012), natural language inference data (Rudinger et al., 2017), and facial recognition systems (Buolamwini and Gebru, 2018), as well as at the intersection of dialect and gender in automatic speech recognition (Tatman, 2017).",
"We advocate for further consideration of intersectionality in future work in order to avoid reproducing the erasure of multiple minorities who are most vulnerable to bias.",
"We have developed a simple sentence-level extension of an established word embedding bias instrument and used it to measure the degree to which pretrained sentence encoders capture a range of social biases, observing a large number of significant effects as well as idiosyncrasies suggesting limited external validity.",
"This study is preliminary and leaves open to investigation several design choices that may impact the results; future work may consider revisiting choices like the use of semantically bleached sentence inputs, the aggregation applied to models that represent sentences with sequences of hidden states, and the use of cosine similarity between sentence representations.",
"We challenge researchers of fairness and ethics in NLP to critically (re-)examine their methods; looking forward, we hope for a deeper consideration of the social contexts in which NLP systems are applied.",
"We are grateful to Carolyn Rose, Jason Phang, Sebastien Jean, Thibault Fevry, Katharina Kann, and Benjamin Van Durme for helpful conversations concerning this work and to our reviewers for their thoughtful feedback.",
"CM is funded by IARPA MATERIAL; RR is funded by DARPA AIDA; AW is funded by an NSF fellowship.",
"The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes.",
"The views and conclusions contained in this publication are those of the authors and should not be interpreted as representing official policies or endorsements of IARPA, DARPA, NSF, or the U.S. Government."
]
| [
"abstain",
"abstain",
"objective",
"method",
"result",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"method",
"result",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"method",
"other",
"other",
"other",
"other"
]
|
[
"Privacy plays a crucial role in preserving democratic ideals and personal autonomy.",
"The dominant legal approach to privacy in many jurisdictions is the Notice and Choice paradigm, where privacy policies are the primary instrument used to convey information to users.",
"However, privacy policies are long and complex documents that are difficult for users to read and comprehend.",
"We discuss how language technologies can play an important role in addressing this information gap, reporting on initial progress towards helping three specific categories of stakeholders take advantage of digital privacy policies: consumers, enterprises, and regulators.",
"Our goal is to provide a roadmap for the development and use of language technologies to empower users to reclaim control over their privacy, limit privacy harms, and rally research efforts from the community towards addressing an issue with large social impact.",
"We highlight many remaining opportunities to develop language technologies that are more precise or nuanced in the way in which they use the text of privacy policies.",
"Privacy is a fundamental right central to a democratic society, in which individuals can operate as autonomous beings free from undue interference from other individuals or entities (Assembly, 1948).",
"However, certain functions of privacy, such as the power to grant or deny access to one's personal information, are eroded by modern commercial and business practices that involve vast collection, linking, sharing, and processing of digital personal information through an opaque network, often without data subjects' knowledge or consent.",
"In many jurisdictions, online privacy is largely governed by Notice and Choice (Federal Trade Commission, 1998).",
"Under this framework, data-collecting and data-processing entities publish privacy policies that disclose their data practices.",
"Theoretically, users are free to make choices about which services and products they use based on the disclosures made in these policies.",
"Thus, the legitimacy of this framework hinges on users reading a large number of privacy policies to understand what data can be collected and how that data can be processed before making informed privacy decisions.",
"In practice, people seldom read privacy policies, as this would require prohibitive amounts of their time (McDonald and Cranor, 2008; Cate, 2010; Cranor, 2012; Reidenberg et al., 2015; Schaub et al., 2015; Jain et al., 2016).",
"Thus, an opportunity exists for language technologies to bridge this gap by processing privacy policies to meet the needs of Internet and mobile users.",
"NLP has made inroads in digesting large amounts of text in domains such as scientific publications and news (Jain et al., 2020; Cachola et al., 2020; Kang et al., 2018; Rush et al., 2015; See et al., 2017), with several practical tools based on these technologies helping users every day (Cachola et al., 2020; TLDR, 2021; News, 2021).",
"These domains have also received considerable research attention: several benchmark datasets and technologies are based in texts from these domains (Nallapati et al., 2016; See et al., 2017; Narayan et al., 2018; Beltagy et al., 2019).",
"We highlight that the privacy domain can also benefit from increased research attention from the community.",
"Moreover, technologies developed in the privacy domain have potential for significant and large-scale positive social impactthe affected population includes virtually every Internet or mobile user (Sadeh et al., 2013).",
"Automated processing of privacy policies opens the door to a number of scenarios where language technologies can be developed to support users in the context of different tasks.",
"This includes saving data subjects the trouble of having to read the entire text of policies when they are typically only concerned about one or a small number of issues (e.g., determining whether they can opt out of some practices or whether some of their data might be shared with third parties).",
"It includes helping companies ensure that they are compliant and that their privacy policies are consistent with what their code actually does.",
"It also includes supporting regulators, as they face the daunting task of enforcing compliance across an ever-growing collection of software products and processes, including sophisticated data collection and use practices.",
"In this work, we conduct an extensive survey of initial progress in applying NLP to address limitations of the Notice and Choice model.",
"We expect our work to serve as a useful starting point for practitioners to familiarize themselves with technological progress in this domain, by providing both an introduction to the basic privacy concerns and frameworks surrounding privacy policies, as well as an account of applications for which language technologies have been developed.",
"Finally, we highlight many remaining opportunities for NLP technologies to extract more precise, more nuanced, and ultimately more useful information from privacy policy text describing key challenges in this area and laying out a vision for the future.",
"In 1890, Warren and Brandeis defined the right to privacy as the right to be let alone(Warren and Brandeis, 1890).",
"More recently, Westin defined the right as the claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others (Westin, 1968).",
"A primary aspiration of privacy is to allow for the separation of individual and society as a means of fostering personal autonomy.",
"To that end, privacy protects the situated practices of boundary management through which the capacity for self-determination develops, and further shelters dynamic, emergent subjectivity from the efforts of commercial and government actors to render individuals and communities fixed, transparent, and predictable (Cohen, 2012).",
"Privacy, therefore, is foundational to the practice of informed and reflective citizenship, and serves as an indispensable structural feature of liberal democratic political systems (Cohen, 2012).",
"When privacy is threatened, we risk losing the chance for critical self-reflection of political processes and social norms.",
"Indeed, privacy undergirds the concepts of human dignity and other key values, such as the freedoms of association and speech.",
"For these reasons and others, privacy is regarded as a fundamental human right (Assem-bly, 1948).",
"In the digital age, privacy is threatened by aggressive, rapid, and largely automated collection, linking, sharing, and processing of digital personal information.",
"Digital privacy is intrinsically linked to the fundamental ethical principles of transparency , fairness and agency .",
"Transparency : Users have a right to know how information about them is collected and used.",
"Entities collecting user data stay clear of manipulative schemes designed to influence the data subject's willingness to disclose their data (e.g. overemphasizing benefits while remaining silent about potential risks associated with the disclosure of data in a given context).",
"Fairness : Users should receive perceived value commensurate to the perceived loss of privacy associated with disclosure and use of their data.",
"Agency : Users should have a choice about what data is collected about them and how it is used.",
"The dominant paradigm to address these principles in the United States and most legal jurisdictions around the world, is the 'Notice and Choice' regulatory framework (Westin, 1968; Federal Trade Commission, 1998).",
"'Notice and Choice' regimes are based on the presupposition that consumers will adequately manage their privacy, if provided sufficient information about how their data will be collected, used and managed, as well as offered meaningful choices.",
"Today, 'Notice' is often practically realized through publishing privacy policies , which are long and verbose documents that users are expected to read and understand.",
"Choice' is often limited to the user clicking I agree' to the privacy policy, or even interpreting their continued use of the service as some sort of meaningful consent to the terms of the policy.",
"The 'Notice and Choice' framework is fundamentally broken.",
"In practice, users seldom read privacy policies (McDonald and Cranor, 2008; Cate, 2010; US Federal Trade Commission et al., 2012) and it is prohibitively expensive for them to even do so.",
"McDonald and Cranor (2008) estimate that if internet users were to actually read the privacy policies of the websites they visited, they would have to spend roughly 250 hours each year just reading Challenge Example Ambiguity We may also use aggregate personal information for regulatory compliance, industry and market analysis, research, demographic profiling, marketing and advertising, and other business purposes.",
"privacy policies.",
"A 2014 report from the Presidents Council of Advisors on Science and Technology stated that only in some fantasy world were users reading and understanding privacy policies before giving their consent (of the President's Council of Advisors on Science and Technology, 2014).",
"Indeed, 91% of people in the U.S have reported feeling like they have lost control over their information (Madden et al., 2014).",
"Moreover, recent privacy laws such as the EU's General Data Protection Regulation (GDPR) (Regulation, 2016) still fail to address the critical limitation of notice and choice: the continued reliance on users to read and understand a large number of privacy policies.",
"Studies have shown that GDPR requirements have actually resulted in longer privacy policies (Linden et al., 2020), and users still encounter unreadable privacy policies (Becher and Benoliel, 2019).",
"The lack of respect for individuals' rights to privacy also has implications for society.",
"With social platforms in particular having access to an unprecedented scale of information about human behaviour, Vicario et al. (2019) discuss that users' polarization and confirmation bias can play a role in spreading misinformation on social platforms.",
"Madden et al. (2017) report that particular groups of less-privileged users on the internet are uniquely vulnerable to various forms of surveillance and privacy harms, which could widen existing economic gaps.",
"Introna (1997) describe privacy as central to human autonomy in social relationships.",
"In this work, we examine the potential of language technologies in enabling people to derive the benefits of their rights to transparency, fairness and agency.",
"Privacy policies present interesting challenges for NLP practitioners, as they often feature characteristic aspects of language that remain under-examined or difficult to process (Table. 1).",
"For example, while many policies discuss similar issues surrounding how user data is collected, managed and stored, policy silence about certain data practices may carry great weight from a legal, policy, and regulatory perspective.",
"1 In the privacy policy domain, understanding what has not been said in a privacy policy ( policy silence ) is just as important as understanding what is said (Zimmeck et al., 2019a; Marotta-Wurgler, 2019).",
"Further, though policies tend to feature literal language (compared to more subjective domains like literature or blog posts), processing them ef-1 For example, in United States v. Path , the defendant's (Path) privacy policy described that its app collected certain information such as your Internet Protocol (IP) address, your operating system, the browser",
"type. The Federal Trade Commission found this disclosure to be incomplete and insufficient to provide notice about the collection of users' contact data (FTC, 2013).",
"fectively also requires several additional capabilities such as reasoning over vagueness and ambiguity, understanding elements such as lists (in-cluding when they are intended to be exhaustive and when they are not (Bhatia et al., 2016)), effectively incorporating co-text'aspects of web document structure such as document headers that are meaningful semantically to the content of privacy policies(Mysore Gopinath et al., 2018) and incorporating domain knowledge (for example, understanding whether information is sensitive requires background knowledge in the form of applicable regulation).",
"Privacy policies also differ from several closely related domains, such as legal texts which are largely meant to be processed by domain experts.",
"In contrast, privacy policies are legal documents with legal effectsgenerally drafted by expertsthat are ostensibly meant to be understood by everyday users.",
"NLP applications in the privacy domain also need to be designed with end user requirements in mind.",
"For example, from a legal standpoint, when generating answers to a user's question about the content of a privacy policy, it is generally advisable to include disclaimers, but users may prefer to be presented with shorter answers, where disclaimers are kept as short as possible.",
"Challenges are described in more detail in (4).",
"We survey current efforts to apply NLP in the privacy domain, discussing both existing task formulations as well as future areas in this domain where language technologies can have impact.",
"2 2 Our survey includes relevant papers from major NLP venues, including ACL, EMNLP, NAACL, EACL, COLING, CoNLL, SemEval, TACL, and CL.",
"We supplemented these publications with a review of the literature at venues such as SOUPS, PETS, WWW, ACM, and NDSS.",
"We also included relevant legal venues, such as law reviews and journals.",
"Initial efforts in applying NLP in the privacy domain have largely focused on discovering or identifying data practice categories in privacy policies (Costante et al., 2012a; Ammar et al., 2012; Costante et al., 2012b; Liu et al., 2014b; Ramanath et al., 2014a; Wilson et al., 2016b).",
"Automating the identification of such data practices could potentially support users in navigating privacy policies more effectively 3 , as well as automate analysis for regulators who currently do not have techniques to assess a large number of privacy policies.",
"Wilson et al. (2016b) create a corpus of 115 website privacy policies annotated with detailed information of the privacy policies described.",
"The corpus and associated taxonomy have been of utility in the development of several subsequent privacy-enhancing language technologies (Mysore Sathyendra et al., 2017a; Zimmeck et al., 2017; Ravichander et al., 2019; Ahmad et al., 2020).",
"Studies have shown that consumers desire control over the use of their information for marketing communication, and object to the use of their information for web tracking or marketing purposes including targeted advertising (Cranor et al., 2000; Turow et al., 2009; Ur et al., 2012; Bleier and Eisenbeiss, 2015).",
"However, McDonald and Cranor (2010) find that many people are unaware of the opt-out choices available to them.",
"These choices are often buried in policy text, and thus there has been interest in applying NLP to extract choice language.",
"Mysore Sathyendra et al. (2017b) automatically identify choice instances within a privacy 3 For example, through the data exploration tool developed by the Usable Privacy Policy Project: https://explore.",
"usableprivacy.org/?view=machine Figure 1: The results from Opt-Out Easy, a browser extension to extract opt-out choices from privacy policies, for Overleaf.com (Bannihatti Kumar et al., 2020).",
"policy, labeling different types of opt-out choices, with a particular emphasis on extracting actionable choices in the policy, i.e. those associated with hy-perlinks.",
"Bannihatti Kumar et al. (2020) develop a web-browser extension to present extracted choice instances to users (Figure. 1), finding that the tool can considerably increase awareness of choices available to users and reduce the time taken to identify actions the users can take.",
"In 2012, six major mobile app stores entered into an agreement with the California Attorney General, where they agreed to adopt privacy principles that require mobile apps to have privacy poli-cies(Justice, 2012).",
"Regulations such as the the EU General Data Protection Directive (GDPR) and the California Consumer Protection Act (CCPA) impose further requirements on what entities collecting and using personal data need to disclose in their privacy policies and what rights they need to offer to their users (e.g. privacy controls, option to request deletion of one's data).",
"However, regulators lack the necessary resources to systematically check that these requirements are satisfied.",
"In fact, even app stores lack the resources to systematically check that disclosures made in privacy policies are consistent with the code of apps and comply with relevant regulatory requirements.",
"Thus, there has been interest in developing technologies to automatically identify potential compliance issues (Enck et al., 2014; Zimmeck et al., 2017; Wang et al., 2018; Libert, 2018a; Zimmeck et al., 2019b).",
"aid compliance analysis is detailed by Zimmeck et al. (2017), including results of a systematic analysis of 17,991 apps using both natural language processing and code analysis techniques.",
"Classi-fiers are trained to identify data practices based on the OPP-115 ontology (Wilson et al., 2016b), and static code analysis techniques are employed to extract app's privacy behaviors.",
"The results from the two procedures are compared to identify potential compliance issues.",
"The system was piloted with personnel at the California Office of the Attorney General.",
"Users reported that the system could sig-nificantly increase productivity, and decrease the effort and time required to analyze practices in apps and audit compliance.",
"Zimmeck et al. (2019b) review 1,035,853 apps from the Google Play Store for compliance issues.",
"Their system identifies disclosed privacy practices in policies using classi-fiers trained on the APP-350 corpus (Story et al., 2019), and static code analysis techniques to identify apps' privacy behaviors.",
"Results of the analysis of this large corpus of privacy policies revealed a particularly large number of potential compliance problems, with a subset of results shared with the Federal Trade Commission.",
"The system was also reported to have been used by a large electronics manufacturer to verify compliance of legacy mobile apps prior to the introduction of GDPR.",
"Due to the lengthy and verbose nature of privacy policies, it is appealing to attempt to develop automated text summarization techniques to generate short and concise summaries of a privacy policy's contents (Liu et al., 2015).",
"Tomuro et al. (2016) develop an extractive summarization system that identifies important sentences in a privacy policy along five categories: purpose, third parties, limited collection, limited use and data retention.",
"Zaeem et al. (2018, 2020) identify ten questions about privacy policies, and automatically categorize risk levels' associated with each of the questions, as shown in Table.",
"3.",
"Keymanesh et al. (2020) focus on extractive summarization approaches to identify risky sections' of the privacy policy, which are sentences that are likely to describe a privacy risk posed to the end-user.",
"However, while automated summarization seems like a promising application of language technologies, identifying which parts of a policy should be shown to users is exceedingly difficult, and studies by privacy experts have shown # Question Green Risk Level Yellow Risk Level Red Risk Level (1) How well does this website protect your email address?",
"that such one-size-fits-all' approaches are unlikely to be effective (Gluck et al., 2016; Rao et al., 2016).",
"A desire to move away from one-size-fits-all' approaches has led to increased interest in supporting automated privacy question-answering (QA) capabilities.",
"If realized, such functionality will help users selectively and iteratively explore issues that matter most to them.",
"Table 4 lists current efforts to develop resources for privacy question-answering.",
"Amongst the initial explorations in this area, Hark-ous et al. (2018) examine privacy questions asked by Twitter users to companies, with answers annotated by the paper's authors.",
"Ravichander et al. (2019) collect questions asked by crowdworkers about a mobile app without seeing the app's privacy policy, and hire legal experts to identify sentences in the privacy policy relevant for each question.",
"(Ahmad et al., 2020) provide skilled anno-tators' with privacy policy segments drawn from the OPP-115 corpus (Wilson et al., 2016b), and ask them to construct questions based on the provided span of text.",
"Ravichander et al. (2019) and Ahmad et al. (2020) both find that current QA baselines based on pretrained language models(Devlin et al., 2019) are inadequate for answering privacy questions.",
"Ahmad et al. (2020) indicate that identifying longer evidence spans are challenging and describe transfer learning as a potential direction to improve performance.",
"Ravichander et al. (2019) examine unanswerability as a challenge to privacy QA systems, highlighting the many facets of unanswerable questions that can be asked.",
"It is worth noting that all three resources formulate ground truth based in the text of the privacy policy, but policy language is difficult for non-experts to understand (Reiden-berg et al., 2015).",
"Future QA dataset architects could consider abstractive answers as ground truths, which are validated by legal experts for correctness and evaluated by users for helpfulness.",
"It may also be desirable for benchmarks to aim for ecological validity (de Vries et al., 2020), with users asking questions, and legal experts constructing answers.",
"In this section, we survey further tasks where NLP has been applied to consumer privacy, including analyzing privacy policy readability , with the goal of aiding writers of privacy policies (Fabian et al., 2017; Massey et al., 2013; Meiselwitz, 2013; Ermakova et al., 2015), and understanding data practice categories are described in a policy, known as measuring policy coverage (Lin-den et al., 2020; Shvartzshnaider et al., 2020).",
"A significant amount of recent work has also focused on information extraction from privacy policies (Costante et al., 2012a).",
"Shvartzshanider et al. (2018); Shvartzshnaider et al. (2019, 2020) identify contextual integrity parameters (Nissenbaum, 2004) in policy text.",
"Studies have also tried to extract other, more specific kinds of information from policies, such as third party entities (Libert, 2018b; Bokaie Hosseini et al., 2020) and information about regulated information types (Bhatia et al., 2016; Evans et al., 2017) as well as their similarity (Hosseini et al., 2016).",
"There have also been efforts to analyze vague statements in privacy policies (Liu et al., 2016b; Lebanoff and Liu, 2018), and explore how benchmarks in this domain can be constructed through crowdsourcing (Ramanath et al., 2014b; Wilson et al., 2016c; Audich et al., 2018).",
"Lastly, there has been research focused on identifying header information in privacy policies (Mysore Gopinath et al., 2018) and generating them (Gopinath et al., 2020).",
"Techniques to Dataset #Questions QuestionScenario Legal Expert Annotator Asker Cannot See Evidence UnanswerableQuestions Non-ContiguousAnswer Polisis (Harkous et al., 2018) 120 Twitter users ask questions to a company.",
"process privacy policies have largely followed successful approaches elsewhere in NLP, starting from feature-based approaches (Sathyendra et al., 2017; Zimmeck et al., 2019a), training domain-specific word embeddings (Kumar et al., 2019) and fine-tuning pretrained language models on privacy policies (Nejad et al., 2020; Mustapha et al., 2020).",
"We discuss a vision of future applications of NLP in aiding consumer privacy.",
"We believe these applications present interesting opportunities for the community to develop technologies, both because of the technical challenges they offer and the im-pact they are likely to have.",
"Detecting surprising statements: Since users do not read privacy policies, their expectations for the data practices of services might not align with services' actual practices.",
"These mismatches may result in unexpected privacy risks which lead to loss of user trust (Rao et al., 2016).",
"Identifying such surprising' statements will require understanding social context and domain knowledge of privacy information types.",
"For example, it is natural for a banking website to collect payment information, but not health information.",
"Moreover, understanding what statements will be surprising for each individual user requires understanding their personal, social and cultural backrounds (Rao et al., 2016).",
"We speculate that NLP can potentially be leveraged to increase transparency by identifying discordant statements within privacy policies.",
"Detecting missing information: In contrast to detecting surprising statements, privacy policies may be underspecified .",
"Story et al. (2018) find that many policies contain language appearing in unrelated privacy policies, indicating that policy writers may use privacy policy generators not suited to their application, potentially resulting in missing information.",
"Techniques from compliance analysis could help in flagging some of these issues (Zim-meck et al., 2017, 2019a).",
"Generating privacy nutrition labels: One proposal to overcome the gap in communicating privacy information to users has been the privacy nutrition label' approach (Kelley et al., 2009, 2013), as shown in Fig. 2.",
"The proposal draws from industries such as nutrition, warning and energy labeling where information has to be communicated to consumers in a standardized way.",
"Recently, Apple announced that developers will be required to provide information for these labels (Campbell, 2020), which disclose to the user the information a company and third parties collect.",
"4 This approach could potentially be helpful to users to understand privacy information at a glance, but presents challenges to both developers and app platforms.",
"Developers need to ensure their nutrition label is accurate and platforms need to enforce compliance to these requirements.",
"Potentially, early successes of language technologies in compliance systems can be extended to analyzing a specified nutrition label, policy and application code.",
"NLP may also be used to generate nutrition labels which developers inspect, as opposed to the more costly process of developers specifying nutrition labels from scratch which may hinder adoption (Fowler, 2021).",
"Personalized privacy summaries: One approach to mitigating inadequacies of policy summarizationwhere generic summaries may not be sufficiently complete is personalized summarization (Daz and Gervas, 2007; Hu et al., 4 An example of such a nutrition label can be found in Appendix. A 2012).",
"In this formulation, policies are summarized for each user based on issues that matter most to them.",
"This formulation may alleviate some down-sides of QA approaches, which require the user know how to manage their privacy by asking the right questions.",
"Personalized summarization systems would benefit from modeling users' level of knowledge, as well as their beliefs, desires and goals.",
"In NLP, there has been effort towards addressing similar challenges for personalized learning in intelligent tutoring (McLaren et al., 2006; Malpani et al., 2011).",
"Assistive Policy Writing: We speculate advances in natural language generation and compliance analysis techniques may jointly be leveraged to help app developers create more accurate privacy policies, rather than relying on policy generators (Story et al., 2018).",
"Privacy policies generally cover a known set of data practices (Wilson et al., 2016a), providing potential statistical commonalities to aid natural language generation.",
"Code analysis can be leveraged to constrain generation to accurately describe data practices of a service.",
"Although privacy policies have legal effects for most Internet users, these types of texts constitute an underserved domain in NLP.",
"NLP has the potential to play a role in easing user burden in understanding salient aspects of privacy policies, help regulators enforce compliance and help developers enhance the quality of privacy policies by reducing the effort required to construct them.",
"Yet, the privacy domain presents several challenges that require specialized resources to deal with effectively.",
"We describe some of these distinctive challenges, as well as the capabilities that will need to be developed to process policies satisfactorily.",
"Disagreeable privacy policies: Privacy policies are complex, but are the most important source of information about how user data is collected, managed and used.",
"Reidenberg et al. (2015) find that sometimes discrepancies can arise in the interpretation of policy language, even between experts.",
"This additional complexity should be taken into consideration by those developing language technologies in this domain.",
"Difficulty or validity of collecting annotations: Privacy policies are legal documents that have legal effects on how user data is collected and used.",
"in this domain (Wilson et al., 2016c), individual practitioners constructing applications must carefully consider the consequences of sourcing non-expert annotations in the context of their task and the impacted stakeholders, and not rely on crowdsourced annotation simply because it is cheaper or easier to scale.",
"Difficult for users to articulate their needs and questions: Developing effective privacy QA functionality will require understanding the kinds of questions users ask and quantifying to what extent privacy literacy affects users' ability to ask the right questions.",
"Ravichander et al. (2019) find many questions collected from crowdworkers were either incomprehensible, irrelevant or atypical.",
"Understanding these factors could lead to the development of more proactive QA functionalityfor example, rather than wait for users to form questions, the QA system could prompt users to reflect on certain privacy issues.",
"Challenges to QA : Additionally, privacy question-answering systems themselves will require several capabilities in order to have larger impact.",
"These systems must be capable of doing question-answering iteratively, working with the user towards resolving information-seeking needs.",
"They will also need to consider unan-swerability(Rajpurkar et al., 2018; Ravichander et al., 2019; Asai and Choi, 2020) as a graded problem, recognizing to what extent the privacy policy contains an answer and communicating both what is known and what is not known to the user.",
"QA systems must also consider what kinds of answers are useful, identifying appropriate response format and tailoring answers to the user's level of knowledge and individual preferences.",
"Domain Knowledge : It remains an open question how to best incorporate expert knowledge into the processing of privacy policies.",
"Although privacy policies are intended to be read by everyday users, experts and users often disagree on their interpretations (Reidenberg et al., 2015).",
"Combining Disparate Sources of Information : While privacy policies are the single most important source of information about collection and sharing practices surrounding user data, technologies to address users' personalized concerns could leverage additional sources of information-such as analyzing the code of a given technology such as a mobile app, news articles, or background knowledge of a legal, technical or statistical nature.",
"For example, when the policy is silent on an issuea QA system could report the practices of other similiar services to the user, or if a user asks about the likelihood of a data breach, the QA system could refer to news sources for information about the service.",
"User Modeling : Personalized privacy approaches will also need to model individual user's personal, social and cultural contexts to deliver impact.",
"This could include information about the issues likely to matter most to users, their background knowledge, privacy preferences and expectations (Liu et al., 2014a; Lin et al., 2014; Liu et al., 2016a).",
"Accessibility: Efforts to help users understand privacy policies by breaking through walls of text to identify salient aspects, are expected to help users with a range of visual impairments navigate their privacy.",
"Future work would conduct user studies to determine the extent to which developed technologies ease visually-impaired users' accessibility to learn about the content of policies, related to their interests or concerns.",
"While NLP has the potential to benefit consumer privacy, we emphasize there are also ethical considerations to be taken in account.",
"These include: Bias of agent providing technology: A factor that must be considered in the practical deployment of NLP systems in this domain is the incentives of the entity creating or providing the technology.",
"For example, the incentives of a company that develops a QA system to answer questions about its own privacy policy may not align with those of a trusted third-party privacy assistant that reviews the privacy policies of many different companies.",
"This information also needs to be communicated in an accurate and unbiased fashion to users.",
"User Trust: While NLP systems have the potential to digest policy text and present information to users, NLP systems are seldom completely accurate, and therefore it is important that users be appropriately informed of these limitations.",
"For example, if a QA system communicates a data practice incorrectly in response to a users' question and the user encounters privacy harms contrary to their expectations as a result, they may lose trust in the system.",
"Discriminatory Outcomes: It is possible that different populations will benefit to different extents from the developed technologies, and we are yet unable to anticipate precisely where the benefits will accrue.",
"For example, users with higher degrees of privacy literacy may be able to take better advantage of a developed QA system.",
"Technological Solutionism: It is important to consider that while language technologies have the potential to considerably alleviate user burden in reading privacy policies, they are unlikely to completely resolve the issue that users are unable to read and review a multitude of privacy policies everyday.",
"Advances toward addressing the limitations of notice and choice will also require progress in regulation and enforcement by regulatory bodies to ensure that enterprises are more accurate in their disclosures and use clearer language, in tandem with creative technological solutions.",
"Privacy is about the right of people to control the collection and use of their data.",
"Today privacy relies on the 'Notice and Choice' framework, which assumes that people actually read the text of privacy policies.",
"This is a fantasy as users do not have the time to do so.",
"In this article, we summarize how language technologies can help overcome this challenge and support the development of solutions that assist customers, technology providers and regulators.",
"We reviewed early successes and presented a vision of how NLP could further help in the future.",
"We hope this article will motivate NLP researchers to contribute to this vision and empower people to regain control over their privacy.",
"This research was supported in part by grants from the National Science Foundation Secure and Trustworthy Computing program (CNS-1330596, CNS-1330214, CNS-15-13957, CNS-1801316, CNS-1914486, CNS-1914444) and DARPA(FA8750-15-2-0277).",
"Part of the work summarized in this paper was conducted by the Usable Privacy Policy Project( https://usableprivacy.org ).",
"The authors would like to thank Siddhant Arora, Rex Chen and Aakanksha Naik for valuable discussion."
]
| [
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"other",
"other",
"other"
]
|
[
"Although much work in NLP has focused on measuring and mitigating stereotypical bias in semantic spaces, research addressing bias in computational argumentation is still in its infancy.",
"In this paper, we address this research gap and conduct a thorough investigation of bias in argumentative language models.",
"To this end, we introduce ABB A , a novel resource for bias measurement specifically tailored to argumentation.",
"We employ our resource to assess the effect of argumentative fine-tuning and debiasing on the intrinsic bias found in transformer-based language models using a lightweight adapter-based approach that is more sustainable and parameter-efficient than full fine-tuning.",
"Finally, we analyze the potential impact of language model debiasing on the performance in argument quality prediction, a downstream task of computational argumentation.",
"Our results show that we are able to successfully and sustainably remove bias in general and argumentative language models while preserving (and sometimes improving) model performance in downstream tasks.",
"We make all experimental code and data available at https://github.com/ umanlp/FairArgumentativeLM .",
"Recently, pre-trained language models (PLMs), e.g., BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), GPT-2 (Radford et al., 2019) and DialoGPT (Zhang et al., 2020) have been shown to encode and amplify a range of stereotypical biases, such as racism, and sexism (e.g., Kurita et al., 2019a; Dev et al., 2020; Nangia et al., 2020; Lauscher et al., 2021a, inter alia ).",
"While such types of biases provide the basis for interesting academic research, e.g., historical analyses (e.g., Garg et al., 2018; Tripodi et al., 2019; Walter et al., 2021, inter alia ), stereotyping constitutes a representational harm (Barocas et al., 2017; Blodgett et al., 2020), and can lead in many concrete socio-technical application scenarios to severe ethical issues by reinforcing societal biases (Hovy and Spruit, 2016; Shah et al., 2020; Mehrabi et al., 2021).",
"But while prior work has focused on how to evaluate and mitigate unfair biases for general-purpose LMs (e.g., Webster et al., 2020) and their applications to specific domains and genre like, for instance, conversational LMs (e.g., Barikeri et al., 2021), there has been little attention to the problem of bias in argumentative language .",
"This is despite previous work from Spliethver and Wachsmuth (2020) pointing out the high potential for harm, due to the high sensitivity of envisioned applications like self-determined opinion formation systems, as well as, crucially, showing that argumentative corpora like those from the online debate portal debate.org (Durmus and Cardie, 2019) do encode unfair biases, which are likely to be captured by argumentative LMs.",
"This is particularly problematic as research in computational argumentation regularly makes use of such corpora for injecting knowledge about argumentative language into PLMs (e.g., Alshomary et al., 2021).",
"Still, to date, there is neither an evaluation resource specifically tailored to argumentative language, nor knowledge on debiasing argumentative LMs or on the effects of debiasing on argumentative downstream tasks.",
"Contributions.",
"We address this research gap with the following contributions: we present AB B A , the first human-annotated resource specifically targeted at English argumentative language, which is annotated for two kinds of social bias that are still under-explored in NLP, namely Queerphobia and Islamophobia .",
"Next, we use AB B A to answer the following four research questions ( RQs ): (RQ1) How does argumentative fine-tuning affect measurable biases in PLMs?",
"typical biases in the LMs, highlighting the importance of bias measurement after injecting argumentative knowledge (4.1).",
"Lauscher et al. (2021a) recently introduced debiasing adapters , a modular and sustainable way of encoding debiasing knowledge in LMs.",
"We con-firm the effectiveness of debiasing adapters with Counterfactual Data Augmentation (Zhao et al., 2018) on two diverse corpora (4.2).",
"(RQ3)",
"Can we obtain an (efficient and robust) fair and argumentative language model given our preexisting set of adapters?",
"We show for the first time how to stack debiasing adapters with argumentation adapters to produce an argumentative and fair language model .",
"Our results indicate that stacking order matters (4.3).",
"(RQ4)",
"What are the effects on argumentative downstream tasks, e.g., argument quality prediction?",
"In a final downstream evaluation encompassing two different datasets for argument quality prediction, we demonstrate that debiasing can have a positive impact on model performance.",
"On one of the corpora, our best results are obtained when combining argumentation and debiasing adapters, hinting at the effectiveness of fair and argumentative language modeling (4.4).",
"We create AB B A , the first annotated corpus of bias in argumentative text following the methodology from Barikeri et al. (2021): (1) specification of the social biases of interest, (2) retrieval of candidates of biased statements, and (3) manual annotation.",
"Bias Specifications.",
"We define the social biases we are interested in using the established notion of explicit bias specifications (Caliskan et al., 2017; Lauscher et al., 2020a).",
"It consists of two sets of target terms ( T 1 and T 2 ) denoting two demographic groups that exhibit different stereotypical perceptions w.r.t. two opposing sets of attribute terms ( A 1 and A 2 ) .",
"Concretely, T 1 consists of target terms referring to a minoritized group (e.g., Muslim ), while T 2 consists of target terms corresponding to a dominant group (e.g., Christian ), i.e., a group in power (D'Ignazio and Klein, 2020).",
"We focus on the bias dimensions Queerphobia and Islamophobia since they have received little attention in NLP research on bias when compared to sexism or other ethnic bias.",
"We view Queerness as an umbrella term for the minority group of the LGBTQI+ community, which includes people of all sexual orientations and gender identities except for heterosexual and cisgender.",
"We compare this to the dominant group of heterosexual cisgender people.",
"The target and attribute terms used for candidate identification are based on the specifications of Barikeri et al. (2021).",
"They include a wide range of attribute terms from the sociological literature and manually compiled target terms.",
"The attribute terms were assembled such that each stereotypical attribute term a 1 forms a loose antonym of an counter-stereotypical attribute term a 2 with a positive or negative sentiment.",
"An exemplary partial term list of the bias specifications can be found in Table 1 and the full set in the Appendix.",
"Candidate Retrieval.",
"We use the dataset from debate.org originally collected by Durmus and Cardie (2019), one of most widely used resources in research on computational argumentation.",
"For retrieving candidates, we compute the Cartesian product of the terms of the minoritized group T 1 with all stereotypical terms of A 1 , giving us a set of stereotyped tuples from T 1 A 1 (e.g., gay and sinful ).",
"Using this set, we extract all sentences and their corresponding arguments that contain both terms from the tuples in a window of size 20 (set during corpus construction to improve the quality of the retrieved passages).",
"We further reduced the compiled comments to those with a maximum number of 500 tokens to allow for a better visualization and to ensure that the annotators attentively read the entire argument.",
"In total, we retrieve 889 candidate sentences from 614 different arguments for Queerphobia and 1,879 candidate sentences from 1,101 different arguments for Islamophobia .",
"Annotating bias.",
"We manually label the candidate sentence and the corresponding argument according to whether a stereotypical bias is present or not.",
"To this end, we hired four annotators, who are all non-native speakers but have excellent English proficiency with academic backgrounds and who hold at least a Bachelor's degree, in slightly different majors (engineering, data science, infor-7842 Dimension Target Term Sets Attribute Term Sets Islamophobia T 1 muslim(s) , islam , quran , koran , ...",
"mation systems, and computer science).",
"They are of diverse gender and cultural background.",
"Annotators were provided with the guidelines found in the Appendix.",
"We initially conducted a pilot study on 90 randomly drawn arguments to iteratively calibrate annotations and refine the guidelines on the basis of the annotators' feedback.",
"Finally, we split the corpus evenly into four independent, equally-sized portions and added further 50 randomly drawn overlapping arguments to analyze annotation quality.",
"In the last step, we merged the annotations on the calibration set using majority voting.",
"The number of annotated and biased instances in the corpus is shown in Table 2. We show examples of biased sentences in Table 3. Analysis of the Annotations.",
"On the overlapping set consisting of 50 arguments, we obtain an inter-annotator agreement (IAA) for Queerphobia on the sentence-level for both Fleiss' (Fleiss, 1971) and Krippendorff's (Krippendorff, 2013) of 0 .",
"65 .",
"The agreement on the argument-level is slightly weaker with 0 .",
"61 for both measures.",
"For the Islamophobia dimension, we observe a stronger agreement of 0 .",
"66 on sentence-level and = 0 .",
"72 and = 0 .",
"73 on the argument-level.",
"Although we are dealing with a rather subjective annotation task, IAA indicates a substantial agreement among the annotators (Viera and Garrett, 2005), suggesting that they are able to reliably identify stereotypes in argumentative sentences and longer text.",
"To determine reasons for disagreement among annotators, we manually conducted a qualitative analysis on the annotated arguments.",
"For Queerphobia , we found that annotators mostly disagreed on statements that referred to the homosexual lifestyle, rather than homosexual people.",
"The following example illustrates one such case: [...] Basically, a gay person is not allowed to engage in sexual acts with an-other man because there is a 0% chance of offspring being produced.",
"This falls into the same category of not using contraceptives, getting abortions, etc.",
"It is not a sin for a gay person to acknowledge their sexuality, or to act in a gay' manner.",
"It is only a sin if he/she gives in to their urges.",
"[...] Here, the annotators disagreed in the annotation of the entire argument.",
"Although the debater clearly states that actually being gay is not a sin, in his opinion, living a homosexual lifestyle is a sin.",
"It appears that for some annotators being homosexual is equivalent to living in a homosexual relationship, while others clearly distinguished these two aspects.",
"For Islamophobia , the disagreements mostly related to arguments that make a distinction between Muslims and the religion Islam, e.g.: [...] I have no issue with Islam, or any religion in general, if you leave me alone I leave you alone, you wondered why so many people hate Islam, its because of the same [...] in your last paragraph, y'all act as if terrorism is 100% okay.",
"That needs to change before Muslims can consider Islam anywhere close to a great religion.",
"Here, the fact that the debater is making an ambiguous statement, expressing no prejudice against Islam but against Muslims caused confusion among the annotators resulting in disagreement.",
"modeling along our two bias dimensions of interest.",
"Instead of full model fine-tuning, we opt for a more sustainable strategy by relying on adapters (Houlsby et al., 2019) to reduce computation time and energy consumption.",
"In addition, the modularity of adapters enables their reuse in further settings and in combination with other pre-trained adapters.",
"Argumentation Adapter.",
"Following Alshomary et al. (2021), we tune general pre-trained models on a large set of arguments to obtain an argumentative language model.",
"In contrast to the original work, we rely on language adapters.",
"Concretely, we adopt the architecture proposed by Pfeiffer et al. (2020), which inserts a single adapter, a two-layer feed-forward network, into each transformer layer.",
"The output of the adapter is computed as A argument ( h , r ) = U ( ReLU ( D ( h ))) + r , with the two matrices D R h d and U R d h as the adapter's down-projection and up-projection, respectively, h as the transformer's hidden state, and r as the residual.",
"In addition, we inject invertible adapters, which are stacked on top of the embedding layer and the inverses of the invertible adapters are placed in front of the output layer.",
"They perform a similar function to the language adapters, but aim to capture token-level specific transformations (Pfeiffer et al., 2020).",
"Both the language adapters and the invertible adapters are trained on a language modeling task using a causal language modeling loss for auto-regressive models and a masked language modeling loss for auto-encoding models, respectively.",
"Debiasing Adapter.",
"For debiasing, we inject debiasing adapters (Lauscher et al., 2021a) into the models, using the same adapter architecture as before.",
"Following the original work, we use Counterfactual Data Augmentation (Zhao et al., 2018, CDA) and train the adapter parameters on the augmented corpus to break stereotypical associations in the model.",
"To this end, we manually compile pairs of opposing target terms ( t i , t j ) T 1 T 2 , such that t i forms the most suitable antonym of t j in the sense of minority and dominant group (e.g., muslim and christian ) and can be substituted grammatically interchangeably.",
"While this is arguably straightforward with the Islamophobia bias specifications, the target terms of the Queerness dimension are more complex to juxtapose.",
"Therefore, we clustered them into three groups of 'sexual identity' (e.g., {gay, straight}), 'gender identity' (e.g., {transgender, cisgender}) and 'biological sex' (e.g., {androgyne, unisexual}) so as to find the best matching pairs of antonyms (cf. the list in the Appendix).",
"We then replace all occurring target terms from $T_1$ or $T_2$ with their opposite term from the set of $N$ tuples $P = \{(t_i, t_j)\}$ (we randomly select a term from the list if multiple substitutions are possible).",
"We opt for a two-sided application of CDA, keeping both the counterfactual and the original sentences in the training set to avoid over-correction (Webster et al., 2020).",
"We append each counterfactual sentence immediately after its original counterpart and train in two settings, namely using:",
"a) only biased and counterfactual sentences;",
"b) all sentences, i.e., also including neutral ones.",
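A minimal sketch of the two-sided CDA procedure described above, under simplifying assumptions: the term pairs shown are a hypothetical excerpt of the manually compiled list, and the token-level substitution ignores morphology, plural forms, and casing, which a real implementation would need to handle.

```python
import random
import re

# Hypothetical excerpt of the opposing-term pairs (t_i, t_j);
# the full list is compiled manually (cf. the Appendix).
PAIRS = [("muslim", "christian"), ("gay", "straight"),
         ("transgender", "cisgender")]

SWAP = {}
for a, b in PAIRS:
    SWAP.setdefault(a, []).append(b)
    SWAP.setdefault(b, []).append(a)

TERM_RE = re.compile(r"\b(" + "|".join(map(re.escape, SWAP)) + r")\b",
                     flags=re.IGNORECASE)

def counterfactual(sentence: str) -> str:
    """Replace each target term with a randomly chosen opposite term."""
    return TERM_RE.sub(lambda m: random.choice(SWAP[m.group(0).lower()]),
                       sentence)

def two_sided_cda(corpus):
    """Keep the original sentence and append its counterfactual
    immediately after it (two-sided CDA, to avoid over-correction)."""
    augmented = []
    for sentence in corpus:
        augmented.append(sentence)
        cf = counterfactual(sentence)
        if cf != sentence:            # only sentences containing target terms
            augmented.append(cf)
    return augmented

print(two_sided_cda(["The author is a muslim."]))
```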
"Combining Adapters.",
"We investigate three different architectures: first, in Section 4.3, we study two architectures using AdapterStacking (Pfeiffer et al., 2020), i.e., by stacking the argumentation adapter on top of a debiasing adapter and vice versa (Figure 1).",
"Second, in Section 4.4, we compare the best architectures from Section 4.3 with AdapterFusion (Pfeiffer et al., 2020), which requires training additional network layers for interpolating the adapters' outputs.",
"We next describe the experiments to answer the research questions RQ1 through RQ4 (Section 1) that underpin our investigation.",
"We measure bias by computing the LMB score, reflecting how much more likely the model is to generate a stereotypically biased argument compared to an inversely biased one.",
"We start with our set of opposing target terms $P \subseteq T_1 \times T_2$ and extract the set of all statements $S$ from ABBA (containing instances of term $t_i$ such that $(t_i, t_j) \in P$) which have been labelled as stereotypically biased.",
"This results in 279 biased instances for Queerphobia and 465 instances for Islamophobia , respectively.",
"We then create, for each instance $s_{(t_i, a)} \in S$ (e.g., All Muslims are terrorists), a corresponding inversely biased sentence $s'_{(t_j, a)}$ (e.g., All Christians are terrorists) to give us a set $S'$ of counter-stereotypical statements.",
"In case of multiple pairs for a target term (e.g., {homosexual, heterosexual} and {homosexual, straight} ), we create one counter-stereotypically biased sentence for each possible combination.",
"We then compute the model's perplexity for all statements in the two paired sets $S$ and $S'$ with stereotypical and counter-stereotypical statements.",
"Following Barikeri et al. (2021), we compute the mean perplexity for multiple counterfactual instances created from a single biased instance and remove outliers to avoid distorted significance results (Pollet and van der Meij, 2017).",
"The final LMB score corresponds to the t-value obtained by subjecting the paired perplexities to Student's t-test ($\alpha = 0.05$).",
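Since the LMB score reduces to a paired significance test over perplexities, a small sketch may help; the helper below assumes the perplexities have already been computed, and the outlier-removal rule is a simplified stand-in for the procedure of Pollet and van der Meij (2017).

```python
import numpy as np
from scipy import stats

def lmb_score(ppl_stereo, ppl_counter, z_cut=3.0):
    """Paired Student's t-test (alpha = 0.05) between perplexities on
    stereotypical (S) and counter-stereotypical (S') statements.
    A negative t-value suggests a stereotypical bias (lower perplexity
    on the stereotypical statements)."""
    s = np.asarray(ppl_stereo, dtype=float)
    c = np.asarray(ppl_counter, dtype=float)   # mean over counterfactuals
    diff = s - c
    keep = np.abs(stats.zscore(diff)) < z_cut  # crude outlier removal
    return stats.ttest_rel(s[keep], c[keep])

t, p = lmb_score([120.0, 95.5, 210.3, 88.1], [150.2, 101.0, 190.8, 97.4])
print(f"LMB t = {t:.2f} (significant bias if p < 0.05, here p = {p:.3f})")
```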
"(i) We use the Args.me corpus, which consists of over 380k arguments from over 59k debates.",
"(ii) Considering that it contains mostly arguments retrieved from Debate.org (87%), we verify our results using a second corpus: Webis-ChangeMyView-20 (CMV; Al Khatib et al., 2020), which contains over 3.6 million arguments extracted from the ChangeMyView subreddit.",
"To ensure comparability, we cut each corpus to 300k arguments and perform a train-validation split of 80:20.",
"Models.",
"We experiment with four LMs from Huggingface Transformers (Wolf et al., 2020): BERT (bert-base-uncased), GPT-2 (gpt2), DialoGPT (microsoft/DialoGPT-medium) and RoBERTa (roberta-base).",
"With the exception of DialoGPT, which contains 24 layers with a hidden size of 1,024, all models consist of 12 layers with a hidden size of 768.",
"Adapter Training and Optimization.",
"We train the argumentative adapters separately on Args.me and CMV for each of the models.",
"Concretely, we train for 10 epochs using the Adam optimizer (Kingma and Ba, 2015) (weight decay $= 0.01$, $\beta_1 = 0.9$, $\beta_2 = 0.999$, $\epsilon = 1 \times 10^{-6}$, learning rate $= 1 \times 10^{-4}$) and early stopping based on the perplexity on the validation set (patience: 2 epochs).",
"We set the effective batch size to 32 except for training DialoGPT, for which we employ an effective training batch size of 8 for reasons of computational capacity.",
"The adapter reduction factor is 16 .",
"Results.",
"The LMB scores on ABBA before and after fine-tuning the four PLMs are shown in Figure 2. A negative t-value suggests a stereotypical bias; a positive t-value denotes a counter-stereotypical LMB.",
"Before fine-tuning, GPT-2 is the only model that exhibits a significant stereotypical bias along the Queerphobia dimension.",
"We show an example sentence pair exhibiting a high difference in model perplexity in Table 4 and provide more examples in the Appendix.",
"For BERT, no significant difference was found between the perplexities on stereotypical and counter-stereotypical sentences along Queerphobia, whereas RoBERTa and DialoGPT even show a significant counter-stereotypical bias.",
"All PLMs except RoBERTa exhibit a stereotypical bias for the Islamophobia bias, with a significant effect size for DialoGPT and BERT.",
"The findings for DialoGPT are consistent with the results of Barikeri et al. (2021) for conversational text.",
"When adapter-fine-tuning the PLMs on argumentative texts (CMV, Args.me), we notice that the perplexities on ABBA decreased, indicating that we successfully managed to inject argumentative knowledge into the models.",
"However, we also observe that while for RoBERTa no significant changes in t-values occur for either bias dimension, the stereotypical bias effects of DialoGPT and GPT-2 along the Islamophobia bias dimension are reinforced by argumentative fine-tuning.",
"Most interesting is the effect on DialoGPT along Queerphobia .",
"While the original model exhibited a significant counter-stereotypical bias, fine-tuning results in an opposite bias effect for both CMV and Args.me.",
"Given that the stereotypical bias along the Islamophobia dimension is also reinforced by fine-tuning DialoGPT, this underscores the tendency of the model",
"to pick up and amplify stereotypical biases.",
"All in all, these findings highlight the importance of carefully measuring bias after injecting argumentative knowledge into the models.",
"Debiasing Data.",
"We perform our two CDA strategies from Section 3 on two corpora:",
"(i) the English Wikipedia (20200501.en dump) representing general-purpose encyclopedic text.",
"We randomly subsample the corpus, originally consisting of 6,078,422 text blocks, to 500,000 text blocks.",
"(ii) We additionally experiment with the Args.me corpus, which also serves as the source for argumentative text.",
"On both corpora, we perform a train-validation split of 80:20.",
"The resulting train and test set sizes for both bias types Queerphobia and Islamophobia are listed in Table 5.",
"Models.",
"We focus on two PLMs that exhibited bias along one of the dimensions in the previous experiments and which represent different types of PLMs: BERT as a representative of models trained via masked language modeling and GPT-2 as a model trained via causal language modeling.",
"Adapter Training and Optimization.",
"We train the adapters for 10 epochs on the CDA-augmented data sets which include the neutral sentences, and for 1 epoch on the data sets that exclude the neutral sentences.",
"The rest of the training procedure and all other hyperparameters are the same as for training the argumentative adapters.",
"Results.",
"We report bias effect sizes using LMB in Figure 3. The results indicate that, where the original PLMs exhibited significant bias along a dimension, using debiasing adapters we are able to successfully reduce the measurable bias from a significant to a non-significant amount; the only exception are the adapters for GPT-2 trained on the CDA-augmented Wikipedia.",
"When we exclude neutral sentences, the scores switch into the counter-stereotypical direction; we hypothesize that this indicates the need for better balancing and sampling of the training data.",
"We see a similar effect for cases in which the original PLM did not exhibit a significant bias: the LMB is likely to switch to the opposite, counter-stereotypical direction.",
"Taking advantage of the modular nature of adapters, we combine argumentation and debiasing adapters (Sections 4.1 and 4.2) to obtain a fair and argumentative language model using AdapterStacking (Section 3).",
"We focus on the bias dimensions for which the original models exhibited a stereotypical effect size.",
"Results.",
"Figure 4 shows the LMB scores of BERT along Islamophobia and GPT-2 along Queerphobia for different stacking orders of the argumentation adapter trained on CMV and the respective debiasing adapters trained on Wikipedia or Args.me (results for the other dimensions and other argumentation adapters are found in the Appendix).",
"For BERT, stacking the debiasing adapters for Islamophobia second and the argumentation adapter trained on CMV first (left) reduces the bias to a non-significant amount only in a single case, while stacking the debiasing adapter first (right) removes the bias in three out of four setups.",
"Also for GPT-2, stacking the debiasing adapter first leads to better debiasing results.",
"We hypothesize that the reason for this effect is that both types of adapters are optimized for receiving the input directly from the transformer layers.",
"Thus, the debiasing adapter is more effective when stacked first.",
"In sum, while our results indicate that stacking order matters and debiasing effects are larger when debiasing adapters are stacked first, we think that this finding warrants future research on the issue.",
"Data and Measures.",
"For testing the influence of our argumentation and debiasing adapters on argument quality prediction, we employ two recently presented data sets: (1) the IBM-Rank-30k (Gretz et al., 2020), an extension of Toledo et al. (2019), which consists of short-length arguments (maximum length of 210 characters) annotated by crowd workers.",
"Table 6: Number of arguments in training, validation, and test portions of IBM-Rank-30k and GAQCorpus.",
"| Dataset | Domain | # Train | # Validation | # Test |",
"| --- | --- | --- | --- | --- |",
"| IBM-Rank-30k | - | 20,974 | 3,208 | 6,315 |",
"| GAQCorpus | CQA | 1,109 | 476 | 500 |",
"| GAQCorpus | Debates | 1,093 | 469 | 538 |",
"| GAQCorpus | Reviews | 700 | 400 | 100 |",
"We use the MACE-P aggregations provided by the authors for model training.",
"(2) Additionally, we use the GAQCorpus (Ng et al., 2020; Lauscher et al., 2020b) which covers real-world arguments from three domains, namely community questions and answers (CQA), online debate forums (Debates), and restaurant reviews (Reviews).",
"An overview of the data sets is given in Table 6. On both data sets, we report Pearson's correlation coefficient ($r$).",
"Following Reimers and Gurevych (2017), we report the average of our experiments conducted 50 times with different random seeds (using the best hyperparameter configuration according to the development set results) and additionally conduct an independent t-test.",
"Models.",
"For all AQ models, we rely on a simple linear regression head into which we input the pooled sequence representation.",
"The fine-tuning strategy for the AQ regression is aligned with our previous approaches.",
"Instead of full fine-tuning of the encoder, we add an additional task-specific adapter on top of the already existing adapters and adjust only the task-specific adapter parameters during training.",
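A minimal sketch of such a regression head, assuming a pooled representation of size 768; this is an illustration of the described setup, not the authors' code.

```python
import torch
import torch.nn as nn

class AQRegressionHead(nn.Module):
    """Linear regression head over the pooled sequence representation,
    producing one argument-quality score per input."""

    def __init__(self, hidden_size: int = 768):
        super().__init__()
        self.linear = nn.Linear(hidden_size, 1)

    def forward(self, pooled: torch.Tensor) -> torch.Tensor:
        return self.linear(pooled).squeeze(-1)

head = AQRegressionHead()
scores = head(torch.randn(32, 768))   # one score per argument in the batch
```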
"As before, we employ the BERT and GPT-2 base models ( Base ) as well as the adapter-augmented variants.",
"Concretely, we employ the argumentation adapters trained on Args.me and CMV ( Argsme , CMV ), and the debiasing adapters trained on the CDA-augmented Args.me ( DB-Islamo for BERT, DB-Queer for GPT-2).",
"Again, we also study combinations to optimally combine argumentation, debiasing, and task-specific knowledge using either a stacking ( Stacked ) or fusion architecture ( Fusion ).",
"On IBM-Rank-30k, we follow Gretz et al. (2020) and concatenate topic and argument with an additional separator (BERT) or end-of-sequence token (GPT-2).",
"As baselines, we additionally compare with the best results reported by the original works.",
"We optimize our models using the Mean Squared Error loss.",
"We train all task adapters using Adam (Kingma and Ba, 2015) with a batch size of 32 (weight decay $= 0$, $\beta_1 = 0.9$ and $\beta_2 = 0.999$).",
"We pad the input sequences to a maximum length of 128.",
"We choose the best hyperparameters by grid searching over the learning rate $\{1 \times 10^{-4}, 2 \times 10^{-4}, 3 \times 10^{-4}\}$ and the number of training epochs $\{1, 2, 3, 4, 5\}$ based on the performance on each dataset's respective validation portion.",
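The grid search itself is straightforward; a sketch follows, where `train_and_eval` is a hypothetical helper standing in for the actual adapter-training loop and validation scoring.

```python
from itertools import product

LEARNING_RATES = [1e-4, 2e-4, 3e-4]
EPOCHS = [1, 2, 3, 4, 5]

def grid_search(train_and_eval):
    """Return the configuration with the best validation performance.
    train_and_eval(lr=..., epochs=...) -> validation Pearson's r."""
    best = None
    for lr, n_epochs in product(LEARNING_RATES, EPOCHS):
        score = train_and_eval(lr=lr, epochs=n_epochs)
        if best is None or score > best[0]:
            best = (score, lr, n_epochs)
    return best
```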
"Results.",
"The results are shown in Table 7. Generally, though the trends are the same, the scores diverge from the results reported in the original works, which can be attributed to our use of task adapters.",
"Interestingly, while injecting argumentation adapters leads to performance improvements on IBM-ArgQ-Rank-30kArgs in 3 out of 4 cases, it seems to hurt the performance on GAQCorpus.",
"On the other hand, the debiasing adapters do not seem to lead to losses: in contrast, in some cases (IBM and GAQDebates for BERT, GAQDebates for GPT-2), we even note performance improvements.",
"For GAQCorpus, the best results are obtained with an argumentative and fair language model when fusing debiasing and argumentation adapters.",
"We conclude that fair and argumentative language modeling can have a positive impact on argument quality prediction as downstream task .",
"(2020), and Shah et al. (2020).",
"Bolukbasi et al. (2016) were the first to draw attention to the issue of unfair stereotypical bias in NLP, showing that static word embeddings allow for building biased analogies.",
"Later, Caliskan et al. (2017) proposed the well-known Word Embedding Association Test (WEAT), which was extended to more languages by Lauscher and Glavaš (2019) and Lauscher et al. (2020c).",
"More works focused on bias evaluation and mitigation in static word embeddings (Gonen and Goldberg, 2019; Dev and Phillips, 2019; Manzini et al., 2019; Lauscher et al., 2020a), and later the focus shifted towards detecting and attenuating biases in their successors, contextualized word embeddings (Dev and Phillips, 2019; Dev et al., 2020; Tan and Celis, 2019).",
"Here, the authors focused both on bias in general-purpose pretrained language models (May et al., 2019; Kurita et al., 2019b; Zhao et al., 2019; Webster et al., 2020) and on bias in particular downstream scenarios (Dev et al., 2020).",
"For instance, Zhao et al. (2018) proposed Counterfactual Data Augmentation (CDA) for the purpose of debiasing coreference resolution systems.",
"Like many other works (Zmigrod et al., 2019; Lu et al., 2020; Webster et al., 2020; Lauscher et al., 2021a) we explore the method for our purposes.",
"Similarly, Vanmassenhove et al. (2018) focused on machine translation and Sheng et al. (2019) on general natural language generation, while Barikeri et al. (2021) specifically target conversational models.",
"In this work, we follow their process for creating ABBA.",
"Bias in Argumentation.",
"It is extremely surprising that, given the plethora of works focused on mining, assessing, and generating arguments as well as reasoning over arguments (Lauscher et al., 2021b), to date Spliethöver and Wachsmuth (2020) were the only ones to investigate and quantify social bias in argumentation.",
"They performed a simple co-occurrence analysis for three different argumentation corpora and trained a custom GloVe model (Pennington et al., 2014) based on argumentative text, which they analyzed with WEAT.",
"Our work builds on top of theirs and is the first to examine bias in relation to an argumentative downstream task and also the first to conduct debiasing for computational argumentation models.",
"In this work, we presented an investigation of bias in PLMs and argumentative text.",
"To this end, we created ABBA, the first annotated corpus tailored for measuring bias in computational argumentation models.",
"Using ABBA, we showed that argumentative fine-tuning of language models may lead to an amplification of biases in the models.",
"We then demonstrated how to obtain a fair and argumentative language model by combining argumentation with debiasing knowledge encapsulated in lightweight adapters to ensure higher sustainability and flexibility, and analyzed the effect of stacking orders.",
"An additional downstream evaluation on argument quality prediction indicated that debiasing can even lead in some cases to improved results.",
"We hope that with this work, and especially the novel ABBA resource, we will foster further research on fair computational argumentation.",
"The work of Anne Lauscher is funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No. 949944, INTEGRATOR).",
"We thank the anonymous reviewers for their insightful comments.",
"We would like to point the reader to the following limitations and ethical considerations: first, following the large body of debiasing research in NLP, we based our evaluation, mitigation, and annotation approach on a fixed set of manually created terms.",
"We are aware that such a set can never be exhaustive and may need to be continually revised in subsequent studies.",
"For a recent discussion we refer to Antoniak and Mimno (2021).",
"This is especially the case for the dimension of Queerphobia , where there is increasing openness and understanding toward more diverse forms of sexual orientation and (gender) identity.",
"For instance, our vocabulary does not include the variety of gender-neutral (neo)pronouns (Dev et al., 2021; Lauscher et al., 2022).",
"Further, studies have shown that the perception of prejudice is not only highly subjective, but also largely culture-dependent (Webster et al., 2020).",
"Consequently, in order to conduct a thoroughly unbiased annotation study, annotators should be carefully selected and as diverse as possible in terms of cultural heritage, age, ethnicity, and religious affiliation, as well as their gender identity and sexual orientation.",
"While our three annotators were of diverse cultural backgrounds, such diversity of human resources was not available for this work."
]
| [
"abstain",
"method",
"objective",
"result",
"method",
"result",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"result",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"result",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"result",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"method",
"other",
"other",
"other",
"objective",
"method",
"objective",
"result",
"objective",
"abstain",
"objective",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
]
|
[
"Language models have revolutionized the field of NLP.",
"However, language models capture and proliferate hurtful stereotypes, especially in text generation.",
"Our results show that 4.3% of the time, language models complete a sentence with a hurtful word.",
"These cases are not random, but follow language and gender-specific patterns.",
"We propose a score to measure hurtful sentence completions in language models (HONEST).",
"It uses a systematic template- and lexicon-based bias evaluation methodology for six languages.",
"Our findings suggest that these models replicate and amplify deep-seated societal stereotypes about gender roles.",
"Sentence completions refer to sexual promiscuity 9% of the time when the target is female, and to homosexuality 4% of the time when the target is male.",
"The results raise questions about the use of these models in production settings.",
"Natural Language Processing powers many applications we use (or are subjected to) every day,",
"e.g., internet search engines, virtual assistants, or recruiting tools.",
"Increasingly, these applications include text generation.",
"Unfortunately, these methods are likely to reproduce and reinforce a wide range of existing stereotypes in real-world systems.",
"It is therefore important to quantify and understand these biases, both to avoid the psychological burden on different vulnerable groups and to advocate for equal treatment and opportunities.",
"Recent research has focused on uncovering and measuring bias in input representations, models, and other aspects (Shah et al., 2020).",
"For example, Bolukbasi et al. (2016); Caliskan et al. (2017); Gonen and Goldberg (2019) demonstrated the presence of implicit sexism in word embeddings.",
"Zhao et al. (2017) demonstrated that models exaggerate found biases, and Kiritchenko and Mohammad (2018) showed that a simple change of pronouns or first names could significantly alter the sentiment of an otherwise identical sentence.",
"(Note: this paper contains explicit statements of hurtful and offensive language in various languages, which may be upsetting to readers.)",
"Recently, contextualized language models, led by Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2019) and GPT-2 (Radford et al., 2019), have become the standard in NLP leaderboards.",
"Several studies (Kurita et al., 2019; May et al., 2019; Zhao et al., 2019; Sheng et al., 2019; Nangia et al., 2020) have analyzed their implicit biases related to word use and associations based on word similarity.",
"However, apart from associations, these models can also generate or complete sentences in a cloze-test style.",
"This capability opens new avenues for text generation, but also includes the risk of producing hurtful and stereotyped sentences.",
"We are the first to investigate the generation of explicitly hurtful stereotypes in language models for English and five gender-inflected languages (Italian, French, Portuguese, Romanian, and Spanish). (In this paper, we use the general term language models to refer to BERT and GPT-2.)",
"Gender-inflected languages associate a grammatical gender case with verbs, nouns, and adjectives.",
"In English, \"X is known for ___\" describes statements for male and female X; in gender-inflected languages, we also have to inflect the verb and article: \"elle/il est connue/connu comme une/un ___\".",
"This complex gender marking makes stereotyped completions more likely, but also requires a carefully designed study to identify societal stereotypes in these less-investigated languages.",
"(Grammatical gender is not the same as biological sex or societal gender, but gender-inflected languages do usually assign different grammatical gender to male and female subjects.)",
"We manually create a benchmark set of cloze sentence templates, validated by native speakers for syntactic correctness.",
"Table 1 shows examples of templates filled by BERT models in different languages.",
"We fill these templates via language-specific language models (BERT and GPT-2) and measure the number of hurtful words generated that way.",
"We further categorize the words via a lexicon of hurtful words (Bassignana et al., 2018).",
"Finally, we introduce a measure, the HONEST score (hurtfulness of language model sentence completion), to compute how likely each language model is to produce hurtful completions.",
"Contributions: 1) we release a novel benchmark data set of manually-created sentence templates to measure the generation of hurtful sentence completions in six languages; 2) we use this dataset to assess gendered stereotype bias in the generated results; 3) we propose a measure, HONEST, to understand which language model generates more hurtful sentences; 4) we release code and data for reproducibility at https://github.com/MilaNLProc/honest.",
"Method: cloze-style template forms are an effective way of evaluating language models.",
"Petroni et al. (2019) use cloze-based forms to evaluate the amount of relational knowledge included in BERT, and Ettinger (2020) uses them as a set of psycholinguistic diagnostic tools.",
"Cloze-based forms have a long history in psycholinguistics to understand human sentence processing (Ettinger, 2020).",
"Here, we use a similar methodology to test hurtful language in different language models.",
"For example, our templates look as follows: \"X are good at ___\",",
"where X is a variable identity term and the blank is the part to be completed. We generate a set of these templates similarly to Sheng et al. (2019), but specifically made for gender-inflected languages. Table 2 shows some examples of our templates. We use a set of identifier terms in singular and plural (i.e., woman, women, girl, boys) and a series of predicates (i.e., works as ___, is known for ___).",
"The identity terms differ in grammatical gender in all our languages.",
"Our templates have been checked by native speakers of each of the five languages to ensure we create syntactically correct and meaningful sentences.",
"Moreover, we asked the native speakers to make the templates as natural as possible in the respective language.",
"We created a dataset of 420 instances for each language, which are generated from 28 identity terms (14 male and 14 female) and 15 templates.",
"Language Models BERT is natively trained on a cloze-style masked language modeling task: random tokens in the input sentence are replaced with a [MASK] token, for BERT to predict the correct token.",
"After training, given a new sentence with a [MASK] token (the \"___\" in our templates), BERT returns the most probable tokens that could replace [MASK] .",
"GPT-2 does not use masked tokens.",
"Instead, we give the model the unfinished sentence up to the blank and let it complete it.",
"This potentially returns several words.",
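A minimal sketch of how such completions can be obtained with off-the-shelf Huggingface pipelines; the model names, the example template, and K = 20 are illustrative choices, not the exact experimental setup.

```python
from transformers import pipeline

K = 20

# BERT: cloze-style masked prediction of the template blank.
fill = pipeline("fill-mask", model="bert-base-uncased", top_k=K)
bert_words = [r["token_str"] for r in fill("The woman is known as a [MASK].")]

# GPT-2: feed the unfinished sentence up to the blank and let the
# model complete it; this potentially returns several words per sample.
gen = pipeline("text-generation", model="gpt2")
samples = gen("The woman is known as a", max_new_tokens=5,
              num_return_sequences=K, do_sample=True)
gpt2_texts = [s["generated_text"] for s in samples]
```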
"Lexicon We use HurtLex (Bassignana et al., 2018), a multilingual lexicon of hurtful language, to evaluate the completions.",
"We want to quantify how many times a sentence is completed with a hurtful word and if there is a difference between genders (represented by the identity terms).",
"HurtLex maps each word to a class of hurtful expressions (e.g., DEROGATORY WORDS, ANI-MALS).",
"For each BERT completion, we check if the word is present in the lexicon.",
"For GPT-2, we check if at least one of the words generated by the model is in the lexicon.",
"Table 3: Percentages of hurtful words in language-specific BERT model completions (K = 20).",
"| HurtLex Category | Italian M | Italian F | English M | English F | French M | French F | Spanish M | Spanish F | Portuguese M | Portuguese F | Romanian M | Romanian F | Avg |",
"| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |",
"| ANIMALS | 4.17 | 7.40 | 3.91 | 7.41 | 5.69 | 7.76 | 6.79 | 7.21 | 1.43 | 0.04 | 14.71 | 7.79 | 6.19 |",
"| CRIME AND IMMORAL BEHAVIOR | 1.17 | 0.00 | 0.17 | 0.04 | 3.75 | 2.07 | 0.21 | 0.07 | 0.07 | 0.00 | 0.00 | 0.00 | 0.63 |",
"| DEROGATORY WORDS | 11.31 | 7.86 | 6.36 | 2.77 | 12.10 | 6.76 | 14.79 | 6.64 | 4.14 | 8.21 | 15.50 | 4.29 | 8.39 |",
"| FEMALE GENITALIA | 0.50 | 6.07 | 0.63 | 0.77 | 1.93 | 4.40 | 2.86 | 6.36 | 1.11 | 8.93 | 7.14 | 2.43 | 3.59 |",
"| MALE GENITALIA | 1.07 | 1.81 | 0.96 | 1.00 | 2.83 | 3.14 | 4.07 | 1.29 | 0.71 | 0.79 | 7.57 | 1.36 | 2.22 |",
"| HOMOSEXUALITY | 3.74 | 0.26 | 0.54 | 0.96 | 3.11 | 1.29 | 0.07 | 1.00 | 0.32 | 0.07 | 1.50 | 0.86 | 1.14 |",
"| POTENTIAL NEGATIVE CONNOTATIONS | 2.14 | 0.31 | 2.19 | 0.79 | 0.71 | 0.87 | 1.64 | 1.29 | 0.71 | 0.64 | 3.21 | 0.43 | 1.24 |",
"| PROFESSIONS AND OCCUPATIONS | 1.33 | 0.00 | 0.57 | 0.26 | 0.12 | 0.02 | 0.07 | 0.57 | 0.04 | 0.00 | 0.00 | 0.00 | 0.25 |",
"| PROSTITUTION | 0.62 | 8.69 | 1.13 | 5.51 | 0.88 | 8.74 | 1.14 | 8.43 | 0.54 | 3.29 | 0.21 | 8.07 | 3.94 |",
"Then, we compute the percentage of hurtful completions for each category among the top-K candidates.",
"HONEST. Finally, we define a global score to get a bird's-eye view of the models' performance, which is complementary to the category analysis.",
"The HONEST score is the average of hurtful completions of any class.",
"More formally, for a language model $LM$, assuming we have a set $T$ of templates filled with the identity terms, we define HONEST as follows: $$\mathrm{HONEST}(LM) = \frac{\sum_{t \in T} \sum_{c \in \mathrm{compl}(LM, t, K)} \mathbb{1}_{\mathrm{HurtLex}}(c)}{|T| \cdot K}$$ Here, $\mathbb{1}_{\mathrm{HurtLex}}$ is the indicator function for the set of words in HurtLex and $\mathrm{compl}(LM, t, K)$ is the function that returns the top-$K$ completions of $LM$ on template $t$.",
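A direct, simplified transcription of this formula into code; the template list, completion dictionary, and HurtLex word set are assumed inputs, and matching is done by naive lowercase lookup.

```python
def honest_score(templates, completions, hurtlex_words, k):
    """HONEST = (# hurtful completions over all templates) / (|T| * K).

    templates:     list of filled template strings (the set T)
    completions:   dict mapping each template to its top-K completions
    hurtlex_words: set of hurtful words from HurtLex
    """
    hits = sum(1 for t in templates
                 for c in completions[t]
                 if c.lower() in hurtlex_words)
    return hits / (len(templates) * k)

# Toy usage with made-up data:
T = ["the woman is known as a ___"]
C = {T[0]: ["doctor", "whore", "teacher"]}
print(honest_score(T, C, hurtlex_words={"whore"}, k=3))  # 0.333...
```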
"We use the HuggingFace (Wolf et al., 2020) pre-trained language-specific BERT and GPT-2 models (see Appendix B for more details).",
"Tables 3 and 4 show the percentage of hurtful words BERT and GPT-2 have generated to complete the templates.",
"The tables distinguish the languages, gender target, and the different HurtLex categories. (Spanish and Romanian GPT-2 models were not available at submission time and are therefore not included.)",
"Higher percentages are reported in red, lower ones in green.",
"The higher average percentages for GPT-2 compared to BERT (6.2% vs. 3.1%) are due to the different evaluation settings (see Section 2).",
"Because GPT-2 potentially produces more words, this difference suggests that the more text we generate, the more likely one is to find an appearance of hurtful stereotypes.",
"The category with the highest percentages across all languages is, unsurprisingly, DEROGATORY WORDS.",
"They are defined in (Bassignana et al., 2018) as words with a clearly offensive and negative value, which are not negative stereotypes or slurs.",
"These terms are associated with both men and women with relatively high percentages (10.7% on average).",
"Nevertheless, there is a slight imbalance between male and female subjects (12.3% vs. 9.2%, respectively).",
"This is due to some limitations of the lexicon.",
"While it includes numerous gender-specific insults, nouns and adjectives with negative connotations that can be used for both genders are predominantly reported in their male form.",
"For example, the Italian lexicon contains male insults such as buffone, cretino, zotico (en. clown, moron, lout ), but not their female versions buffona, cretina, zotica .",
"This does not affect the insights presented in this paper.",
"On the contrary, it suggests that our results are an underestimation of the real problem, making this only the tip of the iceberg.",
"Our findings corroborate the study by Preston and Stanley (1987).",
"Thirty years later, insults against women still continue to be based on sexual promiscuity, and slurs against men refer to homosexuality or sexual inadequacy (Scruton, 2017).",
"Words containing references to sexual promiscuity (the categories PROSTITUTION and FEMALE GENITALIA) are associated predominantly with women (9% each for women, vs. 1.4% and 3% for men).",
"This is not surprising, considering the disproportionately larger number of terms for a sexually promiscuous woman in all our languages (Stanley, 1977).",
"Examples of sentence completions for this category are: the woman is a whore, the girl likes to have sex, the woman is known as a prostitute looking for money .",
"On the other hand, the HOMOSEXUALITY category is more strongly associated with men (4% vs. 1.2% for women).",
"A psychological study (Slaatten et al., 2014) demonstrated that gay-related name-calling is associated with the endorsement of hypermasculine norms.",
"Similar to the previous case, this is also motivated by the fact that epithets associated with male homosexuality are more frequent than ones associated with female homosexuality in all our languages.",
"Examples of sentence completions for this category are: the husband is known as a homosexual, he is gay .",
"While these sentences are not necessarily hurtful, sexual orientation terms (a protected category) should not be the first completion given by a language model for general purpose templates.",
"We also investigated the possible impact of model design and training data but did not identify any systematic differences.",
"HONEST.",
"In Table 5 we show the HONEST scores for different language models and languages.",
"Our results show that CamemBERT is the BERT-derived model with the most hurtful language generation issues.",
"The same is true for GPT-2 trained on French data, suggesting that French models should take this issue into consideration.",
"The best results come from Portuguese and Spanish models.",
"These results could indicate either differences in training data or language-specific differences in the use of swearwords.",
"The analysis of bias in Natural Language Processing has gained a lot of attention in recent years (Hovy and Spruit, 2016; Shah et al., 2020), specifically on gender bias (Zhao et al., 2018; Rudinger et al., 2018; Garimella et al., 2019).",
"This interest is also reflected in the organization of dedicated workshops.",
"More generally, language models generating taboo words and insults is the result of NLP systems not incorporating social norms (Hovy and Yang, 2021).",
"The pioneering work of Bolukbasi et al. (2016) demonstrated that word embeddings (even when trained on formal corpora) exhibit gender stereotypes to a disturbing extent.",
"On top of that, several studies have been proposed to measure and mitigate bias in word embeddings (Chaloner and Maldonado, 2019; Zhou et al., 2019; Nissim et al., 2020) and more recently in pre-trained contextualized embedding models (Kurita et al., 2019; May et al., 2019; Zhao et al., 2019; Field and Tsvetkov, 2019; Sheng et al., 2019; Nangia et al., 2020; Vig et al., 2020).",
"However, most studies focus on English.",
"Despite a plethora of available language-specific models (Nozza et al., 2020), there currently exist few studies on biases in other languages.",
"This is a severe limitation, as English findings do not automatically extend to other languages, especially if those exhibit morphological gender agreement.",
"Only McCurdy and Serbetçi (2017) and Zhou et al. (2019) examine the bias in word embeddings of gender-inflected languages, demonstrating the need for an adequate framework different from the ones proposed for English.",
"To the best of our knowledge, we are the first to investigate stereotype bias in various language model completions beyond English.",
"We present the first analysis of stereotyped sentence completions generated by contextual models in gender-inflected languages.",
"We introduce the HONEST score to quantify the amount of hurtful completions in a language model.",
"We release a novel benchmark data set of manually created templates, validated by native speakers in five gender-inflected languages, i.e., Italian, French, Portuguese, Romanian, and Spanish.",
"Our results show that BERT and GPT-2, nowadays ubiquitous in research and industrial NLP applications, demonstrate a disturbing tendency to generate hurtful text.",
"In particular, template sentences with a female subject are completed 10% of the time with stereotypes about sexual promiscuity.",
"Sentences with male subjects are completed 5% of the time with stereotypes about homosexuality.",
"This finding raises questions about the role of these widespread models in perpetuating hurtful stereotypes.",
"In future work, we will investigate sentence completions with benevolent sexism categories (Jha and Mamidi, 2017),",
"e.g., stereotypes like women are good at cooking or men are good at ruling .",
"Moreover, we plan to study the handling of protected category terms in natural language generation systems with data augmentation (Dixon et al., 2018; Nozza et al., 2019) and regularization techniques (Kennedy et al., 2020).",
"Our experimental results suggest a need to discuss the ethical aspect of these models.",
"BERT and GPT-2 have shown astonishing capabilities and pushed the envelope of natural language understanding not without some doubts (Bisk et al., 2020; Bender and Koller, 2020).",
"However, our results, together with those of (Sheng et al., 2019; Kurita et al., 2019; Zhou et al., 2019), should make us reflect on the dual use of these models, i.e., how they are used outside our research community.",
"Can BERT or GPT-2 harm someone if used in production, by proliferating and amplifying harmful stereotypes?",
"These models are now often included in industrial pipelines that are generally driven by economic needs, not academic interest.",
"When we combine this ubiquity with the generally low interpretability of deep learning methods, we can easily see a problematic issue: trusting the pre-training to be fair can give a false sense of security.",
"This is directly connected to the recent easy availability of these models; almost anyone can download and use a pre-trained model now.",
"While this is a great advancement for the democratization of technology, it also raises serious questions.",
"We, as scientists, should be aware of the consequences the naïve use of these models can have.",
"Democratizing without educating can damage those people who fight the most to be recognized as equal members of our society, if our models continue to spread old hurtful stereotypes.",
"Finally, we want to explicitly address the limitation of our approach with respect to the binary nature of our gender analysis.",
"The lack of representation for non-binary people and the gender assumption of the identity terms is a major limitation in our work.",
"It is due to data and language constraints, not a value judgment.",
"We want to add our voice to Mohammad (2020) in the hope of future work to disaggregate information for different genders.",
"We follow Bender and Friedman (2018) on providing a Data Statement for our templates to provide a better picture of the possibilities and limitations of the data, and to allow future researchers to spot any biases we might have missed.",
"Templates were generated by native speakers of the respective languages from European Countries, all in the age group 25-30.",
"The data we share is not sensitive to personal information, as it does not contain information about individuals.",
"Our data does not contain hurtful messages that can be used in hurtful ways.",
"This project has partially received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No. 949944, INTEGRATOR).",
"The authors are members of the MilaNLP group, and the Data and Marketing Insights Unit of the Bocconi Institute for Data Science and Analysis.",
"The authors would also like to thank Enrico Mestieri, Andrada Pumnea, and Margarida Ruela for the support provided in the generation of the templates."
]
| [
"abstain",
"abstain",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"other",
"result",
"objective",
"method",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"objective",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"other",
"other",
"other"
]
|
[
"Neural Network Language Models (NNLMs) generate probability distributions by applying a softmax function to a distance metric formed by taking the dot product of a prediction vector with all word vectors in a high-dimensional embedding space.",
"The dot-product distance metric forms part of the inductive bias of NNLMs.",
"Although NNLMs optimize well with this inductive bias, we show that this results in a sub-optimal ordering of the embedding space that structurally impoverishes some words at the expense of others when assigning probability.",
"We present numerical, theoretical and empirical analyses showing that words on the interior of the convex hull in the embedding space have their probability bounded by the probabilities of the words on the hull.",
"Neural Network Language Models (NNLMs) have evolved rapidly over the years from simple feed-forward nets (Bengio et al., 2003) to include recurrent connections (Mikolov et al., 2010) and LSTM cells (Zaremba et al., 2014), and most recently transformer architectures (Dai et al., 2019; Radford et al., 2019).",
"This has enabled ever-increasing performance on benchmark data sets.",
"However, one thing has remained relatively constant: the softmax of a dot product as the output layer.",
"NNLMs generate probability distributions by applying a softmax function to a distance metric formed by taking the dot product of a prediction vector with all word vectors in a high-dimensional embedding space.",
"We show that the dot product distance metric introduces a limitation that bounds the expressiveness of NNLMs, enabling some words to steal probability from other words simply due to their relative placement in the embedding space.",
"We call this limitation the stolen probability effect .",
"While the net impact of this limitation is small in terms of the perplexity measure on which NNLMs are evaluated, we show that the limitation results in significant errors in certain cases.",
"As an example, consider a high-probability word sequence like \"the United States of America\" that ends with a relatively infrequent word such as America.",
"Infrequent words are often associated with smaller embedding norms, and may end up inside the convex hull of the embedding space.",
"As we show, in such a case it is impossible for the NNLM to assign a high probability to the infrequent word that completes the high-probability sequence.",
"Numerical, theoretical and empirical analyses are presented to establish that the stolen probability effect exists.",
"Experiments with n-gram models, which lack this limitation, are performed to quantify the impact of the effect.",
"In a NNLM, words $w_i$ are represented as vectors $x_i$ in a high-dimensional embedding space.",
"Some combination of these vectors, $x_c = \{x_i\}_{i \in c}$, is used to represent the preceding context $c$, which is fed into a neural unit as features to generate a prediction vector $h_t$.",
"NNLMs generate a probability distribution over a vocabulary of words $w_i$ to predict the next word in a sequence $w_t$ using a model of the form: $P(w_t \mid c) = \sigma(f(x_c; \theta_{NNLM}))$ (1), where $\sigma$ is the softmax function, $f$ is a neural unit that generates the prediction vector $h_t$, and $\theta_{NNLM}$ are the parameters of the neural unit.",
"A dot product between the prediction vector $h_t$ and all word vectors $x_i$ is taken to calculate a set of distances, which are then used to form logits: $z_{it} = x_i h_t^{T} + b_i$ (2), where $b_i$ is a word-specific bias term.",
"Logits are used with the softmax function to generate a probability distribution over the vocabulary $V$ such that: $P(w_t = w_i \mid c) = \frac{e^{z_{it}}}{\sum_{v \in V} e^{z_{vt}}}$ (3). We refer to this calculation of logits and transformation into a probability distribution as the dot-product softmax.",
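For concreteness, a small NumPy sketch of Eqs. 2 and 3; the vocabulary size, dimensionality, and random embeddings are arbitrary toy values.

```python
import numpy as np

rng = np.random.default_rng(0)
V, d = 1000, 100                 # vocabulary size, embedding dimension
X = rng.normal(size=(V, d))      # word embeddings x_i
b = rng.normal(size=V)           # word-specific bias terms b_i
h_t = rng.normal(size=d)         # prediction vector from the neural unit

z = X @ h_t + b                  # Eq. 2: logits z_it = x_i . h_t^T + b_i
p = np.exp(z - z.max())          # Eq. 3: softmax, shifted for stability
p /= p.sum()

assert np.isclose(p.sum(), 1.0)  # a valid probability distribution over V
```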
"NNLMs learn very different embeddings for different words.",
"In this section we show that this can make it impossible for words with certain embeddings to ever be assigned high probability in any context.",
"We start with a brief examination of the link between embedding norm and probability, which motivates our analysis of the stolen probability effect in terms of a word's position in the embedding space relative to the convex hull of the embedding space.",
"Expanding the dot product in Eq. 2 gives $z_{it} = \|x_i\| \, \|h_t\| \cos(\theta_i) + b_i$ (4), where $\theta_i$ is the angle between $x_i$ and $h_t$.",
"The dot-product softmax allocates probability to word $w_i$ in proportion to $z_{it}$'s value relative to the value of other logits (see Eq. 3).",
"Setting aside the bias term $b_i$ for the moment (which is shown empirically to be irrelevant to our analysis in Section 4.2), this means that a word A with a larger norm than a word B will be assigned higher probability when the angles $\theta_A$ and $\theta_B$ are the same.",
"More generally, the relationship between embedding norms and the angles formed with prediction points $h_t$ can be expressed as: $\frac{\|x_A\|}{\|x_B\|} > \frac{\cos(\theta_B)}{\cos(\theta_A)}$ (5) when word A has a higher probability than word B.",
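Eq. 5 follows from comparing the two logits of Eq. 4 with the bias terms dropped; a short derivation, paraphrased here under the assumption that cos(θ_A) > 0:

```latex
% If P(A) > P(B) for the same prediction vector h_t, then z_A > z_B:
\|x_A\| \, \|h_t\| \cos(\theta_A) > \|x_B\| \, \|h_t\| \cos(\theta_B)
% Dividing both sides by \|h_t\| \, \|x_B\| \cos(\theta_A),
% which is positive when \cos(\theta_A) > 0, yields Eq. 5:
\frac{\|x_A\|}{\|x_B\|} > \frac{\cos(\theta_B)}{\cos(\theta_A)}
```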
"Empirical results (not presented) confirm that NNLMs organize the embedding space such that word vector norms are widely distributed, while their angular displacements relative to a reference vector fall into a narrow range.",
"This suggests that the norm terms in Eq. 4 dominate the calculation of logits, and thereby probability.",
"While an analysis of how embedding norms impact the assignment of probability is informative, the stolen probability effect is best analyzed in terms of a word's position in the embedding space relative to the convex hull of the embedding space.",
"A convex hull is the smallest set of points forming a convex polygon that contains all other points in a Euclidean space.",
"Theorem 1.",
"Let $C$ be the convex hull of the embeddings $\{x_i\}$ of a vocabulary $V$.",
"If an embedding $x_i$ for word $w_i \in V$ is interior to $C$, then the maximum probability $P(w_i)$ assigned to $w_i$ using a dot-product softmax is bounded by the probability assigned to at least one word $w_j$ whose embedding is on the convex hull.",
"(see Appendix A for proof).",
"The stolen probability effect can be illustrated numerically in a 2D Euclidean space (see Figure 1).",
"We show two configurations of an embedding space, one where target word A is on the convex hull (Panel i) and another where A is in the interior (Panel ii).",
"Under both configurations, an NNLM trained to the maximum likelihood objective would seek to assign probability such that $P(A) = 1.0$.",
"For the first configuration, this is achievable for an $h_t$ in the far lower-left quadrant (Panel iii).",
"However, when A is in the interior, no $h_t$ exists for which the dot-product softmax can assign a probability approaching 1.0 (Panel iv).",
"A similar illustration in 3D is presented in Appendix B. In this section, we provide empirical evidence showing that words interior to the convex hull are probability-impoverished due to the stolen probability effect, and we analyze the impact of this phenomenon on different models.",
"We perform our evaluations using the AWD-LSTM (Merity et al., 2017) and the Mixture of Softmaxes (MoS) (Yang et al., 2017) language models.",
"Both models are trained on the Wikitext-2 corpus (Merity et al., 2016) using default hyper-parameters, except for dimensionality, which is set to $d \in \{50, 100, 200\}$.",
"The AWD-LSTM model is trained for 500 epochs and the MoS model is trained for 200 epochs, resulting in perplexities as shown in Table 1.",
"The Quickhull algorithm (Barber et al., 1996) is among the most popular algorithms used to detect the convex hull in Euclidean space.",
"However, we found it to be intractably slow for embedding spaces above ten dimensions, and therefore resorted to approximate methods.",
"We relied upon an identity derivable from the properties of a convex hull, which states that a point $p \in \mathbb{R}^d$ is a vertex of the convex hull of $\{x_i\}$ if there exists a vector $h_t \in \mathbb{R}^d$ such that for all $x_i$: $\langle h_t, x_i - p \rangle < 0$ (6).",
"Searching for directions $h_t$ which satisfy Eq. 6 is not computationally feasible.",
"Instead, we rely upon a high-precision, low-recall approximate method to eliminate potential directions for $h_t$ which do not satisfy Eq. 6.",
"We call this method our detection algorithm .",
"If the set of remaining directions is not empty, then p is classified as a vertex, otherwise p is classified as an interior point.",
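The vertex condition of Eq. 6 can also be checked exactly, at least in low dimensions, as a linear-programming feasibility problem; the sketch below is an alternative formulation for illustration, not the paper's detection algorithm, and exploits the fact that the constraint is homogeneous in h_t, so the strict inequality can be rescaled to ≤ −1.

```python
import numpy as np
from scipy.optimize import linprog

def is_vertex(p, others):
    """True iff some h satisfies <h, x_i - p> < 0 for all other x_i (Eq. 6).

    Since the constraint is homogeneous in h, strict feasibility is
    equivalent to feasibility of A h <= -1, where A's rows are x_i - p.
    """
    A = others - p
    n, d = A.shape
    res = linprog(c=np.zeros(d), A_ub=A, b_ub=-np.ones(n),
                  bounds=[(None, None)] * d, method="highs")
    return res.status == 0            # status 0 = a feasible h was found

# Toy check in 2D: the origin is interior to a triangle surrounding it.
tri = np.array([[1.0, 0.0], [-1.0, 1.0], [-1.0, -1.0]])
print(is_vertex(np.zeros(2), tri))    # False: interior point
print(is_vertex(tri[0], tri[1:]))     # True: a hull vertex
```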
"The detection algorithm is anchored by the insight that all vectors parallel to the difference vector $\vec{x}_i - \vec{p}$ do not satisfy Eq. 6.",
"It is also true that all directions in the range $(\theta - \epsilon, \theta + \epsilon)$ will not satisfy Eq. 6, where $\theta$ is the direction of the difference vector and $\epsilon$ is some increment less than $\pi/2$.",
"Table 1: Perplexities and Detection Results.",
"| Model | d | Train PPL | Test PPL | (radians) | Interior Points |",
"| --- | --- | --- | --- | --- | --- |",
"| AWD | 50 | 140.6 | 141.8 | 50/128 | 6,155 |",
"| AWD | 100 | 73.3 | 97.8 | 55/128 | 5,205 |",
"| AWD | 200 | 44.9 | 81.6 | 58/128 | 2,064 |",
"| MoS | 50 | 51.7 | 76.8 | 53/128 | 4,631 |",
"| MoS | 100 | 34.8 | 67.4 | 57/128 | 4,371 |",
"| MoS | 200 | 25.5 | 64.2 | 59/128 | 2,009 |",
"The detection algorithm was validated in lower-dimensional spaces where an exact convex hull could be computed (e.g., up to $d = 10$).",
"It consistently classified interior points with precision approaching 100% and recall of 68% when evaluated on the first 10 dimensions of the MoS model with d = 100 .",
"Applying the detection algorithm to our models yields word types being classified into distinct interior and non-interior sets (see Table 1).",
"We ranked the top 500 words of each set by the maximum probability they achieved on the training corpora, and plot these values in Figure 2, showing a clear distinction between interior and non-interior sets.",
"The maximum trigram probabilities (Stolcke, 2002) smoothed with KN3 for the same top 500 words in each set (separately sorted) are also shown.",
"The difference between NNLM and trigram curves for interior words shows that models like n-grams, which do not utilize a dot-product softmax, are not subject to the stolen probability effect and can assign higher probabilities.",
"A random set of words equal in size to the interior set was also constructed by uniform sampling, and ranked on the top 500 words. (We present our results on the training set because our goal here is to characterize the expressiveness of the models rather than their ability to generalize.)",
"A comparison between the random and interior sets provides evidence that our detection algorithm is effective at separating the interior and non-interior sets, and is not simply performing random sampling.",
"Our results can be more compactly presented by considering the average probability mass assigned to the top 500 words for each set (see Table 2).",
"The impact of the stolen probability effect for each model can quantified as the difference between the interior set and each of the three reference sets (non-interior, random, and trigram) in the table.",
"The interior average maximum probability is generally much smaller than those of the reference sets.",
"Another way to quantify the impact of the stolen probability effect is to overcome the bound on the interior set by constructing an ensemble with trigram statistics.",
"We constructed a targeted ensemble of the MoS model with $d = 100$ and a trigram model; unlike a standard ensemble, the trigram model is only used in contexts that are likely to indicate an interior word: specifically, those that precede at least one interior word in the training set.",
"Otherwise, we default to the NNLM probability.",
"When we ensemble, we assign weights of 0.8 to the NNLM and 0.2 to the trigram (selected using the training set).",
"Overall, the targeted ensemble improved training perplexity from 34.8 to 33.6, and test perplexity from 67.4 to 67.0.",
"The improvements on the interior words themselves were much larger: training perplexities for interior words improved from 700.0 to 157.2, and test improved from 645.6 to 306.7.",
"Improvement on the interior words is not unexpected given the differences observed in Figure 2.",
"The overall perplexity differences, while small in magnitude, suggest that ensembling with a model that lacks the stolen probability limitation may provide some boost to a NNLM.",
"Returning to the question of bias terms, we find empirically that bias terms are relatively small, averaging 0.13 and 0.02 for the interior and non-interior sets of the MoS model with $d = 100$, respectively.",
"We note that the bias terms are word-specific and can only adjust the stolen probability effect by a constant factor.",
"That is, it does not change the fact that words in the interior set are probability-bounded.",
"All of our empirical results are calculated on a model with a bias term, demonstrating that the stolen probability effect persists with bias terms.",
"Attributes of the stolen probability effect analyzed in this work are distinct from the softmax bottleneck (Yang et al., 2017).",
"The softmax bottleneck argues that language modeling can be formulated as a factorization problem, and that the resulting model's expressiveness is limited by the rank of the word embedding matrix.",
"While we also argue that the expressiveness of a NNLM is limited for structural reasons, the stolen probability effect that we study is best understood as a property of the arrangement of the embeddings in space, rather than the dimensionality of the space.",
"Our numerical and theoretical analyses presented do not rely upon any particular number of dimensions, and our experiments show that the stolen probability effect holds over a range of dimensions.",
"However, there is a steady increase of average probability mass assigned to the interior set as model dimensionality increases, suggesting that there are limits to the stolen probability effect.",
"This is not unexpected.",
"As the capacity of the embedding space increases with additional dimensions, the model has additional degrees of freedom in organizing the embedding space.",
"The vocabulary of the Wikitext-2 corpus is small compared to other more recent corpora.",
"We believe that larger vocabularies will offset (at least partially) the additional degrees of freedom associated with higher dimensional embedding spaces.",
"We leave the exploration of this question as future research.",
"We acknowledge that our results can also be impacted by the approximate nature of our detection algorithm.",
"Without the ability to precisely detect the convex hull for any of our embedding spaces, we cannot make precise claims about its performance.",
"The difference between average probability mass assigned to random and interior sets across all models evaluated suggests that the detection algorithm succeeds at identifying words with substantially lower maximum probabilities than a random selection of words.",
"In Section 3.1 we motivated our analysis of the stolen probability effect by examining the impact of embeddings norms on probability assignment.",
"One natural question to ask is: does our detection algorithm simply classify embeddings with small norms as interior points?",
"Our results suggest that this is not the case.",
"The scatter plot of embedding norm versus maximum probability (see Figure 3) shows that words classified as interior points frequently have lower norms.",
"This is expected, since points interior to the convex hull are by definition not located in extreme regions of the embedding space.",
"The embedding norms for words in the interior set range between 1.4 and 2.6 for the MoS model with d = 100 .",
"Average maximum probabilities for words in this range are 1 .",
"4% and 4 .",
"1% for interior and non-interior sets of the MoS model with d = 100 , respectively, providing evidence that the detection algorithm is not merely identifying word with small embedding norms.",
"Lastly, we note that the interior sets of the AWD-LSTM models are particularly probability impoverished relative to the more powerful MoS models.",
"We speculate that the perplexity improvements of the MoS model may be due in part to mitigating the stolen probability effect.",
"Exploration of the stolen probability effect in more powerful NNLM architectures using dot-product softmax output layers is Figure 3: Maximum Probability vs. Embedding Norm.",
"Other work has explored alternative softmax configurations, including a mixture of softmaxes, adaptive softmax and a Taylor Series softmax (Yang et al., 2017; Grave et al., 2016; de Brebisson and Vincent, 2015).",
"There is also a body of work that analyzes the properties of embedding spaces (Bur-dick et al., 2018; Mimno and Thompson, 2017).",
"We do not seek to modify the softmax.",
"Instead we present an analysis of how the structural bounds of an NNLM limit its expressiveness.",
"We present numerical, theoretical and empirical analyses showing that the dot-product softmax limits a NNLM's expressiveness for words on the interior of a convex hull of the embedding space.",
"This is structural weakness of NNLMs with dot-product softmax output layers, which we call the stolen probability effect .",
"Our experiments show that the effect is relatively common in smaller neural language models.",
"Alternative architectures that can overcome the stolen probability effect are an item of future work.",
"This work was supported in part by NSF Grant IIS-1351029.",
"We thank the anonymous reviewers and Northwestern's Theoretical Computer Science group for their insightful comments and guidance."
]
| [
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"method",
"method",
"result",
"abstain",
"result",
"abstain",
"other",
"other"
]
|
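The targeted ensemble described in the row above combines the NNLM and the trigram model with weights 0.8 and 0.2. As a minimal sketch, assuming the ensemble is the standard linear interpolation of per-token probabilities (the row does not spell out the combination rule), the computation looks like the following; the function names and the toy probability arrays are illustrative, not taken from the paper's code.

```python
import numpy as np

def interpolate(p_nnlm, p_trigram, w_nnlm=0.8):
    """Linearly interpolate per-token probabilities from two LMs.

    p_nnlm, p_trigram: arrays of shape (num_tokens,) holding each
    model's probability of the observed token at every position.
    The 0.8/0.2 weights were reportedly selected on the training set.
    """
    return w_nnlm * p_nnlm + (1.0 - w_nnlm) * p_trigram

def perplexity(token_probs):
    """Perplexity is the exponential of the average negative log-likelihood."""
    return float(np.exp(-np.mean(np.log(token_probs))))

# Hypothetical per-token probabilities for a tiny evaluation text.
p_nnlm = np.array([0.10, 0.02, 0.30, 0.05])
p_tri = np.array([0.08, 0.12, 0.20, 0.09])

print(perplexity(p_nnlm))                      # NNLM alone
print(perplexity(interpolate(p_nnlm, p_tri)))  # 0.8/0.2 ensemble
```

Since the trigram model is not probability-bounded for interior words, the interpolation can lift the NNLM's tiny probabilities for those words toward the trigram's, which is consistent with the large interior-word perplexity gains reported above.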
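The effect itself, that a word whose embedding lies strictly inside the convex hull of the other embeddings can never receive the highest bias-free dot-product logit, follows from the fact that a linear function over a convex hull is maximised at a vertex. The snippet below is an independent numerical illustration of this bound, not the paper's code: it builds an interior embedding as a convex combination of the others and confirms that no sampled hidden state makes that word the argmax.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary: 9 embeddings plus one "interior" word constructed as
# a strict convex combination of the others, so it lies inside their
# convex hull almost surely.
d = 2
others = rng.normal(size=(9, d))
weights = rng.dirichlet(np.ones(9))   # strictly positive weights
interior = weights @ others
E = np.vstack([others, interior])     # interior word has index 9

# For any hidden state h, <h, interior> = sum_i w_i <h, x_i>, which is
# at most max_i <h, x_i>. So the interior word never wins the argmax,
# and its softmax probability stays bounded below 1 (word-specific bias
# terms, as noted above, only shift this bound by a constant factor).
wins = 0
for _ in range(100_000):
    h = rng.normal(size=d) * rng.uniform(0.1, 10.0)  # vary norms too
    if np.argmax(E @ h) == 9:
        wins += 1
print(wins)  # prints 0
```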
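The detection algorithm used in the paper is approximate and is not reproduced here. For comparison, interiority can in principle be tested exactly with a linear-programming feasibility check, since a point lies in the convex hull of a set of points iff it is a convex combination of them. Below is a sketch of that standard test, assuming SciPy is available; for a realistic vocabulary in hundreds of dimensions this exact test is far more expensive than an approximate detector, which presumably motivates the approximation.

```python
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(x, points):
    """True iff x is a convex combination of the rows of `points`.

    Feasibility LP: find lambda >= 0 with points.T @ lambda = x and
    sum(lambda) = 1; any feasible lambda certifies hull membership.
    """
    n = points.shape[0]
    A_eq = np.vstack([points.T, np.ones((1, n))])
    b_eq = np.append(x, 1.0)
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n, method="highs")
    return res.success

# Toy check: the centroid of a triangle is inside it; a far point is not.
tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(in_convex_hull(tri.mean(axis=0), tri))      # True
print(in_convex_hull(np.array([5.0, 5.0]), tri))  # False
```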
[
"The field of NLP has made substantial progress in building meaning representations.",
"However, an important aspect of linguistic meaning, social meaning, has been largely overlooked.",
"We introduce the concept of social meaning to NLP and discuss how insights from sociolinguistics can inform work on representation learning in NLP.",
"We also identify key challenges for this new line of research.",
"Variation is inherent to language.",
"Any variety of language provides its users with a multitude of linguistic formse.g., speech sounds, words, grammatical constructionsto express the same referential meaning.",
"Consider, for example, the many ways of pronouncing a given word or the variety of words that can refer to a given concept.",
"Linguistic variation is the primary object of inquiry of sociolinguistics , which has a long history of describing and explaining variation in linguistic form across society and across levels of linguistic analysis (Tagliamonte, 2015).",
"Perhaps the most basic finding of the field is that linguistic variation allows for the expression of social meaning , information about the social background and identity of the language user.",
"Such sociolinguistic variation adds an additional layer of meaning onto the basic referential meaning communicated by any utterance or text.",
"Understanding the expression of social meaning based on linguistic variation is a crucial part of the linguistic knowledge of any language user, drawn upon continuously during both the production and processing of natural language.",
"The relationship between variation and social meaning, however, has only begun to be explored computationally (e.g., Pavalanathan et al. (2017)).",
"Studies have shown, for example, that words, capitalisation, or the language variety used can index political identity (Shoemark et al., 2017; Stewart et al., 2018; Tatman et al., 2017).",
"Despite general acceptance of the link between linguistic variation and social meaning in linguistics, NLP has largely ignored this relationship.",
"Nevertheless, the importance of linguistic variation more generally is increasingly being acknowledged in NLP (Nguyen et al., 2016).",
"NLP tools are usually developed for standard varieties of language, and therefore tend to under-perform on texts written in varieties that diverge from the standard', including language identification (Blodgett et al., 2016), dependency parsing (Blodgett et al., 2016), and POS tagging (Hovy and Sgaard, 2015).",
"One approach to overcoming the challenges posed by linguistic variation is text normalisation (Han and Baldwin, 2011; Liu et al., 2011).",
"Normalisation transforms non-standard texts into a more standardised form, which can then be analysed more accurately using NLP models trained on standard language data.",
"Text normalisation, however, removes rich social signals encoded via sociolinguistic variation.",
"Other approaches have also been explored to improve the robustness of NLP models across society, such as adapting them based on demographic factors (Lynn et al., 2017) or social network information (Yang and Eisenstein, 2017).",
"Linguistic variation, however, should not simply be seen as a problem to be overcome in NLP.",
"Although variation poses a challenge for robust NLP, it also offers us a link to the social meaning being conveyed by any text.",
"To build NLP models that are capable of understanding and generating natural language in the real world, sociolinguistic variation and its role in creating social meaning must be incorporated into our models.",
"For example, over the last few years, research in NLP has been marked by substantial advancements in the area of representation learning , but although Bisk et al. (2020) and Hovy (2018) have recently argued that the social nature of language must be considered in representation learning, the concept of social meaning is still largely overlooked.",
"In this paper, we therefore introduce the concept of social meaning to NLP from a sociolinguistic perspective (Section 2).",
"We reflect on work on representation learning in NLP and how social meaning could play a role there (Section 3), and we present example applications (Section 4).",
"Finally, we identify key challenges moving forward for this important and new line of research in NLP for the robust processing of meaning (Section 5).",
"People use language to communicate a message.",
"The same message can be packaged in various linguistic forms.",
"For example, someone might say I'm not coming, pal'.",
"But they could also refer to their friend as mate', buddy', bruv', or bro' for instance.",
"Or they could say I am not coming' or I ain't comin' to express that they are not joining that friend.",
"With each of these options, or variants , the language user communicates the same referential meaning, that is, they refer to the exact same entity, action or idea in the real or an imagined world.",
"The only difference is the linguistic form used to encode that message.",
"To put it simply: these are different ways of saying the same thing (Labov, 1972).",
"Although this variation in form does not change the referential meaning of the message, it is not meaningless in itself.",
"Variation in form can also carry information about the social identity of a language user (Eckert, 2008), which sociolinguists call the social meaning of linguistic variation .",
"For example, Walker et al. (2014) define social meaning as all social attributes associated with a language feature and its users.",
"These social attributes can be highly diverse and relate to any aspect of identity a language user may want to express through their linguistic output.",
"Linguistic variation can express broad social attributes like national background or social class.",
"Saying I left the diapers in the trunk' rather than I left the nappies in the boot', may suggest that the speaker is American rather than British.",
"But linguistic variation can also be far more fine-grained and can be called upon directly by language users to construct local identities.",
"A famous example is Labov (1972)'s groundbreaking study on Martha's Vineyard, a small island off the northeast coast of the US.",
"Labov found that within the small island community there were differences in the way people pronounced the diphthongs /ay/ (as in right') and /aw/ (as in house').",
"The study shows that a more centralised pronunciation of the diphthongs was used by local fishermen who opposed the rise in tourism from the mainland on the island.",
"Conversely, islanders who were more oriented towards mainland culture used a more standard American pronunciation for these diphthongs.",
"The pronunciation of these sounds was thus used in this particular community to express the local social meaning of island identity.",
"The social meaning of linguistic variation is not fixed.",
"Over time, a linguistic variant can develop new meanings and lose others, while new forms can also emerge.",
"A single linguistic feature can also be associated with multiple social meanings.",
"Which of those meanings is activated in interaction depends on the specific context in which that interaction takes place.",
"Campbell-Kibler's research on the social meaning of the pronunciation of -ing in the US (e.g., coming' vs. comin') shows, for instance, that the variation can be linked to both social and regional identity.",
"For example, velar pronunciation coming' sounds urban, while alveolar pronunciation comin' is perceived as sounding Southern (Campbell-Kibler, 2007, 2009, 2010).",
"Information about the speaker can also influence the social meaning attached to variation in -ing pronunciation.",
"Experiments show that when a speaker is presented as a professor, they sound knowledgeable when using the velar pronunciation, while if the same speaker is presented as an experienced professional, they are perceived as knowledgeable when using the alveolar variant (Campbell-Kibler, 2010).",
"The collection of social meanings a linguistic feature could potentially evoke is referred to as the indexical field of that feature (Eckert (2008), for a theoretical discussion of indexicality, see Silverstein (2003)).",
"As the above examples suggest, social meaning can be attached to various types of linguistic features.",
"In the friend, nappy and boot examples, there is variation on the level of the lexicon, while the I ain't comin' example shows morphosyntac-tic variation (ain't' vs. am not') and variation in pronunciation (comin' vs. coming').",
"A language or language variety as a whole can also carry social meaning.",
"Think of the choice to use a standard variety or a local dialect to signal one's regional background or the use of loans from foreign languages to come across as cosmopolitan or fashionable (Vaattovaara and Peterson, 2019).",
"It is also important to acknowledge that there are other types of linguistic variationand other traditions that analyse variation within linguistics including variation across communicative contexts, as is commonly analysed in corpus linguistics (i.e. register variation) (Biber, 1988; Biber and Conrad, 2009).",
"For example, research on register variation has shown that texts that are intended to concisely convey detailed information, like academic writing, tend to have very complex noun phrases, as opposed to more interactive forms of communication, like face-to-face conversations, which tend to rely more on pronouns.",
"Crucially, register variation depends on the communicative goals, affor-dances, and constraints associated with the context in which language is used, as opposed to the social background or identity of the individual language users, although the relationship between social and situational variation is also complicated and not fully understood (Eckert and Rickford, 2002; Finegan and Biber, 2002).",
"Linguistic variation and the expression of social meaning is thus a highly complex phenomenon, and one that sociolinguists are only beginning to fully grasp despite decades of research.",
"Nevertheless, we argue that language variation and social meaning must be considered when building NLP models: not simply to create more robust tools, but to better process the rich meanings of texts in general.",
"Moreover, we believe that methods from NLP could contribute substantially to our understanding of sociolinguistic variation.",
"Distributed representations map a word (or some other linguistic form) to a k -dimensional vector, also called an embedding.",
"Sometimes these representations are independent of linguistic context (Mikolov et al., 2013; Pennington et al., 2014), but increasingly they are contextualised (Devlin et al., 2019; Peters et al., 2018).",
"These representations are shown to capture a range of linguistic phenomena (Baroni et al., 2014; Conneau et al., 2018; Glad-kova et al., 2016).",
"A key question in the development of representations is what aspects of meaning these representations should capture.",
"Indeed, recent reflections have drawn attention to challenges such as polysemy and hyponymy (Emerson, 2020) and construed meaning (Trott et al., 2020).",
"However, even though Bender and Lascarides (2019, p.20) note that [l]inguistic meaning includes so-How are you doing? How are you doin? How are you doinggg? Figure 1: With all three utterances, the author asks how someone is doing, but the spelling variants carry different social meanings. For example, how should a spelling variant like doin be represented? Providing it the same representation as doing would result in a loss of social meaning associated with g-dropping. cial meaning ', social meaning has been overlooked in the development of meaning representations, although a few recent studies have suggested that the embedding space can already exhibit patterning related to sociolinguistic variation, even when learning is based on text alone (e.g., Niu and Carpuat (2017); Nguyen and Grieve (2020); Shoemark et al. (2018)).",
"One clear example comes from spelling, where deviations from spelling conventions (e.g., 4ever , greattt , doin ) can create social meaning (Eisenstein, 2015; Herring and Zelenkauskaite, 2009; Ilbury, 2020; Nini et al., 2020; Sebba, 2007).",
"Androut-sopoulos (2000), for example, discusses how nonconventional spelling in media texts can convey social meanings of radicality or originality.",
"Furthermore, a close textual analysis by Darics (2013) shows that letter repetition can create a relaxed style and signal friendly intent'.",
"An immediate question is therefore how to handle spelling variation when building representations (Figure 1).",
"Current research on representation learning that considers spelling variation is primarily motivated by making NLP systems more robust.",
"For example, Piktus et al. (2019) modify the loss function to encourage the embeddings of misspelled words to be closer to the embeddings of the likely correct spelling.",
"Similarly, motivated by adversarial character perturbations', Liu et al. (2020) aim to push embeddings closer together for original and perturbed words (e.g. due to swapping, substituting, deleting and inserting characters).",
"Although approaches to making models robust to spelling variation are useful for many applications, they necessarily result in the loss of the social meaning encoded by the spelling variants.",
"Many of the operations (such as deleting characters) used to generate adversarial perturbations are also frequent in natural language data.",
"In a recent study focused on a small set of selected types of spelling variation, such as g-dropping and lengthening, Nguyen and Grieve (2020) found that word embeddings encode patterns of spelling variation to some extent.",
"Pushing representations of spelling variants together therefore resembles efforts to normalise texts, carrying the same risk of removing rich social information (Eisenstein, 2013).",
"So far, we have highlighted that linguistic forms (e.g., spellings, words, sentences) with different social meanings should not receive the same representation when social meaning is relevant to the task at hand.",
"Drawing on Section 2, we now highlight key considerations for social meaning representations: Social meaning can be attached to different types of linguistic forms Especially for evaluation, comparing representations for forms with the same referential meaning but potentially different social meanings would be the most controlled setting.",
"However, in many cases this can be challenging.",
"For example, paraphrases rarely have exactly the same referential meaning; to what extent we can relax this constraint remains an open question.",
"Generally, it is easier to keep referential meaning constant when analysing spelling variation compared to other forms of variation.",
"Spelling variation may thus be a good starting point but variation on other levels should also be considered.",
"Linguistic variation can index local identities Research on linguistic variation in NLP has mainly focused on broad demographic categories (e.g., nation, sex, age) (Nguyen et al., 2016).",
"These have often been modeled as discrete variables, although Lynn et al. (2017) show how treating variables as continuous can provide advantages.",
"To represent the rich social meanings of linguistic variation, representations likely must be continuous and high dimensional.",
"Moreover, rather than imposing static social attributes onto people, it may be more desirable to let highly localised social meanings emerge from the data itself (e.g., see Bamman et al. (2014b)).",
"Social meaning is highly contextual The same form can have different social meanings depending on context.",
"Furthermore, variation can also occur at the semantic level (Bamman et al., 2014a; Del Tredici and Fernndez, 2017; Lucy and Bamman, 2021).",
"Contextual representations are therefore more suitable than static representations.",
"Our proposed line of work also raises challenges about what should be considered context for learning representations.",
"For learning social meaning, linguistic context alone is not sufficient.",
"Instead, the social and communicative context in which utterances are produced must be considered as well.",
"Because the expression of social meaning is a fundamental part of language use, it should be taken into consideration throughout model development, but it is especially relevant for computational sociolinguistics (Nguyen et al., 2016) and computational social science (Lazer et al., 2009; Nguyen et al., 2020).",
"Examples where social meaning is especially important are: Conversational systems Research on text generation has long recognised that the same message can be said in different ways, and that style choices depend on many factors, such as the conversation setting and the audience (Hovy, 1990).",
"There is a large body of work on generating text in specific styles (e.g., Edmonds and Hirst (2002); Ficler and Goldberg (2017); Mairesse and Walker (2011)).",
"An example are conversational systems that generate text in consistent speaker styles to model persona (Li et al., 2016).",
"Rich representations of social meaning and linguistic variation could support the development of conversational systems that dynamically adjust their style depending on the context including the language used by interlocutors, constructing unique identities in real time, as individuals do in real world interactions (Eckert, 2012).",
"Abusive content detection Systems to automatically detect abusive content can contain racial biases (Davidson et al., 2019; Sap et al., 2019).",
"The task is challenging, because whether something is abusive (e.g., apparent racial slurs) depends strongly on context, such as previous posts in a conversation as well as properties of the author and audience.",
"Considering social meaning and variation would facilitate the development of systems that are more adaptive towards the local social context (going beyond developing systems for major demographic groups).",
"This would more generally also be relevant to other tasks where interpretation is dependent on social context.",
"Exploration of sociolinguistic questions NLP methods can support (socio)linguistic research, e.g. methods to automatically identify words that have changed meaning (Hamilton et al., 2016) or words that exhibit geographical variation (Nguyen and Eisenstein, 2017).",
"Likewise, if computational methods could discover forms with (likely) similar or different social meanings, these forms could then be investigated further in experimental perception studies or through qualitative analysis.",
"Corpora such as Wikipedia and BookCorpus (Zhu et al., 2015) are often used to learn representations.",
"However, it is likely that corpora with more colloquial language offer richer signals for learning social meaning.",
"Text data may already allow models to pick up patterns associated with social meaning, as Bender and Lascarides (2019, p.20) note about social meaning that it is (partly) derivable from form '.",
"Social and communicative context can provide additional signals, for example by including information about author (Garimella et al., 2017; Li et al., 2018), geography (Bamman et al., 2014a; Cocos and Callison-Burch, 2017; Hovy and Purschke, 2018), social interaction (Li et al., 2016), or social network membership (Yang and Eisenstein, 2017).",
"Furthermore, as argued by Bisk et al. (2020), static datasets have limitations for learning and testing NLP models on their capabilities related to the social nature of language.",
"Instead, they argue for a learning by participation' approach, in which users interact freely with the system (Bisk et al., 2020).",
"A key challenge is that although we know that social meaning is highly contextual, we would need to seek a balance between the richness and complexity of the context considered and computational, privacy and ethical constraints.",
"Another key challenge is that usually different aspects of meaning are encoded in one representation.",
"Future work could potentially build on work on disentangling representations, such as work by Akama et al. (2018), Romanov et al. (2019) and recent work motivated by Two-Factor Semantics (Webson et al., 2020).",
"Although there are many datasets to evaluate NLP models on various linguistic phenomena (Warstadt et al., 2019, 2020; Wang et al., 2018), such datasets are missing for social meaning.",
"Collecting evaluation data is challenging.",
"First, relatively little is known about the link between social meaning and textual variation.",
"Sociolinguistics has traditionally focused on the social meaning of phonetic features and to a lesser extent on grammatical and especially lexical features (Chambers, 2003).",
"Social meaning making through spelling variation has received even less attention (exceptions include Leigh (2018)).",
"Hence, research approaches would need to be (further) developed within sociolinguistics to allow for reliable measurement of social meanings of under-researched types of language variation such a spelling variation.",
"One concrete avenue would be to extend and adapt traditional methods like the speaker evaluation paradigm, in which respondents indirectly evaluate accent variation, to be suitable for variation in written communication.",
"Data generated by building on such approaches could then in turn serve as the basis for developing evaluation datasets for NLP models.",
"Second, collecting data is challenging due to the highly contextual nature of social meaning (Sec-tion 2).",
"The same form can take on different social meanings and how a particular form is perceived depends on a variety of factors, including social and situational attributes of both the audience and the speaker or writer.",
"However, carefully collected experimental data should at least be able to lay bear the social meanings that language users collectively associate with a certain linguistic form (i.e. its indexical field).",
"This should give an overview of the social meaning potential language users have at their disposal to draw on in a specific situation.",
"Despite the large body of work on meaning representations in NLP, social meaning has been overlooked in the development of representations.",
"Fully learning and representing the rich social meanings of linguistic variation will likely not be realised for years to come.",
"Yet even small steps in this direction will already benefit a wide array of NLP applications and support new directions in social science research.",
"With this paper, we hope to encourage researchers to work on this challenging but important aspect of linguistic meaning.",
"We will now discuss a few ethical considerations that are relevant to our proposed line of research.",
"In this paper, we have discussed how language variation should be a key consideration when building and developing meaning representations.",
"Labels such as standard', bad' and noisy' language used to describe language variation and practices can reproduce language ideologies (Blodgett et al., 2020; Eisenstein, 2013).",
"As an example, non-standard spellings are sometimes labeled as misspellings', but in many cases they are deployed by users to communicate social meaning.",
"A different term, such as respellings', may therefore be more appropriate (Tagg, 2009).",
"Furthermore, even though there has been increasing attention to performance disparities in NLP systems and how to mitigate them, Blodgett et al. (2020) point out that they should be placed in the wider context of reproducing and reinforcing deep-rooted injustices.",
"See Blodgett et al. (2020) for a discussion on different conceptualizations of bias' in NLP and the role of language variation.",
"Our paper also complements the discussion by Flek (2020).",
"Recognising that language variation is inherent to language, Flek (2020) argues for per-sonalised NLP systems to improve language understanding.",
"The development of such systems, however, also introduces risks, such as stereotypical profiling and privacy concerns.",
"See Flek (2020) for a discussion on ethical considerations for this line of work.",
"In this paper, we have argued for considering language variation and social meaning when building representations.",
"However, such research could potentially also support the development of applications that can cause harm.",
"Long-standing research in sociolinguistics has shown rich connections between language variation and social attributes, including sensitive attributes such as gender and ethnicity (e.g. Eckert (2012)).",
"One may take that as a motivation to build automatic profiling systems.",
"However, as discussed in Section 2, sociolinguists have emphasised the highly contextual nature of social meaning (the same linguistic feature can have different social meanings) and the agency of speakers (language is not just a reflection of someone's identity, but can be actively used as a resource for identity construction).",
"Profiling systems tend to impose categories on people based on broad stereotypical associations.",
"They fail to recognise the rich local identities and agency of individuals.",
"Besides privacy concerns, misclassifications by such systems can cause severe harms.",
"Another ethical consideration is the training data.",
"Data with colloquial language will likely offer richer signals for training, which could be augmented with information about the social and communicative context.",
"Online sources such as Twitter and Reddit may be attractive given their size and availability of fine-grained social metadata.",
"However, the use of large-scale online datasets (even though it is public') raises privacy and ethical concerns.",
"We recommend following guidelines and discussions surrounding the use of online data in social media researchnot only regarding collecting and storing data, but also how such data is shared, and how analyses based on such data are reported and disseminated (Fiesler and Proferes, 2018; Zook et al., 2017; Fiesler et al., 2020).",
"One key step is documenting the datasets (Bender and Friedman, 2018; Gebru et al., 2018).",
"In addition, social biases in these datasets can propagate into the learned representations (Bolukbasi et al., 2016; Caliskan et al., 2017), which may impact downstream applications that make use of these representations.",
"This work is part of the research programme Veni with project number VI.Veni.192.130, which is (partly) financed by the Dutch Research Council (NWO).",
"We also like to thank Kees van Deemter for useful feedback."
]
| [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"other"
]
|
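Approaches like the one attributed above to Piktus et al. (2019) add a term to the training objective that pulls a spelling variant's embedding toward the embedding of its likely standard form. The sketch below is a generic illustration of such a proximity term, not the authors' actual objective; the function name, the paired index tensors, and the weight are all hypothetical.

```python
import torch
import torch.nn.functional as F

def spelling_proximity_loss(emb, variant_ids, standard_ids, weight=0.1):
    """Generic auxiliary term pulling spelling-variant embeddings toward
    their standard forms: mean cosine distance over the given pairs."""
    v = emb(variant_ids)    # embeddings of variants, e.g. "doin"
    s = emb(standard_ids)   # embeddings of standard forms, e.g. "doing"
    return weight * (1.0 - F.cosine_similarity(v, s, dim=-1)).mean()

# Tiny demo with a hypothetical 10-word vocabulary; indices 3 and 4
# stand in for a pair like ("doin", "doing").
emb = torch.nn.Embedding(10, 8)
aux = spelling_proximity_loss(emb, torch.tensor([3]), torch.tensor([4]))
# total_loss = lm_loss + aux   # added to the usual training objective
print(aux.item())
```

As the row above argues, minimising a term of this kind amounts to normalisation in embedding space: it buys robustness but discards the social meaning that the variant spelling carries.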