Columns: sentences (sequence), labels (sequence)
[ "Pre-trained language models (PLMs) have shown great potentials in natural language processing (NLP) including rhetorical structure theory (RST) discourse parsing.", "Current PLMs are obtained by sentence-level pre-training, which is different from the basic processing unit, i.e. element discourse unit (EDU).", "To this end, we propose a second-stage EDU-level pretraining approach in this work, which presents two novel tasks to learn effective EDU representations continually based on well pre-trained language models.", "Concretely, the two tasks are (1) next EDU prediction (NEP) and (2) discourse marker prediction (DMP).", "We take a state-of-the-art transition-based neural parser as baseline, and adopt it with a light bi-gram EDU modification to effectively explore the EDU-level pre-trained EDU representation.", "Experimental results on a benckmark dataset show that our method is highly effective, leading a 2.1-point improvement in F1-score.", "All codes and pre-trained models will be released publicly to facilitate future studies.", "1 1 Introduction Discourse analysis based on rhetorical structure theory (RST) has received increasing interest in the natural language processing (NLP) community (Yu et al., 2018; Liu et al., 2019a; Kobayashi et al., 2020; Zhang et al., 2020; Guz and Carenini, 2020; Koto et al., 2021; Zhang et al., 2021), which organizes discourse output through a well-defined tree structure.", "Figure 1 shows an example of an RST constituent tree, where the leaf nodes are element discourse units (EDUs).", "Given an EDU sequence, RST discourse parsing aims to automatically construct a hierarchical constituent tree 2 .", "Corresponding author.", "1 http://github.com/yunan4nlp/ E-NNRSTParser 2 In this study, we focus on the tree construction task, assuming the gold standard EDU as inputs.", "e 1 [CNW Corp. said] e 2 [the final step in the acquisition of the company hasbeencompletedwiththemergerofCNWwithasubsidiaryofChicago&NorthWesternHoldingsCorp.] e 3 [As reported,] e 4 [CNW agreed to be acquired by a group of investors] e 5 [led by Blackstone Capital Partners Limited Partnership] e 6 [for $50 a share, or about $950 million.] 
"The shift-reduce transition-based model has been widely adopted in RST discourse parsing (Yu et al., 2018; Mabona et al., 2019), building the constituent tree incrementally over multiple steps via a sequence of actions.", "These models take EDU-level features as inputs to score transition actions at each step.", "Recently, neural network models have achieved state-of-the-art performance for this task by using sophisticatedly designed neural modules (Yu et al., 2018; Liu et al., 2019a; Mabona et al., 2019; Zhang et al., 2020; Kobayashi et al., 2020).", "In particular, contextualized pre-trained language models (PLMs) such as XLNet (Yang et al., 2020) are able to achieve impressive performance, resulting in F1-score gains of more than 3 points according to previous studies (Koto et al., 2021; Zhang et al., 2021; Nguyen et al., 2021) and our preliminary findings.", "Although great successes have been achieved with contextualized PLMs (Peters et al., 2018; Devlin et al., 2019; Liu et al., 2019b), there is an apparent mismatch in the basic processing units between EDU-level RST parsing and sentence-level contextualized language modeling, which might prevent fully exploiting the pre-training paradigm.", "Several previous studies have investigated addressing the mismatch between target tasks and standard language model pre-training, e.g., SpanBERT (Joshi et al., 2020) for extractive question answering, and BART (Lewis et al., 2020) and T5 (Raffel et al., 2020) for sequence-to-sequence (seq2seq) generation; all of these achieve improved performance on their target tasks.", "In this study, we investigate a second-stage EDU-level pre-training based on the above observation.", "Concretely, we conduct pre-training from a PLM with two EDU-level tasks in the second stage.", "The first task is next EDU prediction (NEP), which is inspired by next sentence prediction (NSP) in BERT (Devlin et al., 2019) learning, substituting sentences with EDUs.", "The second task is discourse marker prediction (DMP), which is inspired by masked language modeling (MLM) in BERT (Devlin et al., 2019) learning, substituting masked words with masked discourse markers.", "To fully utilize contextualized pre-trained representations, we adapt a transition-based neural RST parser to use BiEDUs as the basic encoding unit instead of the standard single-EDU manner.", "We conduct experiments on the RST Discourse Treebank (RST-DT) (Carlson et al., 2001) to evaluate the proposed model.", "First, we derive BiEDU representations directly from PLMs, and thus build a very strong transition-based neural RST parser.", "Then, we examine the proposed second-stage EDU-level pre-training approach.", "Experimental results show that the two second-stage pre-training tasks improve RST parsing greatly, and their combination leads to further improvements.", "Our final model achieves the top performance among all the models reported in the literature.", "In summary, our contributions are as follows: We present a second-stage EDU-level pre-training approach to address the inconsistency between EDU-level RST parsing and sentence-level contextualized language modeling, aiming for a better pre-training paradigm for RST parsing.", "We suggest BiEDU-based representations for neural RST parsing to exploit well pre-trained language models more effectively.",
"We advance the state-of-the-art RST parsing performance.", "In this section, we introduce the proposed second-stage EDU-level pre-training approach.", "It has two EDU-level pre-training tasks, termed NEP and DMP, respectively.", "NEP requires EDU pairs as inputs, and predicts whether each EDU pair is adjacent.", "DMP requires EDU sequences as inputs, and predicts the masked discourse marker between two adjacent EDUs.", "NEP is inspired by NSP in BERT (Devlin et al., 2019) learning.", "NSP is a binary sentence-level classification task, which determines whether two sentences are continuous.", "It integrates rich inter-sentence context features into BERT and thus has a positive effect on several downstream classification tasks, such as PDTB-style discourse relation classification (Shi and Demberg, 2019) and Stanford Natural Language Inference (SNLI) (Bowman et al., 2015).", "RST parsing involves classification between two subtrees (a single EDU can also be a subtree), which is highly similar to the above downstream tasks.", "Therefore, we believe that a similar second-stage pre-training task is effective for RST parsing.", "Considering that the basic inputs of RST parsing are EDUs, we substitute sentences with EDUs.", "We reimplement a SOTA EDU segmenter [3] (Muller et al., 2019) and use it to segment large-scale unlabeled texts.", "[3] We also use the RST-DT corpus to train an EDU segmenter, which achieves 96.0% F1-score.", "Based on the EDU segmentation data, we apply NEP to the PLM.", "Figure 2 shows an overview of NEP.", "We sample continuous EDU pairs as positive instances, and non-continuous EDU pairs as negative instances.", "It should be noted that the positive and negative instances are sampled on the same scale.", "When these instances are ready, we use Equation 4 to pack each EDU pair and calculate its corresponding EDU representation.", "Then we use a linear layer to calculate the score: y_e = W_e x_{e_i} + b_e (1), where W_e and b_e are the model parameters of the linear layer, and y_e indicates whether the two EDUs are continuous.", "We adopt a cross-entropy function as the training objective of NEP.", "We further adopt DMP to pre-train PLMs in the second stage based on the following consideration.", "Pitler et al. (2009) point out that if discourse markers (Schiffrin, 1987) are present in PDTB-style discourse parsing, the classification of discourse relation types becomes easier.", "RST parsing aims to classify the relationship between two discourse fragments.", "By analogy, discourse markers can also make RST parsing easier.", "The framework of DMP is shown in Figure 3.", "The input of DMP is an EDU sequence.", "We only mask the first word of each EDU that starts with a discourse marker. [4]", "Then we use Equations 4 and 5 to obtain EDU representations of the masked EDU sequence.", "Finally, we feed them into a linear layer to calculate the discourse marker score: y_m = W_m h_{e_i} + b_m (2), where W_m and b_m are the model parameters of the linear layer, and y_m is the score distribution over the discourse markers.", "We also use a cross-entropy function as the training objective of DMP.",
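To make the two pre-training tasks concrete, below is a minimal sketch of how NEP and DMP instances could be built from EDU-segmented text. The marker inventory, function names, and the sampling loop are illustrative assumptions; the text above only specifies that adjacent pairs are positives, that non-adjacent pairs are negatives sampled on the same scale, and that the first word of marker-initial EDUs is masked.

```python
import random

# Assumed (illustrative) marker inventory; the actual marker set is not given here.
DISCOURSE_MARKERS = {"but", "because", "although", "however", "so", "while", "as"}

def build_nep_instances(edus, rng=random):
    """EDU pairs labeled 1 if adjacent (positive), 0 otherwise, sampled on the same scale."""
    positives = [(edus[i], edus[i + 1], 1) for i in range(len(edus) - 1)]
    if len(edus) < 3:
        return positives  # no non-adjacent pair exists
    negatives = []
    while len(negatives) < len(positives):
        i, j = rng.sample(range(len(edus)), 2)
        if abs(i - j) > 1:  # non-adjacent pair -> negative instance
            negatives.append((edus[i], edus[j], 0))
    return positives + negatives

def build_dmp_instances(edus, mask_token="<mask>"):
    """Mask the first word of each EDU that starts with a discourse marker."""
    instances = []
    for k, edu in enumerate(edus):
        words = edu.split()
        if words and words[0].lower() in DISCOURSE_MARKERS:
            masked_seq = edus[:k] + [" ".join([mask_token] + words[1:])] + edus[k + 1:]
            instances.append((masked_seq, k, words[0].lower()))  # (sequence, position, gold marker)
    return instances

edus = ["CNW agreed to be acquired by a group of investors",
        "led by Blackstone Capital Partners Limited Partnership",
        "because the deal was attractive"]
print(build_nep_instances(edus))
print(build_dmp_instances(edus))
```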
"We adopt a transition-based neural RST parser to evaluate the second-stage EDU-level pre-training approach.", "The model has two key components, termed a transition system and a neural network model, respectively.", "The transition system, mainly borrowed from Yu et al. (2018), formalizes RST parsing into action sequence predictions, and the neural model yields EDU representations and outputs action sequences.", "As shown in Figure 4, our transition system consists of states and actions.", "A state has two parts, namely a stack that stores partially parsed subtrees and a queue that stores unparsed EDUs.", "The initial state is an empty state, and the final state represents a full RST discourse tree.", "An action controls the transition between states.", "There are three kinds of actions: A shift action pops the first EDU of the queue and pushes it onto the stack.", "It can only be executed when the queue is not empty.", "A reduce action combines the top two subtrees of the stack into a new subtree with a nuclearity label and a relation label.", "It can only be executed if there are at least two subtrees in the stack.", "A pop-root action pops a full discourse tree from the stack, completing the parsing process.", "It can only be executed when the queue is empty and only one element is in the stack.", "In summary, the transition system converts tree construction into a sequence of action predictions.", "By performing the actions, an RST discourse tree is constructed incrementally.", "Concretely, given the example in Figure 1, we perform the actions shift, shift, reduce-attr-SN, shift, shift, shift, reduce-elab-NS, shift, reduce-same-NN, reduce-circ-SN, reduce-elab-SN, pop-root to construct a full RST discourse tree step by step.", "Figure 5: Framework of our neural network model. The input is an EDU sequence. For convenience, we draw two adjacent EDUs e_{i-1} and e_i.",
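The transition system can be replayed mechanically. A minimal sketch, assuming a hypothetical Node helper class; in the real parser the actions are predicted by the neural model rather than read from a list:

```python
class Node:
    def __init__(self, label=None, children=None, edu=None):
        self.label, self.children, self.edu = label, children or [], edu
    def __repr__(self):
        return self.edu if self.edu else f"({self.label} {' '.join(map(repr, self.children))})"

def parse(edus, actions):
    stack, queue = [], list(edus)
    for act in actions:
        if act == "shift":                  # pop first queued EDU, push onto the stack
            stack.append(Node(edu=queue.pop(0)))
        elif act.startswith("reduce-"):     # merge top two subtrees, e.g. "reduce-attr-SN"
            right, left = stack.pop(), stack.pop()
            stack.append(Node(label=act[len("reduce-"):], children=[left, right]))
        elif act == "pop-root":             # queue empty, one subtree left: done
            assert not queue and len(stack) == 1
            return stack.pop()

actions = ["shift", "shift", "reduce-attr-SN", "shift", "shift", "shift",
           "reduce-elab-NS", "shift", "reduce-same-NN", "reduce-circ-SN",
           "reduce-elab-SN", "pop-root"]
tree = parse([f"e{i}" for i in range(1, 7)], actions)
print(tree)  # (elab-SN (attr-SN e1 e2) (circ-SN e3 (same-NN (elab-NS e4 e5) e6)))
```

Replaying the Figure 1 action sequence this way yields exactly the tree sketched in the figure caption.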
"The Vanilla Representation: We use a PLM to encode each text, obtaining single-EDU representations.", "Concretely, given a text that has been segmented into EDUs e_1 ... e_n, a special symbol [CLS] is placed at the beginning of each EDU.", "Then each EDU is tokenized by byte pair encoding (BPE) (Sennrich et al., 2016), and encoded by the PLM to obtain contextualized word piece embeddings.", "Finally, for each EDU, we choose the representation of [CLS] to represent it: e_i = [CLS], t_{i,1} ... t_{i,n}; x^i_{CLS}, x_{i,1} ... x_{i,n} = PLM(e_i); x_{e_i} = x^i_{CLS} (3), where [CLS], t_1 ... t_n are word pieces, x_{CLS}, x_1 ... x_n are word piece embeddings, and x_{e_i} is the single-EDU representation.", "Extension with BiEDU: The vanilla EDU-based representation treats an EDU as the first segment type of the PLM input, leaving the second segment type unused.", "Here, we make an extension by using BiEDU representations.", "Each input unit packs the current EDU together with the previous EDU, forming a BiEDU.", "Then [CLS] is placed before the first EDU and [SEP] before the second EDU.", "We also use BPE to tokenize it and use a PLM for encoding.", "We still choose the representation of [CLS] to represent each EDU, as follows: (e_{i-1}, e_i) = [CLS] t_{i-1,1} ... t_{i-1,m}, [SEP] t_{i,1} ... t_{i,n}; x^{i-1}_{CLS} x_{i-1,1} ... x_{i-1,m}, x^i_{SEP} x_{i,1} ... x_{i,n} = PLM(e_{i-1}, e_i); x_{e_i} = x^{i-1}_{CLS} (4), where [CLS] t_{i-1,1} ... [SEP] t_{i,n} are tokens, x^{i-1}_{CLS} ... x_{i,n} are word piece embeddings, and x_{e_i} is the BiEDU representation.", "BiLSTM Encoding: Furthermore, we follow Koto et al. (2021), using a BiLSTM to obtain high-level EDU representations: h_{e_1} ... h_{e_u} = BiLSTM(x_{e_1} ... x_{e_u}) (5), where h_{e_1} ... h_{e_u} are the final EDU representations.", "In addition, we follow Zhang et al. (2021) and Koto et al. (2021), using paragraph features to further enhance the high-level representations.", "Decoder: The decoder predicts the next-step action based on a given state.", "We follow Yu et al. (2018), selecting the three subtrees (s_1, s_2, s_3) at the top of the stack and the first EDU (q_1) in the queue to represent the current state.", "We calculate a subtree representation as the average of its EDU representations.", "We concatenate the three subtree representations (h_{s_1}, h_{s_2}, h_{s_3}) and the EDU representation (h_{e_{q_1}}), and feed them into a linear layer to calculate the score distribution over actions: y_i = W_i (h_{s_1} ⊕ h_{s_2} ⊕ h_{s_3} ⊕ h_{e_{q_1}}) + b (6), where W_i and b are model parameters and ⊕ is the concatenation operation.", "During inference, at each step, we take the highest-scoring action as the output.", "When the actions are ready, we perform them to construct the corresponding RST discourse tree step by step according to the transition system introduced in Section 3.1.", "Training: We adopt a cross-entropy loss plus an l2 regularization term as the objective function to train our RST parser.", "Given a state, we obtain action scores according to the neural network model and compute the probability of the gold action by softmax.", "Finally, we feed it into the objective function for loss calculation as follows: p_i = softmax(y_i); L(θ) = -log(p_i[a^g_i]) + λ‖θ‖²₂ (7), where a^g_i is the gold-standard action of the i-th step, θ is the set of model parameters of our RST parser, and λ is the l2 regularization factor.", "We use the Adam algorithm (Kingma and Ba, 2015) to optimize the model parameters of our neural network model.",
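A minimal PyTorch sketch of the decoder (Equation 6) and the training objective (Equation 7). The hidden size follows the hyper-parameters reported below; the action-set size, zero-padding of missing stack slots, and all names are our own assumptions.

```python
import torch
import torch.nn as nn

hidden, num_actions = 200, 42            # 42 = assumed size of the action set

W = nn.Linear(4 * hidden, num_actions)   # y_i = W (h_s1 ⊕ h_s2 ⊕ h_s3 ⊕ h_eq1) + b

def subtree_repr(edu_vectors):
    """Average the EDU representations covered by a subtree (zeros if the slot is empty)."""
    return edu_vectors.mean(dim=0) if edu_vectors is not None else torch.zeros(hidden)

def action_loss(state, gold_action, l2=1e-8):
    s1, s2, s3, q1 = (subtree_repr(x) for x in state)  # top-3 subtrees + first queued EDU
    scores = W(torch.cat([s1, s2, s3, q1]))            # Equation 6
    nll = nn.functional.cross_entropy(scores.unsqueeze(0),
                                      torch.tensor([gold_action]))  # -log p_i[a_i^g]
    reg = l2 * sum((p ** 2).sum() for p in W.parameters())          # λ‖θ‖² term
    return nll + reg

state = [torch.randn(3, hidden), torch.randn(2, hidden), None, torch.randn(1, hidden)]
print(action_loss(state, gold_action=5).item())
```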
"Datasets: To show that the proposed model is comparable with previous state-of-the-art systems for RST parsing, we conduct experiments on RST-DT [5] (Carlson et al., 2001).", "It is a standard benchmark dataset for this task, collected from Wall Street Journal news.", "It has been divided into training and test sets, which contain 347 and 38 discourses, respectively.", "We randomly select 35 discourses from the training set to develop our model.", "The original RST-DT contains 78 fine-grained discourse relations.", "Most previous studies simplify these fine-grained discourse relations into 18 coarse-grained relations.", "To facilitate comparison with previous studies, we also use the 18 simplified coarse-grained relations.", "To show the domain generalization capability of our proposed RST parser on unseen-domain articles, we test it on the Georgetown University Multilayer (GUM) corpus [6].", "It contains small-scale articles annotated based on RST in several domains, such as news, fiction, and conversations.", "For more details, one can refer to the corresponding paper (Zeldes, 2017).", "The training corpus for second-stage EDU-level pre-training contains large-scale unlabeled articles collected from an English Wikipedia corpus [7].", "Although using an unlabeled news corpus might lead to greater improvements, we find that using a Wikipedia corpus is sufficient to provide new SOTA results.", "Evaluation: We use the evaluation recommended by Morey et al. (2017), which attaches nuclearity and relation labels to non-leaf nodes to eliminate redundant evaluations.", "The evaluation includes four metrics, termed Span, Nuclearity, Relation, and Full, respectively.", "Span evaluates the skeleton of the discourse tree.", "Nuclearity evaluates the discourse tree with nuclearity labels.", "Relation evaluates the discourse tree with relation labels.", "Full evaluates the complete discourse tree with both nuclearity and relation labels.", "Hyper-parameters: There are several hyper-parameters in our proposed second-stage EDU-level pre-training approach and RST parser.", "In NEP, the learning rate of the PLM is set to 5e-6, and the learning rate of the other model parameters is set to 1e-3.", "The batch size is set to 50.", "The maximum norm of gradient clipping is set to 1.", "The maximum training epoch number is set to 10.", "In DMP, the learning rate of the PLM is set to 1e-6, and the learning rate of the other model parameters is set to 1e-4.", "The batch size is set to 1.", "The output hidden size of the LSTM is set to 200.", "The settings of the maximum training iteration number and the gradient clipping norm are the same as for NEP.", "The hyper-parameters of our RST parser are tuned based on preliminary results on the development set.", "The hidden size of all neural layers is set to 200.", "The dropout is set to 0.25.", "The learning rate of the PLM is set to 2e-5, and the learning rate of the other model parameters is set to 1e-3.", "The maximum norm of gradient clipping is set to 1, and the maximum training iteration number is set to 20.", "We use the transformers library (Wolf et al., 2020) to implement the PLM and use PyTorch (Paszke et al., 2019) to implement the other neural network modules.",
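For reference, the reported hyper-parameters collected into one illustrative config sketch; the values are taken from the text, while the key names are our own.

```python
# Hyper-parameters as stated above; structure/names are assumptions for readability.
CONFIG = {
    "nep":    {"lr_plm": 5e-6, "lr_other": 1e-3, "batch_size": 50,
               "grad_clip": 1.0, "max_epochs": 10},
    "dmp":    {"lr_plm": 1e-6, "lr_other": 1e-4, "batch_size": 1,
               "lstm_hidden": 200, "grad_clip": 1.0, "max_epochs": 10},
    "parser": {"hidden": 200, "dropout": 0.25, "lr_plm": 2e-5,
               "lr_other": 1e-3, "grad_clip": 1.0, "max_epochs": 20},
}
```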
"We conduct several development experiments to show the important factors that influence the performance of our RST parser.", "Different Pre-trained Language Models: First, we test our proposed RST parser with several publicly available PLMs, namely BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019b), XLNet (Yang et al., 2020), SpanBERT (Joshi et al., 2020), and DeBERTa (He et al., 2020).", "The maximum input length of BERT, RoBERTa, SpanBERT, and DeBERTa is 512 tokens.", "Therefore, we extend them with BiEDU to better exploit these PLMs.", "Since XLNet has no input length limit, we do not need to apply the BiEDU extension to our XLNet RST parser.", "Table 1 shows the performance with different PLMs.", "We find that our BiEDU extension is able to further improve the performance of these PLM-based RST parsers.", "The SpanBERT RST parser achieves the worst performance among these RST parsers.", "This is probably because the basic processing units of SpanBERT learning do not match RST parsing.", "The XLNet RST parser achieves the best performance among these RST parsers.", "Therefore, the following experiments are conducted with the XLNet-based parser.", "Table 1: Performance of our RST parser with different PLMs (Full on dev / test): BERT 49.0 / 45.2; BERT + BiEDU 51.4 / 48.9; RoBERTa 50.8 / 48.0; RoBERTa + BiEDU 51.7 / 49.5; SpanBERT 41.0 / 38.8; SpanBERT + BiEDU 42.5 / 39.2; DeBERTa 48.6 / 47.0; DeBERTa + BiEDU 49.8 / 48.1; XLNet 52.2 / 51.4.", "Unlabeled Article Size: We study how the unlabeled article size in second-stage EDU-level pre-training influences the performance of our RST parser.", "First, we apply NEP to PLMs.", "As shown in Figure 6, the performance of our RST parser shows a similar trend to DMP-based pre-training (below) when increasing the size of the unlabeled articles.", "When the size of the unlabeled articles reaches 30k, the Full metric reaches its peak.", "Therefore, we use 30k unlabeled articles in NEP.", "Then, we adopt DMP to further pre-train the PLM part of our RST parser in the second stage.", "As can be seen from Figure 6, the performance of our RST parser first increases and then decreases as the size of the unlabeled articles gradually increases from 0 to 240k.", "When the size of the unlabeled articles reaches 120k, the Full metric reaches its peak.", "Therefore, we use 120k unlabeled articles in DMP.", "The above experimental results show that we do not need an ultra large-scale unlabeled corpus for our proposed second-stage EDU-level pre-training approach.", "As shown in Table 2, we report the main results on the RST-DT test set.", "Our proposed RST parser achieves 73.4 on the Span metric, 63.3 on the Nuclearity metric, 52.4 on the Relation metric, and 51.4 on the Full metric, exceeding most of the previous state-of-the-art systems.", "Table 2: Final results of RST parsing on the test set (S / N / R / F): XLNet (transition-based) 73.4 / 63.3 / 52.4 / 51.4; + NEP + DMP 76.4 / 66.1 / 54.5 / 53.5; XLNet (top-down) 73.3 / 62.7 / 51.9 / 49.7; + NEP + DMP 72.9 / 62.7 / 52.5 / 50.5; Feng and Hirst (2014) 68.6 / 55.9 / 45.8 / 44.6; Ji and Eisenstein (2014) 64.1 / 54.2 / 46.8 / 46.3; Joty et al. (2015) 65.1 / 55.5 / 45.1 / 44.3; Surdeanu et al. (2015) 65.3 / 54.2 / 45.1 / 44.2; Li et al. (2016) 64.5 / 54.0 / 38.1 / 36.6; Hayashi et al. (2016) 65.1 / 54.6 / 44.7 / 44.1; Braud et al. (2016) 59.5 / 47.2 / 34.7 / 34.3; Braud et al. (2017) 62.7 / 54.5 / 45.5 / 45.1; Yu et al. (2018) 71.4 / 60.3 / 49.2 / 48.1; Mabona et al. (2019) 67.1 / 57.4 / 45.5 / 45.0; Zhang et al. (2020) 67.2 / 55.5 / 45.3 / 44.3; Nguyen et al. (2021) 74.3 / 64.3 / 51.6 / 50.2; Koto et al. (2021) 73.1 / 62.3 / 51.5 / 50.3; Zhang et al. (2021) 76.3 / 65.5 / 55.6 / 53.8; Human 78.7 / 66.8 / 57.1 / 55.0.",
"When we apply second-stage EDU-level pre-training to XLNet, it achieves 76.4 on the Span metric and 66.1 on the Nuclearity metric, resulting in a Full metric improvement of 53.5 - 51.4 = 2.1.", "The Span, Nuclearity, and Relation metrics show similar tendencies as well.", "In addition, we implement a top-down RST parser, and also enhance it with our proposed second-stage EDU-level pre-training approach.", "We find that the proposed approach is able to improve the performance of the top-down RST parser as well.", "We compare our proposed RST parser with previous state-of-the-art systems.", "Feng and Hirst (2014) propose a linear-chain conditional random field (CRF) parser.", "Ji and Eisenstein (2014) adopt a statistical transition-based parser with representation learning.", "Surdeanu et al. (2015) employ a perceptron and a logistic regression to parse a text.", "Li et al. (2016) propose a hierarchical neural parser with attention.", "Joty et al. (2015) propose an intra-sentential and multi-sentential parser.", "Hayashi et al. (2016) reimplement the HILDA parser (Heilman and Sagae, 2015), using a linear SVM classifier to parse a text from the bottom up.", "Braud et al. (2016) present a BiLSTM RST parser with multi-task learning.", "Braud et al. (2017) propose a neural greedy parser with cross-lingual resources.", "Yu et al. (2018) propose a transition-based neural parser, and further enhance it with hidden-layer vectors extracted from a neural syntax parser.", "Mabona et al. (2019) propose a generative RST parser with beam search.", "Zhang et al. (2020) propose a top-down neural parser.", "Koto et al. (2021) propose a transformer top-down parser with dynamic oracle.", "Nguyen et al. (2021) propose a seq2seq neural parser based on a pointer network.", "Koto et al. (2021) propose a sequence labelling parser with dynamic oracle.", "Zhang et al. (2021) propose a neural top-down parser with adversarial learning.", "As shown in Table 2, our transition-based XLNet RST parser achieves the best performance among the systems studied on the Span and Nuclearity metrics.", "We find that the Relation and Full metrics of our RST parser are lower than those of Zhang et al. (2021).", "This is probably because our proposed second-stage EDU-level pre-training approach only requires predicted EDU segmentation, lacking the information of predicted RST discourse trees.", "Ablation Studies: Here we conduct several ablation experiments to examine the effectiveness of our proposed second-stage EDU-level pre-training approach and the paragraph features.", "Table 3: Ablation study on the test set (S / N / R / F): Our proposed model 76.4 / 66.1 / 54.4 / 53.5; - DMP 74.8 / 64.3 / 53.1 / 52.1; - NEP 75.3 / 65.0 / 53.8 / 52.6; - NEP - DMP 73.4 / 63.3 / 52.4 / 51.4; - NEP - DMP - Para 73.1 / 62.6 / 51.9 / 50.5.", "As shown in Table 3, we find that NEP and DMP are effective for RST discourse parsing.", "NEP improves our XLNet RST parser by an increase of 52.1 - 51.4 = 0.7 on the Full metric.", "The tendency of DMP is similar to NEP, obtaining an increase of 52.6 - 51.4 = 0.8 on the Full metric.", "Our proposed model can be further improved when the two EDU-level tasks are applied to XLNet together, resulting in a Full metric improvement of 53.5 - 51.4 = 2.1.", "In addition, the paragraph features are also effective for RST discourse parsing, resulting in overall improvements.", "Effect of EDU Segmentation Performance: As mentioned earlier, the second-stage EDU-level pre-training approach requires EDU segmentation produced by a supervised EDU segmenter.", "Predicted EDU segmentation can contain errors, which may propagate into RST parsing.", "Here we examine how the performance of the supervised EDU segmenter influences the performance of RST parsing.", "Figure 7: Effect of EDU segmenter performance on our proposed RST parser (segmenters trained on 0 / 10 / 100 / 300 discourses, with segmentation F1 of 0.0% / 57.0% / 80.5% / 96.0%).", "The full EDU segmenter is trained on 300 discourses.", "We retrain two weaker EDU segmenters on 10 and 100 discourses.", "Figure 7 shows the RST parsing performance with the different EDU segmenters.", "We find that the EDU segmentation performance influences the RST parsing quality, indicating the importance of correct EDU segmentation.", "Analysis by Number of EDUs in Subtrees: As mentioned earlier, NEP predicts whether each EDU pair is continuous, and it is able to integrate rich inter-EDU context features into PLMs.", "Therefore, it is expected that the introduction of NEP may bring larger improvements for spans containing more EDUs.", "As such, here we investigate the benefit of using NEP.", "Table 4 shows the comparison results.", "We find that performance improves significantly when spans contain more EDUs.", "Effect of Different Sampling Strategies: Furthermore, we examine how different EDU pair sampling strategies influence RST discourse parsing.", "The training set of NEP is sampled from a large-scale unlabeled corpus.", "We sample continuous EDU pairs as positive instances and non-continuous EDU pairs as negative instances.", "The difficulty of NEP changes depending on how the non-continuous EDU pairs are sampled.",
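A minimal sketch of the four negative-sampling strategies compared next (Table 5). The article -> sentence -> EDU nesting and the re-draw loop are assumptions for illustration; edge cases such as accidentally adjacent EDUs across a sentence boundary are not filtered here.

```python
import random

def sample_negative(articles, strategy, rng=random):
    """articles: list of articles; each article is a list of sentences; each sentence a list of EDUs."""
    if strategy == "same_sentence":
        sent = rng.choice([s for art in articles for s in art if len(s) >= 3])
        i, j = sorted(rng.sample(range(len(sent)), 2))
        while j - i == 1:                       # re-draw until the two EDUs are not adjacent
            i, j = sorted(rng.sample(range(len(sent)), 2))
        return sent[i], sent[j]
    if strategy == "adjacent_sentences":
        art = rng.choice([a for a in articles if len(a) >= 2])
        k = rng.randrange(len(art) - 1)
        return rng.choice(art[k]), rng.choice(art[k + 1])
    if strategy == "same_article":
        art = rng.choice([a for a in articles if len(a) >= 2])
        s1, s2 = rng.sample(range(len(art)), 2)
        return rng.choice(art[s1]), rng.choice(art[s2])
    if strategy == "two_articles":
        a1, a2 = rng.sample(articles, 2)
        return rng.choice(rng.choice(a1)), rng.choice(rng.choice(a2))
    raise ValueError(strategy)
```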
"Here we compare four strategies for sampling the non-continuous EDU pairs: from a sentence, from two adjacent sentences, from two sentences in an article, and from two different articles, respectively.", "Table 5: Influence of different sampling strategies on our XLNet RST parser (S / N / R / F): From a sentence 74.4 / 63.9 / 53.4 / 52.3; From adjacent sentences 74.9 / 64.6 / 53.4 / 52.2; From an article 74.8 / 64.3 / 53.1 / 52.1; From two articles 73.9 / 64.2 / 52.6 / 51.9; XLNet 73.4 / 63.3 / 52.4 / 51.4.", "Table 5 shows the comparison results.", "We find that these sampling strategies do not make a significant difference to RST parsing.", "Analysis by Number of Discourse Markers: As mentioned earlier, DMP predicts the masked discourse markers of an EDU sequence, and discourse markers are essential cues for RST parsing.", "Therefore, it is expected that the introduction of DMP may bring better performance for spans containing discourse markers.", "As such, here we investigate the benefit of adopting DMP.", "Table 6 shows the performance of our XLNet RST parser with and without DMP.", "Performance improves significantly when spans contain discourse markers, which is consistent with our intuitions.", "Effect of Different Masking Strategies: We then change the masking strategy in DMP to show how different masking strategies influence RST parsing.", "We use a random word set to replace the discourse marker set in DMP.", "The number of random words is the same as the number of discourse markers.", "Compared with discourse markers, these random words may be unable to offer key cues for discourse parsing.", "As shown in Table 7, masking discourse markers leads to a performance improvement, while masking random words leads to a slight performance degradation.", "It is thus clear that discourse markers are useful for RST parsing.", "Result on GUM Corpus: Finally, we test our proposed RST parser on the GUM corpus (Zeldes, 2017) to show its domain generalization capability.", "Table 8: Performance on the GUM corpus, S / N / R / F for XLNet, followed by XLNet + NEP + DMP (deltas in parentheses): academic 65.8 / 50.7 / 35.2 / 34.6 vs. 66.6 (+0.8) / 52.0 (+0.3) / 35.6 (+0.4) / 35.0 (+0.4); bio 57.3 / 41.3 / 32.0 / 31.6 vs. 57.2 (-0.1) / 41.0 (-0.3) / 31.0 (-1.0) / 30.6 (-1.0); conversation 36.8 / 23.7 / 12.6 / 12.2 vs. 32.7 (+0.9) / 22.0 (-1.7) / 12.6 (+0.0) / 12.4 (+0.2); fiction 57.3 / 41.0 / 28.1 / 27.4 vs. 57.1 (-0.2) / 40.8 (-0.2) / 28.1 (+0.0) / 27.5 (+0.1); interview 61.5 / 43.8 / 31.5 / 30.8 vs. 61.8 (+0.3) / 42.8 (-1.0) / 31.5 (+0.0) / 30.9 (+0.1); news 67.4 / 51.7 / 40.5 / 39.5 vs. 68.6 (+1.2) / 52.5 (+0.8) / 40.5 (+0.0) / 39.6 (+0.1); speech 71.5 / 58.2 / 45.6 / 45.6 vs. 70.7 (-0.8) / 56.9 (-1.3) / 44.2 (-1.4) / 44.0 (-1.6); textbook 64.7 / 51.3 / 39.8 / 39.2 vs. 65.9 (+1.2) / 53.2 (+0.9) / 41.8 (+1.0) / 41.7 (+1.5); vlog 47.0 / 33.2 / 21.1 / 20.2 vs. 44.7 (-2.3) / 31.7 (-1.5) / 19.2 (-1.9) / 18.5 (-1.7); voyage 65.0 / 45.3 / 31.6 / 30.6 vs. 64.9 (-0.1) / 45.5 (+0.2) / 31.8 (+0.2) / 30.6 (+0.0); whow 60.1 / 41.8 / 27.7 / 26.8 vs. 62.7 (+1.6) / 44.5 (+2.7) / 28.2 (+0.5) / 27.4 (+0.6).", "As shown in Table 8, the performance of our XLNet RST parser declines significantly on these out-of-domain articles, especially in the conversation and vlog domains.", "With our proposed second-stage EDU-level pre-training approach, the performance of the XLNet RST parser improves significantly in the academic, conversation, textbook, and whow domains, and declines slightly in the bio, speech, and vlog domains.", "Therefore, there is still a lot of room for improvement in the generalization ability of our proposed RST parser.", "RST discourse parsing is an important task in the NLP community, which has been studied since early work (Soricut and Marcu, 2003).", "Early studies adopt statistical models for this task, using human-designed discrete features (Hernault et al., 2010; Feng and Hirst, 2012; Joty et al., 2013; Feng and Hirst, 2014; Heilman and Sagae, 2015; Wang et al., 2017).", "Recently, several neural network models have shown great promise for this task (Braud et al., 2016, 2017; Liu and Lapata, 2017; Yu et al., 2018; Mabona et al., 2019; Zhang et al., 2020; Guz and Carenini, 2020).", "With PLMs such as ELMo (Peters et al., 2018), BERT (Devlin et al., 2019), XLM-RoBERTa (Conneau and Lample, 2019), and XLNet (Yang et al., 2020), these neural RST parsers report highly competitive performance (Liu et al., 2019a; Lin et al., 2019; Liu et al., 2020; Kobayashi et al., 2020; Zhang et al., 2021; Nguyen et al., 2021).", "We follow this line of studies, using neural networks to perform RST parsing.",
"Recently, several studies have aimed to alleviate the mismatch between pre-trained language models and target tasks.", "Joshi et al. (2020) use span masked language modeling to pre-train a language model for extractive question answering.", "Lewis et al. (2020) propose a pre-training approach for text generation tasks, which maps corrupted documents to the original.", "Raffel et al. (2020) propose a unified text-to-text pre-training framework for several NLP tasks.", "Our work is mainly inspired by the above studies.", "In this paper, we propose a second-stage EDU-level pre-training approach to alleviate the mismatch between EDU-level RST parsing and sentence-level language modeling.", "Several studies have shown that pseudo data is useful for RST parsing.", "Huber and Carenini (2019) use pseudo RST discourse trees, generated by distant supervision from a sentiment classification task, to train an RST parser.", "Kobayashi et al. (2021) improve RST parsing with large-scale silver agreement subtrees, produced by a well-trained RST parser.", "Zhang et al. (2021) train a top-down RST parser with predicted RST discourse trees.",
"The above approaches require a well-trained RST parser to generate pseudo RST discourse trees.", "In this work, the generation of our pseudo data merely requires an EDU segmenter and discourse markers, without using a well-trained RST parser to further generate pseudo RST discourse trees.", "We proposed a second-stage EDU-level pre-training approach for PLM-based RST discourse parsers, reducing the mismatch between EDU-level RST discourse parsing and sentence-level contextualized language model pre-training.", "In addition, we extended our RST discourse parser with a light bi-gram EDU modification, finding that it is able to exploit PLMs more effectively.", "Experiments on RST-DT (Carlson et al., 2001) showed that the proposed approach brings significantly better performance for RST discourse parsing.", "We further conducted several experimental analyses to better understand the proposed approach.", "Our results on the RST-DT and the GUM (Zeldes, 2017) corpora suggest two possibilities for future research.", "First, although the XLNet RST parser obtains significant improvements when the second-stage EDU-level pre-training approach is adopted, the Relation and Full metrics of our RST parser are still lower than those of the best system.", "Future research might extend the second-stage EDU-level pre-training tasks, using pseudo RST discourse trees.", "Second, the generalization ability of our proposed RST parser needs to be improved in multi-domain scenarios.", "In future work, we may continue to explore the issue of domain adaptation in RST parsing on the basis of the second-stage EDU-level pre-training framework.", "The authors would like to thank the anonymous reviewers for their constructive comments, which helped to improve the paper.", "This work was supported by the National Natural Science Foundation of China under grants 62076173, U1836222, and 62176180." ]
[ "abstain", "abstain", "objective", "abstain", "objective", "result", "abstain", "other", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "result", "abstain", "other", "objective", "method", "abstain", "abstain", "method", "abstain", "objective", "objective", "abstain", "result", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "result", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "abstain", "objective", "other", "other", "other", "other", "other", "method", "objective", "objective", "abstain", "objective", "abstain", "objective", "abstain", "objective", "objective", "other", "other" ]
[ "Recent advances in pre-trained multilingual language models lead to state-of-the-art results on the task of quality estimation (QE) for machine translation.", "A carefully engineered ensemble of such models won the QE shared task at WMT19.", "Our in-depth analysis, however, shows that the success of using pre-trained language models for QE is overestimated due to three issues we observed in current QE datasets: ( i ) The distributions of quality scores are imbalanced and skewed towards good quality scores; ( ii ) QE models can perform well on these datasets while looking at only source or translated sentences; ( iii ) They contain statistical artifacts that correlate well with human-annotated QE labels.", "Our findings suggest that although QE models might capture fluency of translated sentences and complexity of source sentences, they cannot model adequacy of translations effectively.", "Quality Estimation (QE) (Blatz et al., 2004; Specia et al., 2009) for machine translation is an important task that has been gaining interest over the years.", "Formally, given a source sentence, s and a translated sentence, t = ( s ) where is a machine translation system, the goal of QE is to learn a function f such that f ( s, t ) returns a score that represents the quality of t , without the need to rely on reference translations.", "QE has many useful applications: QE systems trained to estimate Human-mediated Translation Error Rate (HTER) (Snover et al., 2006) can automatically identify and filter bad translations, thereby reducing costs and human post-editing efforts.", "Industry players use QE systems to evaluate translation systems deployed in real-world applications.", "Finally, QE can also be used as a feedWork done when Shuo Sun was an intern at Facebook.", "the source language.", "Recently, language models pre-trained on large amounts of text documents lead to significant improvements on many natural language processing tasks.", "For instance, an ensemble of multilingual BERT (Devlin et al., 2019) and XLM (Con-neau and Lample, 2019) models (Kepler et al., 2019a) won the QE shared task at the Workshop on Statistical Machine Translation (WMT19) (Fon-seca et al., 2019), outperforming the baseline neural QE system (Kepler et al., 2019b) by 42.9% and 127.7% on the English-German and English-Russian sentence-level QE tasks respectively.", "While pre-trained language models contribute to tremendous improvements on publicly available benchmark datasets, such increases in performance beg the question: Are we really learning to estimate translation quality?", "Or are we just guessing the quality of the test sets?", "We performed a careful analysis which reveals that the latter is happening, given several issues with QE datasets which undermine the apparent success on this task: ( i ) The distributions of quality scores in the datasets are imbalanced and skewed towards high-quality translations.", "( ii )", "The datasets suffer from the partial-input baseline problem (Poliak et al., 2018; Feng et al., 2019) where QE systems can still perform well while ingesting only source or translated sentences.", "( iii )", "The datasets contain domain-specific lexical artifacts that correlate well with human judgment scores.", "Our results show that although QE systems trained on these datasets can capture fluency of the target sentences and complexity of the source sentences, they over-leverage lexical artifacts instead of modeling adequacy .", "From these findings, we conclude that QE models cannot generalize, 
"In this paper, we analyze three different instances of sample bias that are prevalent in QE datasets and affect the generalization that models trained on them can achieve.", "Lack of label diversity: With the advent of NMT models, we have seen an increase in the quality of translation systems.", "As a result, a random sample of translations might have few examples with low quality scores.", "Systems trained on imbalanced datasets and tested on similar distributions can get away with low error rates without paying much attention to samples with bad quality scores.", "To detect these issues, we analyze the label and predicted score distributions for several models.", "Lack of representative samples: We want datasets that adequately represent both the fluency and adequacy aspects of translation.", "QE datasets should have a mixture of instances that model both high and low adequacy, irrespective of fluency.", "To evaluate whether our models learn both aspects of translation quality, we run partial-input experiments, where we train systems with only the source or target sentences and analyze the discrepancies w.r.t. the full-input experiments.", "Lack of lexical diversity: Most QE datasets come from a single domain (e.g., IT, life sciences), and certain lexical items can be associated with high-quality translations.", "Lexical artifacts are also observed in monolingual datasets across different tasks (Goyal et al., 2017; Jia and Liang, 2017; Kaushik and Lipton, 2018).", "For example, Gururangan et al. (2018) find that annotators are responsible for introducing lexical artifacts into some natural language inference datasets because they adopt heuristics to quickly generate plausible hypotheses during annotation.", "Here, we use Normalized Pointwise Mutual Information (NPMI) (Bouma, 2009) to find possible lexical artifacts associated with different levels of HTER.", "We experiment with recent QE datasets from WMT18 and WMT19.", "For every dataset, a Statistical Machine Translation (SMT) system or a Neural Machine Translation (NMT) system was used to translate the source sentences.", "The translated sentences were then post-edited by professional translators.", "HTER scores between translated sentences and post-edited sentences were calculated with the TER [1] tool and clipped to the range [0, 1].", "An HTER score of 0 means the translated sentence is perfect, while 1 means the translated sentence requires complete post-editing.", "Since the test sets for WMT18 are not publicly available, we randomly shuffled those datasets into train, dev, and test splits, following a ratio of approximately 8:1:1.", "Table 1 presents statistics of the QE datasets.", "BERT: We experiment with a strong neural QE approach based on BERT (Devlin et al., 2019).", "In particular, we focus on the bert-base-cased version of multilingual BERT. [2]", "We join the source and translated sentences using the special SEP token and predict the QE score from the vector representation of the final CLS token via a Multilayer Perceptron (MLP) layer.", "Our models perform competitively with the state-of-the-art QE models (Kepler et al., 2019a; Kim et al., 2019).", "However, we do not treat this as a multitask learning problem where word-level labels are also needed, because this is severely limited by the availability of data.", "We also do not perform further optimizations (e.g., model ensembling), given that our focus is on what can be learned with the current data, not on maximizing performance.", "Our simpler models allow us to carefully analyze and determine the effects of source and translated sentences on the performance of the models.", "We expect the trends to be the same as for other neural QE models.",
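A minimal transformers/PyTorch sketch of this BERT-based QE regressor: source and translation are joined with [SEP], and an MLP on the final [CLS] vector predicts the HTER score. The checkpoint name (bert-base-multilingual-cased) and MLP sizes are assumptions consistent with the description, not the authors' exact configuration.

```python
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class BertQE(nn.Module):
    def __init__(self, name="bert-base-multilingual-cased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(name)
        # Assumed MLP head; the paper only says "an MLP layer" on the [CLS] vector.
        self.mlp = nn.Sequential(nn.Linear(768, 256), nn.Tanh(), nn.Linear(256, 1))

    def forward(self, **enc):
        cls = self.bert(**enc).last_hidden_state[:, 0]  # final [CLS] representation
        return self.mlp(cls).squeeze(-1)                # predicted HTER score

tok = BertTokenizer.from_pretrained("bert-base-multilingual-cased")
enc = tok("the translated sentence", "der übersetzte Satz", return_tensors="pt")
print(BertQE()(**enc))
```

Training would then minimize a regression loss (e.g., MSE) against the gold HTER labels.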
"[1] http://www.umiacs.umd.edu/~snover/terp/", "[2] https://github.com/google-research/bert", "QUEST: We also trained and evaluated SVM regression models over 17 baseline features highly relevant to the QE task (Specia et al., 2013, 2015).", "Figure 1 presents the distributions of HTER scores for the QE datasets from WMT18 and WMT19.", "The distributions of quality scores are skewed towards zero, i.e., most of the translated sentences require little or no post-editing.", "This phenomenon is especially pronounced for the WMT19 datasets, which are exclusively NMT-based, and for which the majority of the translated sentences have HTER scores of less than 0.1.", "When we examine the estimations from our QE models, we find that they rarely output values above 0.3, which implies that these models fail to capture sentences with low quality scores.", "For example, 15.8% of the samples from the WMT19 En-De test set have HTER scores above 0.3, yet a BERT QE model outputs scores above 0.3 for only 14.5% of those samples.", "In fact, our BERT model predicts scores above 0.3 for only 2.3% of the whole test set.", "This defeats the purpose of QE, especially when the objective is to identify unsatisfactory translations.", "Recommendation: To alleviate this issue, we recommend that QE datasets be balanced by design and include high-, medium-, and low-quality translations.", "One way to ensure this would be to include models with different levels of quality.", "Table 3 shows some examples of the domain-specific lexical artifacts we found in the en-de and en-cs datasets, although other datasets exhibit similar issues.", "Around 37% of translated sentences in the En-De datasets contain the double inverted comma, and more than 70% of these sentences require little to no post-editing.", "A QE system can get strong performance simply by associating any translated sentence containing double inverted commas with low HTER scores.", "These lexical artifacts are introduced when the lack of diversity in labels interacts with a lack of diversity in vocabulary and sentences.", "For example, the En-De dataset, which was sampled from an IT manual, contains many repetitive sentences similar to 'Click X to go to Y'.", "Recommendation: This problem can be alleviated by sampling source sentences from various documents across multiple domains.",
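A minimal sketch of how NPMI between a token and a quality bin could surface artifacts such as the double inverted comma. The binarization of HTER into good vs. bad at a fixed threshold is our assumption; the text does not specify the binning.

```python
import math
from collections import Counter

def npmi(pair_count, x_count, y_count, total):
    """Normalized PMI: pmi(x, y) / -log p(x, y), in [-1, 1]."""
    p_xy = pair_count / total
    p_x, p_y = x_count / total, y_count / total
    return math.log(p_xy / (p_x * p_y)) / -math.log(p_xy)

def artifact_scores(sentences, hters, threshold=0.1):
    # x = "token occurs in the sentence", y = "sentence is high quality (HTER < threshold)"
    tok, joint, good = Counter(), Counter(), 0
    for sent, h in zip(sentences, hters):
        is_good = h < threshold
        good += is_good
        for t in set(sent.split()):
            tok[t] += 1
            joint[t] += is_good
    n = len(sentences)
    return {t: npmi(joint[t], tok[t], good, n) for t in tok if joint[t] > 0}

scores = artifact_scores(['Click " OK " to continue .', 'the translaton is wrong'], [0.0, 0.6])
print(sorted(scores.items(), key=lambda kv: -kv[1])[:5])  # tokens most tied to good labels
```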
"In principle, a QE system should predict the quality of a translation given: (i) its closeness to the source text, and (ii) how well it fits in the target language.", "Here, we present results from training and testing systems under partial-input conditions, where either only the source or only the translation is used to make predictions.", "In Table 2 we report the average Pearson correlation over five different training runs of the same model.", "Table 2: Pearson correlation (r) between predictions from various QE models and gold HTER labels, and the percentage of full-input performance obtained by presenting the model with partial input from only the source (src) or target (tgt) sentences. Columns per model are r / src (%) / tgt (%). WMT18: de-en SMT, SVM + 17 features 0.342 / 62.3% / 57.6%, BERT 0.697 / 62.0% / 81.2%; en-cs SMT, SVM 0.398 / 57.3% / 79.9%, BERT 0.609 / 88.2% / 96.1%; en-de NMT, SVM 0.290 / 63.4% / 78.6%, BERT 0.456 / 92.5% / 88.4%; en-de SMT, SVM 0.326 / 113.2% / 100.0%, BERT 0.597 / 71.2% / 100.3%; en-lv NMT, SVM 0.273 / 52.4% / 60.8%, BERT 0.621 / 68.8% / 77.3%; en-lv SMT, SVM 0.311 / 38.6% / 51.5%, BERT 0.509 / 82.5% / 93.9%. WMT19: en-de NMT, BERT 0.423 / 94.6% / 90.5%; en-ru NMT, BERT 0.439 / 75.2% / 95.9%.", "We observe that QE systems trained on partial inputs perform as well as systems trained on the full input.", "This is especially true for the target-only systems that use BERT: they achieve 90% or more of the full-input performance on five out of eight test sets.", "Similarly, source-only QE systems consistently perform at a correlation of 0.4 or more.", "The partial-input problem is less pronounced for the feature-based SVM models, where such high performance happens in one case.", "The partial-input baseline problem was also reported by the top-performing QE system from WMT19 (Kepler et al., 2019a).", "There, the best results on the word-level QE task were obtained by ignoring the source sentences when making predictions on translated sentences, and vice versa.", "The strong performance on partial inputs shows that these datasets are cheatable, and QE systems trained on them would not generalize well (Feng et al., 2019).", "Recommendation: When designing and annotating QE datasets, we suggest using a metric that intrinsically represents both fluency and adequacy as labels, such as direct assessments (Graham, 2015), and ensuring that there are enough instances with high and low adequacy and fluency.", "Our results suggest that source sentences or translated sentences alone might already contain cues that correlate well with human-annotated scores in the QE datasets.", "Given this, it seems highly unlikely that these QE models can capture interdependencies between source and translated sentences, which usually requires several levels of linguistic analysis.", "We hypothesize that QE models rely on either the complexity of source sentences or the fluency of translated sentences, but not on adequacy, to make their predictions.", "To test this, we create adversarial test sets across all language directions by randomly shuffling all source sentences and changing the HTER scores to 1.0.", "A good model should be able to assign high HTER scores to mismatched pairs.", "In Table 4, we show the Pearson correlations on the adversarial sets.", "As expected, our QE models perform poorly, getting correlations close to zero.", "The results confirm our suspicion: systems trained on these datasets fail to model adequacy.", "They assign high scores to fluent translations or source sentences with low complexity, regardless of whether the translated sentences are semantically related to their corresponding source sentences.",
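A minimal sketch of the adversarial test set construction. The derangement-style re-shuffle, which guarantees that no sentence keeps its original translation, is an added safeguard beyond the "randomly shuffling" described above.

```python
import random

def adversarial_set(sources, translations, seed=0):
    """Shuffle sources so every pair is mismatched; label all pairs with HTER = 1.0."""
    rng = random.Random(seed)
    shuffled = sources[:]
    while any(a == b for a, b in zip(shuffled, sources)):
        rng.shuffle(shuffled)  # re-shuffle until every source has moved (needs >= 2 items)
    return [(s, t, 1.0) for s, t in zip(shuffled, translations)]

pairs = adversarial_set(["a cat", "a dog", "a bird"],
                        ["eine Katze", "ein Hund", "ein Vogel"])
print(pairs)  # mismatched (source, translation) pairs, all labeled 1.0
```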
"In this work, we presented our analysis of QE datasets used in recent evaluation campaigns.", "Although recent advances in pre-trained multilingual language models significantly improve performance on these benchmark QE datasets, we highlight several instances of sampling bias embedded in the QE datasets which undermine the apparent successes of modern QE models.", "We identified (i) issues with the balance between high- and low-quality instances, (ii) issues with the lexical variety of the test sets, and (iii) a lack of robustness to partial input.", "For each of these problems, we proposed recommendations.", "Upon the submission of this paper, we implemented the proposed recommendations by creating a new dataset for quality estimation that addresses the limitations of current datasets.", "We collected data for six language pairs, namely two high-resource languages (English-German and English-Chinese), two medium-resource languages (Romanian-English and Estonian-English), and two low-resource languages (Sinhala-English and Nepali-English).", "Each language pair contains 10,000 sentences extracted from Wikipedia and translated by state-of-the-art neural models, manually annotated for quality with direct assessment (0-100) by multiple annotators following industry standards for quality control.", "Improving label diversity: We selected language pairs with varying degrees of resource availability, which led to more diverse translation quality distributions (particularly for the medium-resource languages), mitigating the issue of imbalanced datasets, as shown in Figure 2.", "Improving lexical diversity: We sampled sentences from a diverse set of topics from Wikipedia, which led to a more diverse vocabulary.", "Now, the average type-token ratio (TTR) for the English sentences in this set is 0.166, which is a 417% increase over the average TTR of the QE dataset from WMT18 and a 259% increase over the average TTR of the QE dataset from WMT19.", "Improving representation: This dataset is based on direct assessment, which balances adequacy and fluency.", "Hopefully, this will mitigate the problems associated with partial inputs by having more instances with high fluency but low adequacy.", "In Figure 3, we show one such example.", "This dataset, named MLQE, has been released to the research community [3] and will be used for the WMT20 shared task on Quality Estimation. [4]", "In future work, we will test the partial-input hypothesis on this data.", "We hope it will be useful for general research in QE towards more reliable models." ]
[ "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "result", "method", "result", "abstain", "method", "result", "method", "objective", "objective", "method", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain" ]
[ "The focus of a negation is the set of tokens intended to be negated, and a key component for revealing affirmative alternatives to negated utterances.", "In this paper, we experiment with neural networks to predict the focus of negation.", "Our main novelty is leveraging a scope detector to introduce the scope of negation as an additional input to the network.", "Experimental results show that doing so obtains the best results to date.", "Additionally, we perform a detailed error analysis providing insights into the main error categories, and analyze errors depending on whether the model takes into account scope and context information.", "Negation is a complex phenomenon present in all human languages.", "Horn (2010) put it beautifully when he wrote negation is what makes us human, imbuing us with the capacity to deny, to contradict, to misrepresent, to lie, and to convey irony.", "Broadly speaking, negation relates an expression e to another expression with a meaning that is in some way opposed to the meaning of e (Horn and Wansing, 2017).", "The key challenge to understanding negation is thus to figure out the meaning that is in some way opposed to e a semantic and highly ambiguous undertaking that comes naturally to humans in everyday communication.", "Negation is generally understood to carry positive meaning, or in other words, to suggest an affirmative alternative.", "For example, John didn't leave the house implicates that John stayed inside the house .", "Hasson and Glucksberg (2006) show that comprehending negation involves considering the representation of affirmative alternatives.", "While not fully understood, there is evidence that negation involves reduced access to the affirmative mental representation (Djokic et al., 2019).", "Orenes et al. (2014) provide evidence that humans switch to the affirmative alternative in binary scenarios (e.g., from not red to green when processing The figure could be red or green. The figure is not red ).", "In such multary scenarios, however, humans keep the negated representation unless the affirmative interpretation is obvious from context (e.g., humans keep not red when processing The figure is red, green, yellow or blue. The figure is not red. 
).", "From a linguistic perspective, negation is understood in terms of scope and focus (Section 2).", "The scope is the part of the meaning that is negated, and the focus is the part of the scope that is most prominently or explicitly negated (Huddleston and Pullum, 2002).", "Identifying the focus is a semantic task, and it is critical for revealing implicit affirmative alternatives.", "Indeed, the focus of negation usually contains only a few tokens, and it is rarely grammatically modified by a negation cue such as never or not .", "Only the focus of a negation is actually intended to be negated, and the resulting affirmative alternatives range from implicatures to entailments as exemplified below (focus is underlined, and affirmative alternatives are in italics): He didn't report the incident to his superiors until confronted with the evidence.", "He reported the incident to his superiors, but not until confronted with the evidence .", "The board didn't learn the details about the millions of dollars wasted in duplicate work.", "The board learnt about the millions of dollars wasted in duplicate work, but not the details .", "In this paper, we experiment with neural networks for predicting the focus of negation.", "We work with the largest corpus annotating the focus of negation (PB-FOC, 3,544 negations), and obtain the best results to date.", "The main contributions of this paper are:", "(a) neural network architecture taking into account the scope of negation and context,", "(b) experimental results showing that scope information as predicted by an automated scope detector is more beneficial than context,", "(c) quantitative analysis profiling which foci are easier and harder to predict, and", "(d) detailed qualitative analysis providing insights into the errors made by the models.", "Crucially, the scope detector we leverage to predict focus is trained with CD-SCO, a corpus created independently of PB-FOC (Section 2).", "Our results suggest that negation scopes may transfer across", "(a) genres (short stories vs. news) and", "(b) negation types (all negations vs. 
only verbal negations, i.e., when the negation cue modifies a verb).", "It is generally understood that negation has scope and focus.", "Scope is the part of the meaning that is negated and includes all elements whose individual falsity would make the negated statement strictly true (Huddleston and Pullum, 2002).", "Consider the following statement: (1) John doesn't know exactly how they met .", "This statement is true if one or more of the following propositions are false: (1a) Somebody knows something, (1b) John is the one who knows, (1c) exactly is the manner of knowing , and (1d) how they met is what is known.", "Thus, the scope of the negation in statement (1) is (1a)-(1d).", "The focus of a negation is the part of the scope that is most prominently or explicitly negated, or in other words, the element of the scope that is intended to be interpreted as false to make the overall negative true (Huddleston and Pullum, 2002).", "Determining the focus consists in pinpointing which parts of the scope are intended to be interpreted as true and false given the original statement.", "Without further context, one can conclude that the intended meaning of statement (1) is John knows how they met, but not exactly , or alternatively, that (1a), (1b), and (1d) are intended to be interpreted as true, and (1c) as false.", "This interpretation results from selecting as focus (1c), i.e., the manner of knowing .", "We summarize below corpora annotating scope and focus of negation, emphasizing the ones we work with.", "The survey by Jiménez-Zafra et al. (2020) provides a more comprehensive analysis including corpora in languages other than English.", "Corpus Annotating Scope.", "In the experiments described here, we work with a scope detector trained with CD-SCO (Morante and Daelemans, 2012), which annotates negation cues and negation scopes in two stories by Conan Doyle: The Hound of the Baskervilles and The Adventure of Wisteria Lodge.", "The corpus contains 5,520 sentences, 1,227 of which contain a negation.", "Table 1 (analysis of PB-FOC: overall percentage of foci per role / percentage of negated verbs having each role / percentage of each role being the focus): ARG 0 4.09 / 67.44 / 6.06; ARG 1 43.76 / 90.47 / 48.36; ARG 2 5.53 / 14.24 / 38.81; ARG 3 0.39 / 1.49 / 26.42; ARG 4 0.51 / 0.79 / 64.29; M-NEG 26.08 / 99.89 / 26.11; M-TMP 7.16 / 16.80 / 42.62; M-MNR 5.50 / 7.36 / 74.71; M-ADV 3.30 / 13.53 / 24.38; M-LOC 1.01 / 3.72 / 27.27; M-EXT 0.45 / 0.56 / 80.00; M-DIR 0.25 / 1.07 / 23.68; M-PNC 1.49 / 2.42 / 61.63; M-DIS 0.28 / 7.81 / 3.61; M-CAU 0.11 / 2.88 / 3.92.", "CD-SCO annotates all negations, including verbs (e.g., I fail to see how you could have done more), adverbs (e.g., It was never proved that [. . . ]), determiners (e.g., There is no friend like [. . . ]), pronouns (e.g., [. . . ] has yielded nothing to a careful search), affixes (e.g., The inexplicable tangle seemed [. . . 
]), and others.", "Other corpora annotating scope in English include efforts with biomedical texts (Vincze et al., 2008) and working with reviews (Councill et al., 2010; Konstantinova et al., 2012).", "Corpora Annotating Focus.", "Although focus of negation is defined as a subset of the scope, there is no corpus annotating both of them in the same texts.", "We work with PB-FOC, the largest publicly available corpus annotating focus of negation (Blanco and Moldovan, 2011).", "PB-FOC annotates the focus of the negations marked with the M-NEG role in PropBank (Palmer et al., 2005), which in turn annotates semantic roles on top of the Penn TreeBank (Taylor et al., 2003).", "As a result, PB-FOC annotates the focus of 3,544 verbal negations (i.e., when a negation cue such as never or not syntactically modifies a verb).", "As per the authors, the annotation process consisted of selecting the semantic role most likely to be the focus.", "Therefore, focus annotations in PB-FOC are always all the tokens corresponding to a semantic role of the (negated) verb.", "Finally, the M-NEG role is chosen when the focus is the verb.", "The annotations in PB-FOC were carried out taking into account the previous and next sentences.", "We provide examples below, and Section 5 provides additional examples.", "Even if [that deal] ARG 1 is[n't] M-NEG [revived] verb , NBC hopes to find another.", "[A decision] ARG 1 is[n't] M-NEG [expected] verb [until some time next year] M-TMP .", "But [quite a few money managers] ARG 0 are[n't] M-NEG [buying] verb [it] ARG 1 .", "Table 1 presents basic statistics for PB-FOC.", "ARG 1 is the most frequent role to be the focus (43.76%), followed by M-NEG (26.08%) and a relatively long list of infrequent roles ( ARG 0 , ARG 2 , M-TMP , M-MNR : 4.09-7.16%).", "More interestingly, the last two columns in Table 1 indicate", "(a) how often a negated verb has each semantic role, and", "(b) how often a role of a negated verb is the focus; if a negated verb-argument structure does not have a particular role, that role obviously cannot be the focus.", "These percentages reveal that role presence does not uniquely identify foci, but some semantic roles, although infrequent overall, are likely to be the focus if present ( M-EXT : 80.00%, M-MNR : 74.71%, ARG 4 : 64.29%, M-PNC : 61.63%).", "Other corpora annotating the focus in English redefine the annotation guidelines (Anand and Martell, 2012), use dependency trees instead of roles (Sarabi and Blanco, 2016), target non-verbal negations (Sarabi and Blanco, 2017), and work with tutorial dialogues (Banjade and Rus, 2016).", "In addition to identifying negation cues and resolving the scope and focus of negation, there is work showing that processing negation is important for natural language understanding in general.", "In particular, sentiment analysis benefits from processing negation (Wiegand et al., 2010).", "For example, like generally carries positive sentiment, but not when modified by a negation cue (e.g., don't like ).", "Wilson et al. (2005) introduce the idea of contextual polarity, and note that negation may intensify rather than change polarity (e.g., not good vs. not only good but amazing ).", "Jia et al. (2009) present a set of heuristic rules to determine sentiment when negation is present, and Councill et al. (2010) show that information about the scope of negation is beneficial to predict sentiment.", "Outside sentiment analysis, Bentivogli et al. 
(2016) point out that neural machine translation struggles to translate negation, and point to focus detection as a possible solution.", "Neural networks are hard to interpret, but there is evidence that they learn to process negation, to a certain degree, when trained to predict sentiment.", "Li et al. (2016) visually show that neural networks are capable of meaning composition in the presence of, among others, negation and intensification.", "Wang et al. (2015) show that an LSTM architecture is capable of determining sentiment of sequences containing negation such as not good and not bad .", "These previous works train a model for a particular task (i.e., sentiment analysis) and then investigate whether the model learnt anything related to negation that is useful for that task.", "Unlike them, we target detection of the focus of negation (and the resulting affirmative alternatives) and work with task-independent negations.", "Scope Identification.", "Compared to focus identification, scope identification has received substantially more attention.", "The first proposals (Morante and Daelemans, 2009) were trained in the biomedical domain with BioScope (Szarvas et al., 2008).", "The *SEM-2012 Shared Task (Morante and Blanco, 2012) included scope identification with CD-SCO (Section 2), and the winner proposed an SVM-based ranking of syntactic constituents to identify the scope (Read et al., 2012).", "More recently, Fancellu et al. (2016) present neural networks for this task, and Packard et al. (2014) present a complementary approach that operates over semantic representations obtained with an off-the-shelf parser.", "Finally, Fancellu et al. (2017) present an error analysis showing that scope is much easier to identify when delimited by punctuation.", "In this paper, we use a scope detector trained with CD-SCO to predict the focus of negation.", "While we only incorporate small modifications to previously proposed architectures, our scope detector outperforms previous work (Section 4).", "Focus Identification.", "Although focus is part of the scope, state-of-the-art approaches to identify the focus of negation ignore information about scope.", "Possible reasons are that", "(a) existing corpora annotating scope and focus contain substantially different texts (Section 2), and", "(b) incorporating scope information is not straightforward with traditional machine learning and manually defined features.", "The initial proposals obtain modest results and only consider the sentence containing the negation (Blanco and Moldovan, 2011); scope information was later included in a rule-based system (Rosenberg and Bergler, 2012).", "Zou et al. (2014, 2015) propose graph-based models that incorporate discourse information and obtain improvements over previous works.", "In addition, Shen et al. (2019) present a neural model that leverages word-level and topic-level attention mechanisms to utilize contextual information.", "We compare our results and theirs in Section 4.2.", "In this paper, we show that", "(a) neural networks considering the scope of negation obtain the best results to date and", "(b) context is not beneficial if scope is available (Section 4).", "We approach the task of predicting focus of negation as a sequence labeling task with a neural network.", "We first describe the network architecture, and then present quantitative results.", "Section 5 presents a detailed error and qualitative analysis.", "The network architecture (Fig. 
1) consists of a base NN (all components except those inside dotted shapes) plus additional components to include information about the scope and context of negation.", "Base NN.", "The base network is inspired by Huang et al. (2015) and Reimers and Gurevych (2017).", "It is a 3-layer Bidirectional Long Short-Term Memory (BiLSTM) network with a Conditional Random Field (CRF) layer.", "The network takes as input the sentence containing the negation whose focus is to be predicted, where each word is represented with the concatenation of", "(a) its pre-trained ELMo embedding (Peters et al., 2018),", "(b) a specialized embedding indicating whether a token is the negated verb (not the negation cue), and", "(c) a specialized embedding indicating semantic roles (one per role label).", "The specialized embeddings are trained from scratch as part of the tuning of the network.", "Scope Information.", "We add an extra input at the token level indicating whether a token belongs to the scope of the negation whose focus is to be predicted.", "This new input is then mapped to a third specialized embedding (two values: inside or outside the scope), and concatenated to the word representation prior to feeding it to the 3-layer BiLSTM.", "Scope information is taken from a scope detector inspired by Fancellu et al. (2016).", "Our modifications are as follows.", "First, we add a CRF layer on top of the 2-layer BiLSTM.", "Second, we use GloVe embeddings instead of word2vec embeddings.", "We train the scope detector with CD-SCO (Section 3), and our simple modifications yield the best results to date predicting the scope of negation: 79.41 F1 (vs. 77.77 F1).", "We do not elaborate on the scope detector as we only leverage it to predict focus.", "Context.", "We also experiment with an additional component to add contextual information (previous and next sentences), as previous work has shown empirically that doing so is beneficial (Zou et al., 2014).", "While we tried many strategies (e.g., concatenating sentence embeddings to the representations from the 3-layer BiLSTM), we present only the one yielding the best results.", "Specifically, we use 2-layer Bi-LSTMs with an attention mechanism (Bahdanau et al., 2014; Yang et al., 2016).", "The attention weights (a p and a n ) correspond to the previous and next sentences, respectively.", "Table 2 (focus prediction results of the best performing previous works and our neural network, baseline network and adding components; P / R / F1 / Acc): Zou et al. (2014) 71.67 / 67.43 / 69.49 / 67.1; Zou et al. (2015) n/a / n/a / n/a / 69.4; Shen et al. (2019) n/a / n/a / n/a / 70.5; NN (baseline) 72.14 / 71.63 / 71.88 / 71.6; NN + S 75.92 / 75.7 / 75.81 / 75.7; NN + Cntxt 73.69 / 73.17 / 73.43 / 73.2; NN + S + Cntxt 74.15 / 73.74 / 73.94 / 73.7.", "Hyperparameters and Training Details.", "The cell states of all BiLSTMs have size 350 and we use dropout with a ratio of 0.6.", "We use stochastic gradient descent with the Adam optimizer (Kingma and Ba, 2014) and a learning rate of 0.001 for tuning weights.", "We set the batch size to 24 and stop the training process after the F1 on the development split does not increase for 50 epochs.", "The final model is the one which yields the highest F1 on the development split.", "We combined the original train and development splits from PB-FOC and used 95% of the result as training split and the remaining 5% as development split.", "The implementation uses PyTorch (Paszke et al., 2019).", "1 We refer the readers to the supplemental material for additional details on the neural architecture.", "Table 2 presents the results obtained with the *SEM Shared Task test split and evaluation script.", "Our best network architecture (NN + Scope) outperforms all previous works (Accuracy: +5.2 points, a 7.4% relative improvement).", "Not all components of the architecture we experiment with are beneficial.", "Our main finding is that scope information, as predicted by a scope detector trained on CD-SCO, is very useful.", "Indeed, the core of the network (3-layer BiLSTM and CRF layer) obtains 75.81 F1 (vs. 71.88) when the input includes scope information.", "Disabling other specialized embeddings (indicating the negated verb and semantic roles) results in substantial drops in performance (not shown in Table 2).", "According to the creators of PB-FOC and more recent work (Zou et al., 2014, 2015), context is important to determine the focus of negation.", "Our results confirm this observation: adding the previous and next sentences via attention mechanisms improves the results: 73.43 vs. 
71.88 F1.", "Our results also show, however, that the scope of negation (not previously considered) is more beneficial than context.", "As a matter of fact, adding context is detrimental if scope is taken into account.", "Table 3 presents the results of the best system (NN + Scope) per role.", "We observe that all roles obtain relatively high F1 scores ( > 60.5) with two exceptions: ARG 3 (22.2) and M-CAU (0.0).", "Many roles are rarely the focus (each under 6% of foci: ARG 0 , ARG 2 , ARG 3 , ARG 4 , etc.), yet the F1 scores with those roles are similar to or even higher than those of more frequent roles (e.g., ARG 1 ).", "In other words, the neural model is able to predict the focus with similar F1 scores, regardless of what role is the focus.", "In Table 4, we provide a quantitative analysis of the results obtained with the best system (NN + Scope).", "We split the test set into four categories and subcategories, and then evaluate the test instances that fall into each subcategory.", "Specifically, we consider the focus length measured in tokens, the sentence length measured in tokens, the number of roles in the verb-argument structure of the negated verb (intuitively, the more roles to choose from, the harder to predict the right one), and the verb class of the negated verb.", "We obtained verb classes from the lexical files in WordNet (Miller, 1995).", "We note that many foci are single words (39.47%) despite this subcategory obtaining the worst results (F1: 66.0).", "This leads to the conclusion that the network struggles to represent single words and long sequences of words.", "Regarding sentence length, we observe comparable F1 scores (74.1-76.7) except with sentences between 11 and 15 tokens (85.5).", "These results lead to the conclusion that since the focus prediction task is defined at the semantic role level, role length is more important than sentence length.", "Unsurprisingly, the model obtains worse results depending on the number of roles in the verb-argument structure of the negated verb; effectively, the model suffers when it has more roles to choose from.", "Negated verbs with up to three roles obtain the highest F1 scores (89.7), and results drop significantly (64.7) when there are more than 5 roles (only 16.43% of instances).", "Finally, we provide detailed results for the verbs belonging to the most frequent verb classes: possession ( buy , take , get , etc.), communication ( say , allege , etc.), cognition ( think , believe , imagine , etc.), and social ( meet , party , etc.).", "Communication and cognition verbs obtain the best results; this is due in part to the fact that verbs belonging to those verb classes tend to have fewer semantic roles.", "To better understand the strengths and weaknesses of our models, we perform a detailed qualitative analysis of the errors made in predicting focus.", "Negation is a complex semantic phenomenon which interacts with other aspects of the meaning and structure of sentences, and this complexity is reflected in the diversity of errors.", "We perform the analysis over all 712 negations in the test set, investigating how linguistic properties of the negated sentences influence performance across the four models (baseline, scope, context, and combined); we consider nearly 3,000 predictions in total.", "The counts in this section reflect instance-model pairings; it could happen, for example, that three of the four models predict the wrong focus for a sentence with a particular linguistic property.", "For some sentences, multiple error types are relevant.", "We 
identify three broad categories of errors: syntactic (5.1), semantic (5.2), and other (5.3).", "There are multiple error types within each category, and each error type is associated with a particular linguistic property of the negated sentence.", "Here we focus on the most frequently-occurring error types per category, as these offer the greatest insight into specific strengths and weaknesses of the models.", "The distribution of error categories across the four models is shown in Table 8 and discussed in more detail below (5.4).", "Representative examples from PB-FOC for each error type appear in Tables 5, 6, and 7.", "For each example, we show the full sentence, with predicted scope (as output by the scope detector trained with CD-SCO) between double angle brackets and semantic roles in square brackets.", "For each negated sentence, the table shows the gold focus ( GF ) 2 and the predicted focus ( PF ), along with the model(s) responsible for the incorrect prediction.", "Our analysis reveals three prominent error types related to the structure of negated sentences.", "1. Complex verb errors occur when the target verb is part of a complex verb constellation, due to passivization, complex tense constructions, or modal constructions.", "These constructions result in multi-word verb constellations, such as can't be cured in example 1.1 (Table 5).", "These are challenging for all models, but especially for the baseline, with 56 error cases (vs. 36, 43, and 41 for the scope, context, and combined models).", "2 Gold focus annotations come from the PB-FOC corpus and may include some errors.", "Some properties of the PB-FOC annotations are discussed in Section 2.", "2. Complex sentence structure errors are even more common, with 116/73/87/63 occurrences for the four models.", "Instances triggering this error type are sentences with relative clauses or complement clauses, as well as sentences with non-canonical linking between argument structure and grammatical function, such as passives and questions.", "According to Horn (2010), relative and complement clauses can alter the behavior of negation, compared to simple declarative sentences.", "Example 1.2 in Table 5 shows scope helping with complex sentence structure; both models which incorporate scope predict the correct focus, which occurs within the predicted scope.", "The other two models choose an argument outside of the predicted scope.", "Our third type of syntactic error occurs due to (3) role adjacency in the sentence, leading to errors in span prediction.", "The property associated with this error type is linear adjacency of semantic roles, with no textual material in between.", "Example 1.3 in Table 5 shows that the model predicts part of the correct role but then extends the span to incorporate a second role.", "In summary, models with access to predicted scope make fewer syntactic errors than models without scope.", "1. Distractor errors are the most frequent individual error type.", "The term distractor is most familiar from pedagogical discussion of multiple-choice questions, where a distractor is an incorrect option that test-takers are likely to mistake for a correct answer.", "We use the term here to refer to textual material which leads the neural network away from the gold focus.", "Specifically, distractors are found in two aspects of the input representation for a given instance: the predicted scope, and the adjacent sentences (previous and next) provided as part of the models which incorporate context.", "This error type is, by definition, not applicable for the baseline model.", "We identify 124 occurrences of distractor errors for the scope model, 87 for the context model, and 130 for the combined model, making this the largest error category.", "Example 2.1 in Table 6 marks distractors in bold-face type.", "In this case, all models predict after the last crash as the focus.", "The predicted focus occurs in the predicted scope, and the head noun crash appears in the surrounding context.", "In addition to the direct repetition the 1987 crash in the sentence following, we see the synonym market plunge in the previous sentence.", "2. Lack of referential specificity in the gold focus is a less-frequent and more speculative error type.", "The idea is that focus is difficult to predict correctly when the focused semantic role is pronominal or otherwise requires additional information for reference resolution.", "Across the models, we count 22 occurrences.", "In most of these cases, the gold focus is a pronoun ( it , ex. 2.2).", "All models seem to disprefer predicting bare pronouns as focus.", "3 An argument could be made for M-NEG as the negated role; however, we show the gold focus according to PB-FOC.", "3. Occurrence of negative polarity items (NPIs) also influences the accuracy of the model.", "Negative polarity items (such as any or yet , see Horn (2010)) are licensed in the scope of negation but ungrammatical elsewhere.", "For example, it's ungrammatical to say *I have eaten any fish .", "Given the strong association between negation and NPIs, it is not surprising that our models tend to predict as focus any role which contains an NPI (example 2.3).", "This error type occurs roughly twice as often in models with scope as in models without scope.", "Two other error types occur often enough to deserve mention.", "1. Quotation errors generally involve quoted direct speech, which seems to be especially problematic when only part of a clause is quoted speech.", "In example 3.1, the quoted speech is the verb plus its direct object, and all models select the role of the direct object as predicted focus.", "The final error type is a sort of catch-all: 
2. Particle verbs, prepositional phrases, and infinitival complements.", "As with complex sentence structures, these error types reflect complex verbal argument structure.", "Table 8 shows the distribution of error types across the four systems.", "Errors due to particular syntactic structures are the most common, with the subtype of complex sentences making up the bulk of these (339).", "The baseline network deals very poorly with both complex verb constellations and complex sentence structures, and incorporating predicted scope consistently reduces the number of errors of this type.", "4 An error count is incremented whenever the relevant linguistic property is identified in a sentence for which the relevant system has made an incorrect prediction.", "Note that one sentence may present more than one linguistic property.", "This suggests that considering scope helps the system to deal with complex sentences.", "For errors related to semantics, the picture is reversed.", "The systems which consider scope are especially prone to distractor errors, the most common error type over all (341).", "When we have both scope and context, the system has even more potential distractor candidates and makes more errors.", "The two error types in the Other category are distributed roughly evenly across the models, suggesting that none of the current models is any better than the others at dealing with these error types.", "In Table 9 we see a second view on the error distributions, now considering each category as a proportion of the errors made by the system.", "Again we see that predicted scope shifts the balance of error types from syntactic to semantic.", "By reinforcing a subsection of the text in the input representation, the search space for complex sentences narrows and the system has a better chance of selecting the correct focus.", "This same behavior is a disadvantage when the gold focus is not part of the predicted scope, as the scope distracts attention away from other plausible candidate roles.", "Similarly, including context through adjacent sentences sometimes reinforces the correct focus through introduction of other semantically-related terms, and sometimes clutters the field through the very same mechanism.", "Negation is generally understood to carry positive meaning, or in other words, to suggest affirmative alternatives.", "Predicting the focus of negation (i.e., pinpointing the usually few tokens that are actually negated) is key to revealing affirmative alternatives.", "In this paper, we have presented a neural architecture to predict the focus of negation.", "We work with PB-FOC, a corpus of verbal negations (i.e., when a negation cue grammatically modifies a verb) in which one semantic role is annotated as focus.", "Experimental results show that incorporating scope of negation information yields better results, despite the fact that we train the scope detector with data in a different domain (short stories vs. 
news).", "These results suggest that scope of negation transfers across domains.", "Our best model (NN + Scope) obtains the best focus prediction results to date.", "A quantitative analysis shows that this model is robust across most role labels (Table 3), sentence lengths, and verb classes (Table 4).", "The model obtains worse results, however, when the role that is the focus is only one token, or the negated verb has more than 5 roles (Table 4).", "In addition to state-of-the-art results, we have presented a detailed qualitative analysis.", "We discover three main error categories (syntactic, semantic, and other) and 8 error types after manual analysis of the predictions made by the four models with all test instances.", "We draw two main insights from the qualitative analysis.", "First, including scope information solves many syntactic errors but introduces semantic errors (recall that scope information is beneficial from a quantitative point of view).", "Second, the lower results after including context, at least with the current architecture, are largely due to additional semantic errors via distractors in the previous and next sentences.", "Thanks to the anonymous reviewers for their insightful comments.", "This material is based upon work supported by the NSF under Grant No. 1845757.", "Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF.", "The Titan Xp used for this research was donated by the NVIDIA Corporation.", "Computational resources were also provided by the UNT office of High-Performance Computing." ]
[ "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "method", "result", "objective", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "method", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other" ]
[ "Recently, word enhancement has become very popular for Chinese Named Entity Recognition (NER), reducing segmentation errors and increasing the semantic and boundary information of Chinese words.", "However, these methods tend to ignore the information of the Chinese character structure after integrating the lexical information.", "Chinese characters have evolved from pictographs since ancient times, and their structure often reflects more information about the characters.", "This paper presents a novel Multi-metadata Embedding based Cross-Transformer (MECT) to improve the performance of Chinese NER by fusing the structural information of Chinese characters.", "Specifi-cally, we use multi-metadata embedding in a two-stream Transformer to integrate Chinese character features with the radical-level embedding.", "With the structural characteristics of Chinese characters, MECT can better capture the semantic information of Chinese characters for NER.", "The experimental results obtained on several well-known benchmarking datasets demonstrate the merits and superiority of the proposed MECT method.", "1 1 Introduction Named Entity Recognition (NER) plays an essential role in structuring of unstructured text.", "It is a sequence tagging task that extracts named entities from unstructured text.", "Common categories of NER include names of people, places, organizations, time, quantity, currency, and some proper nouns.", "NER is the basis for many Natural Language Processing (NLP) tasks such as event extraction (Chen et al., 2015), question answering (Diefenbach et al., 2018), information reCorresponding author.", "trieval (Khalid et al., 2008), knowledge graph construction (Riedel et al., 2013), etc.", "Compared with English, there is no space between Chinese characters as word delimiters.", "Chinese word segmentation is mostly distinguished by readers through the semantic information of sentences, posing many difficulties to Chinese NER (Duan and Zheng, 2011; Ma et al., 2020).", "Besides, the task also has many other challenges, such as complex combinations, entity nesting, and indefinite length (Dong et al., 2016).", "In English, different words may have the same root or affix that better represents the word's semantics.", "For example, physiology, psychology, sociology, technology and zoology contain the same suf-fix, -logy', which helps identify the entity of a subject name.", "Besides, according to the information of English words, root or affixes often determine general meanings (Yadav et al., 2018).", "The root, such as ophthalmo-' ( ophthalmology ), esophage-' ( esophagus ) and epithelio-' ( epithelium ), can help human or machine to better recognize professional nouns in medicine.", "Therefore, even the state-of-the-art methods, such as BERT (Devlin et al., 2019) and GPT (Radford et al., 2018), trained on large-scale datasets, adopt this delicate word segmentation method for performance boost.", "For Chinese characters, there is also a structure Radicals Denotation Examples ( bird ) birds ( chicken ), ( duck ), ( goose ), ( eagle ) ( grass ) herbaceous plants ( flower ), ( grass ), ( vegetable ), ( tea ) ( meat ) body parts ( kidney ), ( foot ), ( leg ), ( brain ) Table 2: Some examples of Chinese radicals, including '( bird ), '( grass ) and '( meat ).", "similar to the root and affixes in English.", "According to the examples in Table 1, we can see that the structure of Chinese characters has different decomposition methods, including the Chinese radical (CR), head and tail (HT) and 
structural components (SC).", "Chinese characters have evolved from hieroglyphs since ancient times, and their structure often reflects more information about them.", "There are some examples in Table 2. The glyph structure can enrich the semantics of Chinese characters and improve the performance of NER.", "For example, the Bi-LSTM-CRF method (Dong et al., 2016) first obtains character-level embedding through the disassembly of Chinese character structure to improve the performance of NER.", "However, LSTM is based on time-series modeling, and the input of each cell depends on the output of the previous cell.", "As a result, LSTM-based models are relatively complicated and their capacity for parallel computation is limited.", "To address the aforementioned issues, we take advantage of the Flat-Lattice Transformer (FLAT) (Li et al., 2020), with its efficient parallel computing and excellent lexicon learning, and introduce a radical stream as an extension on its basis.", "By combining the radical information, we propose a Multi-metadata Embedding based Cross-Transformer (MECT).", "MECT has the lattice and radical streams, which not only possess FLAT's word boundary and semantic learning ability but also add the structural information of Chinese character radicals.", "This is very effective for NER tasks, and has improved the baseline method on different benchmarks.", "The main contributions of the proposed method include: The use of multi-metadata feature embedding of Chinese characters in Chinese NER.", "A novel two-stream model that combines the radicals, characters and words of Chinese characters to improve the performance of the proposed MECT method.", "The proposed method is evaluated on several well-known Chinese NER benchmarking datasets, demonstrating the merits and superiority of the proposed approach over the state-of-the-art methods.", "The key of the proposed MECT method is to use the radical information of Chinese characters to enhance the Chinese NER model.", "So we focus on the mainstream information enhancement methods in the literature.", "There are two main types of Chinese NER enhancement methods, including lexical information fusion and glyph-structural information fusion.", "Lexical Enhancement In Chinese NER, many recent studies use word matching methods to enhance character-based models.", "A typical method is the Lattice-LSTM model (Zhang and Yang, 2018) that improves the NER performance by encoding and matching words in the lexicon.", "Recently, some lexical enhancement methods were proposed using CNN models, such as LR-CNN (Gui et al., 2019a) and CAN-NER (Zhu and Wang, 2019).", "Graph networks have also been used for lexical enhancement.", "The typical one is LGN (Gui et al., 2019b).", "Besides, there are Transformer-based lexical enhancement methods, such as PLT (Xue et al., 2019) and FLAT.", "And SoftLexicon (Ma et al., 2020) introduces lexical information through label and probability methods at the character representation layer.", "Glyph-structural Enhancement Some studies also use the glyph structure information in Chinese NER.", "For example, Dong et al. 
(2016) were the first to study the application of radical-level information in Chinese NER.", "They used a Bi-LSTM to extract radical-level embedding and then concatenated it with the embedding of characters as the final input.", "The radical information used in the Bi-LSTM is structural components (SC) as shown in Table 1, which achieved state-of-the-art performance on the MSRA dataset.", "The Glyce (Meng et al., 2019) model used Chinese character images to extract features such as strokes and structure of Chinese characters, achieving promising performance in Chinese NER.", "Figure 1: The input and output of FLAT.", "Some other methods (Xu et al., 2019; Song et al., 2020) also proposed to use radical information and Tencent's pre-trained embedding 2 to improve the performance.", "In these works, the structural components of Chinese characters have been proven to be able to enrich the semantics of the characters, resulting in better NER performance.", "The proposed method is based on the Flat-Lattice Transformer (FLAT) model.", "Thus, we first briefly introduce FLAT, which improves the encoder structure of the Transformer by adding word lattice information, including semantic and position boundary information.", "These word lattices are obtained through dictionary matching.", "Figure 1 shows the input and output of FLAT.", "It uses the relative position encoding transformed by head and tail positions to fit the word's boundary information.", "The relative position encoding, R_{ij}, is calculated as follows: R_{ij} = \mathrm{ReLU}(W_r (p_{h_i - h_j} \oplus p_{h_i - t_j} \oplus p_{t_i - h_j} \oplus p_{t_i - t_j})), (1) where W_r is a learnable parameter, h_i and t_i represent the head position and tail position of the i-th character, \oplus denotes the concatenation operation, and p_{span} is obtained as in Vaswani et al. 
(2017): p_{span}^{(2k)} = \sin(span / 10000^{2k/d_{model}}), (2) p_{span}^{(2k+1)} = \cos(span / 10000^{2k/d_{model}}), (3) where p_{span} corresponds to p in Eq. (1), and span denotes h_i - h_j, h_i - t_j, t_i - h_j and t_i - t_j.", "Then the scaled dot-product attention is obtained by: \mathrm{Att}(A, V) = \mathrm{softmax}(A)V, (4) A_{ij} = (Q_i + u)^\top K_j + (Q_i + v)^\top R^*_{ij}, (5) [Q, K, V] = E_x [W_q, W_k, W_v], (6) where R^*_{ij} = R_{ij} W_R.", "2 https://ai.tencent.com/ailab/nlp/en/embedding.html", "To better integrate the information of Chinese character components, we use Chinese character structure as another metadata and design a two-stream form of multi-metadata embedding network.", "The architecture of the proposed network is shown in Figure 2a.", "The proposed method is based on the encoder structure of the Transformer and the FLAT method, in which we integrate the meaning and boundary information of Chinese words.", "The proposed two-stream model uses a Cross-Transformer module similar to the self-attention structure to fuse the information of Chinese character components.", "In our method, we also use the multi-modal collaborative attention method that is widely used in vision-language tasks (Lu et al., 2019).", "The difference is that we add a randomly initialized attention matrix to calculate the attention bias for the two types of metadata embedding.", "Chinese characters are based on pictographs, and their meanings are expressed in the shape of objects.", "In this case, the structure of Chinese characters carries information that is useful for NER.", "For example, radicals such as those for grass and wood generally represent plants, enhancing Chinese medicine entity recognition.", "For another example, the radical for body represents human body parts or organs, and the radical for disease represents diseases, which benefits Chinese NER for the medical field.", "Besides, the Chinese have their own culture and beliefs in naming.", "Radicals for metal, wood, water, fire, and earth, represented in the Wu-Xing (Five Elements) theory, are often used in names of people or companies.", "But characters such as those for rust, kill, dirt, disaster and fall are usually not used as names, even if they contain some elements of the Wu-Xing theory.", "This is because the other radical components also determine the semantics of Chinese characters.", "Radicals that generally appear negative or conflict with Chinese cultural beliefs are usually not used for naming.", "Therefore, we choose the more informative Structural Components (SC) in Table 1 as radical-level features of Chinese characters and use a Convolutional Neural Network (CNN) to extract character features.", "The structure diagram of the CNN network is shown in Figure 3. 
We first disassemble the Chinese characters into SC and then input the radicals into the CNN.", "Last, we use max-pooling and fully connected layers to get the feature embedding of Chinese characters at the radical level.", "After radical feature extraction, we propose a Cross-Transformer network to obtain the supplementary semantic information of the structure of Chinese characters.", "It also uses contextual and lexical information to enrich the semantics of Chinese characters.", "The Cross-Transformer network is illustrated in Figure 2b.", "We use two Transformer encoders to cross the lattice and radical information of Chinese characters, which is different from the self-attention method in the Transformer.", "The queries, keys, and values of the two streams are projected from the embeddings, where E_L and E_R are the lattice embedding and radical-level embedding, I is the identity matrix, and each W is a learnable parameter.", "Then we use the relative position encoding in FLAT to represent the boundary information of a word and calculate the attention score in our Cross-Transformer: \mathrm{Att}_L(A_R, V_L) = \mathrm{Softmax}(A_R) V_L, (8) \mathrm{Att}_R(A_L, V_R) = \mathrm{Softmax}(A_L) V_R, (9) A_{L(R),ij} = (Q_{L(R),i} + u_{L(R)})^\top E_{R(L),j} + (Q_{L(R),i} + v_{L(R)})^\top R^*_{L(R),ij}, (10) where u and v are learnable parameters for attention bias in Eq. (10), A_L is the lattice attention score, and A_R denotes the radical attention score.", "And R^*_{ij} = R_{ij} W_R, where W_R is a learnable parameter.", "The relative position encoding, R_{ij}, is calculated as follows: R_{ij} = \mathrm{ReLU}(W_r (p_{h_i - h_j} \oplus p_{t_i - t_j})). (11)", "We empirically found that the use of random attention in the Cross-Transformer can improve the performance of the proposed method.", "This may be due to the requirement of an attention bias in the lattice and radical feature embeddings, which can better adapt the scores of the two subspaces.", "Random attention is a randomly initialized parameter matrix B of size max_len x max_len that is added to the previous attention score to obtain a total attention score: V^*_L = \mathrm{Softmax}(A_R + B) V_L, (12) V^*_R = \mathrm{Softmax}(A_L + B) V_R. (13)", "To reduce information loss, we directly concatenate the lattice and radical features and input them into a fully connected layer for information fusion:", "\mathrm{Fusion}(V^*_L, V^*_R) = (V^*_R \oplus V^*_L) W_o + b, (14)", "where \oplus denotes the concatenation operation, and W_o and b are learnable parameters.", "After the fusion step, we mask the word part and pass the fused feature to a Conditional Random Field (CRF) (Lafferty et al., 2001) module.", "In this section, we evaluate the proposed MECT method on four datasets.", "To make the experimental results more reasonable, we also set up two additional working methods for assessing the performance of radicals in a two-stream model.", "We use the span method to calculate F1-score (F1), precision (P), and recall (R) as the evaluation metrics.", "We use four mainstream Chinese NER benchmarking datasets: Weibo (Peng and Dredze, 2015; He and Sun, 2016), Resume (Zhang and Yang, 2018), MSRA (Levow, 2006), and Ontonotes 4.0 (Weischedel and Consortium, 2013).", "The corpus of MSRA and Ontonotes 4.0 comes from news, the corpus of Weibo comes from social media, and the corpus of Resume comes from the resume data in Sina Finance.", "Table 3 shows the statistical information of these datasets.", "Among them, the Weibo dataset has four types of entities, including PER, ORG, LOC, and GPE.", "Resume has eight types of entities, including CONT, EDU, LOC, PER, ORG, PRO, RACE, and TITLE.", "OntoNotes 4.0 has four types of entities: PER, ORG, LOC, and GPE.",
"The MSRA dataset contains three types of entities, i.e. , ORG, PER, and LOC.", "We use the state of the art method, FLAT, as the baseline model.", "FLAT is a Chinese NER model based on Transformer and combined with lattice.", "Besides, we also compared the proposed method with both classic and innovative Chinese NER models.", "We use the more informative SC' as the radical feature, which comes from the online Xinhua Datasets Types Train Dev Test Weibo SentencesEntities 1.35k1.89k 0.27k0.39k 0.27k0.42k Resume SentencesEntities 3.8k1.34k 0.46k0.16k 0.48k0.15k OntoNotes SentencesEntities 15.7k13.4k 4.3k6.95k 4.3k7.7k MSRA SentencesEntities 46.4k74.8k --4.4k6.2k Table 3: Statistics of the benchmarking datasets.", "Dictionary 3 .", "The pre-trained embedding of characters and words are the same as FLAT.", "For hyper-parameters, we used 30 1-D convolution kernels with the size of 3 for CNN.", "We used the SMAC (Hutter et al., 2011) algorithm to search for the optimal hyper-parameters.", "Besides, we set a different learning rate for the training of the radical-level embedding with CNN.", "Readers can refer to the appendix for our hyper-parameter settings.", "In this section, we evaluate and analyze the proposed MECT method with a comparison to both the classic and state of the art methods.", "The experimental results are reported in Tables 4 7 4 .", "Each table is divided into four blocks.", "The first block includes classical Chinese NER methods.", "The second one reports the results obtained by state of the art approaches published recently.", "The third and 3 http://tool.httpcn.com/Zi/ .", "4 In Tables 4 7, ' denotes the use of external labeled data for semi-supervised learning and ' denotes the use of discrete features.", "MECT method as well as the baseline models.", "Weibo: Table 4 shows the results obtained on Weibo in terms of the F1 scores of named entities (NE), nominal entities (NM), and both (Over-all).", "From the results, we can observe that MECT achieves the state-of-the-art performance.", "Compared with the baseline method, MECT improves 2.98% in terms of the F1 metric.", "For the NE metric, the proposed method achieves 61.91%, beating all the other approaches.", "Resume: The results obtained on the Resume dataset are reported in Table 5.", "The first block shows Zhang and Yang (2018) comparative results on the character-level and word-level models.", "We can observe that the performance of incorporating word features into the character-level model is better than other models.", "Additionally, MECT combines lexical and radical features, and the F1 score is higher than the other models and the baseline method.", "Ontonotes 4.0: Table 6 shows the results obtained on Ontonotes 4.0.", "The symbol ' indicates gold segmentation, and the symbol ' denotes automated segmentation.", "Other models have no segmentation and use lexical matching.", "Compared to the baseline method, the F1 score of MECT is increased by 0.47%.", "MECT also achieves a high recall rate, keeping the precision rate and recall rate relatively stable.", "MSRA: Table 7 shows the experimental results obtained on MSRA.", "In the first block, the result proposed by Dong et al. 
(2016) is the first method using radical information in Chinese NER.", "From the table, we can observe that the overall performance of MECT is higher than that of the existing SOTA methods.", "Similarly, our recall rate is higher, which gives the final F1 a certain boost.", "With BERT: Besides the single-model evaluation on the four datasets, we also evaluated the proposed method when combined with the SOTA method, BERT.", "The BERT model is the same as in FLAT, using the 'BERT-wwm' model released by Cui et al. (2020).", "The results are shown in the fourth block of each table.", "The results of BERT are taken from the FLAT paper.", "We can find that MECT further improves the performance of BERT significantly.", "There are two sub-modules in the proposed Cross-Transformer method: lattice and radical attentions.", "Figure 4 includes two heatmaps for the normalized attention scores of the two modules.", "From the two figures, we can see that lattice attention pays more attention to the relationship between words and characters so that the model can obtain the position information and boundary information of words.", "Radical attention focuses on global information and corrects the semantic information of each character through radical features.", "Table 7 (results obtained on MSRA, %; P / R / F1): Chen et al. (2006) 91.22 / 81.71 / 86.20; Zhang et al. (2006) 92.20 / 90.18 / 91.18; Zhou et al. (2013) 91.86 / 88.75 / 90.28; Lu et al. (2016) - / - / 87.94; Dong et al. (2016) 91.28 / 90.62 / 90.95; Lattice-LSTM 93.57 / 92.79 / 93.18; CAN-NER 93.53 / 92.42 / 92.97; LR-CNN 94.50 / 92.93 / 93.71; LGN 94.19 / 92.73 / 93.46; PLT 94.25 / 92.30 / 93.26; SoftLexicon (LSTM) 94.63 / 92.70 / 93.66; + bichar 94.73 / 93.40 / 94.06; Baseline - / - / 94.12; MECT 94.55 / 94.09 / 94.32; BERT - / - / 94.95; BERT + MECT - / - / 96.24.", "Therefore, lattice and radical attentions provide complementary information for the performance-boosting of the proposed MECT method in Chinese NER.", "We visualized the radical-level embedding obtained by the CNN network and found that the cosine distance of Chinese characters with the same radical or similar structure is smaller.", "For example, Figure 5 shows part of the Chinese character embedding trained on the Resume dataset.", "The highlighted dots represent Chinese characters that are close to a given character.", "We can see that they have the same radicals or a similar structure.", "This can enhance the semantic information of Chinese characters to a certain extent.", "We also examined the inference results of MECT and FLAT on Ontonotes 4.0 and found many exciting results.", "Figure 5: Embedding visualization of the characters related to a given character in two-dimensional space.", "For example, some words with a percentage like '(43.2%)' are incorrectly labelled as PER in the training dataset, which causes FLAT to mark percentage words as PER on the test dataset, while MECT avoids this situation.", "There are also some words that appear in the lexicon but were mistakenly identified as valid words by FLAT, leading to recognition errors.", "Our MECT addresses these issues by paying global attention to the radical information.", "Besides, in FLAT, some numbers and letters are incorrectly marked as PER, ORG, or others.", "We compared the PER label accuracy of FLAT and MECT on the test dataset.", "FLAT achieves 81.6%, and MECT reaches 86.96%, which is a very significant improvement.", "We use the same method as FLAT to evaluate the parallel and non-parallel inference speed of MECT on an NVIDIA GeForce RTX 2080Ti card, using batch size = 
16 and batch size = 1.", "We use the non-parallel version of FLAT as the standard and calculate the other models' relative inference speed.", "The results are shown in Figure 6. According to the figure, even though MECT adds a Transformer encoder to FLAT, the speed is only reduced by 0.15 in terms of the parallel inference speed.", "Our model's speed is competitive relative to LSTM-, CNN-, and some graph-based network models.", "Because the Transformer can make full use of the GPU's parallel computing power, the speed of MECT does not drop too much, and it is still faster than the other models.", "The model has between 2 and 4 million parameters, determined by the maximum sentence length in the dataset and the d_model size in the model.", "To validate the effectiveness of the main components of the proposed method, we set up two experiments in Figure 7. In Experiment A, we only use a single-stream model with a modified self-attention, which is similar to the original FLAT model.", "The difference is that we use a randomly initialized attention matrix (Random Attention) for the attention calculation.", "We combine lattice embedding and radical-level embedding as the input of the model.", "The purpose is to verify the performance of the two-stream model relative to the single-stream model.", "In Experiment B, we do not exchange the query's feature vector.", "We replace the cross-attention with two sets of modified self-attention and follow the two modules' output with the same fusion method as MECT.", "The purpose of Experiment B is to verify the effectiveness of MECT relative to the two-stream model without crossover.", "Besides, we evaluate the proposed MECT method by removing the random attention module.", "Table 8 shows the ablation study results.", "1) By comparing the results of Experiment A with the results of Experiment B and MECT, we can find that the two-stream model works better.", "The use of lattice-level and radical-level features as the two streams of the model helps the model to better understand and extract the semantic features of Chinese characters.", "2) Based on the results of Experiment B and MECT, we can see that by exchanging the two query feature vectors, the model can extract features more effectively at the lattice and radical levels.", "They have different attention mechanisms to obtain contextual information, resulting in global and local attention interaction.", "This provides better information extraction capabilities for the proposed method in a complementary way.", "3) Last, the performance of MECT drops on all the datasets when the random attention module is removed (the last row).", "This indicates that, as an attention bias, random attention can eliminate the differences caused by different embeddings, thereby improving the model's performance further.", "This paper presented a novel two-stream network, namely MECT, for Chinese NER.", "The proposed method uses multi-metadata embedding that fuses the information of radicals, characters and words through a Cross-Transformer network.", "Additionally, random attention was used for a further performance boost.", "Experimental results obtained on four benchmarks demonstrate that the radical information of Chinese characters can effectively improve the performance of Chinese NER.", "The proposed MECT method with the radical stream increases the complexity of the model.", "In the future, we will consider how to integrate the characters, words and radical information of 
Chinese characters in a more efficient way in two-stream or multi-stream networks to improve the performance of Chinese NER and extend it to other NLP tasks.", "This work was supported in part by the National Key Research and Development Program of China (2017YFC1601800), the National Natural Science Foundation of China (61876072, 61902153) and the Six Talent Peaks Project of Jiangsu Province (XYDXX-012).", "We also thank Xiaotong Xiang and Jun Quan for their help with editing the manuscript." ]
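Equations (8)-(14) above describe how the two MECT streams exchange queries and share a random attention bias. The single-head PyTorch sketch below is an illustration under simplifying assumptions, not the released MECT code: the relative position term of Eq. (10) and its u/v biases are omitted, the two streams are assumed to be aligned to the same sequence length, and all names are hypothetical.

import torch
import torch.nn as nn

class CrossTransformerBlock(nn.Module):
    def __init__(self, d_model, max_len):
        super().__init__()
        self.q_lat = nn.Linear(d_model, d_model)    # lattice-stream query projection
        self.v_lat = nn.Linear(d_model, d_model)    # lattice-stream value projection
        self.q_rad = nn.Linear(d_model, d_model)    # radical-stream query projection
        self.v_rad = nn.Linear(d_model, d_model)    # radical-stream value projection
        # randomly initialized attention bias B, shared by both streams (Eqs. 12-13)
        self.B = nn.Parameter(torch.randn(max_len, max_len) * 0.02)
        self.fuse = nn.Linear(2 * d_model, d_model)  # Eq. 14: concatenate, then project

    def forward(self, e_lat, e_rad):
        n = e_lat.size(1)
        # Crossed scores: each stream's queries attend over the other stream's
        # embeddings (a simplified reading of Eq. 10).
        a_lat = self.q_lat(e_lat) @ e_rad.transpose(-2, -1)   # lattice score A_L
        a_rad = self.q_rad(e_rad) @ e_lat.transpose(-2, -1)   # radical score A_R
        b = self.B[:n, :n]
        v_lat = torch.softmax(a_rad + b, dim=-1) @ self.v_lat(e_lat)  # V*_L, Eq. 12
        v_rad = torch.softmax(a_lat + b, dim=-1) @ self.v_rad(e_rad)  # V*_R, Eq. 13
        return self.fuse(torch.cat([v_rad, v_lat], dim=-1))           # Eq. 14

In the ablation terms used above, removing the query exchange (each stream attending over itself) corresponds to Experiment B, and running a single stream over the combined embeddings corresponds to Experiment A.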
[ "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "abstain", "objective", "objective", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "other", "other", "other" ]
[ "LSTM-based recurrent neural networks are the state-of-the-art for many natural language processing (NLP) tasks.", "Despite their performance, it is unclear whether, or how, LSTMs learn structural features of natural languages such as subject-verb number agreement in English.", "Lacking this understanding, the generality of LSTMs on this task and their suitability for related tasks remains uncertain.", "Further, errors cannot be properly attributed to a lack of structural capability, training data omissions, or other exceptional faults.", "We introduce influence paths , a causal account of structural properties as carried by paths across gates and neurons of a recurrent neural network.", "The approach refines the notion of influence (the subject's grammatical number has influence on the grammatical number of the subsequent verb) into a set of gate-level or neuron-level paths.", "The set localizes and segments the concept (e.g., subject-verb agree-ment), its constituent elements (e.g., the sub-ject), and related or interfering elements (e.g., attractors).", "We exemplify the methodology on a widely-studied multi-layer LSTM language model, demonstrating its accounting for subject-verb number agreement.", "The results offer both a finer and a more complete view of an LSTM's handling of this structural aspect of the English language than prior results based on diagnostic classifiers and ablation.", "Traditional rule-based NLP techniques can capture syntactic structures, while statistical NLP techniques, such as n-gram models, can heuristically integrate semantics of a natural language.", "Modern RNN-based models such as Long Short-Term Memory (LSTM) models are tasked with incorporating both semantic features from the statistical associations in their training corpus, and structural features generalized from the same.", "Despite evidence that LSTMs can capture syntactic rules in artificial languages (Gers and Schmid-huber, 2001), it is unclear whether they are as capable in natural languages (Linzen et al., 2016; Lakretz et al., 2019) in the context of rules such as subject-verb number agreement, especially when not supervised for the particular feature.", "The incongruence derives from this central question: does an LSTM language model's apparent performance in subject-verb number agreement derive from statistical heuristics (like n-gram models) or from generalized knowledge (like rule-based models)?", "Recent work has begun addressing this question (Linzen et al., 2016) in the context of language models : models tasked with modeling the likelihood of the next word following a sequence of words as expected in a natural language (see Figure 1, bottom).", "Subject-verb number agreement dictates that the verb associated with a given subject should match its number (e.g., in Figure 1, the verb run should match with the subject boys).", "Giu-lianelli et al. (2018) showed that the subject grammatical number is associated with various gates in an LSTM, and Lakretz et al. 
(2019) showed that ablation (disabling activation) of an LSTM model at certain locations can reduce its accuracy at scoring verbs of the correct grammatical number.", "Influence offers an alternate means of exploring properties like number agreement.", "We say an input is influential on an outcome when changing just the input and nothing else induces a change on the outcome.", "In English grammar, the number of a subject is influential on the number of its verb, in that changing the number of that subject while keeping all other elements of a sentence fixed would necessitate a change in the number of the verb.", "Algorithmic transparency literature offers formal definitions for empirically quantifying notions of influence for systems in general (Datta et al., 2016) and for deep neural networks specifically (Leino et al., 2018; Sundararajan et al., 2017).", "The mere fact that subject number is influential on verb number as output by an LSTM model is sufficient to conclude that it incorporates the agreement concept in some way, but does not indicate whether it operates as a statistical heuristic or as a generalized rule.", "We address this question with influence paths, which decompose influence into a set of paths across the gates and neurons of an LSTM model.", "The approach has several elements:", "1. Define an input parameter to vary the concept-specific quantity under study (e.g., the grammatical number of a particular noun, bottom-left node in Figure 1) and a concept-specific output feature to measure the parameter's effect on (e.g., number agreement with the parameterized noun, bottom-right node in Figure 1).", "2. Apply a gradient-based influence method to quantify the influence of the concept parameter on the concept output feature; as per the chain rule, decompose the influence into model-path-specific quantities.", "The paths demonstrate where relevant state information necessitated by the concept is kept, how it gets there, how it ends up being used to affect the model's output, and how and where related concepts interfere.", "Our approach is state-agnostic in that it does not require a priori an assumption about how or if the concept will be implemented by the LSTM.", "This differs from works on diagnostic classifiers where a representation of the concept is assumed to exist in the network's latent space.", "The approach is also time-aware in that paths travel through cells/gates/neurons at different stages of an RNN evaluation.", "This differs from previous ablation-based techniques, which localize the number by clearing neurons at some position in an RNN for all time steps.", "Our contributions are as follows: We introduce influence paths, a causal account of the use of concepts of interest as carried by paths across gates and neurons of an RNN.", "We demonstrate, using influence paths, that in a multi-layer LSTM language model, the concept of subject-verb number agreement is concentrated primarily on a single path (the red path in Figure 1), despite a variety of surrounding and intervening contexts.", "We show that attractors (intervening nouns of opposite number to the subject) do not diminish the contribution of the primary subject-verb path, but rather contribute their own influence of the opposite direction along the equivalent primary attractor-verb path (the blue path in the figure).", "This can lead to incorrect number prediction if an attractor's contribution overcomes the subject's.", "We corroborate and elaborate on existing results localizing subject number to the
same two neurons which, in our results, lie on the primary path.", "We further extend and generalize prior compression/ablation results with a new path-focused compression test which verifies our localization conclusions.", "Our results point to generalized knowledge as the answer to the central question.", "The number agreement concept is heavily centralized to the primary path despite the varieties of contexts.", "Further, the primary path's contribution is undiminished even amongst interfering contexts; number errors are not attributable to a lack of the general number concept but rather to sufficiently influential contexts pushing the result in the opposite direction.", "LSTMs Long short-term memory networks (LSTMs) (Hochreiter and Schmidhuber, 1997) have proven to be effective for modeling sequences, such as language models, and empirically, this architecture has been found to be optimal compared to other second-order RNNs (Greff et al., 2017).", "LSTMs utilize several types of gates and internal states including forget gates ($f$), input gates ($i$), output gates ($o$), cell states ($c$), candidate cell states ($\tilde{c}$), and hidden states ($h$).", "Each gate is designed to carry out a certain function, or to fix a certain drawback of the vanilla RNN architecture.", "E.g., the forget gate is supposed to determine how much information from the previous cell state to retain or forget, helping to fix the vanishing gradient problem (Hochreiter, 1998).", "Number Agreement in Language Models The number agreement (NA) task, as described by Linzen et al. (2016), is an evaluation of a language model's ability to properly match the verb's grammatical number with its subject.", "This evaluation is performed on sentences specifically designed for the exercise, with zero or more words between the subject and the main verb, termed the context.", "The task for sentences with non-empty contexts will be referred to as long-term number agreement.", "Human-level performance for this task can be achieved with a 2-layer LSTM language model (Gulordava et al., 2018), indicating that the language model incorporates grammatical number despite being trained only for the more general word prediction task.", "Attempts to explain or localize the number concept within the model include (Lakretz et al., 2019), where ablation of neurons is applied to locate specific neurons where such information is stored; and (Giulianelli et al., 2018; Hupkes et al., 2018), where diagnostic classifiers are trained on gate activations to predict the number of the subject, to see in which gates or timesteps the number concept exhibits itself.", "These works also look at the special cases involving attractors, intervening nouns with grammatical number opposite to that of the subject (deemed instead helpful nouns if their number agrees with the subject), such as the word tree in Figure 1.",
"Both frameworks provide explanations as to why attractors lower the performance of NA tasks.", "However, they tend to focus on the activation patterns of gates or neurons without justifying their causal relationships with the concept of grammatical number, and do not explicitly identify the exact temporal trajectory of how the number of the subject influences the number of the verb.", "Other relevant studies that look inside RNN models to locate specific linguistic concepts include visualization techniques such as (Karpathy et al., 2015), and explanations for supervised tasks involving LSTMs such as sentiment analysis (Murdoch et al., 2018).", "Attribution Methods Attribution methods quantitatively measure the contribution of each of a function's individual inputs to its output.", "Gradient-based attribution methods compute the gradient of a model with respect to its inputs to describe how important each input is towards the output predictions.", "These methods have been applied to assist in explaining deep neural networks, predominantly in the image domain (Leino et al., 2018; Sundararajan et al., 2017; Bach et al., 2015; Simonyan et al., 2013).", "Some such methods are also axiomatically justified to provide a causal link between inputs (or intermediate neurons) and the output.", "As a starting point in this work, we consider Integrated Gradients (IG) (Sundararajan et al., 2017).", "Given a baseline, $x^0$, the attribution for each input at point, $x$, is the path integral taken from the baseline to $x$ of the gradients of the model's output with respect to its inputs.", "The baseline establishes a neutral point from which to make a counterfactual comparison; the attribution of a feature can be interpreted as the share of the model's output that is due to that feature deviating from its baseline value.", "By integrating the gradients along the linear interpolation from the baseline to $x$, IG ensures that the attribution given to each feature is sensitive to effects exhibited by the gradient at any point between the baseline and instance $x$.", "Leino et al. (2018) generalize IG to better focus attribution on concepts other than just model outputs, by use of a quantity of interest (QoI) and a distribution of interest (DoI).", "Their measure, Distributional Influence, is given by Definition 1.", "The QoI is a function of the model's output expressing a particular output behavior of the model to calculate influence for; in IG, this is fixed as the model's output.", "The DoI specifies a distribution over which the influence should faithfully summarize the model's behavior; the influences are found by taking an expected value over the DoI.", "Definition 1 (Distributional Influence).", "With quantity of interest, $q$, and distribution of interest, $D$, the influence, $\chi$, of the inputs on the quantity of interest is: $\chi(q, D) = \mathbb{E}_{\vec{x} \sim D}\left[\frac{\partial q}{\partial x}(\vec{x})\right]$ The directed path integral used by IG can be implemented by setting the DoI to a uniform distribution over the line from the baseline to $\vec{x}$: $D = \text{Uniform}(\vec{x}^0 \rightarrow \vec{x})$, for baseline, $\vec{x}^0$, and then multiplying by $\vec{x} - \vec{x}^0$.", "Conceptually, by multiplying by $\vec{x} - \vec{x}^0$, we are measuring the attribution, i.e., the contribution to the QoI, of $\vec{x} - \vec{x}^0$ by weighting its features by their influence.", "We use the framework of Leino et al. in this way to define our measure of attribution for NA tasks in Section 3.",
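For concreteness, the IG instantiation above (a uniform DoI over the line from the baseline to the instance) can be approximated by averaging gradients at interpolated points, which is exactly the n-interval sum stated just below. This is a minimal sketch assuming a differentiable scalar-valued quantity of interest qoi over input tensors; it is not the authors' implementation.

```python
import torch

def distributional_influence(qoi, x: torch.Tensor, x0: torch.Tensor, n: int = 50):
    """Approximate E_{z ~ Uniform(x0 -> x)}[dq/dz] with n interpolation points."""
    total = torch.zeros_like(x)
    for i in range(1, n + 1):
        z = ((i / n) * x + (1 - i / n) * x0).detach().requires_grad_(True)
        qoi(z).backward()        # qoi must return a scalar tensor
        total += z.grad
    influence = total / n
    return influence             # attribution: influence * (x - x0)
```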
"Distributional Influence can be approximated by sampling according to the DoI.", "In particular, when using $D = \text{Uniform}(\vec{x}^0 \rightarrow \vec{x})$ as noted above, Definition 1 can be computationally approximated with a sum of $n$ intervals as in IG: $\sum_{i=1}^{n} \frac{\partial q}{\partial x}\left(\frac{i}{n}\vec{x} + \left(1 - \frac{i}{n}\right)\vec{x}^0\right)$ Other related works include Fiacco et al. (2019), which employs the concept of neuron paths based on cofiring of neurons instead of influence, also on different NLP tasks from ours.", "Our method for computing influence paths begins with modeling a relevant concept, such as grammatical number, in the influence framework of Leino et al. (Definition 1) by defining a quantity of interest that corresponds to the grammatical number of the verb, and defining a component of the input embedding that isolates the subject's grammatical number (Section 3.1).", "We then decompose the influence measure along the relevant structures of the LSTM (gates or neurons) as per standard calculus identities to obtain a definition for influence paths (Section 3.2).", "For the NA task, we view the initial fragment containing the subject as the input, and the word distribution at the position of its corresponding verb as the output.", "Formally, each instance in this task is a sequence of $d$-dimensional word embedding vectors, $w \stackrel{\text{def}}{=} \langle \vec{w}_i \rangle_i$, containing the subject and the corresponding verb, potentially with intervening words in between.", "We assume the subject is at position $t$ and the verb at position $t + n$.", "The output score of a word, $w$, at position $i$ will be written $s_i(w)$.", "If $w$ has a grammatical number, we write $w^+$ and $w^-$ to designate $w$ with its original number and the equivalent word with the opposite number, respectively.", "Quantity of Interest We instrument the output score with a QoI measuring the agreement of the output's grammatical number to that of the subject: Definition 2 (Number Agreement Measure).", "Given a sentence, $w$, with verb, $w^*$, whose correct form (w.r.t.
grammatical number) is $w^{*+}$, the quantity of interest, $q$, measures the correctness of the grammatical number of the verb: $q(w) \stackrel{\text{def}}{=} s_{t+n}\left(w^{*+}\right) - s_{t+n}\left(w^{*-}\right)$ In plain English, $q$ captures the weight that the model assigns to the correct form of $w^*$ as opposed to the weight it places on the incorrect form.", "Note that the number agreement concept could have reasonably been measured using a different quantity of interest.", "E.g., considering the scores of all vocabulary words of the correct number and incorrect number in the positive and negative terms, respectively, is another alternative.", "However, based on our preliminary experiments, we found this alternative does not result in meaningful changes to the reported results in the sections that follow.", "Distribution of Interest We also define a component of the embedding of the subject that captures its grammatical number, and a distribution over the inputs that allows us to sensitively measure the influence of this concept on our chosen quantity of interest.", "Let $\vec{w}^0$ be the word embedding mid-way between its numbered variants, i.e., $\frac{\vec{w}^+ + \vec{w}^-}{2}$.", "Though this vector will typically not correspond to any English word, we interpret it as a number-neutral version of $\vec{w}$.", "Various works show that linear arithmetic on word embeddings of this sort preserves meaningful word semantics as demonstrated in analogy parallelograms (Mikolov et al., 2013).", "Finally, given a sentence, $w$, let $w^0_t$ be the sentence $w$, except with the word embedding $\vec{w}_t$ replaced with its neutral form $\vec{w}^0_t$.", "We see that $w - w^0_t$ captures the part of the input corresponding to the grammatical number of the subject, $\vec{w}_t$.", "Definition 3 (Grammatical Number Distribution).", "Given a singular (or plural) noun, $w_t$, in a sentence, $w$, the distribution density of sentences, $D_w$, exercising the noun's singularity (or plurality) linearly interpolates between the neutral sentence, $w^0_t$, and the given sentence, $w$: $D_w \stackrel{\text{def}}{=} \text{Uniform}\left(w^0_t \rightarrow w\right)$ If $\vec{w}_t$ is singular, our counterfactual sentences span $w$ with number-neutral $\vec{w}^0_t$ all the way to its singular form $\vec{w}_t = \vec{w}^+_t$.", "We thus call this distribution a singularity distribution.", "Were $w_t$ plural instead, we would refer to the distribution as a plurality distribution.", "Using this distribution of sentences as our DoI thus allows us to measure the influence of $w - w^0_t$ (the grammatical number of a noun at position $t$) on our quantity of interest sensitively (in the sense that Sundararajan et al. define their axiom of sensitivity for IG (Sundararajan et al., 2017)).",
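The number-neutral embedding and the interpolating distribution of Definition 3 are straightforward to realize. The sketch below is illustrative only; plain numpy arrays stand in for the model's word embeddings, and all names are hypothetical.

```python
import numpy as np

def number_neutral(w_plus: np.ndarray, w_minus: np.ndarray) -> np.ndarray:
    """Midpoint of a word's two numbered embedding variants."""
    return (w_plus + w_minus) / 2.0

def sample_doi(sentence: np.ndarray, t: int, w_neutral: np.ndarray, n: int = 10):
    """Yield sentences whose t-th embedding interpolates from neutral to original."""
    for i in range(1, n + 1):
        s = sentence.copy()
        s[t] = (i / n) * sentence[t] + (1 - i / n) * w_neutral
        yield s
```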
"Subject-Verb Number Agreement Putting things together, we define our attribution measure.", "Definition 4 (Subject-Verb Number Agreement Attribution).", "The measure of attribution, $\alpha$, of a noun's grammatical number on the subject-verb number agreement is defined in terms of the DoI, $D_w$, and QoI, $q$, as in Definitions 3 and 2, respectively.", "Essentially, the attribution measure weights the features of the subject's grammatical number by their Distributional Influence, $\chi$.", "Because $D_w$ is a uniform distribution over the line segment between $w$ and $w^0_t$, as with IG, the attribution can be interpreted as each feature's net contribution to the change in the QoI, $q(w) - q(w^0_t)$, as $\sum_i \alpha(w)_i = q(w) - q(w^0_t)$ (i.e., Definition 4 satisfies the axiom that Sundararajan et al. term completeness (Sundararajan et al., 2017)).", "In Figure 1, for instance, this definition measures the attribution from the plurality of the subject (boys), towards the model's prediction of the correctly numbered verb (run) versus the incorrectly numbered verb (runs).", "Later in this paper we will also investigate the attribution of intervening nouns on this same quantity.", "We expect the input attribution to be positive for all subjects and helpful nouns, and negative for attractors, which can be verified by the P+ columns of Table 1 (the details of this experiment are introduced in Section 4).", "Input attribution as defined by IG (Sundararajan et al., 2017) provides a way of explaining a model by highlighting the input dimensions with large attribution towards the output.", "Distributional Influence (Leino et al., 2018) with a carefully chosen QoI and DoI (Definition 4) further focuses the influence on the concept at hand, grammatical number agreement.", "Neither, however, demonstrates how these measures are conveyed by the inner workings of a model.", "In this section we define a decomposition of the influence into paths of a model, thereby assigning attribution not just to inputs, but also to the internal structures of a given model.", "We first define arbitrary deep learning models as computational graphs, as in Definition 5.", "We then use this graph abstraction to define a notion of influence for a path through the graph.", "We posit that any natural path decomposition should satisfy the following conservation property: the sum of the influence of each path from the input to the output should equal the influence of the input on the QoI.", "We then observe that the chain rule from calculus offers one such natural decomposition, yielding Definition 6.",
"Definition 5 (Model).", "A model is an acyclic graph with a set of nodes, edges, and activation functions associated with each node.", "The output of a node, $n$, on input $x$ is $n(x) \stackrel{\text{def}}{=} f_n(n_1(x), \ldots, n_m(x))$ where $n_1, \ldots, n_m$ are $n$'s predecessors and $f_n$ is its activation function.", "If $n$ does not have predecessors (it is an input), its activation is $f_n(x)$.", "We assume that the domains and ranges of all activation functions are real vectors of arbitrary dimension.", "We will write $n_1 \rightarrow n_2$ to denote an edge (i.e., $n_1$ is a direct predecessor of $n_2$), and $n_1 \rightsquigarrow n_2$ to denote the set of all paths from $n_1$ to $n_2$.", "The partial derivative of the activation of $n_2$ with respect to the activation of $n_1$ will be written $\frac{\partial n_2}{\partial n_1}$.", "This view of a computation model is an extension of network decompositions from attribution methods using the natural concept of layers or slices (Dhamdhere et al., 2018; Leino et al., 2018; Bach et al., 2015).", "This decomposition can be tailored to the level of granularity we wish to expose.", "Moreover, in RNN models where no single and consistent natural layer can be found due to the variable-length inputs, a more general graph view provides the necessary versatility.", "Definition 6 (Path Influence).", "Expanding Definition 4 using the chain rule, the influence of input node, $s$, on target node, $t$, in a model, $G$, is: $\chi_s = \mathbb{E}_{x \sim D(x)}\left[\frac{\partial t}{\partial s}(x)\right] = \mathbb{E}_{x \sim D(x)}\left[\sum_{p \in (s \rightsquigarrow t)} \prod_{(n_1 \rightarrow n_2) \in p} \frac{\partial n_2}{\partial n_1}(x)\right] = \sum_{p \in (s \rightsquigarrow t)} \underbrace{\mathbb{E}_{x \sim D(x)}\left[\prod_{(n_1 \rightarrow n_2) \in p} \frac{\partial n_2}{\partial n_1}(x)\right]}_{\chi_p^s}$ Note that the same LSTM can be modeled with different graphs to achieve a desired level of abstraction.", "We will use two particular levels of granularity: a coarse gate-level abstraction where nodes are LSTM gates, and a fine neuron-level abstraction where nodes are the vector elements of those gates.", "Though the choice of abstraction granularity has no effect on the represented model semantics, it has implications on graph paths and the scale of their individual contributions in a model.", "Gate-level and Neuron-level Paths We define the set of gate-level nodes to include: $\{f^l_t, i^l_t, o^l_t, c^l_t, \tilde{c}^l_t, h^l_t : t < T, l < L\}$, where $T$ is the number of time steps (words) and $L$ is the number of LSTM layers.", "The node set also includes an attribution-specific input node ($w - w^0_t$) and an output node (the QoI).", "An example of this is illustrated in Figure 2.",
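A toy example illustrates the conservation property behind Definition 6: on a small hand-built scalar graph, summing the per-path products of edge partial derivatives recovers the end-to-end derivative. This is purely illustrative and not the paper's code.

```python
import math

def edge_partials(s: float) -> dict:
    # Graph s -> a -> t and s -> b -> t with a = 2s, b = s**2, t = a + 3b,
    # so t = 2s + 3s^2 and dt/ds = 2 + 6s.
    return {("s", "a"): 2.0, ("a", "t"): 1.0, ("s", "b"): 2 * s, ("b", "t"): 3.0}

def path_influences(s: float, paths) -> dict:
    parts = edge_partials(s)
    # Influence of a path = product of its edge partial derivatives.
    return {p: math.prod(parts[e] for e in zip(p, p[1:])) for p in paths}

chis = path_influences(1.5, [("s", "a", "t"), ("s", "b", "t")])
print(chis)                # {('s','a','t'): 2.0, ('s','b','t'): 9.0}
print(sum(chis.values()))  # 11.0, matching dt/ds = 2 + 6 * 1.5
```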
"We exclude intermediate calculations (the solid nodes of Figure 2, such as $f_t \odot c_{t-1}$) as their inclusion does not change the set of paths in a graph.", "We can also break down each vector node into scalar components and further decompose the gate-level model into a neuron-level one: $\{f^l_{ti}, i^l_{ti}, o^l_{ti}, c^l_{ti}, \tilde{c}^l_{ti}, h^l_{ti} : t < T, i < H, l < L\}$, where $H$ is the size of each gate vector.", "This decomposition results in an exponentially large number of paths.", "However, since many functions between gates in an LSTM are element-wise operations, neuron-level connections between many neighboring gates are sparse.", "Path Refinement While the neuron-level path decomposition can theoretically be performed on the whole network, in practice we choose to specify a gate-level path first, then further decompose that path into neuron-level paths.", "We also collapse selected vector nodes, allowing us to further localize a concept on a neuron level while avoiding an explosion in the number of paths.", "The effect of this pipeline will be empirically justified in Section 4.", "[Figure 2: Influence path diagram in a NA task for the 2-layer LSTM model.]", "In this section we apply influence path decomposition to the NA task.", "We investigate major gate-level paths and their influence concentrations in Section 4.2.", "We further show the relations between these paths and the paths carrying grammatical number from intervening nouns (i.e., attractors & helpful nouns) in Section 4.3.", "In both we also investigate high-attribution neurons along primary paths, allowing us to compare our results to prior work.", "We study the exact combination of language model and NA datasets used in the closely related prior work of Lakretz et al. (2019).", "The pre-trained language model of Gulordava et al. (2018) and Lakretz et al. (2019) is a 2-layer LSTM trained from Wikipedia articles.", "The number agreement datasets of Lakretz et al. are several synthetically generated datasets varying in syntactic structures and in the number of nouns between the subject and verb.", "For example, nounPP refers to sentences containing a noun subject followed by a prepositional phrase such as in Figure 1.", "Each NA task has subject number (and intervening noun number if present) realizations along singular (S) and plural (P) forms.", "In listings we denote subject number (S or P) first and additional noun (if any) number second.", "Details including the accuracy of the model on the NA tasks are summarized by Lakretz et al. (2019).", "Our evaluation replicates part of Table 2 in said work.", "We begin with the attribution of subject number on its corresponding verb, as decomposed per Definition 6.", "Among all NA tasks, the gate-level path carrying the most attribution is one following the same pattern, with differences only in the size of contexts.", "With indices $t$ and $t+n$ referring to the subject and verb respectively, this path, which we term the primary path of subject-verb number agreement, is as follows: $x_t (\text{DoI}) \rightarrow \tilde{c}^0 \rightarrow c^0 \rightarrow h^0 \rightarrow \tilde{c}^1 \rightarrow (c^1)^* \rightarrow h^1 \rightarrow \text{QoI}$ The primary path is represented by the red path in Figure 2.",
"The influence first passes through the temporary cell state $\tilde{c}^0$, the only non-sigmoid cell state capable of storing more information than sigmoid gates, since $i, f, o \in (0, 1)$ while the tanh gate $\tilde{c} \in (-1, 1)$.", "Then the path passes through $c^0$, $h^0$, and similarly to $c^1$ through $\tilde{c}^1$, jumping from the first to the second layer.", "The path then stays at $c^1$, through the direct connections between cell states of neighbouring time steps, as though it is stored there without any interference from subsequent words.", "As a result, this path is intuitively the most efficient and simplistic way for the model to encode and store a number bit.", "The extent to which this path can be viewed as primary is measured by two metrics.", "The results across a subset of syntactic structures and number conditions mirroring those in Lakretz et al. (2019) are shown in Table 1.", "We include 3 representative variations of the task.", "The metrics are:", "1. $t$-value: the probability that a given path has greater attribution than a uniformly sampled path on a uniformly sampled sentence.", "2. Positive/Negative Share ($\pm$Share): the expected (over sentences) fraction of total positive (or negative) attribution assigned to the given positive (or negative) path.", "Observation 1.", "The same one primary path consistently carries the largest amount of positive attribution across all contexts as compared to all other paths.", "Even in the case of its smallest share (nounPPAdv), the 3% share is large when taking into account the more than 40,000 paths in total.", "Sentences with singular subjects (top part of Table 1) have a slightly stronger concentration of attribution in the primary path than plural subjects (bottom part of Table 1), possibly because English plural (infinitive) verb forms occur more frequently than singular forms, so less concentration of attribution is needed given the default signal in place.", "Primary Neurons We further decompose the primary path into influence passing through each neuron.", "Since only connections between second-layer cell states are sparse, we only decompose the segment of the primary path from $c^1_t$ to $c^1_{t+n}$, resulting in a total of 650 (the number of hidden units) neuron-level paths.", "(We leave the non-sparse decompositions for future work.)", "The path for neuron $i$, for example, is represented as: $x_t (\text{DoI}) \rightarrow \tilde{c}^0 \rightarrow c^0 \rightarrow h^0 \rightarrow \tilde{c}^1 \rightarrow (c^1_i)^* \rightarrow h^1 \rightarrow \text{QoI}$", "To compare the attribution of an individual neuron with all other neurons, we employ a $t$-value similar to the aforementioned one, where each neuron-level path is compared against other neuron-level paths.", "The results of the neuron-level analysis are shown in Table 1 (From Subject, Primary Neuron).", "Out of the 650 neuron-level paths in the gate-level primary path, we discover two neurons with consistently the most attribution (neurons 125 and 337 of the second layer).", "This indicates the number concept is concentrated in only two neurons.", "Comparison with Lakretz et al. (2019) Uncoincidentally, both neurons match the units found through ablation by Lakretz et al., who use the same model and dataset (neurons 988 and 776 are neurons 125 and 337 of the second layer).", "This accordance to some extent verifies that the neurons found through influence paths are functionally important.",
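The t-value metric above admits a direct empirical estimate; the hypothetical sketch below (names and data layout assumed, not taken from the released code) makes the comparison procedure explicit.

```python
import random

def t_value(path_attr: dict, all_attrs: dict, n_samples: int = 10000) -> float:
    """Probability that the given path out-attributes a uniformly sampled
    path on a uniformly sampled sentence.

    path_attr: sentence id -> attribution of the path under study
    all_attrs: sentence id -> list of attributions of every path
    """
    sentences = list(all_attrs)
    wins = 0
    for _ in range(n_samples):
        sid = random.choice(sentences)
        wins += path_attr[sid] > random.choice(all_attrs[sid])
    return wins / n_samples
```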
"However, the $t$-values shown in Table 1 show that both neurons 125 and 337 are influential regardless of the subject number, whereas Lakretz et al. assign a subject number for each of these two neurons due to their disparate effect in lowering accuracy in ablation experiments.", "One possible reason is that the ablation mechanism used in (Lakretz et al., 2019) assumes that a neutral number state can be represented by zero-activations for all gates, while in reality the network may encode the neutral state differently for different gates.", "Another major distinction of our analysis from Lakretz et al. (2019) regards simple cases with no
Task C: From Subject [P+, |P|, +/-Share, t, t_125, t_337]; From Intervening Noun [P+, |P|, +/-Share, t, t_125, t_337]
Simple S: [1.0, 16, 0.47, 1.0, 0.99, 1.0]; [-]
nounPP SS: [1.0, 6946, 0.1, 1.0, 1.0, 1.0]; [0.82, 16, 0.31(+), 0.9, 0.78, 0.98]
nounPP SP: [1.0, 6946, 0.1, 1.0, 1.0, 1.0]; [0.23, 16, 0.24(-), 0.23, 0.06, 0.15]
nounPPAdv SS: [1.0, 41561, 0.07, 1.0, 1.0, 1.0]; [0.92, 152, 0.09(+), 0.96, 0.85, 1.0]
nounPPAdv SP: [1.0, 41561, 0.07, 1.0, 1.0, 1.0]; [0.32, 152, 0.09(-), 0.14, 0.13, 0.01]
Simple P: [1.0, 16, 0.33, 0.93, 0.97, 0.99]; [-]
nounPP PS: [1.0, 6946, 0.05, 0.91, 0.99, 1.0]; [0.06, 16, 0.28(-), 0.21, 0.22, 0.12]
nounPP PP: [1.0, 6946, 0.05, 0.92, 0.99, 1.0]; [0.95, 16, 0.31(+), 0.9, 0.97, 0.79]
nounPPAdv PS: [1.0, 41561, 0.03, 0.93, 0.99, 1.0]; [0.32, 152, 0.04(-), 0.28, 0.41, 0.16]
nounPPAdv PP: [1.0, 41561, 0.03, 0.92, 0.99, 1.0]; [0.83, 152, 0.07(+), 0.92, 0.99, 0.84]
Table 1: Statistics for attribution of primary paths and neurons from the subject/intervening noun: P+ is the percentage of sentences with positive input attribution.", "word between subjects and verbs.", "Unlike Lakretz et al., who claim that the two identified neurons are long-term neurons, we discover that these two neurons are also the only neurons important for short-term number agreement.", "This localization cannot be achieved by diagnostic classifiers used by Lakretz et al., indicating that the signal can be better uncovered using influence-based paths rather than association-based methods such as ablation.", "Next we focus on NA tasks with intervening nouns and make the following observation:", "Observation 2.", "The primary subject-verb path still accounts for the largest positive attribution in contexts with either attractors or helpful nouns.", "A slightly worse NA task performance (Lakretz et al., 2019) in cases of attractors (SP, PS) indicates that they interfere with prediction of the correct verb.", "In contrast, we also observe that helpful nouns (SS, PP) contribute positively to the correct verb number (although they should not from a grammar perspective).", "Primary Path from the Intervening Noun We adapt our number agreement concept (Definition 2) by focusing the DoI on the intervening noun, thereby allowing us to decompose its influence on the verb number not grammatically associated with it.", "In Table 1 (From Intervening Noun) we discover a similar primary path from the intervening noun: Observation 3.",
"Attribution towards verb number from intervening nouns follows the same primary path as the subject but is of lower magnitude and
Task C: $\bar{C}_{si}$, $\bar{C}_s$, $\bar{C}_i$, $C_{si}$, $C_s$, $C_i$, $C$
nounPP SS: .66, .77, .95, .93, .71, .77, .95
nounPP SP: .64, .36, .94, .64, .75, .40, .74
nounPP PS: .34, .24, .92, .40, .69, .18, .80
nounPP PP: .39, .66, .91, .76, .68, .58, .97
nounPP mean: .51, .51, .93, .68, .70, .48, .87
nounPPAdv SS: .70, .86, .98, .73, .56, .43, 1.0
nounPPAdv SP: .70, .43, .99, .50, .60, .27, .88
nounPPAdv PS: .38, .22, .98, .76, .79, .56, .96
nounPPAdv PP: .39, .67, .98, .84, .83, .76, 1.0
nounPPAdv mean: .54, .55, .99, .71, .69, .50, .96
Table 2: Model compression accuracy under various compression schemes.", "reflects either positive or negative attribution in cases of helpful nouns or attractors, respectively.", "This disparity in magnitude is expected since the language model possibly identifies the subject as the head noun through prepositions such as behind in Figure 1, while still needing to track the number of the intervening noun in possible clausal structures.", "Such a need is comparatively weak relative to tracking the numbers of subjects, possibly because in English, intervening clauses are rarer than intervening non-clauses.", "Similar arguments can be made for neuron-level paths.", "Though the primary paths are the highest contributors to NA tasks, it is possible that collections of associated non-primary paths account for more of the verb number concept.", "We gauge the extent to which the primary paths alone are responsible for the concept with compression/ablation experiments.", "We show that the computations relevant to a specific path alone are sufficient for maintaining performance on the NA task.", "We compress the model by specifying node sets to preserve, and intervene on the activations of all other nodes by setting their activations to constant expected values (averages over all samples).", "We choose the expected values instead of full ablation (setting them to zero), as ablation would nullify the function of sigmoid gates.", "For example, to compress the model down to the red path in Figure 2, we only calculate the activation for gates $\tilde{c}^0_t$ and $\tilde{c}^1_t$ for each sample, while setting the activations of all other $\tilde{c}, f, o, i$ to their average values over all samples.", "In Table 2, we list variations of the compression schemes based on the following preserved node sets: $C \stackrel{\text{def}}{=} \{f^l_t, i^l_t, o^l_t, \tilde{c}^l_t : t_{sub} < t < t_{verb}, l \in \{0, 1\}\}$, $C_s \stackrel{\text{def}}{=} \{\tilde{c}^0_{t_{sub}}, \tilde{c}^1_{t_{sub}}\}$, $C_i \stackrel{\text{def}}{=} \{\tilde{c}^0_{t_{int}}, \tilde{c}^1_{t_{int}}\}$, $C_{si} \stackrel{\text{def}}{=} C_s \cup C_i$ For example, column $C_{si}$ in Table 2 shows the accuracy when the compressed model only retains the primary path from both the subject and the intervening noun while the computations of all other paths are set to their expected values; while in $\bar{C}_{si}$, all paths but the paths in $C_{si}$ are kept.", "We observe that the best compressed model is $\bar{C}_i$, where the primary path from the intervening noun is left out; it performs even better than the original model; the increase comes from the cases with attractors (PS, SP).", "This indicates that eliminating the primary path from the attractor improves the model.", "The next best models apart from $C$ are $C_s$ and $C_{si}$, where primary paths are kept.", "Compressed models without the primary subject-verb path ($\bar{C}_{si}$, $\bar{C}_s$, $C_i$) have performances close to random guessing.",
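The mean-value intervention behind these compression schemes can be sketched as follows. This is not the authors' code: treating each gate as a separately hookable module and having precomputed per-gate average activations are simplifying assumptions (a stock LSTM implementation would require exposing its gates manually).

```python
import torch

def compress(model: torch.nn.Module, gate_means: dict, preserve: set):
    """Clamp every gate not in `preserve` to its dataset-average activation."""
    handles = []
    for name, module in model.named_modules():
        if name in gate_means and name not in preserve:
            mean = gate_means[name]
            def hook(mod, inputs, output, mean=mean):
                # Expected value rather than zero: zero-ablation would
                # nullify the multiplicative role of sigmoid gates.
                return mean.expand_as(output)
            handles.append(module.register_forward_hook(hook))
    return handles  # call h.remove() on each handle to restore the model
```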
"Observation 4.", "Accuracy under path-based model compression tests corroborates that primary paths account for most of the subject number agreement concept of the LSTM.", "By comparing the SP and PS rows of $C_{si}$, $C_s$, $\bar{C}_s$, and $\bar{C}_i$, we observe the effect of attractors in misguiding the model into giving wrong predictions.", "Similarly, we see that helpful nouns (SS, PP) help guide the models to make more accurate predictions, though this is not grammatically justified.", "The combination of finely-tuned attribution and gradient decomposition lets us investigate the handling of the grammatical number agreement concept attributed to paths across LSTM components.", "The concentration of attribution to a primary path and two primary cell state neurons, and its persistence in a variety of short-term and long-term contexts, even with confounding attractors, demonstrates that the concept's handling is, to a large degree, general and localized.", "Though the heuristic decisioning aspect of an LSTM is present in the large quantities of paths with non-zero influence, their overall contribution to the concept is insignificant as compared to the primary path.", "Node-based compression results further corroborate these conclusions.", "We note, however, that our results are based on datasets exercising the agreement concept in contexts of a limited size.", "We speculate that the primary path's attribution diminishes with the length of the context, which would suggest that at some context size, the handling of number will devolve to be mostly heuristic-like with no significant primary paths.", "Though our present datasets do not pose computational problems, the number of paths, at both the neuron and the gate level, is exponential with respect to context size.", "Investigating longer contexts, the diminishing dominance of the primary path, and the requisite algorithmic scalability requirements are elements of our ongoing work.", "We also note that our method can be expanded to explore number agreement in more complicated sentences with clausal structures, or other syntactic/semantic signals such as coreference or gender agreement.", "Acknowledgement This work was developed with the support of NSF grant CNS-1704845 as well as by DARPA and the Air Force Research Laboratory under agreement number FA8750-15-2-0277.", "The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon.", "The views, opinions, and/or findings expressed are those of the author(s) and should not be interpreted as representing the official views or policies of DARPA, the Air Force Research Laboratory, the National Science Foundation, or the U.S. Government.", "We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan V GPU used for this work." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "method", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "result", "objective", "result", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "other", "other", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "other", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "other", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "other", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "result", "objective", "abstain", "objective", "objective", "objective", "abstain", "abstain", "method" ]
[ "Text summarization helps readers capture salient information from documents, news, interviews, and meetings.", "However, most state-of-the-art pretrained language models (LM) are unable to efficiently process long text for many summarization tasks.", "In this paper, we propose SUMMN , a simple, flexible, and effective multi-stage framework for input texts that are longer than the maximum context length of typical pretrained LMs.", "SUMMN first splits the data samples and generates a coarse summary in multiple stages and then produces the final fine-grained summary based on it.", "Our framework can process input text of arbitrary length by adjusting the number of stages, while keeping the LM input size fixed.", "Moreover, it can deal with both single-source documents and dialogues, and it can be used on top of different backbone abstractive summarization models.", "To the best of our knowledge, SUMMN is the first multi-stage split-then-summarize framework for long input summarization.", "Our experiments demonstrate that SUMMN outperforms previous state-of-the-art methods by improving ROUGE scores on three long meeting summarization datasets AMI, ICSI, and QMSum, two long TV series datasets from SummScreen, and a long document summarization dataset GovReport.", "Our data and code are available at https://github.com/ psunlpgroup/Summ-N .", "Abstractive summarization helps readers capture salient information from various sources such as documents, news, interviews, and meetings.", "Previous work has primarily focused on short texts of news (Gehrmann et al., 2018; Zhang et al., 2019) and short conversations (Gliwa et al., 2019; Chen and Yang, 2021).", "Recently proposed longer dialogue and document summarization tasks (Zhong et al., 2021b; Huang et al., 2021; Chen et al., 2021; Zhu et al., 2021a) pose challenges for current large pretrained language models due to the time and memory complexity of training, as well as limited input lengths these models can consume.", "A common method to handle long text reduces the input to a shorter one.", "This can be accomplished by truncating inputs (Lewis et al., 2020) or employing retrieve-then-summarize pipelines (Zhong et al., 2021b).", "However, these methods break the dependency of the context and decrease the number of tokens that the model can read, i.e., the receptive field of the model.", "The cutting-off model depends on the lead bias of the source text, while the retrieve-then-summarize models heavily rely on the independence of retrieved units (turns or sentences) which are usually scattered throughout the source text.", "Another approach optimizes the attention mechanism in Transformers to accommodate longer inputs by reducing the impact of quadratic complexity of the attention process using Locality-sensitive hashing (LSH) attention (Kitaev et al., 2020) and Sinkhorn attention (Tay et al., 2020).", "Additionally, HMNet (Zhu et al., 2020) and HAT-BART (Rohde et al., 2021) use hierarchical self-attention to extend the input limitation of typical self-attention models.", "However, the simplified attention mechanism weakens the power of pretrained Transformer models, e.g., HMNet is not pretrained on external large-scaled unsupervised datasets as BART did.", "In this paper, we propose SUMMN , a multi-stage framework for long dialogue and document summarization.", "Figure 1 shows the structure of SUMMN .", "First, it divides each source text into segments so that each can be completely fed into the backbone abstractive summarization model.", 
"Then, it matches each of them with the subset of target text using a ROUGE-based greedy algorithm.", "Next, each stage generates a coarse summary for each segment and concatenates them together as the input to the 1592 Input Segments TargetTargetTargetTargetTarget Match S u mm a r i z e r Finetune Inference S u mm a r i z e r Fine-grained Summary Target Segments Coarse Segments Coarse Summary Source Target Data Segmentation Coarse Summary Generation Fine-grained Summary Generation Text and Summary to form a pair of sample Model to generate summaries Original target Source Target Split Coarse Stage Fine-grained Stage Finetune Inference Figure 1: Workflow of the proposed SUMMN framework.", "next stage.", "After multiple stages of compression and summarization, the final stage produces a fine-grained summary.", "The process expands the model context to the full reception field, meaning that the proposed model can read the full input no matter how long the input is.", "Additionally, retrieve-then-summarize pipelines (Zhang et al., 2019) extract sentences individually, leading to the loss of the context information for understanding utterances.", "By contrast, SUMMN only cuts the source text at the end of each segment, so that the context of most sentences are retained.", "It does not assume lead bias because each part of the source is fully used.", "In addition, in each stage, it leverages a backbone abstractive summarization model to recursively generate the summaries.", "Therefore, it enjoys the full power of the pretrained language models because the framework preserves the intact structure of Transformers.", "SUMMN is flexible to inputs with different lengths by adjusting the number of stages.", "SUMMN can change the number of coarse stages according to the compression ratio between source and target, the input limit of the backbone model, and the input source length.", "We give the empirical formula to decide the number of needed stages for every tested dataset.", "Our experiments show that ROUGE increases on all datasets when increasing the number of stages from one to the appropriate number.", "Additionally, SUMMN is flexible because it can be applied to different backbone summarization models.", "For example, we found that the ROUGE scores increase sharply on the AMI dataset when replacing the backbone BART model with T5 (Raffel et al., 2020) and PEGASUS (Zhang et al., 2019).", "We conduct extensive experiments on long-input summarization datasets in multiple domains.", "The results demonstrate that the proposed model significantly outperforms previous state-of-the-art methods according to automatic and human evaluations on three long meeting summarization datasets (AMI, ICSI, QMSum) and one long TV series summarization dataset (SummScreen).", "It also achieves state-of-the-art performance on a long document summarization dataset (GovReport).", "These datasets include document summarization as well as both query-based and query-independent long dialogue summarization tasks.", "Our contributions are: (1) We propose SUMMN , a simple, flexible, and effective framework for long dialogue and document summarization.", "To the best of our knowledge, SUMMN is the first multi-stage split-then-summarize framework to solve long text summarization tasks.", "(2) We evaluate SUMMN on both dialogue and document domains and improve the baseline model by a large margin.", "(3) We analyze and compare the proposed framework with baselines and discuss its merits in detail.", "Long Document Summarization 
Long document summarization has been studied in multiple domains, such as news (Liu et al., 2021; Zhu et al., 2021b), patents (Trappey et al., 2009), books (Kryscinski et al., 2021; Wu et al., 2021), scientific publications (Qazvinian and Radev, 2008; Mao et al., 2021), and medical records (Cohan", "et al., 2018).", "Gidiotis and Tsoumakas (2020) proposed a divide-and-conquer method by splitting the input into multiple segments, summarizing them separately, and combining the summary pieces.", "Grail et al. (2021) proposed a hierarchical neural model to process segmented input blocks.", "Compared with SUMMN, these models only split the input once, implying a lack of flexibility when handling longer input.", "The GovReport dataset was recently introduced, containing documents with more than 9000 words, thus greatly challenging the capabilities of current models such as PEGASUS (Zhang et al., 2019), TLM (Pilault et al., 2020), and BIGBIRD (Zaheer et al., 2020).", "To handle this dataset, Huang et al. (2021) proposed head-wise positional strides to reduce the cost of the encoder-decoder attention.", "Similarly, models such as Longformer (Beltagy et al., 2020) and Reformer (Kitaev et al., 2020) adjust attention mechanisms in Transformers to consume longer inputs.", "However, these models sparsify the attention structure of the pretrained model to fit the longer source text.", "By contrast, SUMMN is able to maintain the full structure of various pretrained models.", "Long Dialogue Summarization Various models have also been proposed to handle long dialogue summarization.", "HMNet (Zhu et al., 2020) and HAT-BART (Rohde et al., 2021) leverage a two-level transformer-based model to obtain word-level and sentence-level representations.", "DialLM (Zhong et al., 2021a) and Longformer-BART-arg (Fabbri et al., 2021) use finetuning or data augmentation to incorporate external knowledge and maintain accuracy on lengthy input.", "Different from these models, SUMMN is a framework that does not modify the structure of the backbone attention model.", "Multi-Stage Text Generation Multiple multi-stage coarse-to-fine frameworks have been studied for many other text generation tasks, such as dialogue state tracking (Chen et al., 2020), neural story generation (Fan et al., 2018), and extractive summarization (Xu and Lapata, 2020).", "In summarization, a two-stage extract-and-summarize pipeline is commonly used (Zhang et al., 2019; Pilault et al., 2020; Zhao et al., 2020).", "However, unlike that work, our framework aims at long input summarization with fully abstractive intermediate summaries, meaning that SUMMN can be viewed as a summarize-then-summarize pipeline.", "Figure 1 shows the workflow of SUMMN.", "The workflow includes two types of stages: N coarse stages and one fine-grained stage.", "Coarse stages include data segmentation and coarse summary generation, while the fine-grained stage directly generates the summary as the final result.", "In addition, we have N + 1 separate models, one for each stage, and each is trained separately.", "Our experiments show that the performance drops if different stages share parameters (Section 4.2).", "SUMMN can adjust and compute the number of coarse stages N according to the statistics of the dataset and the model.", "To formulate our task, we denote one sample of the source text as $D = \{D_1, D_2, \ldots, D_m\}$, where $D_i$ indicates one sentence in a document or one turn in a dialogue.", "For query-based summarization, there is also a query $Q$.", "The goal is to generate a
summary $T$, given $D$ and the optional $Q$.", "In long text summarization, the number of tokens in the source data usually exceeds the limit of the backbone summarization models, thus reducing the quality of the summary.", "To make sure that the model can capture information about all source tokens, we apply a segmentation algorithm to long input summarization datasets.", "First, we segment the source text so that the data input to the backbone model does not exceed the length limit.", "Then, we apply a greedy algorithm to find the best target summary that matches the source segments.", "Source Segmentation Assume that the maximum number of input tokens of the backbone model is $K$.", "To completely input the source information, we cut the input $D$ (between sentences) into multiple segments, each of them containing fewer than $K$ tokens.", "Given the input $D$, we will have $n$ segments $S = \{S_1, S_2, \ldots, S_n\}$ where $S_i \subseteq D$ is a segment of $D$.", "For query-based summarization tasks, we simply concatenate the query to the beginning of each segment, i.e., $S_i \leftarrow Q \bigoplus S_i$.", "In both cases, the number of tokens in each segment is less than the hyper-parameter $K$.", "Target Matching Segmenting the source text results in $n$ source pieces $S_i$.", "We match each $S_i$ with a target segment $T_i \subseteq T$ to form the new pair $(S_i, T_i)$ for the next step.", "We use a greedy algorithm for target matching.", "We first split $T$ into separate sentences $T^s = \{T^s_1, T^s_2, \ldots, T^s_k\}$.", "Then, each segment $S_i$ is matched with a subset of $T^s$ such that the ROUGE-1 score between the subset and $S_i$ is maximized.", "However, it is not feasible to find the optimal set due to the considerable running time.", "We apply a simple greedy approximation to find such a subset (a sketch follows below).", "Starting from an empty set $T_i$, we iteratively add to the subset the sentence with the highest ROUGE-1 gain between $T_i$ and $S_i$.", "Algorithm 1 shows how we obtain the new training pair $(S_i, T_i)$.", "$\bigoplus$ indicates the concatenation of sentences while keeping them in the same order as in the original text.", "We use ROUGE-1 as the matching criterion because a higher ROUGE-1 score usually implies higher scores on the other metrics such as ROUGE-2 or ROUGE-L, while ROUGE-1 enjoys lower time complexity compared with other ROUGE metrics.", "This matching algorithm also ensures $T_i \neq \emptyset$, so that each $S_i$ can be matched to at least one target sentence.", "A sentence $t \in T^s$ can be added to multiple subsets $T_i$ because one sentence of the summary may need information from multiple segments.",
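A minimal sketch of this greedy target matching follows. The rouge1 helper is a simplified unigram-F1 stand-in for a full ROUGE-1 implementation, and all names are illustrative rather than taken from the released code.

```python
from collections import Counter

def rouge1(candidate: str, reference: str) -> float:
    """Simplified unigram ROUGE-1 F1."""
    c, r = Counter(candidate.split()), Counter(reference.split())
    overlap = sum((c & r).values())
    if overlap == 0:
        return 0.0
    prec, rec = overlap / sum(c.values()), overlap / sum(r.values())
    return 2 * prec * rec / (prec + rec)

def match_target(segment: str, target_sents: list) -> list:
    """Greedily add the target sentence with the highest ROUGE-1 gain."""
    chosen, best = set(), 0.0
    while len(chosen) < len(target_sents):
        gains = [(rouge1(" ".join(target_sents[j] for j in sorted(chosen | {i})),
                         segment), i)
                 for i in range(len(target_sents)) if i not in chosen]
        score, i = max(gains)
        if chosen and score <= best:  # stop when there is no gain, but always
            break                     # keep at least one sentence (T_i nonempty)
        chosen.add(i)
        best = score
    return [target_sents[j] for j in sorted(chosen)]
```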
"In coarse summary generation, we train a summarization model that takes the segmented data as input.", "We first collect the training samples (S_i, T_i) generated by data segmentation to form a new dataset.", "This augments the source data to d_1/K times that of the cut-off methods, where d_1 = |D^1| indicates the averaged number of tokens of the original source text.", "Thus, data segmentation helps the summarizer to better learn the task of the current stage.", "Additionally, because we incorporate the full input using segmentation, our method does not rely on the leading bias of the cut-off method, which only considers the first segment S_1.", "Afterward, we use these data to train a neural summarizer.", "This way, our model treats each part of the source text as equally important.", "Given a source segment S_i and an optional query Q, we obtain the coarse summary segments using a backbone summarization model: C^l_i = SUMM^l(Q, S_i), where l ∈ [1, N] is the index of the current stage.", "Then, the n coarse summaries corresponding to the original source S = {S_1, S_2, ..., S_n} are concatenated: C^l = C^l_1 ⊕ C^l_2 ⊕ ... ⊕ C^l_n.", "We use C^l as the new source text of the next stage, which compresses the input source data D^l.", "i.e. D^(l+1) = C^l.", "To pair with the D^(l+1), the target of the next stage is copied from the original dataset, i.e. T^(l+1) = T.", "The proposed framework is applicable to different backbone models SUMM^l(·), such as BART (Lewis et al., 2020) and T5 (Raffel et al., 2020).", "We pick BART as the backbone model because it can best illustrate the benefits of our framework (Section 4.2).", "The number of stages can be estimated from data statistics and model characteristics.", "In SUMMN, each coarse stage compresses the input to a shorter length.", "After N rounds of coarse stages, the averaged length of the source text falls below K, and the dataset is then fed into the fine-grained stage.", "Hence, the number of coarse stages can be computed by the following equation (details can be found in Appendix A): N = ⌈(log K − log d_1) / (log c_1 − log K)⌉, where d_1 and c_1 are the average length of the source text and of the coarse segments in stage 1.", "In Section 5.7 and Table 9, we demonstrate this estimation is close to the empirical number of coarse stages.",
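The stage-count estimate from the equation above, as it might be computed in practice. The call at the bottom uses GovReport-like source statistics, but the average coarse-segment length c_1 is a made-up value for illustration:

```python
import math

def estimate_num_coarse_stages(d1, c1, K):
    # N = ceil((log K - log d1) / (log c1 - log K)); assumes c1 < K < d1,
    # i.e. each coarse stage genuinely compresses the input.
    return max(0, math.ceil((math.log(K) - math.log(d1))
                            / (math.log(c1) - math.log(K))))

# Illustrative call; c1 = 300 is a hypothetical average coarse-segment length.
print(estimate_num_coarse_stages(d1=9409.4, c1=300.0, K=1024))  # -> 2
```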
"The greedy algorithm in SUMMN for target matching is critical to the performance.", "Consider a duplication algorithm where each segment S_i is simply paired with the target T, i.e. T_i = T.", "Since the target text is longer than the text segmented by Algorithm 1, the generated summary of each coarse stage will be longer as well, leading to a lower compression speed and a larger N.", "Besides, [Table 1: The summarization datasets for evaluation (Dataset / Type / Domain / Size / Source length / Target length / Query / N+1) — AMI: Dialogue, Meetings, 137, 6007.7, 296.6, no, 2; ICSI: Dialogue, Meetings, 59, 13317.3, 488.5, no, 3; QMSum: Dialogue, Meetings, 1808, 9069.8, 69.6, yes, 2; SummScreen: Dialogue, TV shows, 26851, 6612.5, 337.4, no, 2; GovReport: Document, Reports, 19466, 9409.4, 553.4, no, 3]", "the duplication of the target will confuse the model, because some source segments will probably be paired with the same target, causing the model to generate duplicated content.", "Experiments (Table 7, − stage 2 versus − stage 2 & tar. seg.) show that ROUGE scores decline considerably when greedy target matching is replaced by the duplication algorithm.", "When the input source D^l is shorter than K, we can proceed to the fine-grained stage.", "In this stage, D^l is used to train a summarization model from scratch to obtain the final summary.", "The fine-grained stage works the same way as the vanilla backbone model.", "In fact, SUMMN with N = 0 is the backbone summarizer.", "In the fine-grained stage, the model is directly trained on the dataset (D^(N+1), T) from the last coarse stage, and the summary is obtained as the final output of SUMMN: F = SUMM^(N+1)(Q, D^(N+1)).", "It is worth noting that, although some source texts may be shorter than 2 segments, i.e. d_i ≤ K, we still add them in all stages, so that each summarization model can be trained on the full dataset.", "We first list the datasets and metrics to evaluate the model.", "Then, we introduce the backbone model and baselines for comparisons.", "Finally, we present some implementation details.", "AMI & ICSI (McCowan et al., 2005; Janin et al., 2003) are meeting scripts generated by Automatic Speech Recognition (ASR) systems.", "AMI is collected from product design meetings in a company while ICSI is collected from academic group", "meetings.", "Because the transcripts are produced by ASR, there is a word error rate of 36% for AMI and 37% for ICSI.", "QMSum (Zhong et al., 2021b) is a query-based meeting summarization dataset.", "It consists of meetings from three domains, including AMI and ICSI, and the committee meetings of the Welsh Parliament and the Parliament of Canada.", "Each query and sample are written by experts.", "SummScreen (Chen et al., 2021) consists of community-contributed transcripts of television show episodes from The TVMegaSite, Inc. (TMS) and ForeverDream (FD) (both QMSum and SummScreen can be accessed through SummerTime (Ni et al., 2021)).", "The summary of each transcript is the recap from TMS, or a recap of the FD shows from Wikipedia and TVMaze.", "GovReport (Huang et al., 2021) is a large-scale long document summarization dataset with 19,466 long reports published by the U.S. Government Accountability Office on national policy issues.", "We use ROUGE (Lin, 2004) as the automatic evaluation metric (computed with pyrouge, a Python wrapper for ROUGE: https://github.com/bheinzerling/pyrouge).", "We split summary outputs into sentences to calculate the ROUGE-L score.", "If not specified, F1 scores are used in all results.",
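A sketch of the metric setup just described, using the rouge-score package as a stand-in for pyrouge (which the paper actually uses); rougeLsum emulates the sentence-split ROUGE-L, and the naive sentence splitter is our simplification:

```python
# pip install rouge-score  (a stand-in here; the paper itself uses pyrouge)
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeLsum"], use_stemmer=True)

def rouge_f1(prediction, reference):
    # rougeLsum scores ROUGE-L over newline-separated sentences, mirroring the
    # practice of splitting summary outputs into sentences for ROUGE-L.
    pred = "\n".join(s.strip() for s in prediction.split(". "))
    ref = "\n".join(s.strip() for s in reference.split(". "))
    scores = scorer.score(ref, pred)  # library signature: score(target, prediction)
    return {name: s.fmeasure for name, s in scores.items()}  # F1 scores, as reported
```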
"We pick BART (Lewis et al., 2020) as our backbone summarization model because it performs well on short text summarization but not as well on longer texts, illustrating the benefits of our framework.", "Compared with other pretrained parameters, the BART-large model pretrained on the CNN/DM dataset yields the best performance (Zhang et al., 2021).", "So we use the BART-large-cnn parameters as a stronger starting point.", "It is worth noting that we use separate backbone models for each stage, each trained separately.", "We experimented with reusing the model parameters in multiple stages but obtained a lower [Table 2: ROUGE scores on three meeting summarizing tasks, AMI, ICSI, and QMSum (columns: AMI, ICSI, QMSum-All, QMSum-Gold; R-1/R-2/R-L each) — PGNet 42.60/14.01/22.62*, 35.89/6.92/15.67*, 28.74/5.98/25.13, 31.52/8.69/27.63; TopicSeg 51.53/12.23/25.47*, –, –, –; HMNet 52.36/18.63/24.00*, 45.97/10.14/18.54*, 32.29/8.67/28.17, 36.06/11.36/31.27; TextRank 35.19/6.13/16.70*, 30.72/4.69/12.97*, 16.27/2.69/15.41, –; HAT-BART 52.27/20.15/50.57, 43.98/10.83/41.36, –, –; DDAMS 53.15/22.32/25.67*, 40.41/11.02/19.18*, –, –; SUMMN 53.44/20.30/51.39, 45.57/11.49/43.32, 34.03/9.28/29.48, 40.20/15.32/35.62]", "score, e.g. the ROUGE-1 score of stage 2 on the QMSum dataset decreases by around two points if we use the best parameters of the stage 1 summarizer as the starting point for training the stage 2 summarizer.", "This is because the tasks of the different stages differ significantly.", "For instance, the input to the first stage of dialogue summarization is dialogue turns, while the input to the later stages is documents.", "We compare the proposed framework with various baselines.", "PGNet (See et al., 2017) uses a pointer mechanism to copy tokens from the source.", "TopicSeg (Li et al., 2019) is a multi-modal model jointly learning the segmentation and summarization.", "HMNet (Zhu et al., 2020) uses a hierarchical attention structure and cross-domain pre-training for meeting summarization.", "TextRank (Mihalcea and Tarau, 2004) is a graph-based ranking model for text processing.", "HAT-BART (Rohde et al., 2021) is a new hierarchical attention transformer-based architecture that outperforms standard Transformers.", "DDAMS (Feng et al., 2021) uses a relational graph to model the interaction between utterances by modeling different discourse relations.", "For the SummScreen dataset, we use the neural and hybrid model scores reported by Chen et al. (2021).", "We rename these two baselines as Longformer+ATT and NN+BM25+Neural to clarify the difference from other baselines.", "The baseline scores we report on GovReport are from the original paper (Huang et al., 2021).", "BART Variant indicates self-attention variants with full attention.", "BART HEPOS indicates encoder variants with head-wise positional strides (HEPOS) encoder-decoder attention.", "We fit all models into a single RTX A6000 GPU with 48 GiB of memory.", "We adopt the fairseq implementation for BART.", "The learning rate is set to 2e-5, and the beam width is set to 2 for coarse stages and 10 for fine-grained stages.", "The maximum number of tokens in each batch is set to 2048.", "The maximum number of tokens in each source text is set to 1024, because we tried to extend the positional embeddings to 2048 or longer but obtained worse performance.", "We stop the coarse stage and start the fine-grained stage when the averaged source length is shorter than 2048 rather than 1024, to obtain better performance (Section 5.7).", "For the output of each intermediate stage, we use <s> and </s> to separate each generated summary segment C^l_i.",
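The implementation details above, gathered into a single reference dict. This is our own summary of the reported values, not a configuration file shipped with the paper:

```python
TRAINING_CONFIG = {
    "backbone": "BART-large-cnn",     # BART-large pretrained on CNN/DM
    "framework": "fairseq",
    "learning_rate": 2e-5,
    "beam_width_coarse": 2,
    "beam_width_fine": 10,
    "max_tokens_per_batch": 2048,
    "max_source_tokens": 1024,        # K; extending positions to 2048+ hurt performance
    "fine_stage_threshold": 2048,     # start fine-grained stage when avg length < 2*K
    "segment_separators": ("<s>", "</s>"),
}
```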
"We discuss the evaluation results and the effects of each component of SUMMN in this section.", "Meeting Summarization Table 2 shows the ROUGE scores on AMI, ICSI, and QMSum.", "Compared with the baseline models, SUMMN achieves state-of-the-art results on almost all metrics.", "Specifically, SUMMN improves SOTA on ICSI by 0.83 and 1.96 ROUGE-2/L scores, and improves SOTA on QMSum-Gold by 4.14, 3.96, and 4.35 ROUGE-1/2/L scores.", "These results demonstrate the effectiveness of SUMMN on long dialogue summarization tasks.", "TV Series Summarization Table 3 shows ROUGE scores on SummScreen.", "SUMMN outperforms the baselines on almost all metrics on the two SummScreen datasets.", "Specifically, we improve ROUGE-1/2/L by 6.58, 1.65, and 3.75 on the SummScreen-FD dataset.", "This result demonstrates the generalizability of SUMMN over various domains, including meetings and TV series.", "Document Summarization Table 4 shows ROUGE scores on GovReport.", "SUMMN achieves state-of-the-art performance on ROUGE-2 and ROUGE-L, and comparable results on ROUGE-1.", "The results show that SUMMN is applicable to both long dialogue and document summarization tasks.", "We also notice that the performance increases consistently as the number of stages goes up, until the predefined number of stages.", "Figure 2 shows the ROUGE-1 scores of different tasks across stages.", "Stage 1 indicates the model with only one coarse stage and no fine-grained stage.", "In this model, we directly use the first segment of the coarse summary as the output, i.e. C^1_1 of each sample.", "The stage i (i > 1) model contains i − 1 coarse stages and one fine-grained stage, and the generated summary of the fine-grained stage is the final output. [Figure 2: ROUGE-1 scores of various datasets at different stages; y-axis: ROUGE-1 score (25–60); legend: Stage 1, Stage 2, Stage 3.]", "Although stage 2 of SUMMN on the ICSI dataset has already outperformed the baselines, the scores can be further improved by adding one more coarse stage.", "In fact, on all datasets, increasing the number of stages leads to a performance gain.", "This gain can be explained as follows: if the output of the current stage is longer than K tokens, adding one more coarse stage will help, since the model will receive more information from the source text compared with simply truncating it.", "On the contrary, if the input is smaller than K, there is no need to add more stages, because there is only one segment.", "SUMMN also boosts the performance of a backbone model by a large margin.", "As shown in Table 5, it improves the BART-large model by 6.87, 3.89, and 6.78 ROUGE-1/2/L on AMI.", "This indicates the capability of SUMMN to boost the performance of a weak learner on long summarization tasks.", "In particular, when the backbone model is well pretrained on short input texts and performs well on short summarization tasks, SUMMN can greatly increase the capability of the backbone model to process and read long source texts.", "Also, the backbone of SUMMN can be easily replaced by other models, and the models do not necessarily have to be identical at every stage.", "For example, one can try different learners such as T5 as the backbone model and replace the model in stage 1 with a dialogue-to-document model.", "To demonstrate that our framework can generalize to different backbone summarization models, we replace the BART-large-cnn model in previous experiments with other neural summarization models, including T5 (Raffel et al., 2020) and PEGASUS (Zhang et al., 2019), using Hugging Face.", "Table 6 shows the ROUGE scores of three different models that are trained and evaluated on AMI.", "In all models, SUMMN improves the performance of the backbone models by a large margin.", "For instance, although BART-base is a weaker summarizer compared with the BART-large model, the framework is still able to improve the ROUGE-1 score by 5.06.", "Table 7 shows the ablation study results of SUMMN on the AMI test set.", "Removing stage 2 (using the first segment of the coarse summary C^1_1 as the generated summary) leads to a 5.23 ROUGE-1 score drop.", "Without data segmentation, the ROUGE-1 score decreases by 6.61 using the same fine-grained stage.", "Removing both stage 2 and target matching (using the duplication algorithm instead) further decreases the performance.", "It even hurts the perfor- [Table 7 — ablation on the AMI test set (R-1/R-2/R-L): SUMMN 53.44/20.30/51.39; − stage 2 48.21/18.59/46.46; − data seg. …]", "mance of the original BART model, because the duplication of targets will introduce biases towards the common part of the targets.", "We conduct a human evaluation to assess the following: Readability takes into account word and grammatical error rate to evaluate how fluent the summary language is; Conciseness measures how well the summary discards the redundant information; Coverage measures how well the summary covers each part of the dialogue.",
"We compare the results of SUMMN and HMNet because HMNet is a baseline model with a good capability to read the whole input.", "For each meeting in the AMI and ICSI datasets, we ask 3 different annotators with English expertise to label the summaries.", "Each annotator was asked to read the meeting transcript, gold summaries, and generated summaries using the SummVis (Vig et al., 2021) toolkit.", "They were asked to rate each summary from 1 to 5 (higher is better) for each metric.", "We also shuffle the summaries of the two models to reduce the bias.", "Table 8 shows that SUMMN achieves higher scores in Readability, Conciseness, and Coverage than HMNet on both the AMI and ICSI datasets.", "Specifically, the Readability of SUMMN greatly surpasses the baseline, by around 0.5/1 point on the AMI/ICSI datasets.", "This is because BART is well pretrained and is able to generate more readable text, and SUMMN successfully maintains this capability.", "To gain more understanding of the multi-stage mechanism of SUMMN, we analyze the number of coarse stages and the compression rate through statistics of the intermediate stages.", "Early Stopping of the Coarse Stage Although the ideal input of the final fine-grained stage should be shorter than K, the experimental results show that compressing the input from 2K to 1K tokens usually hurts the performance of the model.", "This is probably because generating too many short segments, which are hard to summarize, confuses the model.", "Thus, we increase the length of the input to the final fine-grained stage from K to 2K to prevent noise in the training set.", "The modified formula to estimate the number of coarse stages N is shown as follows (details in Appendix A).", "N_val = 1 + (log K − log d_1) / (log c_1 − log K) and N = ⌈N_val⌉. Number of Coarse Stages To verify that our estimated N is close to the empirical number of coarse stages, we use GovReport to compare the two, as shown in Table 9.", "We choose this dataset because it contains the largest number of samples among all five datasets and also requires multiple coarse stages.", "Table 9 shows the empirical/estimated number of coarse stages.", "To clearly show the N value, we display the float number N_val as the estimated number, and N as the empirical number of remaining coarse stages (Table 1).", "As can be seen, N = ⌈N_val⌉ holds for all stages, meaning that the estimate recovers the correct N value.", "It is worth noting that, for stage 2 and stage 3, this formula can also estimate how many additional coarse stages are needed.", "Compression Rate We analyze the change of the compression rate across different stages.", "In SUMMN, the compression rate R_i is defined as the averaged source length of stage i divided by the source length of stage i − 1.", "As shown in Table 9, the compression rates in stage 2 and stage 3 of GovReport are both around 0.4; this shows that the compression rate of SUMMN is stable across stages, meaning that the number of segments steadily decreases to around 40% of that of the previous stage.",
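A sketch of the adjusted stage estimate and the compression rate defined above (function and variable names are ours; the derivation of the formula is in the paper's Appendix A):

```python
import math

def estimate_num_coarse_stages_adjusted(d1, c1, K):
    # N_val = 1 + (log K - log d1) / (log c1 - log K); N = ceil(N_val),
    # per the adjusted formula in the text (coarse stages stop below 2*K).
    n_val = 1 + (math.log(K) - math.log(d1)) / (math.log(c1) - math.log(K))
    return n_val, math.ceil(n_val)

def compression_rate(avg_len_stage_i, avg_len_stage_prev):
    # R_i: averaged source length of stage i over that of stage i-1
    # (around 0.4 for stages 2 and 3 on GovReport, per Table 9).
    return avg_len_stage_i / avg_len_stage_prev
```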
"Table 10 shows the time cost of inferring one sample using the vanilla Transformer versus SUMMN.", "Although SUMMN needs to generate more tokens due to its multi-stage pipeline, it reduces the inference time from quadratic to linear, i.e., from O(n^2) to O(Cn), where C = K/(1 − R).", "Regarding training, SUMMN also needs to infer O(n) additional tokens on the train/dev/test sets (details in Appendix B).", "In this paper, we propose SUMMN, a simple, flexible, and effective framework for long dialogue and document summarization.", "It consists of multiple coarse stages and one fine-grained stage to iteratively compress the long source input.", "It enjoys the full power of backbone models while ensuring the full receptive field of the summarization model.", "We evaluate the model on various datasets and improve the baselines by a large margin.", "The authors would like to thank Tao Yu, Ming Zhong, Yixin Liu, and Asli Celikyilmaz for their valuable discussions.", "We also would like to thank the anonymous reviewers for their helpful comments.", "This work is supported in part by a grant from Microsoft Research." ]
[ "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "method", "abstain", "abstain", "abstain", "objective", "objective", "result", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "result", "other", "other", "other" ]
[ "We present a method for combining multi-agent communication and traditional data-driven approaches to natural language learning, with an end goal of teaching agents to communicate with humans in natural language.", "Our starting point is a language model that has been trained on generic, not task-specific language data.", "We then place this model in a multi-agent self-play environment that generates task-specific rewards used to adapt or modulate the model, turning it into a task-conditional language model.", "We introduce a new way for combining the two types of learning based on the idea of reranking language model samples, and show that this method outperforms others in communicating with humans in a visual referential communication task.", "Finally, we present a taxonomy of different types of language drift that can occur alongside a set of measures to detect them.", "In this work, we aim at making agents communicate with humans in natural language.", "Our starting point is a language model that has been trained on generic, not task-specific language data.", "We then place this model in a multi-agent communication environment that generates task-specific rewards, which are used to adapt or modulate the model, making it task-conditional.", "We thus propose to decompose the problem of learning language use into two components: learning what to say based on a given situation, and learning how to say it.", "The what is the essence of communication that underlies our intentions and is chosen by maximizing any given utility, making it a functional , utility-driven process.", "On the other hand, the how is a surface realization of our intentions, i.e., the words we use All authors contributed equally.", "to communicate this what successfully.", "This factorization into content planning (here, what) and surface realization (here, how) moves us away from end-to-end neural generation systems and is in line with traditional methods of natural language generation (Reiter and Dale, 1997).", "More importantly, it enables us to bring together two different strands of research: traditional data-driven natural language learning and multi-agent communication.", "Traditional approaches to natural language learning (Kneser and Ney, 1995; Mikolov et al., 2010; Sutskever et al., 2014; Vinyals and Le, 2015; Radford et al., 2019) are based on inferring structural properties of language from text corpora, often in a passive regime, dissociated from communication.", "While this type of learning is great for learning general statistical associations between symbols (e.g., adjectives come before nouns) and even inferring semantic relations, it ignores the functional aspects of communication, i.e., the fact that people use words to coordinate with others and make things happen in the world (Wittgenstein, 1953; Austin, 1975; Clark, 1996).", "On the other hand, multi-agent communication research (Foerster et al., 2016; Lazaridou et al., 2017; Havrylov and Titov, 2017; Evtimova et al., 2017; Lee et al., 2019) puts communication at the heart of agents' (language) learning.", "Implemented within a multi-agent reinforcement learning setup, agents start tabula rasa and form communication protocols that maximize task rewards.", "While this purely utilitarian framework results in agents that successfully learn to solve the task by creating a communication protocol, these emergent communication protocols do not bear core properties of natural language.", "Chaabouni et al. 
"This growing set of alarming results on emergent communication raises doubts about the use of this type of functional learning as a viable alternative to language learning.", "Concluding that neither approach on its own is adequate for learning language use, we propose a method for combining the best of both worlds.", "Generic language data can be used effectively as a good prior model of language, encapsulating its intrinsic structural properties; i.e., they are used only for the how, in the form of generic language models.", "Conversely, multi-agent interactions, which provide rewards specific to the task of interest, now only need to be used for the functional learning of language use, i.e., learning the what.", "The contributions of this paper are as follows.", "First, we propose a general research program of language learning that combines two learning signals coming from multi-agent communication and traditional data-driven natural language learning techniques.", "We present a concrete study in the context of a referential communication game (see Section 2) between a speaker and a listener, where the traditional data-driven language learning takes the form of image captioning, and the functional learning takes the form of agent self-play (see Section 3).", "We then present a new approach for combining the two learning signals, i.e., reward-learned rerankers (see Section 4), and compare this to existing approaches using a human study (see Section 5).", "We discuss shortcomings of this program with respect to different types of language drift that can occur, and introduce a number of automatic measures to detect them (see Section 6).", "Finally, we show how such a program under oracle rewards can be a viable approach moving towards learning language use from human rewards (see Section 7).", "1 About the terminology: by 'traditional data-driven natural language learning', we mean language modelling of the next-word-prediction variety.", "This type of learning does not involve any use of the language or other context, and as such only focuses on word statistics.", "Since the structure of the language is a large part of those statistics, and the role of the generic language models in our proposed combined systems is to provide structural knowledge of language, we also use the term 'structural learning'.", "We contrast this with the purely usage-driven, reward-based learning of the type seen in emergent communication research.", "Since the function, rather than the structure or statistics, is the only thing that matters for such a learner, we also use the term 'functional learning'.", "Our research can be framed in the following scenario.", "An agent needs to perform a functional communication task in a natural language (in this work, English).", "However, examples of linguistic communication about this functional task are not available; the only natural language data that can be used consist of examples of generic natural language, which are not grounded in the functional task.", "Recasting the task as a multi-agent language game provides a way to obtain a reward that judges whether an utterance elicited the correct behaviour by a listener.",
listener.", "In this work, we instantiate the research in the following way: the functional task is a visual referential communication game for a target image in the context of a distractor, the reward is based on success in referential communication where a listener is tasked to pick the correct image within distractors guided by the speaker's description, and the generic natural language data are captioning data.", "Visual referential communication game.", "There are two players, the speaker and the listener.", "The speaker sees a target object and needs to communicate an utterance about it in the context of distractors; both target and distractors are represented as images.", "The listener is presented with the same set of images, but without the knowledge of which is the target, and needs to identify the target image relying on the utterance being communicated by the speaker.", "The utterance takes the form of sequences of word-like units.", "If the listener's choice is correct they both receive a positive reward, else they receive the same negative reward.", "2 Dataset and referential splits.", "For playing the visual referential communication game, we use a multi-modal dataset, the Abstract Scenes (Zitnick and Parikh, 2013) which contains 10k synthetic images accompanied with descriptive captions (on average 6 per image) (see Figure 1).", "3 The cap-2 The task we consider is essentially discriminative image captioning (Vedantam et al., 2017; Dai and Lin, 2017; Andreas and Klein, 2016).", "Here we are using it as a placeholder of a communication task to illustrate our general framework.", "Thus, we are not incorporating any explicit bias in the model about this particular task.", "The only task-specific information we use is communicated via the reward.", "3 Other multi-modal datasets like MSCOCO (Lin et al., 2014) or Flickr (Thomee et al., 2016), while providing com-Jenny is scared of the bear Mike is scared of the bear Jenny and Mike sit by a fire Jenny and Mike are sitting A bear is scaring mike and jenny Figure 1: Example image and ground-truth captions from the Abstract Scenes dataset used in this study.", "tions typically refer to diverse aspects of the scene (characters and actions), providing a rich and challenging environment for an agent to evolve the captioning skills for successful communication.", "In our experiments, we split the dataset into 80/10/10 for train/validation/test sets.", "We use the test images to create two referential splits, i.e., easy and difficult , as a function of the similarity between the target and distractor images.", "Each split contains 1000 pairs of a target and a distractor.", "Human performance and setup validation.", "In order to assess the difficulty of the task in the presence of the particular data (images and captions) we perform a human study in the reference game with a human speaker and a human listener, where the human speaker can only communicate one of the existing captions of the target image.", "We perform the human study under two conditions.", "In the first condition, the human speaker has only access to the ground-truth captions and does not have access to the distractor image, thus has to pick a random caption.", "This corresponds to the perfect structural knowledge of English but no knowledge of the functional task and it is the human upper-bound of a captioning system performance on this task.", "In the second condition, the speaker has access to both the ground-truth captions and the distractor image, thus is able to pick 
"Dataset and referential splits.", "For playing the visual referential communication game, we use a multi-modal dataset, Abstract Scenes (Zitnick and Parikh, 2013), which contains 10k synthetic images accompanied by descriptive captions (on average 6 per image) (see Figure 1).", "The task we consider is essentially discriminative image captioning (Vedantam et al., 2017; Dai and Lin, 2017; Andreas and Klein, 2016).", "Here we are using it as a placeholder of a communication task to illustrate our general framework.", "Thus, we are not incorporating any explicit bias in the model about this particular task.", "The only task-specific information we use is communicated via the reward.", "Other multi-modal datasets like MSCOCO (Lin et al., 2014) or Flickr (Thomee et al., 2016), while providing complex naturalistic images, often have a repetitive set of captions, highlighting one particular aspect of the scene, and suffer from a human reporting bias (Misra et al., 2016). [Figure 1: Example image and ground-truth captions from the Abstract Scenes dataset used in this study; the captions read 'Jenny is scared of the bear', 'Mike is scared of the bear', 'Jenny and Mike sit by a fire', 'Jenny and Mike are sitting', and 'A bear is scaring mike and jenny'.]", "The captions typically refer to diverse aspects of the scene (characters and actions), providing a rich and challenging environment for an agent to develop the captioning skills for successful communication.", "In our experiments, we split the dataset into 80/10/10 for train/validation/test sets.", "We use the test images to create two referential splits, i.e., easy and difficult, as a function of the similarity between the target and distractor images.", "Each split contains 1000 pairs of a target and a distractor.", "Human performance and setup validation.", "In order to assess the difficulty of the task in the presence of the particular data (images and captions), we perform a human study in the reference game with a human speaker and a human listener, where the human speaker can only communicate one of the existing captions of the target image.", "We perform the human study under two conditions.", "In the first condition, the human speaker only has access to the ground-truth captions and does not have access to the distractor image, thus has to pick a random caption.", "This corresponds to perfect structural knowledge of English but no knowledge of the functional task, and it is the human upper bound of a captioning system's performance on this task.", "In the second condition, the speaker has access to both the ground-truth captions and the distractor image, and is thus able to pick a discriminative caption to communicate.", "For each condition, we collect 50 rounds", "By using Abstract Scenes, we have left certain visual challenges out of the scope of the work, obtained cleaner multi-modal associations between words and objects, and focused on the language use for referential communication.", "of games and present results in Table 1.", "We see that the task-specific condition outperforms the first condition, indicating that in our current setup there is enough room to improve upon models based on structural-only learning (i.e., captioning models).", "Moreover, the good performance of the discriminative caption speaker demonstrates that (in principle) the captioning data can be used in successful communication with a human for this task.", "The speaker is the primary learner in this research, aiming at creating a model that is able to use natural language in a communicative scenario, and consists of standard visual and language modules.", "To convert images to embeddings u, we use a pre-trained ResNet (He et al., 2016) (parametrized by θ_resnet) and feed its last-layer output into a one-layer MLP (parametrized by θ_MLPS).", "To generate a message m, we use a one-layer LSTM (Hochreiter and Schmidhuber, 1997) (parametrized by θ_LSTMS), adding the embeddings u at each time step as additional context.", "Section 4 presents different speaker models consisting of these modules.", "We also design two oracle speakers (with no weights) that have direct access to ground-truth captions of images at test time.", "The random caption speaker outputs one of the ground-truth captions for the target image at random.", "Since this speaker is not aware of the functional goal, their performance will indicate whether having only good grounded language skills is enough for communication success in our setup.", "We also build an oracle speaker that is task-aware; the discriminative caption speaker uses a simple word-overlap heuristic to pick the target's caption that has the least word overlap with any of the distractor's captions (the score is normalized by the captions' length, excluding stop-words).", "Throughout the experiments, we need a way to estimate performance on the functional communication task, either for evaluation or to provide rewards during training, acting as a scaffolding for learning the speaker model.", "Ideally, this performance signal should be provided by a human who is interacting online with the speaker agent.", "While we do so for evaluation reasons, for training we approximate this quantity with a learned component, an agent listener.", "To convert images to embeddings u, we use the same pre-trained ResNet as for the speaker and feed its last-layer output into a one-layer MLP (parametrized by θ_MLPL).", "Following that, the listener uses an LSTM (parametrized by θ_LSTML) to embed the utterance m received from the speaker, creating an embedding v.", "Finally, the listener picks the image with the highest dot-product similarity between the embedded message v and the embeddings u_t and u_d of the target and distractor.", "Since we know which image candidate is the intended referent, we cast this problem as supervised learning and update the listener's weights θ_L = {θ_MLPL, θ_LSTML}, optimizing cross-entropy.", "Finally, the listener assigns reward 1 to the speaker if they identified the correct image, else reward -1.", "We consider two different setups: a joint listener, which is trained together with the speaker, as commonly done in the emergent communication literature, and a fixed listener, which is pre-trained to perform best-response to the oracle discriminative caption speaker and stays fixed throughout the learning of the speakers, with the sole purpose of providing them rewards.",
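A sketch of the listener just described, assuming PyTorch; the hidden sizes are illustrative, not the paper's exact dimensions:

```python
import torch
import torch.nn as nn

class Listener(nn.Module):
    def __init__(self, feat_dim=2048, hidden_dim=512, vocab_size=100):
        super().__init__()
        self.mlp = nn.Linear(feat_dim, hidden_dim)       # theta_MLPL, on ResNet features
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.lstm = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)  # theta_LSTML

    def forward(self, message, resnet_feats):
        # message: (batch, length) token ids; resnet_feats: (batch, 2, feat_dim)
        # with target and distractor stacked. Returns scores over the two images.
        _, (h, _) = self.lstm(self.embed(message))
        v = h[-1]                                # (batch, hidden_dim) message embedding
        u = self.mlp(resnet_feats)               # (batch, 2, hidden_dim) image embeddings
        return torch.einsum("bh,bih->bi", v, u)  # dot-product score per image;
                                                 # trained with cross-entropy on the target index
```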
"We expect the latter setup to be less prone to language drift issues, due to the grounding of the discriminative caption speaker in language data,", "thus potentially resulting in better communication with human listeners.", "We also use the fixed listener for the evaluation of all speakers.", "We describe ways to estimate the speaker's generative model p_S(m | u, t) for a message m, conditioned on target and distractor embeddings u = [u_t, u_d] and target image index t ∈ {0, 1}.", "This type of learning language use is identical to experiments commonly conducted in the literature of emergent communication (Lazaridou et al., 2017; Havrylov and Titov, 2017; Bouchacourt and Baroni, 2018; Evtimova et al., 2017; Graesser et al., 2019), i.e., the speaker learns to emit communication utterances m in order to maximize the communication task reward (see Section 3.2 for a discussion on how this reward is computed).", "Concretely, the weights θ_S = {θ_MLPS, θ_LSTMS} of the speaker policy π_S(m | u, t) are updated via the REINFORCE update rule (Williams, 1992) using rewards r_L provided by the listener, i.e., we optimize L_functional = −r_L(m, u, t) Σ_{i=1..I} log p_LSTMS(m_i | m_<i, u), where u = [u_t; u_d], m_i ∈ V, vocabulary size |V| = 100, and message length I = 10.", "Note that while this type of learning results in a language that is maximally functionally correct for the given task reward, this language is not natural language, i.e., the symbols are not grounded in natural language.", "This type of learning ignores the functional aspect of communication and communicates utterances that reflect intrinsic structural properties of language, i.e., that are fluent, grammatical and related to the target.", "Here, we use paired data in the form ⟨u, c⟩, where u is a visual embedding and c is the associated caption, and learn an image captioning model.", "The speaker's parameters θ_S = {θ_MLPS, θ_LSTMS} are optimized using cross-entropy, i.e., L_structural = −Σ_{i=1..I} log p_LSTMS(c_i | c_<i, u), where u = u_t, c_i ∈ V, |V| = 2685 (the vocabulary size), and I = 25, i.e., the longest caption in the dataset.", "We approximate the speaker model p_S(m | u, t) with the captioning one, which ignores the distractor and thus the communication task.", "We construct two speakers with different decoding schemes: greedy uses greedy decoding, while sample picks the highest-probability message among k = 20 stochastic samples (temperature = 2.0).",
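The two objectives above in sketch form (PyTorch assumed). log_probs holds per-token log-likelihoods from the speaker's LSTM with shape (batch, length); the entropy regularization mentioned in the footnote is omitted for brevity, and the minus signs make both quantities losses to minimize:

```python
def functional_loss(log_probs, reward):
    # REINFORCE: L_functional = -r_L(m, u, t) * sum_i log p(m_i | m_<i, u).
    # reward: (batch,) tensor of +1/-1 listener rewards.
    return -(reward * log_probs.sum(dim=-1)).mean()

def structural_loss(log_probs):
    # Image captioning cross-entropy: L_structural = -sum_i log p(c_i | c_<i, u).
    return -log_probs.sum(dim=-1).mean()
```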
"We now describe several ways in which both types of learning are used to learn language use.", "In all cases, we equip the speaker with a base image captioning model similar to the one presented in Section 4.2, which is used to calculate p_LSTMS(c_i | c_<i, u_t).", "The functional part is learned via the REINFORCE update rule, optimizing the task reward (i.e., the listener's accuracy in the referential task).", "However, speakers differ in how they parametrize p_S(m | u, t) and in whether the task reward is used to update the weights {θ_MLPS, θ_LSTMS} of the base captioning model.", "The simplest approach is to first use existing pre-trained components for which we have available corpora in order to learn the statistical properties", "(In all experiments using REINFORCE, we add an entropy regularization term to the loss.)", "of language, and then steer the language use to be functionally appropriate using reward finetuning for the given task.", "We use paired data in the form ⟨u, c⟩ to learn the weights θ_S = {θ_MLPS, θ_LSTMS} of a base image captioning model following Section 4.2, and then we perform functional learning by using the listener's reward to optimize the weights θ_S as in Section 4.1.", "While this method is conceptually simple, it becomes challenging when the task requires extending the conditioning part of the base model.", "Here, we need to change the conditioning of the base captioning model from u = u_t to u = [u_t; u_d], to allow conditioning on the distractor.", "Since this is not trivial (the base image captioning model has been learned by conditioning only on one image embedding), we keep the conditioning u = u_t also during finetuning with REINFORCE.", "Thus, similar to the image captioning model, we approximate p_S(m | u, t) with p_LSTMS(m | u_t).", "However, unlike image captioning, the information about distractors flows into the policy, since the weights θ_S are optimized using the listener's reward, which considers distractors.", "Since the gradients from optimizing the functional task are sent all the way into the base captioning model, this causes catastrophic forgetting of the core knowledge of language, leading to language drift.", "Thus, we use a language regularizer term in the form of the Kullback-Leibler divergence between the pre-trained and fine-tuned language modeling distributions (Havrylov and Titov, 2017).", "An alternative is to conduct both types of learning (i.e., image captioning and functional learning) at the same time (Lazaridou et al., 2017; Lee et al., 2019).", "This takes the form of multi-task learning, optimizing α_f L_functional + α_s L_structural, where α_f = 1.", "Like in reward finetuning, the gradients of the reward learning flow into the weights of the base captioning model, leaving us with questions about a trade-off between task success and quality of language.", "Therefore, we introduce two variants of this model depending on the importance of the language component, i.e., one variant with α_s = 0.", "1 and a language-regularized one with α_s = 1.",
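How the combined objective and the KL language regularizer could look in code. This is a sketch: the alpha coefficients follow the text, pretrained_logits would come from a frozen copy of the captioning model, and the direction of the KL is our assumption:

```python
import torch.nn.functional as F

def multitask_loss(l_functional, l_structural, alpha_f=1.0, alpha_s=0.1):
    # alpha_f = 1 throughout; alpha_s = 0.1 or 1 (language-regularized variant).
    return alpha_f * l_functional + alpha_s * l_structural

def kl_language_regularizer(finetuned_logits, pretrained_logits):
    # KL(p_finetuned || p_pretrained) per token, penalizing drift away from
    # the pre-trained language modeling distribution.
    logp = F.log_softmax(finetuned_logits, dim=-1)
    logq = F.log_softmax(pretrained_logits, dim=-1)
    return (logp.exp() * (logp - logq)).sum(dim=-1).mean()
```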
"Finally, we introduce a new way of learning language use in the multi-agent communication setup.", "As before, we train the core language capabilities of a speaker using the image captioning objective, but after this pre-training phase, the weights of this model are frozen.", "The functional part is then viewed as learning to use this general knowledge of language grounded in images.", "This is operationalized as learning to rerank samples obtained from the captioning model, optimizing the listener's reward.", "The action space of this speaker is sentences, as opposed to the words commonly used in the literature of emergent communication.", "We emphasize that by leveraging the idea of reranking, we are able to take a task-unconditional model, i.e., a captioning model that only conditions on the target, and extend its conditioning, turning it into a task-conditional model, i.e., a discriminative captioning model that also conditions on the distractor.", "Below we consider two concrete reranker models.", "In both cases, the message generation happens in two steps.", "First, we sample |S| = 20 candidates from the pre-trained and fixed image captioning model p_LSTMS(m | u_t).", "Then, we pick the best sample s using a task-conditional reranking score p(s | u, t).", "The reranking score can be viewed as a new policy π_S(s | u, t) that operates in the space of samples S drawn from the task-unconditional model.", "This policy introduces an additional set of trainable parameters θ_rerankS that are learned with REINFORCE.", "Thus, the full set of weights for this speaker is θ_S = {θ_MLPS, θ_LSTMS, θ_rerankS}.", "Crucially, the two learning signals, i.e., structural and functional, affect different sets of weights, i.e., {θ_MLPS, θ_LSTMS} and θ_rerankS respectively, allowing submodules to specialize.", "Product of experts reranker.", "In this model we parametrize the policy as a product of experts (PoE): π_S(s | u, t) ∝ p(s | u, t)^α_f · p(s | u_t)^α_s, where u = [u_t; u_d] and α_f = 1.", "The second term is the image captioning message probability, renormalized over the sample space, thus bringing general language knowledge grounded in images.", "The first term adjusts for the task specifics.", "To model it, we re-embed the samples using a transformed bag-of-words; thus, the trainable parameters of the reranker θ_rerankS are word embeddings and additional MLP weights.", "We combine the target and distractor embeddings into a single vector and compute the dot-product similarity between this vector and each of the bag-of-words representations of the samples.", "Finally, these scores are passed through a softmax layer to obtain p(s | u, t).", "We introduce two variants of the model, one with α_s = 0 and a language-regularized one with α_s = 1.", "Noisy channel reranker.", "Following Bayes' rule, we factorize the speaker's policy as follows: π_S(s | u, t) ∝ p(t | s, u) · p(s | u), where u = [u_t; u_d].", "We omit the distractor vector u_d in the conditioning of the prior, arriving at p(s | u_t), as in the PoE reranker above.", "The crucial difference is that the first term now represents the speaker's approximation of the listener's behaviour.", "As before, we represent samples with the transformed bag-of-words, but then compute their dot-product similarities with each image separately and normalize with a softmax across the images to obtain the probability of the target, p(t | s, u).", "This reranker model is closely related to pragmatic speakers in the Rational Speech Act (RSA) framework (Andreas and Klein, 2016; Monroe and Potts, 2015; Cohn-Gordon et al., 2018; Fried et al., 2018).", "However, while the RSA model assumes a given and fixed listener model, here we learn the speaker's model of the listener by optimizing the listener's reward end-to-end.", "Thus, when doing multi-agent communication using the noisy channel model, there exist two components that produce probability distributions of the same type p(t | s, u); one belongs to the listener, so the speaker has no access to it (e.g., this listener could in the future be a human sending rewards), while the other belongs to the speaker, corresponding to their model of the listener.",
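Scoring rules for the two rerankers in sketch form (PyTorch assumed). samples_bow stands for the transformed bag-of-words embeddings of the |S| = 20 samples, caption_logp for the frozen captioning model's log-probabilities of those samples; the exact vector shapes are our assumptions:

```python
import torch

def poe_scores(samples_bow, image_vec, caption_logp, alpha_f=1.0, alpha_s=1.0):
    # log pi_S(s|u,t) = alpha_f * log p(s|u,t) + alpha_s * log p(s|u_t) + const.
    # samples_bow: (n_samples, dim); image_vec: (dim,) combined target+distractor vector.
    task_logp = torch.log_softmax(samples_bow @ image_vec, dim=-1)     # p(s | u, t)
    prior_logp = torch.log_softmax(caption_logp, dim=-1)               # renormalized over samples
    return alpha_f * task_logp + alpha_s * prior_logp

def noisy_channel_scores(samples_bow, target_vec, distractor_vec, caption_logp):
    # log pi_S(s|u,t) = log p(t|s,u) + log p(s|u_t) + const.
    sims = torch.stack([samples_bow @ target_vec,
                        samples_bow @ distractor_vec], dim=-1)         # (n_samples, 2)
    logp_target = torch.log_softmax(sims, dim=-1)[..., 0]              # p(t | s, u), over images
    return logp_target + caption_logp
```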
"Table 2 presents referential success when speakers are trained with rewards from a joint listener, i.e., a listener learned jointly with the speaker.", "We conduct three different evaluations: at test time we play against the fixed listener, human listeners, and the joint listener the speaker was trained with.", "While the fixed listener is the same for all speakers, the joint listener is speaker-specific.", "We report results on two splits: for the easy and difficult splits we report the referential success of the joint listener, and for the latter split we also report results of the fixed and human listeners.", "To compute referential success using human listeners, we collect 400 annotations for each speaker model.", "To avoid annotators adapting to model-specific strategies, we group predictions of similar models and collect annotations in three sessions (one for each group), during which we present annotators with predictions from a model sampled from that group.", "Group 1: image captioning (greedy/sample), noisy chan- [Table 2 (partial) — referential success; columns: easy split (joint), difficult split (joint / fixed / human). Functional-only learning: emergent (§4.1) 0.99, 0.98 / – / 0.5. Structural-only learning: image captioning (§4.2) sample 0.92, 0.78 / 0.77 / 0.77; greedy 0.91, 0.77 / 0.73 / 0.78. Structural & functional learning (gradients from reward affect base captioning model): reward finetuning (§4.3.1) no KL 0.95, 0.82 / 0.63 / 0.62; with KL 0.93, 0.79 / 0.77 / 0.69; multi-task learning (§4.3.2) α_s = 0. …]", "All models perform quite similarly in the easy split, whereas we observe larger gaps in the difficult split.", "In terms of joint accuracy results in the difficult split, reward finetuning has the lowest performance among models that are optimizing rewards, perhaps due to its large action space (i.e., the vocabulary size |V| = 2685), making it a hard RL exploration problem.", "multi-task, despite having the same action space, performs better, probably because the captioning objective is optimized concurrently, facilitating the learning dynamics.", "Finally, the best results in both splits are obtained by the emergent communication model, which achieves near-perfect performance.", "We believe this is the case because this speaker is the least constrained of all: we can think of all other speakers (i.e., the ones that combine both types of learning) as being regularized towards producing natural language.", "nel, PoE).", "Group 2: multi-task, reward finetuning.", "Group 3: random, discriminative, PoE and noisy channel with ground-truth captions.", "Somewhat alarmingly, we observe that the joint performance is not predictive of the human one across the board, hinting at issues regarding pragmatic drift (we discuss this further in Section 6).", "In the most extreme case, while the emergent communication speaker achieved the highest results when playing against a listener jointly learned with the speaker, this comes at the expense of human performance: functional learning alone results in maximally uninterpretable protocols, and as such humans are at random when playing against such a model.", "Speakers that combine both types of learning achieve good human performance, with the reward-learned reranker models, i.e., noisy channel and PoE, being the best.", "In their case, they outperform the image captioning baselines, even approaching the discriminative oracle speaker based on ground-truth captions.",
"This indicates their effectiveness in extending the conditioning of the underlying image captioning to the distractor image with the reward coming from the listener, thereby turning the base image-captioning model into a task-specific referential captioning model.", "Moreover, when giving the rerankers a perfect captioning model in the form of ground-truth captions of target images, the performance of noisy channel and PoE surpasses the oracles' (see the last two columns of Table 2); as the community improves the base language models, we should expect this to also result in a net improvement in the reranker models.", "Finally, we also observe that the fixed grounded listener is significantly predictive of the human performance (p < 0.005, t-test).", "This is encouraging, since, as we will show in Section 7, we can use this listener as a fixed model that provides rewards to the speaker model.", "We show that the multi-agent communication framework is prone to language drift (Lee et al., 2019), i.e., protocols diverging from human language.", "We present a taxonomy of the different types that occur in this framework, alongside a set of automatic measures to detect them.", "The most basic type of drift that manifests in the emergent communication setup relates to the core", "structural properties of the generated language, i.e., its fluency and grammaticality with respect to natural language (this is also referred to by Lee et al. (2019) as syntactic).", "Looking at Table 3, a clear example of this type of drift happens when models update the base captioning model.", "reward finetuning (no KL) does not produce grammatical sentences at all, while multi-task (α_s = 0.1) appears to suffer less, only occasionally producing slightly ungrammatical sentences by repeating consecutive words.", "We term this structural drift, and we quantify it as the log probability of the generated message under a pre-trained unconditional language model (column log p(m) in Table 4).", "The second type of drift is semantic drift.", "This relates to whether the generated message is grounded with regard to the target object, i.e., its adequacy with respect to the literal semantics of the target (this is also referenced by Lee et al. (2019) as semantic).",
"We have qualitatively observed instances of this type of drift in the PoE, which occasionally shifts the semantics of words, e.g., using the word 'tree' to refer to ground, as seen in Table 3.", "To measure it, we use a pre-trained image-conditional language model and compute the target-image-conditional log probability of the generated message (column log p(m | i) in Table 4). [Table 4 (partial) — columns: log p(m), log p(m | i), 1-gram, 3-gram. Structural-only learning: image captioning (§4.2) sample −8.71, −7.77, 0.81, 0.37; greedy −8.63, −7.72, 0.73*, 0.30*. Structural & functional learning (gradients from reward affect base captioning model): reward finetuning (§4.3.1) no KL −442.00, −279.55, 0.33, 0.00; with KL −11.75, −10.78, 0.70*, 0.22*; multi-task learning (§4.3.2) α_s = 0. …]", "These two log-probability-based measures do not assume access to language data for the target objects, and as such can be computed from general unconditional and domain-specific conditional language models.", "In this particular case though, since we also have access to language data for the target images (i.e., captions in English), and assuming that these data describe everything that is true about the target, we can use simple n-gram statistics as proxies of semantic drift (i.e., in this case 1-gram word overlap ignoring stop words, and 3-gram word overlap between the ground-truth captions and the speaker-generated message).", "Moreover, all these measures do not take into account the specific communication task the speaker has to perform, i.e., our measures do not consider any information about the distractor object, making them easily adaptable to other tasks.", "In Table 4 we report the performance of different models under these automatic measures.", "The structural score log p(m) reflects the qualitative observations made from Table 3, i.e., multi-task and reward finetuning have the highest structural drift, with the latter performing significantly worse than all other models.", "In contrast, the reranker models that do not update the base captioning model, i.e., PoE and noisy channel, perform the best on the semantic score by construction; both models directly incorporate a component associated with the semantic score (i.e., the samples taken from the image-conditional model alongside the associated probabilities).", "Moreover, they also perform well on all other measures, indicating their robustness against language drift.", "Finally, all the model-specific language regularizers we introduced (KL for reward finetuning, α_s = 1 for multi-task, and α_s = 1 for PoE) were effective in limiting both types of language drift (as also seen in Table 3).",
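The three automatic measures above, sketched with hypothetical scoring helpers (lm_logp for a pretrained unconditional LM, captioner_logp for the image-conditional one); none of this is the paper's released code:

```python
def structural_drift_score(message, lm_logp):
    # log p(m) under a pretrained unconditional LM; lower = more structural drift.
    return lm_logp(message)

def semantic_drift_score(message, image, captioner_logp):
    # log p(m | i) under a pretrained image-conditional LM; lower = more semantic drift.
    return captioner_logp(message, image)

def unigram_overlap(message, captions, stop_words):
    # Fraction of non-stop-word message tokens found in any ground-truth caption,
    # a simple n-gram proxy for semantic drift.
    msg = [w for w in message.split() if w not in stop_words]
    ref = {w for c in captions for w in c.split()}
    return sum(w in ref for w in msg) / max(len(msg), 1)
```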
"Finally, we identify a novel type of drift, i.e., pragmatic drift, which relates to the divergence of a human's interpretation of the message from the interpretation a speaker will assume.", "Unfortunately, this type of drift is perhaps the most difficult to capture in an automatic way, as it is task-specific and requires access to the exact interpretation that the human would ascribe to the message.", "As a proxy of pragmatic drift, we use the difference between the agent- and human-listener referential success; if the joint referential success is higher than the human's, then the speaker assumes an interpretation of the message that is different from the human's, resulting in lower human performance.", "An extreme example of this drift manifests when the joint listener achieves almost perfect referential success whereas a human listener is at random, as in the case of emergent communication.", "However, in this case the messages are maximally uninterpretable, with the lowest possible performance on both structural and semantic scores.", "Hence, a natural question to ask is to what degree (if at all) pragmatic drift can manifest in the absence of the other two types of language drift.", "Or, put differently, does emergent communication for learning language use hide any other pathological behaviour for models that do not suffer much from structural and semantic drift, as in the case of PoE and noisy channel?", "To study this, we create a setup where PoE is guaranteed to have perfect knowledge of (grounded) language.", "Namely, it uses the reward to rerank ground-truth captions associated with the target image (note, our dataset provides up to five captions per image).", "Moreover, we perform several ablations where we allow the updating of different parameters in the speaker's and listener's models by unfreezing components.", "Table 5 presents the results of the joint and human referential success.", "The main finding is that [Table 5: Referential success for PoE with gold captions when updating different components during training with the joint listener; weights learned with RL — joint / human / difference: reranker 0.88 / 0.92 / −0.04; reranker + speaker ResNet 0.92 / 0.90 / +0.02; reranker + both agent ResNets 0.96 / 0.88 / +0.08]", "by increasing the number of components that get updated using the joint reward, the margin between the referential success of the two types of listeners increases.", "Despite the fact that the speaker is using human language that is perfectly fluent and accurate with respect to the target image (since the reranker operates on captions associated with the target image), and the joint listener is able to communicate with the agent speaker, the human listener achieves significantly lower performance.", "In one test example, the speaker said 'Mike has a hat', which was equally true for both images, making the human pick at random.", "So, how could the listener pick correctly?", "The speaker had reached a pact with the listener that the interpretation of this message would be something beyond what the phrase means (e.g., 'Mike has a yellow hat', or the intensity of the pixels in the target image is lower).", "Since speaker and listener learn together, they co-adapt, forming conventions (or conceptual pacts (Brennan and Clark, 1996)) that differ from humans', even in the presence of fluent and grounded language.", "In the previous section we showed that learning a speaker using a learned reward module as scaffolding (i.e., the joint listener) can lead to pragmatic drift.", "In this section, we use a grounded reward as scaffolding.", "In the absence of a human listener to provide rewards for learning, we use the oracle fixed listener, which was found in Section 5 to be predictive of human referential success.", "It is pre-trained, stays fixed, and just provides rewards for training the speaker.", "As speakers, we use the models that scored the highest in Table 2 and retrain them against fixed.", "Table 6 presents the results of referential success against fixed and human listeners.", "Using a grounded reward results in better performance for the weaker models.", "The small gap between the rerankers in the two experimental setups suggests that using a learned reward module (joint) holds promise, despite the different types of language drift.", "Moreover, we show that our mod- [Table 6 (partial) — referential success against fixed and human listeners: reward finetuning, with KL 0.81 / 0.75; multi-task learning, α_s = 0. …]", "els for learning language can be used against fixed reward models, potentially learning directly from human rewards (Ziegler et al., 2019).",
be used against fixed reward models, potentially learning directly from human rewards (Ziegler et al., 2019).", "We presented a method for teaching agents to communicate with humans in natural language, by combining two learning signals coming from multi-agent communication and traditional data-driven natural language learning techniques, which adds to recent efforts of blending emergent communication with natural language (Lowe et al., 2020; Lu et al., 2020).", "Self-play between speakers and listeners can result in language drift, the most severe form of which is pragmatic drift.", "Since speakers and listeners are learning concurrently, they can co-adapt to pair-specific policies that deviate from the policies that humans learn.", "This pathological behaviour of self-play is not specific to language and extends to other policies (Carroll et al., 2019).", "Finally, we introduced the reward-learned reranker approach, which alleviates language drift and achieves the highest human performance, by constraining the functional learning to happen at the level of utterances generated by a pre-trained language model.", "However, since the functional signal does not currently influence the sampling from the language model, this will lead to poor performance when using more general language models with weaker conditioning (e.g. GPT-2 (Radford et al., 2019)) whose samples potentially do not fit the functional context.", "Moving towards integrating our findings into more realistic applications of self-play, e.g., user simulation in dialogue (Schatzmann et al., 2006; Shah et al., 2008), these shortcomings need to be addressed.", "We thank Kris Cao, Laura Rimell, Chris Dyer, Phil Blunsom, Aida Nematzadeh and the DeepMind Language Team for useful feedback, and Susie Young and Adam Liska for help with annotations." ]
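The record above measures structural drift with log p(m) from an unconditional language model, and semantic drift with log p(m | i) from an image-conditional language model plus 1-gram/3-gram overlap against the ground-truth captions. The following is a minimal, hypothetical sketch of the two n-gram overlap proxies only (the log-probability scores would require the pre-trained language models themselves); the STOP_WORDS set and the toy inputs are illustrative assumptions, not part of the original work.

```python
# Hypothetical sketch of the n-gram overlap proxies for semantic drift.
# Assumes tokenized inputs; STOP_WORDS is an illustrative stand-in.
from typing import List, Set

STOP_WORDS: Set[str] = {"a", "an", "the", "is", "has", "of", "and", "with"}

def ngrams(tokens: List[str], n: int) -> Set[tuple]:
    """Return the set of n-grams in a token sequence."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def unigram_overlap(message: List[str], captions: List[List[str]]) -> float:
    """1-gram overlap, ignoring stop words: fraction of content words in the
    generated message that appear in any ground-truth caption."""
    content = [t for t in message if t not in STOP_WORDS]
    if not content:
        return 0.0
    caption_words = {t for c in captions for t in c}
    return sum(t in caption_words for t in content) / len(content)

def trigram_overlap(message: List[str], captions: List[List[str]]) -> float:
    """3-gram overlap between the message and the ground-truth captions."""
    msg_ngrams = ngrams(message, 3)
    if not msg_ngrams:
        return 0.0
    cap_ngrams = set().union(*(ngrams(c, 3) for c in captions))
    return len(msg_ngrams & cap_ngrams) / len(msg_ngrams)

# Toy usage:
msg = "a man wearing a yellow hat".split()
caps = ["a man with a yellow hat".split(), "Mike has a hat".split()]
print(unigram_overlap(msg, caps), trigram_overlap(msg, caps))
```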
[ "objective", "method", "method", "objective", "method", "objective", "method", "method", "objective", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "objective", "result", "abstain", "objective", "result", "result", "abstain", "abstain", "objective", "objective", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "other", "method", "abstain", "abstain", "method", "other", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "other", "abstain", "method", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other" ]
[ "Abstract Meaning Representations (AMRs) capture sentence-level semantics structural representations to broad-coverage natural sentences.", "We investigate parsing AMR with explicit dependency structures and interpretable latent structures.", "We generate the latent soft structure without additional annotations, and fuse both dependency and latent structure via an extended graph neural networks.", "The fused structural information helps our experiments results to achieve the best reported results on both AMR 2.0 (77.5% Smatch F1 on LDC2017T10) and AMR 1.0 (71.8% Smatch F1 on LDC2014T12).", "Abstract Meaning Representations (AMRs) (Ba-narescu et al., 2013) model sentence level semantics as rooted, directed, acyclic graphs.", "Nodes in the graph are concepts which represent the events, objects and features of the input sentence, and edges between nodes represent semantic relations.", "AMR introduces re-entrance relation to depict the node reuse in the graphs.", "It has been adopted in downstream NLP tasks, including text summarization (Liu et al., 2015; Dohare and Karnick, 2017), question answering (Mitra and Baral, 2016) and machine translation (Jones et al., 2012; Song et al., 2019).", "AMR parsing aims to transform natural language sentences into AMR semantic graphs.", "Similar to constituent parsing and dependency parsing (Nivre, 2008; Dozat and Manning, 2017), AMR parsers mainly employ two parsing techniques: transition-based parsing (Wang et al., 2016; Damonte et al., 2017; Wang and Xue, 2017; Liu et al., 2018; Guo and Lu, 2018) use a sequence of transition actions Part of work was done when the author was visiting Westlake University * Corresponding author.", "to incrementally construct the graph, while graph-based parsing (Flanigan et al., 2014; Lyu and Titov, 2018; Zhang et al., 2019a; Cai and Lam, 2019) divides the task into concept identification and relation extraction stages and then generate a full AMR graph with decoding algorithms such as greedy and maximum spanning tree (MST).", "Additionally, reinforcement learning (Naseem et al., 2019) and sequence-to-sequence (Konstas et al., 2017) have been exploited in AMR parsing as well.", "Previous works (Wang et al., 2016; Artzi et al., 2015) shows that structural information can bring benefit to AMR parsing.", "Illustrated by Figure 1, for example syntactic dependencies can convey the main predicate-argument structure.", "However, dependency structural information may be noisy due to the error propagation of external parsers.", "Moreover, AMR concentrates on semantic relations, which can be different from syntactic dependencies.", "For instance, in Figure 1, AMR prefers to select the coordination (i.e. and ) as the root, which is different from syntactic dependencies (i.e. 
came).", "Given the above observations, we investigate the effectiveness of latent syntactic dependencies for AMR parsing.", "Different from existing work (Wang et al., 2016), which uses a dependency parser to provide explicit syntactic structures, we make use of a two-parameter distribution (Bastings et al., 2019) to induce latent graphs, which is differentiable under reparameterization (Kingma and Welling, 2014).", "We thus build an end-to-end model for AMR parsing with induced latent dependency structures as a middle layer, which is tuned during AMR training and thus can be better aligned to the needs of the AMR structure.", "To better investigate the correlation between induced and gold syntax, and to better combine their strengths, we additionally consider fusing gold and induced structural dependencies into an align-free AMR parser (Zhang et al., 2019a).", "Specifically, we first obtain the input sentence's syntactic dependencies and treat the input sentence as the prior of the probabilistic graph generator for inferring the latent graph.", "Second, we propose an extended graph neural network (GNN) for encoding the above structural information.", "Subsequently, we feed the encoded structural information into a two-stage align-free AMR parser (Zhang et al., 2019a) to promote AMR parsing.", "To our knowledge, we are the first to incorporate syntactic latent structure in AMR parsing.", "Experimental results show that our model achieves 77.5% and 71.8% Smatch F1 on the standard AMR benchmarks LDC2017T10 and LDC2014T12, respectively, outperforming all previous best reported results.", "Beyond that, to some extent, our model can interpret the probabilistic relations between the input words in AMR parsing by generating the latent graph.", "We adopt the parser of Zhang et al.
(2019a) as our baseline, which treats AMR parsing as sequence-to-graph transduction.", "Our baseline splits AMR parsing into a two-stage procedure: concept identification and edge prediction.", "The first task aims to identify the concepts (nodes) in the AMR graph from input tokens, and the second task is designed to predict semantic relations between identified concepts.", "We employ Stanford CoreNLP (Manning et al., 2014) to get the dependencies.", "Our code will be available at:", "https://github.com/zhouqiji/ACL2020_AMR_Parsing.", "Formally, for a given input sequence of words $w = \langle w_1, ..., w_n \rangle$, the goal of concept identification in our baseline is sequentially predicting the concept nodes $u = \langle u_1, ..., u_m \rangle$ in the output AMR graph, and deterministically assigning corresponding indices $d = \langle d_1, ..., d_m \rangle$.", "Our baseline extends the pointer-generator network with a self-copy mechanism for concept identification (See et al., 2017; Zhang et al., 2018a).", "The extended model can copy nodes not only from the source text, but also from the previously generated list of nodes on the target side.", "The concept identifier firstly encodes the input sentence into concatenated vector embeddings with GloVe (Pennington et al., 2014), BERT (Devlin et al., 2019), POS (part-of-speech) and character-level (Kim et al., 2016) embeddings.", "Subsequently, we encode the embedded sentence by a two-layer bidirectional LSTM (Schuster and Paliwal, 1997; Hochreiter and Schmidhuber, 1997): $h_i^l = [\overrightarrow{f}^l(h_i^{l-1}, h_{i-1}^l); \overleftarrow{f}^l(h_i^{l-1}, h_{i+1}^l)]$, where $h_i^l$ is the $l$-th layer encoded hidden state at time step $i$ and $h_i^0$ is the embedded token $w_i$.", "Different from the encoding stage, the decoder does not use pre-trained BERT embeddings, but employs a two-layer LSTM to generate the decoding hidden state $s_t^l$ at each time step: $s_t^l = f^l(s_t^{l-1}, s_{t-1}^l)$, where $s_t^{l-1}$ and $s_{t-1}^l$ are hidden states from the last layer and the previous time step respectively, and $s_0^l$ is the concatenation of the last bidirectional encoding hidden states.", "In addition, $s_t^0$ is generated from the concatenation of the previous node $u_{t-1}$ embedding and the attention vector $\tilde{s}_{t-1}$, which combines both source and target information: $\tilde{s}_t = \tanh(W_c[c_t; s_t^l] + b_c)$, where $W_c$ and $b_c$ are trainable parameters, and $c_t$ is the context vector calculated from the attention-weighted encoding hidden states and the source attention distribution $a_{src}^t$ following Bahdanau et al.
(2015). The produced attention vector $\tilde{s}$ is used to generate the vocabulary distribution: $P_{vocab} = \mathrm{softmax}(W_{vocab}\tilde{s}_t + b_{vocab})$, as well as the target attention distribution: $e_{tgt}^t = v_{tgt}^\top \tanh(W_{tgt}\tilde{s}_{1:t-1} + U_{tgt}\tilde{s}_t + b_{tgt})$, $a_{tgt}^t = \mathrm{softmax}(e_{tgt}^t)$. The source-side copy probability $p_{src}$, target-side copy probability $p_{tgt}$ and generation probability $p_{gen}$ are calculated from $\tilde{s}$, and can be treated as generation switches: $[p_{src}, p_{tgt}, p_{gen}] = \mathrm{softmax}(W_{switch}\tilde{s}_t + b_{switch})$. The final distribution is defined below; if $u_t$ is copied from existing nodes: $P^{(node)}(u_t) = p_{tgt}\sum_{i:u_i=u_t}^{t-1} a_{tgt}^t[i]$, otherwise: $P^{(node)}(u_t) = p_{gen}P_{vocab}(u_t) + p_{src}\sum_{i:w_i=u_t}^{n} a_{src}^t[i]$, where $a^t[i]$ is the $i$-th element of $a^t$; the existing indices are then deterministically assigned to the identified nodes based on whether the node is generated from the target-side distribution.", "Our baseline employs a deep biaffine attention classifier for semantic edge prediction (Dozat and Manning, 2017), which has been widely used in graph-based structure parsing (Peng et al., 2017; Lyu and Titov, 2018; Zhang et al., 2019a).", "The overall structure of our model is shown in Figure 2.", "First, we use an external dependency parser (Manning et al., 2014) to obtain the explicit structural information, and obtain the latent structural information via a probabilistic latent graph generator.", "We then combine both explicit and latent structural information by encoding the input sentence through an extended graph neural network.", "Finally, we incorporate our model with an align-free AMR parser for parsing AMR graphs with the benefit of structural information.", "We generate the latent graph of the input sentence via the HardKuma distribution (Bastings et al., 2019), which has both continuous and discrete behaviours.", "HardKuma can generate samples from the closed interval $[0, 1]$ probabilistically.", "This feature allows us to predict soft connection probabilities between input words, which can be seen as a latent graph.", "Specifically, we treat embedded input words as a prior of a two-parameter distribution, and then sample a soft adjacency matrix between input words for representing a dependency.", "HardKuma Distribution. The HardKuma distribution is derived from the Kumaraswamy distribution (Kuma) (Kumaraswamy, 1980), which is a two-parameter distribution over the open interval $(0, 1)$, i.e., $K \sim \mathrm{Kuma}(a, b)$, where $a \in \mathbb{R}_{>0}$ and $b \in \mathbb{R}_{>0}$.", "The Kuma distribution is similar to the Beta distribution, but its CDF has a simpler analytical solution, and the inverse of the CDF is: $C_K^{-1}(u; a, b) = (1 - (1 - u)^{1/b})^{1/a}$. We can generate samples by: $C_K^{-1}(U; a, b) \sim \mathrm{Kuma}(a, b)$, where $U \sim \mathcal{U}(0, 1)$ is the uniform distribution, and we can reconstruct this inverse CDF function in the reparameterization fashion (Kingma and Welling, 2014; Nalisnick and Smyth, 2017).", "In order to include the two discrete points 0 and 1, HardKuma employs a stretch-and-rectify method with stretched support (Louizos et al., 2017), which leads [Figure 2 (example sentence: The boy came and left); depicted components: Latent Graph Generator (HardKuma), Explicit Graph, Latent Graph, Graph Fusion Layer with GELU units, Graph Encoder, Align-Free Concept Identifier. Caption: Sketch of the model, which has four main components: (1) a latent graph generator for
producing the soft-connected latent graph (3.1); (2) an extended syntactic graph convolutional network for encoding the structural information (3.2); (3) an align-free concept identifier for concept node generation (2.2); (4) a deep biaffine classifier for relation edge prediction (2.3).]", "the variable $T \sim \mathrm{Kuma}(a, b, l, r)$ to be sampled from the Kuma distribution stretched to an open interval $(l, r)$, where $l < 0$ and $r > 1$.", "The new CDF is: $C_T(t; a, b, l, r) = C_K((t - l)/(r - l); a, b)$. We pass the stretched variable $T \sim \mathrm{Kuma}(a, b, l, r)$ through a hard-sigmoid function (i.e., $h = \min(1, \max(0, t))$) to obtain the rectified variable $H \sim \mathrm{HardKuma}(a, b, l, r)$.", "Therefore, the rectified variable covers the closed interval $[0, 1]$.", "Note that all negative values of $t$ are deterministically mapped to 0.", "In contrast, all samples $t > 1$ are mapped to 1.", "Because the rectified variable is sampled based on the Kuma distribution, HardKuma first samples a uniform variable over the open interval $(0, 1)$ from the uniform distribution $U \sim \mathcal{U}(0, 1)$, and then generates a Kuma variable through the inverse CDF: $k = C_K^{-1}(u; a, b)$. Second, we transform the Kuma variable to cover the stretched support: $t = l + (r - l)k$. (Details of the derivations can be found in Bastings et al. (2019).)", "Latent Graph. We generate the latent graph of the input words $w$ by sampling from the HardKuma distribution with trained parameters $a$ and $b$.", "We first calculate the priors $c$ of $(a, b)$ by employing multi-head self-attention (Vaswani et al., 2017): $c_a = \mathrm{Transformer}_a(v)$, $c_b = \mathrm{Transformer}_b(v)$, where $v = \langle v_1, ..., v_n \rangle$ are the embedded input words.", "Subsequently, we compute $a$ and $b$ as: $a = \mathrm{Norm}(c_a c_a^\top)$, $b = \mathrm{Norm}(c_b c_b^\top)$, where $a_i = \langle a_{i1}, ..., a_{in} \rangle$ and $b_i = \langle b_{i1}, ..., b_{in} \rangle$, $c_a, c_b \in \mathbb{R}^{n \times n}$, and $\mathrm{Norm}(x)$ is the normalization function.", "Hence, the latent graph $L$ is sampled via the learned parameters $a$ and $b$: $l_{ij} \sim \mathrm{HardKuma}(a_{ij}, b_{ij}, l, r)$.", "For a syntactic graph with $n$ nodes, the cell $A_{ij} = 1$ in the corresponding adjacency matrix indicates that an edge connects word $w_i$ to word $w_j$.", "An L-layer syntactic GCN can be used to encode $A$, where the hidden vector for each word $w_i$ at the $l$-th layer is: $h_i^{(l)} = \sigma(\sum_{j=1}^{n} \tilde{A}_{ij} W^{(l)} h_j^{(l-1)} / d_i + b^{(l)})$, where $\tilde{A} = A + I$ with the $n \times n$ identity matrix $I$, $d_i = \sum_{j=1}^{n} \tilde{A}_{ij}$ is the degree of word $w_i$ in the graph, used to normalize the activations and avoid word representations with significantly different magnitudes (Marcheggiani and Titov, 2017; Kipf and Welling, 2017), and $\sigma$ is a nonlinear activation function.", "In order to benefit from both explicit and latent structural information in AMR parsing, we extend the Syntactic-GCN (Marcheggiani and Titov, 2017; Zhang et al., 2018b) with a graph fusion layer and omit labels in the graph (i.e.,
we only consider the connectivity relations in the GCN).", "Specifically, we propose to merge the parsed syntactic dependencies and the sampled latent graph through a graph fusion layer: $F = \gamma L + (1 - \gamma) D$, where $\gamma$ is a trainable gate variable calculated via the sigmoid function, $D$ and $L$ are the parsed syntactic dependencies and the generated latent graph respectively, and $F$ represents the fused soft graph.", "Furthermore, $F$ is an $n \times n$ adjacency matrix for the input words $w$; different from the sparse adjacency matrix $A$, $F_{ij}$ denotes a soft connection degree from word $w_i$ to word $w_j$.", "We adapt the syntactic-GCN with the fused adjacency matrix $F$, and employ a gate mechanism: $h_i^{(l)} = \mathrm{GELU}(L_{norm}(\sum_{j=1}^{n} G_j(F_{ij} W^{(l)} h_j^{(l-1)} + b^{(l)})))$. We use GELU (Hendrycks and Gimpel, 2016) as the activation function, and apply layer normalization $L_{norm}$ (Ba et al., 2016) before passing the results into GELU.", "The scalar gate $G_j$ is calculated for each edge-node pair: $G_j = \sigma(h_j^{(l-1)} v^{(l-1)} + b^{(l-1)})$, where $\sigma$ is the logistic sigmoid function, and $v$ and $b$ are trainable parameters.", "Similar to our baseline (Zhang et al., 2019a), we linearize the AMR concept nodes by a pre-order traversal over the training dataset.", "We obtain gradient estimates of $E(\phi, \theta)$ through Monte Carlo sampling from: $E(\phi, \theta) = \mathbb{E}_{\mathcal{U}(0, I)}[\log P(\mathrm{node}\ u_t \mid g(u, w), \theta) + \log P_t(\mathrm{head}\ u_k \mid g(u, w), \theta) + \log P_{k,t}(\mathrm{label}\ l \mid g(u, w), \theta)] + \mathrm{covloss}_t$, where $u_t$ is the reference node at time step $t$ with reference head $u_k$, and $l$ is the reference edge label between $u_k$ and $u_t$.", "The form $g(u, w)$ is short for the latent graph samples obtained from the uniform distribution through the HardKuma distribution (3.1).", "Different from Bastings et al. (2019), we do not limit the sparsity of the sampled latent graphs, i.e., we do not control the proportion of zeros in the latent graph, because we prefer to retain the probabilistic connection information of each word in $w$.", "Finally, we introduce a coverage loss into our estimation to reduce duplication in node generation (See et al., 2017).", "We directly generate the latent graph via the PDF of the HardKuma distribution with the trained parameters $a$ and $b$.", "In the concept identification stage, we decode the node from the final probability distribution $P^{(node)}(u_t)$ at each time step, and apply beam search for sequentially generating the concept nodes $u$ and deterministically assigning the corresponding indices $d$.", "For edge prediction, we use a biaffine classifier to calculate the edge scores over the generated nodes $u$ and indices $d$: $S = \{\mathrm{score}^{(edge)}_{i,j} \mid 0 \le i, j \le m\}$.", "Similar to Zhang et al. (2019a), we apply a maximum spanning tree (MST) algorithm (Chu, 1965; Edmonds, 1967) to generate the complete AMR graph and restore the re-entrance relations by merging the respective nodes via their indices.", "We use two standard AMR corpora: AMR 1.0 (LDC2014T12) and AMR 2.0 (LDC2017T10).", "AMR 1.0 contains 13,051 sentences in total.", "AMR [Table 1: main results of Smatch F1 on the AMR 2.0 (LDC2017T10) and AMR 1.0 (LDC2014T12) test sets. AMR 2.0: Cai and Lam (2019) 73.2; Lyu and Titov (2018) 74.4±0.2; Lindemann et al. (2019) 75.3±0.1; Naseem et al. (2019) 75.5; Zhang et al. (2019a) 76.3±0.1, w/o BERT 74.6; Zhang et al. (2019b) 77.0±0.1; Ours 77.5±0.2, w/o BERT 75.5±0.2. AMR 1.0: Flanigan et al. (2016) 66.0; Pust et al. (2015) 67.1; Wang and Xue (2017) 68.1; Guo and Lu (2018) 68.3±0.4; Zhang et al. (2019a) 70.2±0.1, w/o BERT 68.8; Zhang et al.
(2019b) 71.3±0.1; Ours 71.8±0.2, w/o BERT 70.0±0.2.]", "2.0 is larger and is split into 36,521, 1,368 and 1,371 sentences for the training, development and testing sets respectively.", "We treat AMR 2.0 as the main dataset in our experiments since it is larger.", "We tune hyperparameters on the development set, and store the checkpoints with the best development results for evaluation.", "We employ the pre-processing and post-processing methods from Zhang et al. (2019a), and get the syntactic dependencies via Stanford CoreNLP (Manning et al., 2014).", "We train our model jointly with the Adam optimizer (Kingma and Ba, 2015).", "The learning rate is decayed based on the results on the development set during training.", "Training takes approximately 22 hours on two Nvidia GeForce GTX 2080 Ti GPUs.", "Main Results. We compare the Smatch F1 scores (Cai and Knight, 2013) against the previous best reported models and other recent AMR parsers.", "Table 1 summarizes the results on both the AMR 1.0 and AMR 2.0 data sets.", "For AMR 2.0, with the benefit of the fused structural information, we improve our baseline (Zhang et al., 2019a) by 1.2% F1 in the full model, and 0.9% F1 without BERT. [Table 2: fine-grained F1 on AMR 2.0 (Metric / N'19 / Z'19a / Z'19b / Ours): Smatch 75.5 / 76.3 / 77 / 77.5; Unlabeled 80 / 79 / 80 / 80.4; No WSD 76 / 77 / 78 / 78.2; Reentrancies 56 / 60 / 61 / 61.1; Concepts 86 / 85 / 86 / 85.9; Named Ent.: remaining rows truncated.]", "In addition, our model outperforms the best reported model (Zhang et al., 2019b) by 0.5% F1.", "On AMR 1.0, there are only about 10k sentences for training.", "We outperform the best results by 0.5% Smatch F1.", "We observe that for the smaller data set, our model has a greater improvement of 1.6% F1 than for the larger data set (1.2% F1 compared with our baseline). Fine-grained Results. Table 2 shows fine-grained parsing results for each sub-task on AMR 2.0, which are evaluated by the enhanced AMR evaluation tools (Damonte et al., 2017).", "We notice that our model brings more than 1% average improvement over our baseline (Zhang et al., 2019a) for most sub-tasks; in particular, the unlabeled sub-task gains a 1.4% F1 score increase with the structural information, and the no WSD, reentrancies, negation and SRL sub-tasks all improve by more than 1.0% under our graph encoder.", "In addition, our model achieves results comparable to the best reported method (Zhang et al., 2019b) for each sub-task.", "Ablation Study. We investigate the impact of different structural information in our model on AMR 2.0 with the main sub-tasks.", "Table 3 shows that the fused structure performs better on most sub-tasks than the explicit and latent structures.", "In particular, the models with explicit structures (i.e.,
both explicit and fused) [footnote: we use pre-trained bert-base embeddings without fine-tuning]", "outperform the model with only the latent structure by 0.5% F1 on the Reentrancies sub-task, which demonstrates that the explicit dependency information can improve this sub-task.", "The latent structure performs better on the concepts sub-task, and the fused structure brings more information to the negation sub-task, obtaining 0.5% and 1.0% improvements over the explicit and latent structures respectively.", "Additionally, we notice that both the latent and explicit models outperform the previous best reported Smatch F1 score, and the fused model reaches the best results.", "This shows that different types of structural information can help AMR parsing; we discuss the connection tendencies of each structure in (4.3).", "Experimental results show that both the explicit structure and the latent structure can improve the performance of AMR parsing, and the latent structural information reduces the errors in sub-tasks such as concepts and SRL.", "Different from the discrete relations of explicit structures, the internal latent structure holds soft connection probabilities between words in the input sentence, so that each fully-connected word receives information from all the other words.", "Figure 3 depicts the latent and fused soft adjacency matrices of the input sentence The boy came and left, respectively.", "It can be seen that the", "la- [Figure 3(a): Latent Matrix]", "tent matrix (Figure 3a) tries to retain information from most word pairs, and the AMR root 'and' holds high connection probabilities to each word in the sentence.", "In addition, the main predicates and arguments in the sentence tend to be connected with high probabilities.", "The fused matrix (Figure 3b) holds similar connection probabilities to the predicates and arguments in the sentence as well, and it reduces the connection degrees to the determiner The, which does not appear in the corresponding AMR graph.", "Moreover, the syntactic root 'came' and the semantic root 'and' retain most of the connection probability to other words.", "We compare the connections in the different structures in Figure 4.", "The latent graph (Figure 4a) prefers to connect most words, and the main predicates and arguments in the graph have higher connection probabilities.", "The fused graph (Figure 4c) shows that our model provides core structural information with interpretable relations.", "Specifically, it holds potential relations similar to the annotated AMR graph, and tries to attenuate the connection information to the words which are not aligned to AMR concept nodes.", "Beyond that, we calculate the Unlabeled Attachment Score (UAS) for the fused and latent graphs in Table 4; the unsupervised latent graph captures fewer explicit edges than the fused graph, and both the fused and latent graphs ignore some arcs of the explicit graph.", "This shows that a lower UAS does not mean a lower AMR parsing score, and that some arcs are more useful to AMR parsing despite not being in the explicit gold trees.", "Consequently, we preserve the explicit and latent structure information simultaneously.", "The latent structure can not only improve AMR parsing, but also has the ability to interpret the latent connections between input words.", "Transition-based AMR parsers (Wang et al., 2016; Damonte et al., 2017; Wang and Xue, 2017; Liu et al., 2018; Guo and Lu, 2018; Naseem et al., 2019) suffer from the lack of annotated alignments between words and concept nodes, which are crucial in these models.", "Lyu and Titov (2018) treat the alignments as a latent variable for their probabilistic model,
which jointly obtains the concept, relation and alignment variables.", "Sequence-to-sequence AMR parsers transform AMR graphs into serialized sequences by external traversal rules, and then restore the generated AMR sequence, avoiding the alignment issue (Konstas et al., 2017; van Noord and Bos, 2017).", "Moreover, Zhang et al. (2019a) extend a pointer generator (See et al., 2017), which can generate a node multiple times without alignment through the copy mechanism.", "With regard to latent structure, Naradowsky et al. (2012) couple syntactically-oriented NLP tasks with combinatorially constrained hidden syntactic representations.", "Bowman et al. (2016), Yogatama et al. (2017) and Choi et al. (2018) generate unsupervised constituent trees for text classification.", "The latent constituent trees are shallower than human-annotated ones, and they can boost the performance of downstream NLP tasks (e.g., text classification).", "Guo et al. (2019) and Ji et al. (2019) employ self-attention and biaffine attention mechanisms respectively to generate soft-connected graphs, and then adopt GNNs to encode the soft structure to take advantage of the structural information in their work.", "GCN and its variants are increasingly applied to embedding syntactic and semantic structures in NLP tasks (Kipf and Welling, 2017; Marcheggiani and Titov, 2017; Damonte and Cohen, 2019).", "Syntactic-GCN tries to alleviate the error propagation from external parsers with a gate mechanism; it encodes both relations and labels with the gates, and filters the output of each GCN layer over the dependencies.", "(Marcheggiani and Titov, 2017; Bastings et al., 2017).", "Damonte and Cohen (2019) encode AMR graphs via GCN to promote the AMR-to-text generation task.", "We investigate latent structure for AMR parsing, and we demonstrate that the inferred latent graph can interpret the connection probabilities between input words.", "Experimental results show that the latent structural information improves over the best reported parsing performance on both AMR 2.0 (LDC2017T10) and AMR 1.0 (LDC2014T12).", "We also plan to incorporate the latent graph into other multi-task learning problems (Chen et al., 2019; Kurita and Søgaard, 2019).", "We thank the anonymous reviewers for their detailed comments.", "We are grateful to Zhiyang Teng for discussions and suggestions.", "This work is supported by the National Natural Science Foundation of China (NSFC-61772378), the National Key Research and Development Program of China (No.2017YFC1200500) and the Major Projects of the National Social Science Foundation of China (No.11&ZD189).", "We also would like to acknowledge funding support from Westlake University and the Bright Dream Joint Institute for Intelligent Robotics." ]
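Section 3.1 of the record above samples the latent graph elementwise from a HardKuma distribution: draw u from U(0, 1), apply the inverse Kuma CDF, stretch the sample to (l, r), and hard-rectify it into [0, 1]. A minimal NumPy sketch of this sampling procedure follows; the support values l = -0.1 and r = 1.1 and the toy parameters are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of HardKuma sampling: inverse-CDF sampling from Kuma(a, b),
# stretched to (l, r), then hard-rectified into the closed interval [0, 1].
import numpy as np

def sample_hardkuma(a: np.ndarray, b: np.ndarray,
                    l: float = -0.1, r: float = 1.1,
                    rng=None) -> np.ndarray:
    rng = rng or np.random.default_rng()
    u = rng.uniform(1e-6, 1 - 1e-6, size=a.shape)    # u ~ U(0, 1)
    k = (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)  # k = C_K^{-1}(u; a, b)
    t = l + (r - l) * k                              # stretch to (l, r)
    return np.clip(t, 0.0, 1.0)                      # hard-sigmoid rectify

# Toy usage: sample a 5x5 soft adjacency matrix (a latent graph).
a = np.full((5, 5), 0.5)
b = np.full((5, 5), 0.5)
L = sample_hardkuma(a, b)
print(L.round(2))
```

Because the samples are a deterministic, differentiable transform of the uniform noise (except at the two rectification points), gradients can flow into a and b under the reparameterization trick, which is what lets the latent graph be trained end-to-end with the parser.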
[ "abstain", "objective", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "method", "abstain", "objective", "objective", "method", "objective", "result", "method", "method", "method", "abstain", "method", "other", "abstain", "other", "abstain", "abstain", "abstain", "method", "other", "other", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "other", "abstain", "abstain", "abstain", "other", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "other", "abstain", "method", "method", "method", "method", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "result", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "abstain", "other", "other", "other", "other" ]
[ "Siamese Neural Networks have been widely used to perform similarity classification in multi-class settings.", "Their 4 architecture can be used to group the 5 clinical trials belonging to the same drug6 development pathway along the several 7 clinical trial phases.", "Here we present an 8 approach for the unmet need of drug9 development pathway reconstruction, 10 based on an Enhanced hybrid Siamese11 Deep Neural Network (EnSidNet).", "The 12 proposed model demonstrates significant 13 improvement above baselines in a 1-shot 14 evaluation setting and in a classical 15 similarity setting.", "EnSidNet can be an 16 essential tool in a semi-supervised 17 learning environment: by selecting 18 clinical trials highly likely to belong to the 19 same drug-development pathway it is 20 possible to speed up the labelling process 21 of human experts, allowing the check of a 22 consistent volume of data, further used in 23 the model's training dataset.", "Siamese Neural Networks (SNN) were developed 26 in the early 1990s (Bromley et al., 1994) to obtain 27 a similarity score from examples of signatures 28 with the goal of identifying forgery.", "From then 29 many applications used SNN, primarily on image 30 recognition tasks (Chopra et al., 2005).", "The basic 31 architecture of SNN consists of two identical 32 networks able to learn the hidden representation 33 of the inputs.", "A similarity function would then 34 compare the inputs hidden representations.", "The 35 similarity score was taken advantage of in 36 contexts like 1-shot learning in multiclass37 classification problems, where a single example 38 of a class was seen by the algorithm only once 39 before making inference (Koch et al., 2015).", "40 Different architectures of SNN were developed in 41 time: Simo-Serra and colleagues developed a 342 inputs SNN (Simo-Serra et al., 2015), where the 43 neural network learned to rank the outputs and 44 identify whether the reference's hidden 45 representation is more similar to a positive or a 46 negative sample.", "47 Another example involves the insertion of an 48 intermediate stage between the similarity score 49 layer and the final prediction layer (Subramaniam, 50 Chatterjee, and Mittal, 2016), allowing to increase 51 performance in person re-identification task 52 despite partial occlusion and difference in point of 53 view or illumination.", "54 The first applications of SNN were based on 55 Convolutional Neural Networks (CNN) to obtain 56 similarity score on images (Simo-Serra et al., 57 2015), seeing SNN involved in different tasks 58 such as patch identification (Simo-Serra et al., 59 2015), person identification (Ahmed et al., 2015), 60 image matching from different angles (Vo and 61 Hays, 2016).", "SNN was also explored in Natural 62 Language Processing (NLP) contexts in tasks like 63 identifying sentence similarity (Mueller and 64 Thyagarajan, 2016) and support relation for 65 argumentation (Gema et al., 2017).", "These 66 applications highlight the flexibility of SNN to 67 identify similarities in different contexts.", "Here we 68 apply this architecture on an unmet healthcare 69 task: grouping clinical trials belonging to the same 70 drug-development pathway.", "71 Before being released on the market a new drug 72 needs to go through several expensive and time73 consuming experiments, involving testing the 74 pharmacological characteristics of the drug in 75 EnSidNet: Enhanced Hybrid Siamese-Deep Network for grouping clinical trials into drug-development pathways Lucia Pagani Analytics 
biochemical, cellular, and animal models (preclinical phase) and then on human volunteers (clinical stage).", "The clinical stage is divided into 3 pre-approval phases (safety, efficacy, regulatory proof) and a fourth post-market phase (Corr and Williams, 2009).", "The experiments performed by research or pharmaceutical companies to study a drug in human subjects are called clinical trials.", "A drug-development pathway is defined as all the clinical studies performed on a drug for an indication to obtain approval from the regulatory agency.", "Example of a drug-development pathway is presented in Supplementary Table 1. From starting a phase 1 clinical trial to obtaining approval from a regulatory agency, a drug can be tested for over 10 years, and the process can cost hundreds of millions of dollars, involving thousands of subjects, including patients, doctors, nurses and other personnel, with an approval rate of around 10% (Wong, Siah, and Lo, 2019).", "Information on most clinical trials is publicly available.", "Pharmaceutical companies are asked to share their information on ClinicalTrials.gov, a U.S. National Library of Medicine resource.", "Other companies such as DrugBank (Wishart et al., 2006) or Citeline (Wong, Siah, and Lo, 2019) parse the information from ClinicalTrials.gov and add a hand-curation process in which human labellers cross-reference certain information and add additional labels to the trials, resulting in a similar but more accurate database.", "Although having information on the clinical trials related to the development of a drug may seem a very straightforward process, there are many confounding factors: (1) very often several trials of the same phase are run, to obtain statistical power or on slightly different protocols (country, population, sample size, ...); (2) the same trial can belong to two different phases (e.g.
phase 1-2 or 2-3); (3) the company may not share on public databases the information of the trials it is performing, or may share partial information or not update it; (4) some phases may be skipped; (5) often subsequent trial phases from the same drug-development pathway may address slightly different diseases; (6) the disease and the drug can be referred to with different nomenclatures in different trials. Grouping of clinical trials to the same drug-development pathway is a requirement for many different applications, such as analyzing the success of a pharmaceutical company performing trials and marketing new drugs, or calculating the probability of success of a drug for a therapeutic area, evaluating the number of pathways in a therapeutic area, and investigating the futility of a pathway.", "Although there is a strong need for a large freely-available dataset, only proprietary hand-curated datasets exist (Wong, Siah, and Lo, 2019).", "A relatively small dataset of regulatory-agency-approved pivotal trials could be parsed from Food and Drug Administration Drug Trials Snapshots (FDA Snapshot) (https://www.fda.gov/drugs/drug-approvals-and-databases/drug-trials-snapshots).", "The lack of large publicly available datasets may be one of the reasons why, to our knowledge, no algorithms to group clinical trials in drug-development pathways have been described in the literature.", "The contributions of this paper are:", "(a) a novel approach to group clinical trials in drug-development pathways;", "(b) an iterative semi-supervised learning pipeline to optimize the grouping of clinical trials to the pathway.", "The model proposed here is based on an SNN architecture.", "The model learned the similarity of trials belonging to the same pathway.", "Using the proposed model in a semi-supervised learning pipeline would lead to decreased human-labelling effort; the proposed pipeline can work in a de-novo mode (fresh start) and in a primed mode (adding data to previously scored pathways).", "2 Methods. 2.1 Data used to train and validate the model. The ground-truth pathways considered in this experiment were the pathways extracted from the pivotal trials of the FDA Snapshot and manually identified pathways (hand-curated).", "For more details on the datasets' composition and the other methods considered here, see Supplementary Methods.", "2.2 Neural Network architectures. Three architectures were compared in the current research, schematized in Supplementary Figure 1: a pure Siamese Neural Network architecture (SNN) where only Siamese branches were present, a hybrid Siamese and Deep Neural Network (SiD NN) consisting of Siamese character-based branches and an additional input branch, and an enhanced version of the SiD NN, having a fully connected layer before the prediction layer (EnSidNet).", "Supplementary Methods contain the detailed description of the 3 architectures.", "2.3 Inputs of the model. The input features of the networks were: the drugs used in the clinical trial (intervention), the disease considered (condition), the phase of the trial (phase), the countries where the clinical trial was conducted (country), the sponsors of the trial (sponsor), and the start and end date of the trial (expressed in days compared to an
arbitrary reference date, January 1st, 2000).", "Details of the preprocessing of the inputs can be found in Supplementary Methods.", "2.4 Prediction Algorithm. Algorithm 1 contains the pipeline to apply the Neural Network to group trials into pathways.", "The details of the pipeline are reported in Supplementary Methods.", "For a schematic example of the matching pipeline see Supplementary Figure 2. 3 Experiments. In Supplementary Table 2 we report the number of parameters of the networks and the training time.", "The three neural models have different numbers of parameters to train, and the complexity of SNN compared to the hybrid models made its training time per epoch longer.", "The two hybrid models had comparable time per epoch, despite the slightly higher complexity of EnSidNet compared to SiD NN.", "3.1 Balanced datasets. Accuracy was tested on a balanced validation dataset (see dataset splitting for details on balanced dataset creation).", "It can be seen from Table 1 that the best performing algorithm was EnSidNet.", "3.2 32-way 1-shot evaluation performance. One-shot evaluation was used to predict whether a new trial belongs to established pathways.", "The score expected from a random classifier is 3.125, due to the unbalanced 1:32 ratio of positive couples versus negative ones.", "It can be seen in Table 2 that all neural models scored significantly higher than a random classifier in a 32-way 1-shot evaluation assay.", "Algorithm 1. Input: trials to group in pathways and previously scored pathways. Output: pathways containing development trials. 1: divide trials into therapeutic areas; 2: for every therapeutic area do; 3: for every existing pathway do; 4: predict similarity between 2 trials of a present pathway and a new trial; 5: if probability > 0.8 for both couples do; 6: add trial to present pathway; 7: sort trials (common lead sponsor or condition); 8: divide trials into batches; 9: for every trial in batch do; 10: match all versus all and predict similarity; 11: if probability > 0.8 do; 12: group the trials in a pathway; 13: group pathways with common trial; 14: select 1 trial per pathway and repeat steps 9-13; 15: return pathways. [Table 1: accuracy on a balanced dataset. SNN 0.763393; SiD NN 0.907738; EnSidNet 0.91369.] [Table 2: results of the 32-way 1-shot evaluation assay (Neural Network / 1-Nearest Neighbor / Random Classifier): SNN 66.67 / 81.82 / 6.06; SiD NN 93.94 / 69.70 / 0; EnSidNet 96.97 / 69.70 / 3.03.] EnSidNet was the model with the highest performance in the test set.", "On the contrary, the SNN had the lowest performance among the neural models.", "Surprisingly, the input format of SNN tested with the heuristic 1-Nearest Neighbor gave a relatively high performance.", "To understand the contribution of the different features to the final EnSidNet prediction, a SHAP analysis was performed.", "As Supplementary Figure 3 shows, the most important feature to distinguish between couples from the same or different pathways is the number of common sponsors.", "It is interesting to note that the most contributing features belong to the additional-inputs branch of the NN, features that increased the performance on the 32-way 1-shot learning metric by almost 30% (see Table 2).", "3.3 Metrics on imbalanced dataset.
Table 3 shows the other metrics considered in this research, calculated on the 1:32 unbalanced dataset.", "SNN had the worst performance on all metrics.", "Although SiD NN had performance comparable to EnSidNet on precision and recall, ROC AUC and PR AUC showed the higher performance of the Enhanced model.", "Figure 1 shows the probabilities associated with couples belonging or not to the same drug-development pathway for EnSidNet.", "The figure shows that the algorithm can distinguish with great certainty whether the trials belong to the same pathway or not, and reflects the higher recall than precision.", "3.4 Trials grouping in pathways. Algorithm 1 for grouping the trials into possible pathways was applied to clinical trials present in the DrugBank database.", "The clinical trials included were those in phases 1, 2 and 3, with industry lead sponsors and 'treatment' as the purpose of the trial.", "There were 34,188 trials to match into drug-development pathways.", "The algorithm took less than 4 hours to run.", "There were 27 therapeutic areas included in these pathways.", "As presented in Table 4, the statistics of the possible pathways obtained from Algorithm 1 overlap with the statistics of the datasets used to train the neural networks (Supplementary Table 3).", "Although the input of Algorithm 1 was more than 34,000 trials, less than 600 were matched into pathways.", "However, the possible pathways obtained were about 1.5 times the number of total pathways in the dataset, suggesting new possible pathways were discovered running Algorithm 1, highlighting the potential of this semi-supervised approach for the grouping of clinical trials in pathways.", "[Table 3: metrics of the neural models on the 1:32 unbalanced dataset (F1 / P / R / ROC AUC / PR AUC): SNN 0.16 / 0.09 / 0.76 / 0.85 / 0.61; SiD NN 0.90 / 0.86 / 0.94 / 0.97 / 0.89; EnSidNet 0.90 / 0.86 / 0.94 / 0.99 / 0.92.]", "The 73 predicted pathways (2-49 trials long), for a total of 264 trials, gave rise to 165 different pathways (1-11 trials long).", "The different distribution of the predicted versus confirmed pathways can be seen in Supplementary Table 4.", "A total of 112 trials (42%) were confirmed as being assigned by the algorithm to proper pathways.", "Only two of the trials selected for human scoring were also found in the ground-truth datasets.", "Specifically, both trials belonged to the FDA snapshot dataset and were single-trial pathways.", "Interestingly, one of these trials was assigned to 2 other trials, and this 3-trial pathway was then confirmed by the human experts' scoring.", "This is a good example of the capability of EnSidNet and the proposed algorithm to find the trials contributing to a drug-development pathway.", "4 Conclusion. We present a new approach for the grouping of clinical trials into drug-development pathways.", "To meet this objective, we proposed 3 different neural network architectures.", "The best performing model was EnSidNet, an enhanced hybrid Siamese-Deep Neural Network.", "EnSidNet was used to develop a semi-supervised learning pipeline using 1-shot evaluation and classification to group trials into existing or new pathways.", "Human scoring would lead to an increase of the training size with ad-hoc positive and negative samples.", "Acknowledgements. We thank Gregory Lever and
Joseph Heenan for useful feedback.", "References. Jane Bromley, Isabelle Guyon, Yann LeCun, Eduard Säckinger, Roopak Shah." ]
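Steps 9-13 of Algorithm 1 above match trials all-versus-all, link pairs whose predicted similarity exceeds 0.8, and merge groups that share a trial. The sketch below implements just that grouping logic with a union-find structure; the similarity callable is a stand-in for the trained EnSidNet model, and the toy trials are illustrative.

```python
# Hypothetical sketch of steps 9-13 of Algorithm 1: group pairs whose
# predicted similarity exceeds the threshold, merging groups with a common
# trial. `similarity` stands in for the trained EnSidNet model.
from itertools import combinations
from typing import Callable, Dict, List, Set

def group_trials(trials: List[str],
                 similarity: Callable[[str, str], float],
                 threshold: float = 0.8) -> List[Set[str]]:
    parent: Dict[str, str] = {t: t for t in trials}  # union-find forest

    def find(t: str) -> str:
        while parent[t] != t:
            parent[t] = parent[parent[t]]  # path halving
            t = parent[t]
        return t

    for t1, t2 in combinations(trials, 2):
        if similarity(t1, t2) > threshold:
            parent[find(t1)] = find(t2)    # union the two groups

    groups: Dict[str, Set[str]] = {}
    for t in trials:
        groups.setdefault(find(t), set()).add(t)
    return [g for g in groups.values() if len(g) > 1]  # multi-trial pathways

# Toy usage with a dummy similarity (same first token = same drug):
sim = lambda a, b: 0.9 if a.split()[0] == b.split()[0] else 0.1
print(group_trials(["drugA ph1", "drugA ph2", "drugB ph1"], sim))
```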
[ "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain" ]
[ "The prevalence of the COVID-19 pandemic in day-to-day life has yielded large amounts of stance detection data on social media sites, as users turn to social media to share their views regarding various issues related to the pandemic, e.g. stay at home mandates and wearing face masks when out in public.", "We set out to make use of this data by collecting the stance expressed by Twitter users, with respect to topics revolving around the pandemic.", "We annotate a new stance detection dataset, called COVID-19-Stance.", "Using this newly annotated dataset, we train several established stance detection models to ascertain a baseline performance for this specific task.", "To further improve the performance, we employ self-training and domain adaptation approaches to take advantage of large amounts of unlabeled data and existing stance detection datasets.", "The dataset, code, and other resources are available on GitHub.", "1 1 Introduction We live in unprecedented times caused by a global COVID-19 pandemic, which has forced major changes in our daily lives.", "Given the developments concerning COVID-19, communities and governments need to take appropriate action to mitigate the effects of the novel coronavirus, which is at the root of the pandemic.", "For example, states in the United States that have imposed strict social distancing mandates were able to slow the growth of the virus within their communities (Courtemanche et al., 2020).", "For such measures to work, however, it is important that the public fully adhere to these guidelines and mandates.", "Pandemic fatigue, or when people become tired of pandemic mandates and begin to ease in adherence, can lead to 1 https://github.com/kglandt/ stance-detection-in-covid-19-tweets resurgences of the novel coronavirus (Feuer and Rattner, 2020).", "To reduce the spread of COVID-19, it is essential to understand the public's opinion on the various initiatives, such as stay at home orders, wearing a face mask in public, school closures, etc.", "Understanding how the public feels about these mandates could help health officials better estimate the expected efficacy of their mandates, as well as detect pandemic fatigue before it leads to a serious resurgence of the virus.", "In the era of Web 2.0, and especially during a pandemic in which people often resort to online communications, social media platforms provide an astounding amount of data relating to the stance and views held by various populations with respect to a variety of current and important topics.", "However, the total amount of data that is being generated each second makes it impossible for humans alone to fully make use of them.", "Fortunately, recent developments in deep learning have yielded state-of-the-art performance in text classification.This makes deep learning an ideal solution for extracting and making sense of the large amounts of data currently in circulation on social media sites.", "In particular, given the current events, it is evident that automated approaches for detecting the stance of the population towards targets, such as health mandates related to COVID-19, using Twitter posts, or tweets, can help gauge the level of cooperation with the mandates.", "Stance detection is a natural language processing (NLP) task in which the goal is for a machine to learn how to automatically determine from text alone an author's stance , or perspective/view, towards a controversial topic, or target .", "Research in the area of stance detection has yielded accurate results, 
especially in United States politics (Mohammad et al., 2017; Ghosh et al., 2019; Xu et al., 2020).", "However, research on stance detection for targets relevant to COVID-19 health mandates lags behind, due to the [Table 1: examples of tweet/target pairs, with columns Tweet / Target / Stance / Opinion / Sentiment; first example tweet: Idc what you say, you're selfish if you refuse to wear a mask. Remaining rows truncated.]", "recency of the pandemic and a lack of benchmark datasets.", "We set out to address this problem by constructing a COVID-19 stance detection dataset (called COVID-19-Stance), which includes tweets that express views towards four targets, specifically Anthony S. Fauci, M.D., Keeping Schools Closed, Stay at Home Orders, and Wearing a Face Mask.", "This is a challenging task, which is related to but different from sentiment analysis.", "A tweet may express support for a target while using negative language and expressing a negative sentiment overall.", "Furthermore, the opinion expressed in a tweet may not be explicitly towards the target of interest, while the stance can be implicitly inferred.", "Some examples of tweet/target pairs labeled with respect to stance, target of opinion and sentiment are shown in Table 1 to illustrate the above-mentioned challenges.", "To address the stance detection task, carefully designed approaches are needed to extract language patterns informative with respect to stance.", "We provide a comprehensive set of baseline results for the newly constructed COVID-19-Stance dataset, including results with established supervised baselines for stance detection tasks, and also baselines that employ approaches for handling small amounts of labeled data, including self-training and domain adaptation approaches.", "In summary, the contributions of this work are as follows: We construct a COVID-19-Stance dataset that consists of 6,133 tweets covering users' stance towards four targets relevant to COVID-19 health mandates.", "The tweets are manually annotated for stance according to three categories: in-favor, against, and neither.", "We establish baseline results using state-of-the-art supervised stance detection models, including transformer-based models.", "We also establish baselines for self-training and domain adaptation approaches that use unlabeled data from the current task, or labeled data from a related task, to compensate for limited labeled data for the current task.", "We discuss related work in terms of existing datasets and approaches for stance detection.", "Recent work on stance detection in social media data has been facilitated by Mohammad et al.
(2016, 2017), who constructed a manually annotated stance detection dataset, shared publicly as SemEval2016 Task 6.", "The dataset was based on tweets about United States politics, collected during the lead-up to the United States 2016 presidential election.", "Given a set of politics-relevant targets (e.g., politicians, feminism, climate change), the initial selection of tweets to be included in the dataset was done using query hashtags, which are Twitter hashtags within a manually curated short-list that had been observed to correlate stances and targets on Twitter.", "Subsequently, tweet/target pairs were annotated by CrowdFlower (http://www.crowdflower.com/) workers, who were provided with a generic, but detailed questionnaire regarding the stance of a tweet's author toward a target, as well as the sentiment of the tweet (Mohammad et al., 2016, 2017).", "Several other datasets for stance detection have become available in the last few years, including a large dataset (containing approximately 50,000 tweets) focused on the stance towards financial transactions that involve mergers and acquisitions (Conforti et al., 2020), a dataset for identifying the stance in Twitter replies and quotes (Villa-Cox et al., 2020), datasets in languages different from English (Hercig et al., 2017; Vychegzhanin and Kotelnikov, 2019; Evrard et al., 2020), and multilingual datasets (Zotova et al., 2020; Vamvas and Sennrich, 2020; Lai et al., 2020).", "Furthermore, the global prevalence and impact of the COVID-19 pandemic has led to the quick development, concurrently with our work, of several COVID-19 stance-related Twitter datasets (Mutlu et al., 2020; Miao et al., 2020; Hossain et al., 2020).", "Mutlu et al. (2020) published a dataset of approximately 14,000 tweets (called COVID-CQ), which were manually annotated with respect to the author's stance regarding the use of hydroxychloroquine in the treatment of COVID-19 patients.", "Miao et al. (2020) constructed a dataset focused on the author's stance towards lockdown regulations in New York City.", "The authors used keywords related to lockdown and New York City and extracted approximately 31,000 relevant tweets from a large COVID-19 tweet dataset published by Chen et al. (2020).", "They manually annotated 1,629 tweets with respect to stance, while the remaining tweets were used as unlabeled.", "Our dataset construction procedure is similar to the one followed by Miao et al. (2020), but we label data for four targets using global English tweets, as opposed to Miao et al. (2020) who label data for just one target (lockdown) in one location (New York City).", "In terms of approaches used for stance detection, strong baseline results based on support vector machines (SVM) with manually engineered features were provided for the SemEval2016 Task 6 by Mohammad et al.
(2016, 2017).", "Deep learning approaches used in SemEval2016 Task 6 included recurrent neural networks (RNNs) (Zarrella and Marsh, 2016) and convolutional neural networks (CNNs) (Vijayaraghavan et al., 2016; Wei et al., 2016).", "Such approaches used the tweets as input, but did not use any target-specific information, and did not outperform the SVM baselines.", "Later approaches were provided with both target and tweet representations as input, and employed RNNs and/or CNNs, together with the attention mechanism (Augenstein et al., 2016; Du et al., 2017; Zhou and Cristea, 2017; Sun et al., 2018; Siddiqua et al., 2019) to improve the performance of the SVM baselines.", "Given the dominance of transformers (Vaswani et al., 2017), especially bidirectional encoder representations from transformers (BERT) (Devlin et al., 2019), in NLP tasks, some recent works (Slovikovskaya and Attardi, 2020; Li and Caragea, 2021; Ghosh et al., 2019) have focused on investigating the use of BERT models for stance detection.", "For example, Ghosh et al. (2019) explored the reproducibility of approaches for stance detection and compared them to BERT.", "They found BERT to be the best model overall for stance detection on the SemEval2016 Task 6.", "Li and Caragea (2021) also explored BERT based models with data augmentation and found BERT to be a powerful model for stance detection.", "Thus, we have selected BERT as a strong baseline for our paper.", "Several works have shown that auxiliary information, such as sentiment and emotion information, or the subjective/objective nature of a text (provided as additional inputs or presented in the form of auxiliary tasks in a multi-task framework), can help improve the performance obtained from the tweet/target information alone (Mohammad et al., 2017; Sun et al., 2019; Li and Caragea, 2019; Hosseinia et al., 2020; Xu et al., 2020).", "Other approaches to improve the performance, especially when the amount of labeled data for the task of interest is small, include weak supervision (Wei et al., 2019) and knowledge distillation (Miao et al., 2020); transfer learning through distant supervision (Zarrella and Marsh, 2016) or pre-trained models (Ebner et al., 2019; Hosseinia et al., 2020); and domain adaptation from a source task to the target task (Xu et al., 2018, 2020).", "In particular, the Dual-view Adaptation Network (DAN) (Xu et al., 2020) learns to predict the stance of a tweet by combining the subjective and objective views/representations of the tweet, while also learning to adapt them across domains.", "We use an adaptation of the DAN model as a strong baseline in this work.", "Most relevant to our work on COVID-19-Stance, Miao et al. (2020) compared a supervised in-domain BERT model trained and tested on lockdown tweets, with cross-domain models, and knowledge distillation variants.", "The results showed significantly improved performance for the knowledge distillation variants, and emphasized the importance of having a small amount of data for the task of interest (as a better alternative to zero-shot learning).", "Similar to Miao et al. (2020), we also use BERT together with knowledge distillation/self-training as a strong baseline.", "The recency of the COVID-19 pandemic means there was no established stance detection dataset for this broader topic, when we began our research.", "Therefore, we set out to construct our own dataset, called COVID-19-Stance, by following the methodology introduced by Mohammad et al. 
(2016, 2017), which is generic and applicable for any controversial topic discussed on Twitter.", "Data collection.", "We began crawling Twitter, using the Twitter Streaming API, on February 27th, 2020.", "We collected tweets that contained general keywords pertaining to the novel coronavirus (e.g., coronavirus, covid-19, corona virus, #covid19, etc.).", "As new hashtags emerged, we iteratively added additional, more specific keywords to the search (e.g., #lockdown, stay at home, #socialdistancing, #washhands, etc.).", "We continued crawling until August 20th, 2020.", "The full list of keywords that was used over this time period is provided in Appendix A. We only stored original tweets (not retweets or quoted tweets) that contained no hyperlinks, and ended up collecting a grand total of 30,331,993 tweets.", "Target selection.", "After analyzing the initial tweets, and following the development of COVID-19 events, we began to identify controversial topics that arose as the virus continued its spread in the United States (US).", "Four topics that we found to be among the most prevalent in our collection of tweets, and that are understood by a large number of people in the US, were Stay at Home Orders, Wearing a Face Mask, Keeping Schools Closed, and Anthony S. Fauci, M.D.", "Data selection.", "Similar to Mohammad et al. (2016), we identified query hashtags to encompass", "the four main targets/topics selected, and began to collect and organize the tweets according to topic and likely labels.", "For example, if #FireFauci is contained within a tweet, it is likely that the author of that tweet is posting information indicating they do not support the current director of the National Institute of Allergy and Infectious Diseases (NIAID), Anthony S. Fauci, M.D. For each of the four selected targets, we identified two types of query hashtags, specifically, in-favor hashtags and against hashtags (stance-neutral hashtags were very rare).", "The exact query hashtags identified for each target are shown in Table 2.", "Using the in-favor and against query hashtags, we selected a noisy stance set of tweets for each target, as shown in Table 3 (number of tweets selected using in-favor / against hashtags: Anthony S. Fauci, M.D.: 2,417 / 6,641; Keeping Schools Closed: 5,345 / 5,665; Stay at Home Orders: 8,437 / 5,323; Wearing a Face Mask: 27,600 / 12,064; All: 43,799 / 29,693).", "Out of the total number of tweets corresponding to a target, we further selected a relatively balanced (in terms of in-favor and against noisy labels) dataset to be manually labeled, and another relatively balanced dataset of tweets to be used as unlabeled in the self-training approach.", "The exact number of tweets to be labeled and to be used as unlabeled are shown in Table 4.", "Data Annotation.", "Although query hashtags are great for selecting likely relevant tweets, they are noisy and not reliable enough to accurately identify the stance towards a target for a tweet (see Table 5 for some examples illustrating this point).", "Table 4 lists, for each target, the number of tweets selected to be labeled (#to-label) and the number to be used as unlabeled in self-training (#unlabeled): Anthony S. Fauci, M.D.: 2,085 / 2,443; Keeping Schools Closed: 1,479 / 2,703; Stay at Home Orders: 1,717 / 15,488; Wearing a Face Mask: 1,921 / 9,006; All: 7,122 / 29,640.",
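The hashtag-based noisy labeling described above is mechanical and easy to sketch. The following is a minimal illustration, not the authors' implementation; the hashtag lists shown are hypothetical stand-ins for the paper's actual query hashtags (given in its Table 2).

```python
import re

# Hypothetical query hashtags for one target; the paper's real lists are in its Table 2.
IN_FAVOR_TAGS = {"#wearamask", "#maskssavelives"}
AGAINST_TAGS = {"#nomasks", "#masksdontwork"}

def noisy_stance(tweet_text):
    """Return a noisy stance label derived from query hashtags, or None."""
    tags = {t.lower() for t in re.findall(r"#\w+", tweet_text)}
    favor, against = bool(tags & IN_FAVOR_TAGS), bool(tags & AGAINST_TAGS)
    if favor != against:          # exactly one side matched
        return "in-favor" if favor else "against"
    return None                   # no match, or conflicting hashtags

print(noisy_stance("Please #WearAMask when you go out"))  # -> in-favor
```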
"Therefore, we used Amazon Mechanical Turk (AMT) to enlist the help of gig workers to analyze and label our collection of 7,122 tweets selected to be labeled (the exact number of tweets for each target is shown in Table 4).", "We removed the hashtags that appeared at the end of a tweet to exclude obvious cues, without making the tweet syntactically ambiguous.", "This increases the chance that our collection contains tweets that do not explicitly mention the target, and potentially some tweets with a neutral stance towards the target.", "Each tweet was labeled by three annotators.", "At any one time, each annotator was shown a page with a tweet and a target, and asked to answer a questionnaire designed and detailed by Mohammad et al. (2017).", "The questionnaire, shown in Appendix B, contains detailed questions and multi-choice answers that allow us to annotate each tweet with respect to three criteria:", "1. the stance of the tweet's author/user towards the given target: in favor, against, or neither;", "2. the way the opinion is expressed, which captures whether the text of the tweet reveals the stance explicitly, implicitly, or neither;", "3. the sentiment of the tweet, which essentially captures the language used in the tweet: positive, negative, both, sarcasm, or neither.", "Our final COVID-19-Stance dataset contains only tweets for which at least two out of the three annotators agreed on the stance category.", "The Cohen's Kappa scores that we obtained for inter-annotator agreement for the final dataset were 0.82 for stance, 0.83 for target of opinion, and 0.60 for sentiment.", "According to Cohen (1960), the scores for stance and target of opinion represent almost perfect agreement, while the score for sentiment shows substantial agreement.", "Table 1 shows several examples of annotated tweets in our dataset.", "Dataset statistics.", "The number of tweets for each target and the stance distribution for each target are shown in Table 6.", "The number of tweets for", "each target over the months when data was crawled is graphically displayed in Figure 1 (the number of tweets by target over the months March to August 2020), which shows that a large number of the tweets in our dataset were posted in July 2020.", "The distribution of the type of opinion is shown in Tables 7 and 8, for each target and each stance, respectively.", "Similarly, the distribution of the sentiment (or tweet language) is shown in Tables 9 and 10, for each target and each stance, respectively.", "As can be seen from these tables, our dataset contains a good mix of in-favor, against, and neutral categories, and also a good mix of tweets with implicit and explicit opinion towards the target.", "However, the sentiment is generally negative or in the other category (which includes both positive and negative, sarcastic language, and neither).", "Together, these characteristics make our task both realistic and challenging.", "While we only use the stance label in this work, the other labels will be explored in future works, as auxiliary information potentially useful for stance detection.", "Benchmark subsets.",
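The two aggregation steps described above, majority filtering and inter-annotator agreement, can be illustrated as follows. This is a hedged sketch with toy annotations; Cohen's Kappa is defined for two raters, so averaging pairwise scores over the three annotators is assumed here as one common convention rather than taken from the paper.

```python
from collections import Counter
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

def majority_label(labels, min_agreement=2):
    """Keep a tweet only if at least `min_agreement` annotators agree."""
    label, count = Counter(labels).most_common(1)[0]
    return label if count >= min_agreement else None

# Toy annotations: three annotators labeling four tweets.
ann = [["in-favor", "in-favor", "neither"],
       ["against", "against", "against"],
       ["in-favor", "against", "neither"],   # no majority: discarded
       ["neither", "neither", "against"]]
print([majority_label(a) for a in ann])

# Mean pairwise Cohen's Kappa over the three annotator pairs.
raters = list(zip(*ann))
pairs = [cohen_kappa_score(a, b) for a, b in combinations(raters, 2)]
print(sum(pairs) / len(pairs))
```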
"To enable progress on COVID-19 stance detection, and to facilitate comparisons between models developed for this task, we randomly split our COVID-19-Stance dataset (using stratified sampling) into training (Train), development (Val), and test (Test) subsets.", "We used the training subset to train our models, the development subset to select hyperparameters, and the test subset to evaluate the final performance of the models.", "Statistics for the dataset in terms of the number of tweets in the Train, Test, and Val subsets, respectively, are shown in Table 11.", "To get a baseline understanding of how established stance detection networks perform on our dataset, we used the following models:", "BiLSTM: Bi-Directional Long Short-Term Memory networks (Schuster and Paliwal, 1997) take tweets as input, and are trained to predict the stance towards a target, without explicitly using the target information.", "Kim-CNN: Convolutional Neural Networks for text, proposed by Kim (2014), are also provided with tweets as input, and trained to predict the stance towards a target, without explicitly using the target information.", "TAN: Target-specific Attention Networks (Du et al., 2017) represent an attention-based BiLSTM model that identifies features specific to the target of interest, by explicitly incorporating the target information.", "ATGRU: The Bi-Directional Gated Recurrent Unit network with a token-level attention mechanism (Zhou and Cristea, 2017) is an attention-based Bi-GRU model that also uses", "the target information explicitly, and identifies specific target features using the attention. (Table 8 reports the distribution of opinion type for each stance: Favor: 81.61% explicit, 17.64% implicit, 0.75% neither; Against: 79.25%, 20.49%, 0.26%; Neither: 4.29%, 69.80%, 25.91%.)", "GCAE: The Gated Convolutional Network with Aspect Embedding (Xue and Li, 2018) is based on a CNN model.", "In addition to tweets, it also has information about the target, and uses a gating mechanism to block target-unrelated information.", "BERT: Bidirectional Encoder Representations from Transformers (Devlin et al., 2019) represent language models that are pre-trained on a large unlabeled corpus to encode sentences and their tokens into dense vector representations.", "We used the pre-trained COVID-Twitter-BERT model (Muller et al., 2020).", "Given that a large amount of unlabeled data is available for each target included in our COVID-19-Stance dataset, we explored the use of a self-training approach that can make use of unlabeled data, as described below:", "BERT-NS: Self-training with Noisy Student (Xie et al., 2020) is a semi-supervised learning approach that employs self-training and knowledge distillation (Hinton et al., 2015) to improve the performance of a teacher model using unlabeled data.", "More specifically, a teacher is originally trained from the available labeled data, and is used to predict pseudo-labels for the unlabeled data.", "Subsequently, a noisy student model is trained using the labeled and pseudo-labeled data.", "By replacing the teacher with the student, the process can be iterated several times.", "In our work, we performed just one iteration.", "Both the teacher and the student models were COVID-Twitter-BERT, with a softmax layer at the top.", "To understand the benefits of using a prior stance detection dataset, in addition to the dataset we constructed, we experimented with a domain adaptation model, as described below:", "BERT-DAN: Dual-view Attention Networks (Xu et al., 2020) explicitly capture subjective and objective information contained in tweets, and also enable the use of labeled data from a prior, related task to train a model for a current task of interest.",
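One noisy-student iteration, as described above for BERT-NS, follows a simple recipe. The sketch below is a minimal stand-in, not the paper's implementation: scikit-learn logistic regression on random features takes the place of the COVID-Twitter-BERT teacher and student, and input noise takes the place of student dropout, but the control flow is the same.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_lab, y_lab = rng.normal(size=(40, 8)), rng.integers(0, 3, 40)
X_unlab = rng.normal(size=(200, 8))

# 1. Train the teacher on labeled data only.
teacher = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)

# 2. The teacher predicts pseudo-labels for the unlabeled pool.
pseudo = teacher.predict(X_unlab)

# 3. Train a "noisy" student on labeled + pseudo-labeled data.
X_all = np.vstack([X_lab, X_unlab]) + rng.normal(scale=0.1, size=(240, 8))
y_all = np.concatenate([y_lab, pseudo])
student = LogisticRegression(max_iter=1000).fit(X_all, y_all)
# Replacing the teacher with the student would allow further iterations;
# the paper performs a single iteration.
```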
"The original DAN model proposed by Xu et al. (2020) makes use of BiLSTM networks and domain adversarial networks to learn the subjective and objective representations and make them domain invariant.", "At the same time, DAN learns to predict the stance using labeled data from the prior task (under the assumption that no labeled data is available for the task of interest).", "Compared to the original DAN model, we replaced the BiLSTM networks with pre-trained COVID-Twitter-BERT models, and trained the network to predict the stance using both labeled data from the prior task and from the current task.", "The prior-task data was the whole SemEval2016 Task 6 dataset.", "Data Pre-processing. Before the tweets in our dataset were used for training, they were preprocessed and transformed to embedded tensors.", "For every tweet in the dataset, we removed any emojis, URLs, and reserved words.", "We then used the pre-trained COVID-Twitter-BERT to tokenize and embed each tweet, truncating the sequence length to 128 as needed.", "Hyperparameters.", "The validation set was used to determine generally good hyperparameters for the models.", "For each non-BERT supervised model, the Adam optimizer was used with a learning rate of 1e-5, weight decay of 4e-5, and gradient clipping with a max norm of 4.0.", "Each model was trained for 120 epochs, with a mini-batch size of 16 in each iteration.", "A dropout of 0.5 was used for each network.", "Other specific hyper-parameters for each network are shown below: RNN Networks: BiLSTM, ATGRU, and TAN each had a hidden LSTM dimension of 512 with a dropout of 0.2.", "CNN Networks: GCAE and Kim-CNN both used filters of width 2, 3, 4, and 5.", "For each filter width, there were 25 feature maps.", "Following the convolutional layers was a linear classifier with a hidden dimension of 128.", "BERT: This model was initialized with the pre-trained COVID-Twitter-BERT model.", "It was optimized with AdamW with a learning rate of 1e-5 over the course of 10 epochs, with 15 warmup steps.", "BERT-NS: The implementation of the student model is exactly the same as that of the supervised BERT.", "The teacher and the student models are set up in the same manner, except that the teacher has no dropout.", "BERT-DAN: The formation functions are the same as those of the supervised BERT model, except that there is no softmax layer on top.", "Table 12 (partially reproduced here for the Anthony S. Fauci, M.D. target; accuracy / precision / recall): BiLSTM 0.638 / 0.639 / 0.631; Kim-CNN 0.633 / 0.685 / 0.612; TAN 0.588 / 0.558 / 0.564; ATGRU 0.635 / 0.640 / 0.613; GCAE 0.652 / 0.661 / 0.634; BERT 0.817 / 0.816 / 0.830; BERT-NS 0.820 / 0.821 / 0.823; BERT-DAN 0.830 / 0.833 / 0.839.",
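The tweet preparation described above (removing emojis, URLs, and reserved words, then tokenizing with COVID-Twitter-BERT truncated to 128 tokens) can be sketched as follows. The regex rules and the Hugging Face model identifier are assumptions for illustration, not details taken from the paper.

```python
import re
from transformers import AutoTokenizer

def clean_tweet(text):
    """Strip URLs, reserved words, and @-mentions; drop non-ASCII symbols
    as a crude stand-in for emoji removal."""
    text = re.sub(r"https?://\S+", "", text)
    text = re.sub(r"\b(RT|FAV)\b|@\w+", "", text)
    text = text.encode("ascii", "ignore").decode()
    return " ".join(text.split())

# Assumed model id for the cited COVID-Twitter-BERT release.
tok = AutoTokenizer.from_pretrained("digitalepidemiologylab/covid-twitter-bert-v2")
enc = tok(clean_tweet("RT @user Masks work! https://t.co/xyz"),
          truncation=True, max_length=128, return_tensors="pt")
print(enc["input_ids"].shape)
```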
"The discriminators and classifiers were all two-layer neural nets with a hidden dimension of 1024.", "A dropout of 0.15 was used throughout the network.", "Optimization was performed by AdamW with a learning rate of 3e-6 for the first 7 epochs, and 3e-7 for the final 3 epochs.", "The following weights were assigned to this network's loss functions: 0.1 for the domain discriminators, 0.05 for the objective and subjective classifiers, and 0.4 for the source stance classifier.", "A mini-batch size of 4 was used due to GPU memory limitations.", "To evaluate the performance of the baseline models on our dataset, we used the following standard metrics: accuracy, (macro average) precision, recall, and F1 score.", "We report the performance on the test set at the epoch in which the model recorded the highest F1 score on the validation data.", "We performed 3 independent runs for each model to account for variability, and report average results over the three runs.", "The results of the experiments are shown in Table 12 for each of the four targets in the COVID-19-Stance dataset.", "Between the two supervised baselines that do not explicitly use the target information, Bi-LSTM and Kim-CNN, the Bi-LSTM gives better results overall, in all metrics, except for the Wearing a Face Mask target.", "When comparing Kim-CNN with GCAE (a CNN-based model that explicitly uses the target), Kim-CNN gives better accuracy and F1 scores for two targets (Anthony S. Fauci, M.D. and Stay At Home Orders), while the GCAE model gives better results for the other two targets (Keeping Schools Closed and Wearing a Face Mask).", "Similarly, when comparing the two recurrent models with attention, TAN and ATGRU, TAN performs better on two targets, Keeping Schools Closed and Stay At Home Orders, while ATGRU performs better on Anthony S. Fauci, M.D. and Wearing a Face Mask.", "Surprisingly, these two models, which explicitly use the target information, perform worse than the BiLSTM model overall.", "Finally, we can see that among the supervised baselines, the BERT model performs significantly better than all the other models, a result that is in agreement with prior works (Ghosh et al., 2019; Miao et al., 2020).", "When comparing BERT with BERT-NS and BERT-DAN (models that use unlabeled data and SemEval2016 Task 6 data, respectively), we see that BERT performs better than the models that use additional information on the Stay At Home Orders target, and comparably to BERT-NS on the Keeping Schools Closed target; specifically, these are the targets with smaller labeled datasets.",
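Averaging accuracy and macro precision/recall/F1 over independent runs, as described above, reduces to a few library calls. A minimal sketch with toy predictions:

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def run_metrics(y_true, y_pred):
    p, r, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="macro", zero_division=0)
    return accuracy_score(y_true, y_pred), p, r, f1

# Toy predictions from three independent runs of one model.
y_true = ["in-favor", "against", "neither", "against"]
runs = [["in-favor", "against", "neither", "in-favor"],
        ["in-favor", "against", "against", "against"],
        ["in-favor", "neither", "neither", "against"]]
scores = np.array([run_metrics(y_true, r) for r in runs])
acc, pr, re_, f1 = scores.mean(axis=0)
print(f"Acc {acc:.3f} Pr {pr:.3f} Re {re_:.3f} F1 {f1:.3f}")
```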
"On the other hand, BERT-DAN performs the best on the Anthony S. Fauci, M.D. target, and comparably to BERT-NS on the Wearing a Face Mask target, i.e., the targets with larger labeled datasets.", "This result suggests that a larger amount of labeled data is useful for the domain adaptation approach.", "However, when only a small amount of labeled data is available, BERT is better than the noisy student, which may not start with a very good teacher.", "Error Analysis.", "To better understand how two of our best models would perform in the wild, we have included some of their predictions on examples from the Wearing A Face Mask test set, along with the gold-standard labels, in Table 13.", "As we can see, both models perform well on examples where the stance is presented explicitly, such as in tweets 1 and 2.", "However, the models generally struggle with sarcasm and humor, as seen in tweets 3, 5, and 6.", "They also both demonstrate a strong bias towards certain phrases such as 'form of government control', which is a common phrase in AGAINST tweets for Wearing A Face Mask.", "Interestingly, the noisy student model seems to be more likely than the DAN model to incorrectly predict a FAVOR stance when the sentiment of the tweet is positive, as seen in tweets 7 and 8.", "In this work, we have constructed a COVID-19-Stance dataset that can be used to further the research on stance detection, especially in the context of the COVID-19 pandemic.", "In addition to the dataset, we have established baselines using several supervised models used in prior works on stance detection, and also two models that can make use of unlabeled data and of data from a prior stance detection task, respectively.", "Our results show that the pre-trained COVID-Twitter-BERT model constitutes a strong baseline.", "When a larger amount of labeled data is available for a target, BERT-NS and BERT-DAN can help further improve the performance.", "As part of future work, we plan to study the benefits of the opinion and sentiment data that we annotated towards the stance detection task.", "We also plan to study the usefulness of multi-task learning, where we train models for all our targets concurrently.", "Other transfer learning approaches that can leverage existing datasets will also be explored.", "We thank the National Science Foundation and Amazon Web Services for support from grants IIS-1741345, IIS-1802284, IIS-1912887, and IIS-1903963, which supported the research and the computation in this study.", "Our dataset does not provide any personally identifiable information, as only the tweet IDs and human-annotated stance labels will be shared.", "Thus, our dataset complies with Twitter's information privacy policy.", "The research enabled by this dataset has the potential to help officials and health organizations understand the public's opinion on various initiatives, estimate the efficacy of their mandates, and prevent a serious resurgence of the novel coronavirus." ]
[ "abstain", "method", "objective", "method", "result", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "objective", "result", "method", "abstain", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "method", "method", "other", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "result", "abstain", "method", "method", "abstain", "other", "method", "abstain", "abstain" ]
[ "Plains Cree (nhiyawwin) is an Indigenous language that is spoken in Canada and the USA.", "It is the most widely spoken dialect of Cree and a morphologically complex language that is polysynthetic, highly inflective, and agglutinative.", "It is an extremely low resource language, with no existing corpus that is both available and prepared for supporting the development of language technologies.", "To support nhiyawwin revitalization and preservation, we developed a corpus covering diverse genres, time periods, and texts for a variety of intended audiences.", "The data has been verified and cleaned; it is ready for use in developing language technologies for nhiyawwin.", "The corpus includes the corresponding English phrases or audio files where available.", "We demonstrate the utility of the corpus through its community use and its use to build language technologies that can provide the types of support that community members have expressed are desirable.", "The corpus is available for public use 1 .", "Recent work with Indigenous persons has shown that some want advanced technologies to support the learning and use of their languages.", "The Cree and Mtis persons involved in this study stated a desire for technologies such as an app to help with learning the structure of the language for conversation, translation, and AI agents that resemble a speaker (Lothian et al., 2019).", "Participants wanted these tools to support interaction in nhiyawwin (Plains Cree) or the learning of this language.", "All of these larger ideas are dependent on core language technologies such as language models, speech recognition, speech synthesis, or machine translation.", "However, a lack of 1 https://github.com/EdTeKLA/ IndigenousLanguages_Corpora publicly available corpora hinders the development of such technologies for low-resource languages like nhiyawwin.", "Government policies have contributed towards supporting the preservation and revitalization of some Indigenous languages, e.g., Inuktitut (Joa-nis et al., 2020).", "However, many have not benefited from this level of support for developing resources and technologies.", "Recently, some government informational material such as voter guides or COVID-19 pamphlets have been translated into nhiyawwin.", "Nevertheless, the availability of resources is still limited and short texts or other resources are distributed across libraries and the Internet.", "To understand why this is the case, we need to reflect on the colonial practices that have attempted to eradicate a language and people.", "Previous and on-going government policies and practices, such as the implementation of residential schools (Bom-bay et al., 2011), have left a small number of fluent speakers and language resources for nhiyawwin-speaking communities.", "These practices prevented and continue to prevent the development of language technologies because state-of-the-art statistical and neural models require large amounts of text.", "To work towards addressing this issue, we created a nhiyawwin corpus from various sources.", "Our corpus is composed of 49,038 words and 3,727 lines of text in Standard Roman Orthography (SRO), 10 texts in syllabics, and 1,026 lines of English-nhiyawwin parallel data.", "To the best of our knowledge, this is the first collection of processed nhiyawwin data ready for use to build language technologies.", "The most similar existing work includes a small collection of nhiyawwin text, lexical, and audio resources in their original formats (Open Language Archives 
Community).", "There is also a morphosyntactic tagged corpus (Arppe et al., 2020) which can be accessed by searching for words, lemmas, and mor-6354 phosyntactic information through a web interface.", "A targeted corpus of child-directed speech in Cree has been shared through the ACQDIV Database (Moran, 2016).", "However, this corpus contains materials in the northern dialect of East Cree (iyiyiu-Ayamiwin) rather than Plains Cree (nhiyawwin).", "In response to the limited availability of resources and tools, this work contributes a collection of ready to use resources to enable the development of language technologies that can support the preservation and revitalization of nhiyawwin.", "We demonstrate the practicality of the corpus through its use by community-based teachers of nhiyawwin.", "Using these materials has informed their lesson plans.", "Further, we describe the ongoing development of predictive language models using the contributed corpus.", "These models enable predictive text that is expected to provide some of the language support needs that have been expressed by nhiyawwin speakers.", "With this work, we aim to inspire future data collection and sharing of nhiyawwin resources that are aligned with community interests.", "Plains Cree is called nhiyawwin by its speakers, and it is not capitalized.", "nhiyawwin is a widely-spoken dialect of the Indigenous language that English-speakers call Cree: nhiyawwin is the mother tongue for approximately 3,655 speakers, and it is the language spoken most at home for approximately 2,165 persons (Statistics Canada, 2018).", "nhiyawwin is an extremely low resource language, with the official designation of being a developing language; it is at stage 5 on the Expanded Graded Intergenerational Disruption Scale (EGIDS) (Lewis and Simons, 2010) so it is in vigorous use, with literature in a standardized form being used by some though this is not yet widespread or sustainable.", "Therefore, the ability to create language technologies for nhiyawwin is limited due to the minimal amount of monolingual and parallel data available.", "Current language technologies for nhiyawwin include Finite State Transducers (FSTs) that have been used for tasks such generating word forms and conjugating verbs in online dictionaries (Arppe et al., 2016), representing nominal morphology (Snoek et al., 2014), and spell checking (Arppe et al., 2016).", "technologies that exist given that nhiyawwin is a polysynthetic, agglutinative, and highly inflective language, which complicates the task of creating language technologies.", "These characteristics allow the meaning of a single token or word to map to that of a full phrase or sentence in English.", "For example, kimciso' maps to you all eat' in English.", "nhiyawwin has two writing systems: SRO and syllabics.", "A single character in syllabics represents one or more SRO characters (e.g., is ni in SRO and is i).", "Complicating this, is the variability in how these writing systems have been used and continue to be used across regions and time.", "This variability means that choices must be made with respect to the writing systems and standards' that are followed when developing language technologies.", "These are difficult choices and each community may have different preferences, which means that tools for converting across varied writing systems would help to maintain community norms.", "An example of such a tool is the SRO-syllabics converter (Antonio Santos, 2021).", "While any one project cannot address all 
"While any one project cannot address all considerations, these considerations are an important part of developing language technologies to support the revitalization and use of this language.", "Our corpus contains text from several domains, making it a diverse collection of nhiyawwin resources (see Table 1).", "We collected materials from different genres such as Bible hymns, educational resources, and children's stories, as well as content from social media such as Twitter and Facebook.", "As such, our corpus spans several time periods.", "For example, Bible translations are based on a bible from 1908, whereas social media content and educational documents are from the 2000s, with some being from the last couple of years.", "The category 'Other' contains texts such as election pamphlets, voter guides, speaker stories, and a first-year university nhiyawwin workbook.", "The material is organized into folders by category or source, along with copyright information describing how the public can use them.", "Where nhiyawwin-English parallel texts exist, the folder contains a cleaned and aligned version of these texts; a given line in one language file corresponds to the same line in the other language file.", "Syllabics versions of texts are provided where available.", "Some texts also have an accompanying audio file.", "Before adding a text to our corpus, we checked the copyright and license or obtained permission from the content creator.", "We provide a bag-of-words (BoW) representation when text was under copyright or the content owners felt this was an acceptable alternative to sharing the original text.", "These BoW files contain a list of words from the original text and their usage counts.", "As these files only contain individual words, there is no nhiyawwin-English mapping, because there is often no one-to-one translation between nhiyawwin and English words.", "Table 2 provides descriptive statistics for all text sources in the corpus.", "We provide the mean (M) and standard deviation (SD) of data throughout this paper.", "To build this corpus, we first identified sources of nhiyawwin text.", "We then extracted the text.", "Following extraction, we aligned the texts across languages and performed additional processing.", "We used Google search to find nhiyawwin text online and entered keywords such as 'nhiyawwin text' and 'plains cree text'.", "Please see Appendix A for a full list of keywords.", "Some websites continually updated their content with new material (e.g., Cree Literacy Network, https://creeliteracy.org/), so we returned and checked those sites for additional content.", "Data were identified as nhiyawwin by carefully inspecting the source and its description.", "The contents of the text were also checked by one of our team members who had been trained in how to differentiate between dialects of Cree.", "This step ensured the text was in the targeted dialect.", "If uncertainties arose, such as when facing unfamiliar accents, hyphens, or characters, a nhiyawwin speaker would verify whether the text was Plains Cree.", "Copyright information was verified to see if the text could be shared or perhaps if the copyright would allow BoW format.", "For texts that contain Elders' stories, described below, permission from speakers was obtained to share the stories.", "The resources in the corpus can be publicly used as allowed by the copyright information detailed on GitHub for a particular source.",
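The BoW files mentioned above are simply word/usage-count lists. A minimal sketch of how such a file could be produced (a hypothetical reconstruction, not the authors' script):

```python
from collections import Counter

def bag_of_words(text):
    """Return (word, count) pairs, most frequent first."""
    return Counter(text.lower().split()).most_common()

# Toy text; real BoW files are generated per copyrighted source.
for word, n in bag_of_words("tanisi tanisi niya"):
    print(f"{word}\t{n}")
```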
text.", "Care was taken to ensure the text was properly copied and that it excluded irrelevant information (e.g., HTML markup or English annotations).", "Some data was collected by scraping websites, where licensing allowed it.", "When licensing did not permit scraping, we contacted site owners to obtain permission.", "In some cases, they shared the raw materials with us for inclusion in the corpus.", "Parallel phrases in English and nhiyawwin were extracted when available.", "The retrieved nhiyawwin texts 6356 Language Text nhiyawwin Before: wpk'skwtamn tn'si kphisikiskinohamsoyn nhiyawwin.", "may have used SRO or syllabics.", "Some were accompanied by an audio file.", "The availability of formats varied from resource to resource.", "Beyond these publicly available online resources, we collected resources from the field.", "These resources are recordings of Elders who chose a story to tell us.", "They gave us permission to use and share these stories for the purposes of supporting learning and developing language technologies that could do the same.", "Most of the shared stories relate to their personal lives or socio-political issues.", "These recordings were made over a summer by attending cultural events and interacting with community members.", "The recordings were transcribed and translated into English in some cases.", "Three speakers of nhiyawwin took part in the transcription, translation, and verification process.", "Where parallel texts were available, alignment was performed before other preprocessing or data cleaning.", "Most parallel texts contained some spacing markers, such as line breaks for paragraphs or spaces for phrases.", "In these scenarios, single sentences or phrases were easily aligned to each other.", "Challenges arose when a paragraph contained a different number of sentences across languages.", "Since we aimed to provide sentence or phrase alignments in the corpus, we needed to distinguish how a sentence in one language is expressed in the other.", "In longer texts, when multiple sentences in nhiyawwin mapped to one sentence in English, or vice versa, this mapping was used as the alignment to maintain the original meaning of the text.", "This situation was prominent in Biblical texts.", "In shorter texts, a nhiyawwin speaker reviewed the text and decided on the appropriate alignment.", "We note that this process of aligning paragraphs, then text within paragraphs is demonstrated to outperform alignment that does not account for paragraph boundaries (Joanis et al., 2020).", "We provide examples of aligning sentences in the simple case and more challenging case in Table 3 and Table 4.", "Preprocessing was only performed on texts that used the SRO writing system.", "Texts in syllabics did not undergo the below-described preprocessing.", "We focused on preprocessing SRO texts for several reasons.", "It was relatively easy to obtain texts in SRO, which meant that there were more of them.", "SRO representations of the language vary in their use of diacritics and other conventions, which means that combining sources requires some element of normalization so that the texts can be jointly used.", "Moreover, one of the intended uses for our corpus is to support instructional activities for local courses, and SRO is the first writing 6357 Before After nhiyawwin English nhiyawwin English -wpamikot S/he was seen by him/her wpamikot she was seen by him wpamikot she was seen by her wpamikot he was seen by him wpamikot he was seen by her kimmitonyimitinn We are thinking of you 
"The writing system used by speakers differs by community, where some use SRO and others use syllabics.", "This is also the case for the communities with which we have worked.", "The choice of writing systems and the considerations surrounding that choice are further discussed in Sections 5 and 7.", "Before running the processing script, we manually identified the use of slashes or parentheses.", "When slashes were used, usually in English text to denote gender or possible alternative phrasings, we ensured that the nhiyawwin data would represent all possibilities (see Table 5).", "For example, we would remove the slash from the English sentence, generate a new English sentence with the alternative gender or phrase, and duplicate the nhiyawwin sentence to represent that the nhiyawwin text could have this alternative meaning in English.", "As the aim of this corpus is to develop language technologies, we wanted to ensure that all alternative genders or meanings from the text were included so that it would support the development of models that were as robust as possible given the data.", "See Table 5 for an example.", "Parentheses were mainly used in English sentences to provide additional context.", "If the text in parentheses provided alternative phrasing, the alternative sentence in English would be constructed with the same nhiyawwin meaning mapped to it.", "This follows a similar pattern to that used with slashes for options like he or she.", "If the parenthetical expression did not provide an alternative or additional context, it was removed.", "Parentheses were removed manually in this process and not considered as punctuation to be kept in the preprocessing script, which we describe below.", "This initial manual process addressed the varying nature of each case and our desire to extract as much information as possible from the text.", "Following the manual preprocessing, a Python script was run on the data files.", "The script follows a similar pattern for both nhiyawwin and English, with slight modifications for each.", "Since nhiyawwin can be written with different types of diacritics used to represent the same information in SRO (e.g., a long vowel marked with a macron or a circumflex), we converted all accents to circumflex to maintain consistency within the corpus.", "A different choice could have easily been made.", "Because each community may have a different preference, we have included a script that can be modified so that the corpus can be re-standardized according to a specific community's preferences.", "All text was converted to lowercase.", "The only punctuation the script does not remove is periods, exclamation marks, question marks, colons, commas, apostrophes, and single quotes.", "Each of these punctuation markers is represented as a single token by inserting a space before them.", "Hyphens are preprocessed differently from other punctuation.", "Because nhiyawwin and English use hyphens differently, we applied rules specific to each language.", "In English, hyphens were removed and replaced with a space because the words surrounding the hyphen could often stand alone (Table 6 shows before/after text-cleaning examples in both languages, e.g., the English sentence Some fish are big, but some are small. and its nhiyawwin counterpart).",
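The slash expansion just described, generating all gender alternatives and duplicating the nhiyawwin line, is mechanical. A minimal sketch for the S/he ... him/her pattern from Table 5 (the helper is illustrative; the authors handled each case manually, and the nhiyawwin word is spelled as rendered in this document, without diacritics):

```python
def expand_slash(english, cree):
    """Expand the 'S/he ... him/her' slash alternatives into separate
    English sentences, duplicating the nhiyawwin line for each."""
    pairs = []
    for she_he in ("She", "He"):
        variant = english.replace("S/he", she_he)
        for him_her in ("him", "her"):
            pairs.append((variant.replace("him/her", him_her), cree))
    return pairs

for en, cr in expand_slash("S/he was seen by him/her", "wpamikot"):
    print(en, "<->", cr)   # four English variants, one nhiyawwin line
```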
"There are no vowel combinations in nhiyawwin; however, combining morphemes can cause two vowels to border each other.", "To address this, some authors insert a hyphen and some insert an h.", "The justification for the latter is that the transition in speaking these vowels is not harsh, and an h indicates a softer transition.", "We chose to follow the h-joiner standard.", "Consequently, the hyphen was replaced with the letter h when there was a vowel (i.e., a, i, o, â, î, ô, ê) on both sides of the hyphen.", "In all other cases, the hyphen was removed.", "Any other remaining markers were removed, including ellipses, double quotes, and numbers.", "See Table 6 for a text-cleaning example.", "Now that we have described how the corpus was created, we need to discuss ethical considerations around the creation and use of such resources.", "The process of creating language technologies for any community of speakers should be guided by the goals and interests of the respective community.", "Natural language processing (NLP) research should directly involve the language communities for which the technologies are being designed, as it will directly impact the speakers of the language.", "Further, the process of constructing these technologies should be clear to the community so there is an understanding of the data required for the model and how it will be used.", "For example, communities may wish to see language technologies such as text-to-speech to honor an oral tradition.", "However, these systems require an underlying model trained on corresponding audio and text for the language, which may or may not be in accordance with a community's wishes.", "The release of this corpus itself is not an invitation to make Indigenous language models and technologies independently and without consultation.", "As discussed by Pine and Turin (2017), successful Indigenous language revitalization projects must be grounded in local understandings of impact and success, rooted in the lived experiences and aspirations of Indigenous communities.", "An important consideration when developing language technologies using corpora and language models is the nature of the language used to train those models.", "For example, language models trained on Internet texts (e.g., GPT-3) have been subject to scrutiny following the revelation of racist and generally offensive outputs (Floridi and Chiriatti, 2020).", "Those who use the developed nhiyawwin corpus should note the potential for problematic outcomes when the data is used to support certain types of language technologies.", "This potential comes from the inclusion of biblical texts.", "While biblical texts are widely used for tasks such as machine translation (Mohler and Mihalcea, 2008), they could advance the harmful legacies of Christianity-related efforts and government policies that used religion to control and harm Indigenous groups (Bradford and Horton, 2016).", "The translation of bibles into local Indigenous languages was a means of furthering colonization (Pine and Turin, 2017).", "In addition, there are certain bible passages within our corpus that may be considered violent or aggressive in nature, e.g., May sinners be destroyed from the earth... may the wicked be no more (Psalm 1).",
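Putting the script's rules together, accents normalized to circumflex, lowercasing, selected punctuation kept as standalone tokens, and the language-specific hyphen handling described at the start of this passage, yields roughly the following sketch of the nhiyawwin branch (the English branch replaces hyphens with spaces instead). The macron-to-circumflex mapping is an assumption about which diacritic variants occur; the repository's actual script should be preferred.

```python
import re

KEEP = ".!?:,'"                                 # punctuation kept as tokens
ACCENTS = str.maketrans("āīōē", "âîôê")         # assumed macron -> circumflex map
VOWELS = "aioâîôê"

def clean_sro(line):
    line = line.lower().translate(ACCENTS)
    # A hyphen between two vowels becomes the 'h' joiner; others are dropped.
    line = re.sub(rf"(?<=[{VOWELS}])-(?=[{VOWELS}])", "h", line)
    line = line.replace("-", "")
    # Separate the kept punctuation with a preceding space; drop the rest.
    line = re.sub(rf"([{re.escape(KEEP)}])", r" \1", line)
    line = re.sub(rf"[^\w\s{re.escape(KEEP)}]|\d", "", line)
    return " ".join(line.split())

print(clean_sro("nî-astam, kâ-pimohtêt!"))  # -> "nîhastam , kâpimohtêt !"
```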
"This kind of text, paired with the history between the church and Indigenous peoples, should be used with caution, especially when designing language technologies that produce language (e.g., machine translation).", "Given the manual effort involved in aligning texts and automatically extracting text from varied sources, it is possible for there to be mistakes or inconsistencies.", "This should be taken into consideration when using the corpus.", "We also welcome edits and contributions.", "Beyond the above considerations, each of the choices that we made during data cleaning has the potential to have normative effects on the language.", "Some may view norming and standardization as a benefit (Mager et al., 2018; United Nations, 2019).", "However, it also risks the loss of language variety that is often valued by community members.", "Consequently, we include our data cleaning scripts within the repository so that others may adapt them and transform the data into the version of SRO or syllabics that meets their needs.", "The corpus is already being used to support community needs as part of a broader project for developing language learning technologies and technologies to support language use.", "Within this context, corpus materials are being used to help people learn nhiyawwin.", "Materials are also being used to develop language models that support tasks that community members who are learning nhiyawwin would like supported.", "We briefly discuss these ongoing activities to demonstrate the utility of the corpus.", "As part of developing language-learning technologies, several teachers of nhiyawwin who work in and come from different nhiyawwin-speaking communities have joined our group.", "These teachers provide guidance on how to teach the language and help us to develop curricula and teaching materials.", "Upon listening to the recordings in the corpus, one of the teachers was struck by the richness of the language and the thematic content of the personal stories that Elders told.", "As a result of this experience with the corpus materials, she decided to work with those recordings to develop learning materials.", "She started by identifying the relevant cultural themes and values that were conveyed through the recorded stories.", "She then developed lesson plans around those recordings, the thematic and cultural content, and the grammatical structures used within the stories.", "This resulted in up to four lessons per recording.", "She created worksheets that allow students to practice the grammatical concepts she decided to add to her course.", "She also developed read-along activities.", "To do this, she had to convert the recordings from .m4a to .mp3 so that they could be played using technologies that are provided in her classroom, which demonstrates the potential barriers that file formats can introduce.", "Building on her work, we have developed interactive online learning activities using her newly created worksheets.", "These interactive learning activities provide students with feedback and have been integrated into a computer assisted language learning (CALL) system.", "In addition to the interactive worksheet activities, we have been developing a read-along activity as part of this CALL system.", "This read-along activity specifically uses the shadowing approach (Kadota, 2019), where a learner must read along while keeping pace with the audio.", "This approach helps to develop oral fluency among learners, which is a goal that many learners of nhiyawwin and their teachers have set.",
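The .m4a-to-.mp3 conversion mentioned above can be scripted rather than done by hand. A minimal sketch using the pydub library (an assumed tool choice, not the teacher's actual method; it requires ffmpeg to be installed, and the filename is hypothetical):

```python
from pydub import AudioSegment  # pip install pydub; requires ffmpeg on the path

recording = AudioSegment.from_file("elder_story.m4a", format="m4a")
recording.export("elder_story.mp3", format="mp3")
```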
"Since we are using the same karaoke-like approach that this teacher added to her classroom, we need to align the text with the audio.", "So, we are currently testing methods for supporting the automation of this alignment.", "As the above case illustrates, the corpus materials can be used to develop and expand teaching materials.", "As reported by collaborating teachers, these materials have also influenced how teachers approach their students and courses.", "One teacher decided to start teaching certain aspects of the language, such as the transitive animate verb paradigm, sooner.", "Before listening to the stories from Elders, she would only teach the transitive animate paradigm to more advanced students.", "She thinks it is not taught in many settings because of its inherent complexity.", "Listening to the stories helped her realize what a central part it was of fluent speakers' speech.", "This realization came after analyzing the recorded stories.", "Upon reflection, she recognized that the adults in her life would use it when speaking to her as a child.", "Consequently, she now teaches it to young children with the expectation that they will gain knowledge and familiarity with this paradigm, even though they are unlikely to produce language using verbs in the transitive animate form soon after they learn it.", "She expects that they will start using the transitive animate paradigm once they are older and more fluent.", "Beyond supporting the creation of materials and activities for use in person or online, the corpus has helped to identify gaps in existing materials.", "As part of preparing accompanying learning materials for students, language teachers often decompose new vocabulary items into their constituent morphemes because this helps students to learn the language and build upon their existing knowledge when they encounter new words (Wagner et al., 2007).", "One of the words that helped this teacher identify a gap in existing language support resources was 'intopakwanikamik'.", "As part of preparing instructional materials for her students, she wanted to provide a formal definition of the 'into' prefix.", "However, 'into' was not present in any of the dictionaries she had access to.", "As a result, she plans to take this word and others like it to a meeting with Elders so that she can formally document the deeper cultural and semantic connotations of the words and prefixes that are in our corpus and not documented elsewhere.", "Text prediction is a language technology that many people use daily without noticing it.", "Many rely on it when typing on their phones to compose an email or text message.", "They also use it to help them fill in forms.", "This language technology may be taken for granted in high-resource languages.", "The absence of support tools like these for nhiyawwin speakers has been noted, and learners of nhiyawwin have expressed a desire for similar types of support (Lothian et al., 2019).", "The nhiyawwin language has a rich morphology, where words are often composed of several morphemes.", "Therefore, we chose to support text prediction at the morpheme level for nhiyawwin rather than at the word level, which is how predictions are usually made for English and French.", "Text prediction is a subtask of one of the projects that is being run out of the National Research Council Canada.", "This project aims to create software to assist Indigenous communities in preserving their languages and extending their use (Kuhn et al., 2020).", "The tasks they are working on have been derived from community needs and
performed in collaboration with communities via the empowerment paradigm.", "A predictive text feature was enabled in the Keyman (https://keyman.com/) keyboard software for those who wish to implement the model in a desired language when using the keyboard.", "However, Kuhn et al.", "(2020) note the predictive model is based on unigrams, since there is often not enough language data available to create more complex models based on longer sequences of text.", "To extend the work by Kuhn et al. (2020), we built n-gram models using the present corpus.", "These models consider what was typed previously when predicting text.", "To allow the model to learn sequences of morphemes, we first had to prepare the data so that it could be used to train such a model.", "We used an FST (Arppe et al., 2014-2019) to divide words in our corpus into their constituent morphemes.", "The corpus contains 3,650 unique morphemes and 45,220 morphemes in total.", "Figure 1 shows the distribution of the number of morphemes found in a single word.", "The corpus was divided into 90% for the training set and 10% for the development set.", "We used KenLM (Heafield, 2011) to train n-gram models on the sequences of morphemes within a word.", "Hyper-parameter tuning was then performed by training several models with different values of n, in the range of 2 to 7.", "We considered the model with the lowest average perplexity on the development set as the best model.", "Although the models with different values of n performed similarly, the best performing model was the 5-gram model, with an average perplexity on the development set of 133.12 (SD = 242.05).", "The COVID-19 pandemic has brought about informational materials translated into several languages in an attempt to reach as many members of the public as possible with general health guidance around this issue.", "Usually these pamphlets contain a small amount of text and are shared as PDF files.", "We selected 2 of the longer nhiyawwin texts from the provincial health ministry, Alberta Health Services, and Health Canada as testing material.", "The 5-gram model achieved an average perplexity of 181.75 (SD = 325.08) on the test set.", "The training, development, and test set characteristics are shown in Table 7.", "This corpus can be used to support several lines of future work.", "An immediate next direction would be further supporting the development of nhiyawwin learning materials using the corpus.", "For example, creating additional read-along activities and other game-based learning activities.", "SoundHunters is one such game that aims to improve learner phonological awareness (Lothian et al., 2020).", "The frequency statistics of different sounds, syllables, and words could be used to select learning materials for use in this and other games.", "The corpus could also be used to provide additional content.", "Another avenue would be applying the corpus to support the further creation of NLP technologies for nhiyawwin.", "As mentioned, predictive text models were created for nhiyawwin because this type of language technology is both desired and can be supported through the corpus.", "To determine if these models are helpful for nhiyawwin speakers when typing, we will perform user studies.", "From these studies, we aim to learn if the predictive models support text entry in a timely way and whether people perceive them to be useful.",
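The n-gram training and order selection described earlier in this passage can be reproduced approximately as follows. The sketch assumes KenLM's lmplz command-line tool is on the path and that morpheme-segmented words are stored one per line in train.morphs and dev.morphs (hypothetical filenames); --discount_fallback helps lmplz cope with small corpora.

```python
import subprocess
import kenlm  # Python wrapper for KenLM (Heafield, 2011)

best = None
for n in range(2, 8):
    arpa = f"morph.{n}.arpa"
    # Train one model per order on whitespace-separated morpheme sequences.
    with open("train.morphs") as fin, open(arpa, "w") as fout:
        subprocess.run(["lmplz", "-o", str(n), "--discount_fallback"],
                       stdin=fin, stdout=fout, check=True)
    model = kenlm.Model(arpa)
    dev = [line.strip() for line in open("dev.morphs") if line.strip()]
    ppl = sum(model.perplexity(s) for s in dev) / len(dev)
    if best is None or ppl < best[1]:
        best = (n, ppl)
print("best order:", best)
```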
"We will collect perceptual data and feedback from potential users after they have completed several text-entry tasks through the developed predictive-text system.", "We will use the same measures that are commonly employed to determine the performance of new text-entry techniques.", "These measures include response time, error rates, and keystrokes per character (Soukoreff and MacKenzie, 2003).", "We will also analyze how often predictions are used and the ranking of the prediction selected.", "With this information, we can determine if the predictive text model meets a community's needs and preferences.", "It is simply not enough to rely on model performance metrics without obtaining feedback from potential users.", "We recognize that by preprocessing SRO text, we have enabled easier use of this writing system for developing language technologies compared to syllabics.", "Future work should create a similar pipeline for syllabics that aligns with language rules used by communities, so that it can receive the same status and attention in the development of language technologies.", "This work contributes a collection of nhiyawwin resources that have been cleaned, processed, and shared for creating language technologies.", "Care was taken to collect, align, and preprocess the material so it could be used by others.", "It is hoped that sharing these resources, along with the documentation of how they have been prepared, will support language preservation and revitalization efforts.", "The utility of this corpus was shown via its community use in teaching nhiyawwin and by building language models to enable the creation of language technologies desired by speakers.", "This preliminary and on-going work demonstrates the value of the developed corpus for this low-resource language.", "Through these efforts in developing the corpus, we hope to pave the way for the future creation of language technologies for and by nhiyawwin speakers.", "We thank the many high school and summer students who helped to collect corpus materials.", "Anaka Sparrow, Kelly Shih, Divya Prasad, Sabrina Lou, and Adya Dutt helped to collect and align Cree text materials.", "Ronan Sandoval helped create the web scraper which allowed for more text collection.", "This project was supported, in part, by funding from the National Research Council Canada (NRCC), Natural Sciences and Engineering Research Council of Canada (NSERC), and Social Sciences and Humanities Research Council (SSHRC)." ]
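Of the text-entry measures cited above, keystrokes per character (KSPC) is the simplest: total key presses divided by the length of the produced text. A toy illustration:

```python
def kspc(keystrokes, produced_text):
    """Keystrokes per character; values below 1.0 indicate that
    predictions saved key presses."""
    return keystrokes / max(len(produced_text), 1)

# 12 key presses (including one prediction selection) produced 18 characters.
print(round(kspc(12, "predictive typing!"), 3))  # -> 0.667
```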
[ "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "method", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "other", "other", "other", "other" ]
[ "Since language models are used to model a wide variety of languages, it is natural to ask whether the neural architectures used for the task have inductive biases towards modeling particular types of languages.", "Investigation of these biases has proved complicated due to the many variables that appear in the experimental setup.", "Languages vary in many typological dimensions, and it is difficult to single out one or two to investigate without the others acting as confounders.", "We propose a novel method for investigating the inductive biases of language models using artificial languages.", "These languages are constructed to allow us to create parallel corpora across languages that differ only in the typological feature being investigated, such as word order.", "We then use them to train and test language models.", "This constitutes a fully controlled causal framework, and demonstrates how grammar engineering can serve as a useful tool for analyzing neural models.", "Using this method, we find that commonly used neural architectures exhibit different inductive biases: LSTMs display little preference with respect to word ordering, while transformers display a clear preference for some orderings over others.", "Further, we find that neither the inductive bias of the LSTM nor that of the transformer appears to reflect any tendencies that we see in attested natural languages.", "Modern neural architectures used for language modeling, e.g. Transformer-based language models (Vaswani et al., 2017) and language models based on long-short term memory (LSTM) (Hochreiter and Schmidhuber, 1997; Sundermeyer et al., 2012), are intrinsically black boxes.", "This makes it difficult to understand whether their structure leads to an inductive bias which results in certain types of language being easier to learn and model.", "To make this point more plainly, we cannot easily conclude 22.5 25.0 27.5 30.0 32.5 35.0 37.5 40.0 Average Perplexity 0.0 0.2 0.4 0.6 0.8 D e n s i t y Transformer LSTM Figure 1: Distribution of average perplexities achieved by transformerand LSTM-based language models on our artificial languages with varying word order.", "much about whether an LSTM language model will perform better on SVO or SOV languages by simply examining its structure.", "Moreover, satisfactorily investigating the inductive bias of neural models has the potential to yield useful insight into how they work.", "In this work, we explore whether neural language models exhibit biases towards certain types of languages in a novel causal framework through the use of artificial languages .", "One of the key problems involved in investigating the effect of typological features on language model performance is the difficulty in isolating only the features being investigated, without influ-ence from other features of the languages being investigated or the data being used.", "For example, if one were to compare language model performance on English, an SVO language, and Japanese, an SOV language, it would be difficult to directly attribute differences in performance to the difference in word ordering alone.", "This is because English and Japanese also differ in many other typological dimensions, such as how subjects are marked, the extent of subjectverb agreement and use of post-positions or prepositions, which could contribute to the difference in performance.", "Indeed, recent correlational studies have failed to find an effect between language model performance and typological features (Cotterell et al., 2018; 
Mielke et al., 2019).", "Moreover, the sentences used for training and testing may differ in content, style or information density, which could further contribute to differences in performance.", "Thus, we offer a study investigating the inductive biases of language models through the construction of artificial languages.", "Our approach involves creating small context-free grammars resembling subsets of attested languages, which we then use to train and evaluate language models.", "In an approach inspired by Chomsky's (1981) framework of principles and parameters, we imbue our grammars with switches that indicate how to permute the ordering of the non-terminals in a given production.", "Through generating grammars with all possible combinations of these switches, we can create artificial languages of differing typological profiles.", "This experimental paradigm allows us to conduct carefully controlled studies by varying only the typological parameters of interest, enabling us to make causal claims.", "Using our method, we investigate inductive biases related to the head-directionality of several constructions.", "We find that LSTM-based architectures show little bias towards any particular ordering, achieving similar average perplexities on all grammar variations tested.", "This contradicts recent findings by Ravfogel et al. (2019), who found that LSTMs have a preference for SVO word order.", "Conversely, we find that the performance of transformer-based architectures varies significantly across our artificial languages; this is visualized in Figure 1. This indicates that some combinations of the switches result in languages with word orders that are harder for the transformer to model than others.", "Our analysis suggests that neither the performance of the transformer-based architectures nor that of the LSTM-based architectures reflects any known tendencies in attested natural languages, with the best performance being achieved on languages with the rarely attested OVS sentence ordering.", "Importantly, our method exposes that transformer-based language models and LSTM-based language models have vastly different inductive biases, a result that has not been clearly stated in the NLP literature.", "Artificial languages have previously been used to investigate the abilities of neural architectures with respect to specific phenomena, such as their ability to acquire hierarchical generalizations (McCoy et al., 2018) and whether they can use systematic composition skills to make generalizations (Lake and Baroni, 2018).", "Bowman et al. (2015) also used artificial languages to investigate the ability of LSTMs to learn compositional structure, and compared their ability to that of tree-structured models.", "The work most closely related to ours is that of Ravfogel et al. (2019).", "Taking methodological inspiration from Wang and Eisner (2016), they create artificial versions of English with modified word order and case systems, including a version with object-verb agreement.", "They use the task of predicting the number of the subject and object of a missing verb to examine language model performance across these variations.", "They find that the models perform better on this task for the language with SVO word order.", "What they leave unchanged in their experiment, however, is the original English ordering within the constituents, e.g. 
the adjective-noun ordering in a noun phrase.", "However, constituent order correlates typologically with the ordering of other grammatical constituents (Greenberg, 1963), and this could lead to unwarranted preferences for the original English ordering.", "Our work addresses this problem by using fully artificial languages rather than modifying English sentences.", "This allows for our experiment to be more controlled by eliminating possible confounders.", "Other work conducted on the topic of inductive biases of language models has tended to focus on correlational studies investigating the relationship between language model performance and typological features extracted from the World Atlas of Language Structures (WALS; Dryer and Haspelmath, 2013); these studies have only found negative results (Cotterell et al., 2018; Mielke et al., 2019).", "Since this work looked exclusively at the features of attested natural languages, it is difficult to control for the multiple typological dimensions along which any two natural languages differ.", "Further, given the large number of typological features exhibited among the world's languages, there are simply not enough attested languages to make strong correlational claims.", "Mielke et al. (2019) ultimately concluded with a negative result; this negative result, in part, motivates our study.", "This is also why carefully controlled studies of inductive bias require artificial languages.", "Choosing languages to investigate the inductive bias of a language model requires a trade-off between the experiment being realistic and being controlled.", "Using attested natural languages gives us the most realistic representation of natural language and all its complexities, but this also reduces the level of control and makes it difficult to disentangle the various typological variables that differ between languages.", "Indeed, this was the conclusion of Mielke et al. (2019).", "Work such as Ravfogel et al. 
(2019) finds a middle ground by using artificial languages which have been modified from English.", "This makes the languages less natural but more controlled, though it maximizes neither property.", "In our experiments, we have chosen to maximize the level of control.", "This means that our grammars are simple and do not necessarily cover all possible constructions that one would expect to see in a natural language.", "However, our reward for this sacrifice is that we can precisely control and understand how any two languages tested differ from one another.", "We argue that this provides a good base for the exploration of inductive bias, as when differences are observed under these conditions we may now make a causal claim about their origin.", "In future work, the base grammars could be changed and extended as much as necessary to test additional hypotheses.", "A context-free grammar (CFG) is a quadruple $(N, S, \Sigma, R)$ where $N$ is a set of non-terminals, $S \in N$ is a distinguished start non-terminal, $\Sigma$ is an alphabet, and $R$ is a set of production rules.", "An element $r \in R$ takes the form $N \rightarrow \alpha$ where $\alpha \in (N \cup \Sigma)^*$.", "A CFG defines a subset of $\Sigma^*$.", "Probabilistic context-free grammars (PCFGs) are a probabilistic generalization of CFGs.", "Rather than simply defining a subset of $\Sigma^*$, a PCFG gives us a probability distribution over $\Sigma^*$, where the structure of the grammar gives us the structural zeros of the distribution.", "Given a PCFG, we can take samples from it in order to generate sentences.", "We set out to construct a set of PCFGs to expose the inductive bias of neural language models.", "These grammars are parametrized by several switches, which determine the ordering of constituents within the grammar.", "The switches used are described in more detail in Section 3.3.", "Figure 2(c) gives an example sentence from Grammar 111111: strovokicizeda sa povicateda serds sub fusbenders rel povify me ob sub.", "In this base PCFG, the rules which are affected by the toggling of each switch are marked.", "From this, sentences are sampled.", "On generation, each production in these sentences is marked with the switch it is associated with.", "We then work through every combination of switches, replicating this same set of generated sentences and reversing productions as required by the switches, to produce multiple parallel corpora, identical in their content up to a reordering of constituents (a toy sketch of this corpus-generation procedure appears after this section).", "The choice of which permutation is on or off is arbitrary; in this case, off switches correspond to head-final orderings.", "This experimental set-up allows us to ensure that sentences in the corpus for each of our artificial languages differ only in the configuration of the switches.", "In this way we can be confident in attributing any differences in performance to a causal difference in these switches rather than any differences caused by confounders, e.g. content, style or complexity of the sentences.", "Now we describe the construction of the PCFG with which we experiment in this work.", "Example sentences from several of our generated languages are shown in Figure 2. 
The base grammar and the scripts for sampling from it and generating corpora for all switch configurations will be released at https://github.com/rycolab/artificial-languages.", "The Alphabet $\Sigma$.", "Open-class words were taken from a list of phonotactically plausible English pseudowords (Kharkwal, 2014).", "These pseudowords included verbs, nouns and adjectives.", "We inflected the nouns manually for English plurality (adding s or es) depending on what English phonotactics requires.", "We conjugated the verbs for present and past tense, again using the rules of English.", "Additional morphological markers that are not present in English, e.g. subject and object markers and an additional marker to denote a plural past tense verb form, were obtained by randomly sampling two-letter slices from the list of phonotactically plausible pseudowords.", "Pronouns and prepositions were also obtained in this fashion.", "The Non-Terminals $N$.", "Our grammar has a single distinguished start symbol $S$.", "It describes verb phrases (VP), containing transitive and intransitive verbs, as well as verbs that take a sentential complement (complementizers are denoted Comp).", "Nouns are marked as being objects or subjects using a particle (denoted Obj or Subj).", "Verbs in our grammar have two tenses (past and present).", "Noun phrases (NP), including those modified by adjectives (Adj), relative clauses (where relativizers are denoted Rel) and prepositional phrases (PP), are described in our grammar.", "The Production Rules $R$.", "Our production rules $R$ cover several common productions seen in natural language.", "We list the production rules which are subject to switching in our experiment in Table 1.", "Modeling Morphological Agreement. Our grammar models a simple form of morphological agreement: verbs agree with their subjects in number (singular or plural).", "This introduces an element of long-term dependencies into our languages: if a language model is to correctly predict a verb form, it must carry information about the number of the subject.", "In order to enforce this agreement in our grammar, non-terminals are subscripted with their number (where applicable).", "Assigning Probabilities.", "Weights given to each production were chosen manually through experimentation.", "Some principles for choosing weights for a grammar in this manner are described by Eisner and Smith (2008).", "An automated method of assigning weights could be explored in future work.", "Our end goal is to construct a grammar parameterized by a binary vector of $K$ switches.", "We denote such a vector of switches $b \in \{0,1\}^K$.", "Toggling an individual switch in the grammar reverses the order of the right-hand sides of a set of production rules.", "For example, the switch that we term the $S$ switch reverses the order of the production $S \rightarrow NP\ VP$ to create $S \rightarrow VP\ NP$.", "$2^K$ different grammars are possible from $K$ binary switches.", "In the following paragraphs, we describe each of the switches we consider in this work.", "Details of all switches are shown in Table 1.", "Position of verb phrase in sentence ($S$ switch). If the switch has a value of 0, we use the rule $S \rightarrow NP\ VP$, with the subject preceding the verb phrase.", "If the switch has a value of 1, the rule becomes $S \rightarrow VP\ NP$.", "This order is rare among attested natural languages, but can be seen in VOS languages such as Malagasy and OVS languages such as Hixkaryana.", "Position of verb in verb phrase ($VP$ switch).", "This switch determines whether a direct object precedes or follows its verb.", "If the switch has a value of 0, we use the head-final order, with the object preceding the verb.", "This is seen in languages such 
as Japanese and Turkish.", "If the switch has a value of 1, the head-initial order is used, with the object following the verb.", "This is seen in languages such as English and Chinese.", "This switch, in combination with the $S$ switch, determines the overall ordering of subject, object and verb within a sentence.", "If the values of these switches are $(0, 0)$, the language will have SOV word order, like Japanese and Turkish.", "If they are $(1, 1)$, the language will have VOS order, which is rare but can be seen in languages such as Malagasy.", "SVO languages such as English correspond to $(0, 1)$.", "$(1, 0)$ corresponds to OVS order, which is attested in only a very small number of human languages.", "Position of complementizer in sentential complement ($Comp$ switch).", "This switch determines whether a complementizer begins or ends a sentential complement.", "If the switch has a value of 0, the complementizer appears in head-final position, at the end of the complement.", "This is the order seen in Japanese.", "If the switch has a value of 1, the complementizer appears in head-initial position, at the beginning of the complement.", "This is the order seen in English.", "Position of preposition in prepositional phrase ($PP$ switch). This switch determines the position of the adposition within a prepositional phrase.", "If the switch has a value of 0, the prepositional phrase precedes the noun it modifies, and the prepositional phrase ends with a preposition, in head-final order.", "This order is seen in Japanese.", "If the switch has a value of 1, the prepositional phrase follows the noun it modifies, and the preposition begins the prepositional phrase, in head-initial order.", "This order is seen in English.", "Position of adjective in noun phrase ($NP$ switch).", "This switch determines whether an adjective appears before or after the noun it modifies.", "If the switch is 0, the adjective precedes the noun (as in English and Japanese) and if it is 1, the adjective follows the noun (as in Spanish and Irish).", "Position of relative clause ($Rel$ switch).", "This switch determines the position of a relative clause with respect to the noun it modifies.", "If the switch has a value of 0, a relative clause is followed by a relativizer and then the noun it modifies.", "This order is seen in Japanese.", "If the switch has a value of 1, the noun being modified appears first, followed by a relativizer and the clause.", "This order is seen in French and English.", "The unmarked word order of some attested languages can be approximately identified with particular switch vectors.", "For example, standard English order corresponds approximately to $(0, 1, 1, 1, 0, 1)$, Japanese to $(0, 0, 0, 0, 0, 0)$ and Spanish to $(0, 1, 1, 1, 1, 1)$.", "This is demonstrated in Table 2. We note that our configurations cannot account for all possible word orders seen in attested languages (VSO languages are not represented, for example), but constitute a subset of possible orders.", "This is, of course, a simplification, since word order within a natural language can follow more complex rules, or allow for flexibility.", "From this point on, grammars will be referred to by their configuration of switches, sans brackets, e.g. Grammar 011101.", "Architectures and Data. In order to compare inductive biases across architectures, two neural architectures were tested: transformers and LSTMs.", "We used the implementations available as part of Fairseq (Ott et al., 2019).", "Our base grammar has $K = 6$ switches, i.e. 
six binary choice points as described in Section 3.3.", "This results in $2^6 = 64$ possible grammars.", "For each of these grammars we generated 100,000 sentences, which were divided into 10 splits of 10,000.", "10,000 sentences may sound like a relatively small number, but we note that our artificial languages are simple with small vocabularies, so we consider this number to be sufficient.", "The sentences generated for each grammar differed only in the designated choice points, i.e. in the ordering of their constituents.", "This meant that each sentence appeared in an equivalent form in each grammar.", "As such, for each sentence, we can compare the perplexity of the 64 variants of the sentence as calculated by language models trained on the corresponding grammars.", "Each split of 10,000 sentences was divided into an 80–10–10 train–dev–test split.", "Equivalent sentences across grammars were assured to be in the equivalent splits for each grammar, so train, dev and test sets across grammars contained the same sentences up to reordering of constituents.", "Procedure. We trained both a transformer-based and an LSTM-based language model on each train split, and the models were evaluated on the test split.", "This procedure resulted in 10 language models per architecture for each possible grammar, each of which was evaluated on 1,000 sentences in their respective test set.", "The perplexity achieved on these test sets was averaged across the 10 splits, to give the average perplexity for that grammar.", "This approach helps to account for the variability between individual training runs.", "The average perplexity on the test set was measured for each grammar.", "This measures how well a language model explains the held-out test set.", "The lower the perplexity, the better the language model fits the held-out data.", "The average perplexities achieved across all grammars by the transformer- and LSTM-based models are shown in Figure 3.", "Error bars are omitted in Figure 3, but across grammars the error on each measurement is generally between 0.25 and 0.5.", "5.2 Mixed-Effects Modeling We use a linear mixed-effects model to investigate the effects of each choice point in the grammar.", "This allows us to model the effect of each switch in the grammar, and first-order interaction terms between them, on the perplexity of a sentence, while controlling for the fact that perplexities for parallel sentences across grammars are related (by using a random intercept per sentence grouping).", "This model is explained in detail below.", "Assume we have $N$ paired sentences from each of our $2^K$ grammars.", "Let $L \in \mathbb{R}_{\geq 0}^{N \times 2^K}$ be a non-negative real matrix of the perplexities obtained for every test sentence across every grammar.", "Specifically, we have that $L_{nk}$ is the perplexity for the $n$-th sentence under the $k$-th grammar.", "Furthermore, let $S \in \{0,1\}^{2^K \times \left(\frac{K(K-1)}{2}+K\right)}$ be the binary matrix containing the configuration of switches and the $\frac{K(K-1)}{2}+K$ switch–switch interactions for each of the $2^K$ grammars in contrast coding (Wu, 2009).", "Thus, we have that the column vector $S_k$ is a binary vector of length $\frac{K(K-1)}{2}+K$.", "Let $\beta \in \mathbb{R}^{\frac{K(K-1)}{2}+K}$ be a vector of real coefficients to be estimated, describing the effect of each switch and their interactions.", "Let $u_n \sim \mathcal{N}(0, \sigma^2_{\text{dif.}})$ 
be a sentence-specific difficulty term (a random effect) and let $\epsilon \sim \mathcal{N}(0, \sigma^2)$ be a sentence–grammar-specific noise term.", "Now, we model an individual perplexity $L_{nk}$, which corresponds to the $n$-th sentence and the $k$-th grammar, as follows: $L_{nk} = \beta^\top S_k + u_n + \epsilon$ (1).", "Importantly, we draw one $u_n$ for each unique sentence.", "It is in this sense that $u_n$ acts as a term for modeling sentence difficulty.", "We may write eq. (1) as $L_{nk} \sim \mathcal{N}(\beta^\top S_k, \sigma^2_{\text{dif.}} + \sigma^2)$ (2), which reveals that it is no more than a simple Gaussian model with tied parameters.", "We estimate $\beta$, $\sigma^2_{\text{dif.}}$ and $\sigma^2$ through maximum-likelihood estimation, which, in Gaussian models, is equivalent to least-squares estimation (a minimal fitting sketch appears after this section).", "A positive coefficient $\beta_j$ for a given switch means that models perform worse with head-initial ordering for that switch, while a negative coefficient means the opposite.", "Since the fixed effects were input using contrast coding, the interaction terms in our model deal with the effects of two constituents sharing head-directionality.", "A positive coefficient for an interaction means that the models perform worse when they share head-directionality, and a negative coefficient means the opposite.", "Head-directionality is commonly correlated between sentence constituents in attested natural languages, so if the biases of these architectures reflected human languages, we would expect most interaction terms to be negative.", "The coefficients obtained for the transformers are shown in Figure 4a.", "Those for the LSTMs are shown in Figure 4b.", "Differences Between Architectures. It is clear from Figure 3 that the transformer- and LSTM-based models do not show the same inductive biases with respect to the switches we investigated.", "Across all possible configurations of the switches, LSTMs achieve very similar average perplexities, suggesting that they have little preference for any particular set of constituent orderings.", "In contrast, the average perplexities achieved by the transformers vary considerably between grammars.", "This demonstrates clearly that the two models exhibit distinctly different preferences with regard to the ordering of words within a sentence.", "Further, the clear contrast between the coefficients obtained by the mixed-effects models for transformers and LSTMs (shown in Figure 4a and Figure 4b, respectively) demonstrates a stark difference between the two models.", "None of the switches investigated, or their first-order interactions, appear to have a substantial effect on the scores obtained in the case of the LSTM-based models, whereas the transformer-based models are clearly affected to a much greater degree by the configuration of these switches.", "Given that these two architectures are both commonly used for similar tasks, such a difference in their inductive biases is noteworthy.", "Correlated Switches. Figure 4a shows the coefficients obtained by the mixed-effects model employed to investigate the effects of the switches on performance for the transformer-based models.", "Figure 5: The prevalence of word orders across languages (Dryer, 2013), plotted with the average perplexities achieved on each of these groups of grammars by transformer- and LSTM-based models.", "The diagonal values (for 
single switches) are all negative coefficients, which indicates that model performance is better when these have head-final ordering.", "Off-diagonal values are the coefficients obtained for the interaction terms between two switches.", "A positive value here indicates that when these two switches have the same value (either both head-initial or both head-final), the performance of the model is worse.", "A negative value means that when the two switches have the same value, the performance is better.", "Most of the off-diagonal elements have small values, with a few exceptions.", "The coefficients of the cross terms between the $S$ and $VP$ switches and the $S$ and $Comp$ switches are larger negative values, which indicates that when these constituents share their head-directionality the performance of the transformer-based models is better.", "The coefficients of the cross terms between the $VP$ and $Comp$, $VP$ and $Rel$, and $NP$ and $Rel$ switches are larger positive values, indicating that the transformers perform worse when these constituents share head-directionality.", "Generally, attested natural languages tend to exhibit a tendency towards one head-directionality, but the transformer does not seem to have inductive biases that reflect this.", "The corresponding coefficients for the LSTM-based models, shown in Figure 4b, are all small, further demonstrating that the LSTMs are largely agnostic to word ordering.", "Tendencies in Attested Natural Languages. We wish to consider the question of whether the biases of these models are in any way reflective of word order tendencies that we see across attested natural languages.", "Not all word orders are equally common among natural languages, and it is interesting to consider whether the word orders that these models are able to model more successfully are those which are more commonly seen in natural language.", "Some have speculated that the skew of word orders in human languages could possibly be reflective of human cognitive biases (Culbertson et al., 2012, 2019), so it would be interesting to see to what extent the inductive biases of these models reflect this skew.", "Since LSTMs appear to show no preference for any word order over the others, they are clearly not reflective of attested tendencies in word order.", "To attempt to answer this question for the transformers, we begin by comparing the performance of the models on subsets of grammars with the prevalence of similar languages among humans.", "In Figure 5, the grammars are grouped by how they order the verb, object and subject of a sentence, and the average perplexities achieved by the language models on each of these groups are shown.", "On the same figure, we display the estimated prevalence of these orderings among the world's languages (Dryer, 2013).", "It is clear that these two things are not correlated, with the transformer performing similarly on SOV languages, the most common among the world's languages, and OVS languages, which are rarely attested.", "This shows that the bias exhibited by transformers does not reflect tendencies among attested languages.", "A further indication of this is the lack of a strong preference for switches sharing head-directionality, as shown in Figure 4a.", "In human languages, the headedness of constituents is often correlated (Greenberg, 1963).", "We would expect to see this through negative coefficients for interaction terms in the mixed-effects model for constituents whose orders commonly correlate.", "However, we do not observe this for all 
correlations.", "For example, we would expect the PP switch to show a strong preference for shared head-directionality with other switches, which we do not observe.", "We propose a novel methodology for the investigation of the inductive bias of language models using the technique of creating carefully controlled artificial languages.", "This approach allows for the elimination of differences in corpora between languages and means that typological variation between languages can be restricted exclusively to the typological features being investigated.", "We use this methodology to investigate the inductive bias of two neural architectures which are commonly used for this task: LSTMs and transformers.", "We found that these two models have starkly different inductive biases with respect to word order, with the LSTM showing little variation in performance across word order, while the performance of the transformer varied significantly across artificial languages.", "We thank Simone Teufel for providing feedback on an early draft.", "The authors foresee no ethical concerns with the research presented in this paper." ]
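As a companion to the corpus-generation procedure above (the toy sketch referenced earlier), the following illustrates the switch mechanism: one derivation is sampled from a small weighted CFG whose rules are tagged with switches, and parallel renderings of the same sentence are produced by reversing the right-hand sides of the flagged rules. The toy rules, weights, and pseudowords are illustrative assumptions, not the paper's actual base grammar.

```python
import random

random.seed(0)

# Toy weighted CFG: each rule is (rhs, weight, switch); switch=None means the
# rule is unaffected by any switch.
RULES = {
    "S":   [(["NP", "VP"], 1.0, "S")],
    "NP":  [(["Adj", "N"], 0.4, "NP"), (["N"], 0.6, None)],
    "VP":  [(["NP", "V"], 0.5, "VP"), (["V"], 0.5, None)],
    "N":   [(["serd"], 0.5, None), (["fusbender"], 0.5, None)],
    "V":   [(["povify"], 1.0, None)],
    "Adj": [(["strovokicized"], 1.0, None)],
}

def sample(symbol):
    """Sample a derivation, returning a tree of (switch, children) nodes."""
    if symbol not in RULES:
        return symbol  # terminal
    options = RULES[symbol]
    rhs, _, switch = random.choices(options, weights=[w for _, w, _ in options])[0]
    return (switch, [sample(s) for s in rhs])

def linearize(tree, config):
    """Render one derivation under a switch configuration (0 = head-final)."""
    if isinstance(tree, str):
        return [tree]
    switch, children = tree
    if switch is not None and config.get(switch, 0) == 1:
        children = list(reversed(children))  # reverse the RHS of flagged rules
    return [tok for child in children for tok in linearize(child, config)]

tree = sample("S")  # one shared derivation ...
for config in ({"S": 0, "VP": 0, "NP": 0}, {"S": 0, "VP": 1, "NP": 1}):
    print(config, " ".join(linearize(tree, config)))  # ... rendered per grammar
```

Because the derivation is sampled once and only the rendering changes, the outputs are parallel sentences identical in content up to constituent order, which is exactly the property the paper relies on for its causal comparisons.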
[ "abstain", "abstain", "abstain", "objective", "objective", "method", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "objective", "method", "objective", "result", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "objective", "abstain", "objective", "result", "other", "method" ]
[ "Template filling is generally tackled by a pipeline of two separate supervised systems one for role-filler extraction and another for template/event recognition.", "Since pipelines consider events in isolation, they can suffer from error propagation.", "We introduce a framework based on end-to-end generative transformers for this task (i.e., GTT ).", "It naturally models the dependence between entities both within a single event and across the multiple events described in a document.", "Experiments demonstrate that this framework substantially outperforms pipeline-based approaches, and other neural end-to-end baselines that do not model between-event dependencies.", "We further show that our framework specifically improves performance on documents containing multiple events.", "The classic template-filling task in information extraction involves extracting event-based templates from documents (Grishman and Sundheim, 1996; Jurafsky and Martin, 2009; Grishman, 2019).", "It is usually tackled by a pipeline of two separate systems, one for role-filler entity extraction extracting event-relevant entities (e.g., noun phrases) from the document; another for template/event recognition assigning each of the candidate role-fillers to the event(s)/template(s) that it participates in and identifying the type of each event/template.", "Simplifications of the task (Patwardhan and Riloff, 2009; Huang and Riloff, 2011, 2012; Du et al., 2020) assume that there is one generic template and focus only on role-filler entity extraction.", "However, real documents often describe multiple events (Figure 1).", "From the example, we can observe that between-event dependencies are important (e.g., a single organization can participate in multiple events) and can span the entire document (e.g., event-specific targets can be distant from their Several attacks were carried out in La Paz last night, one in front of government house ... The self-styled \" Zarate armed forces \" sent simultaneous written messages to the media, calling on the people to oppose ... The first attack occurred at 22:30 in front of the economic ministry, just before President Paz Zamora concluded his message to ... Roberto Barbery, has reported that dynamite stickswere hurled from a car. The second attack occurred at 23:35, just after the cabinet members had left government housewhere they had listened to the presidential message. A bombwas placed outside government house in the parking lot that is used by cabinet ministers. The police ... As of 5:00 today, people found that an old shack on the estate was set ablaze, Event 2 Template Bombing Perpetrator Indiv. Perpetrator Org Zarate armed forces Physical Target government house Weapon bomb Victim Event 1 Template Attack Perpetrator Indiv. Perpetrator Org Zarate armed forces Physical Target economic ministry Weapon dynamite sticks Victim Event 3 Template Arson Perpetrator Indiv. Perpetrator Org Zarate armed forces Physical Target old shack Weapon Victim Figure 1: The template-filling task. Role-filler entity extraction is shown on the left, and template recognition is shown on the right. Our system performs both of these document-level tasks with a single end-to-end model. 
shared perpetrator organization).", "Alternative end-to-end event extraction models, even those incorporating pretrained LM representations, only model events in isolation (Wadden et al., 2019; Du and Cardie, 2020), and are mainly evaluated on ACE-style (Doddington et al., 2004) event extraction from single sentences (Yang and Mitchell, 2016; Lin et al., 2020).", "To naturally model between-event dependencies across a document for template filling, we propose a framework called GTT based on generative transformers (Figure 2).", "To the best of our knowledge, this is the first attempt to build an end-to-end learning framework for this task.", "We build our framework upon GRIT (Du et al., 2020), which tackles role-filler entity extraction (REE), but not template/event recognition.", "GRIT performs REE by generating a sequence of role-filler entities, one role at a time in a prescribed manner.", "For the template-filling setting, we first extend the GRIT approach to include tokens representing event types (e.g., attack, bombing) as part of the input sequence.", "Figure 2: The GTT model with generative transformers. The source token sequence consists of the event type tokens ([CLS] Attack, Bombing, Arson, Kidnapping, ... [SEP_T]) followed by the document tokens (Several attacks were carried out in La Paz last night ... [SEP]); the target token sequence pairs each event type with its role-filler entity extractions ([CLS] Attack <T1 REEs> [SEP_T] Bombing <T2 REEs> [SEP_T] ...).", "We further modify the decoder to attend to the event type tokens, allowing it to distinguish among events and associate event types to each role-filler entity that it generates.", "We evaluate our model on the MUC-4 (1992) template filling task.", "Empirically, our model substantially outperforms both pipeline-based and end-to-end baseline models.", "In our analysis, we demonstrate that our model is better at capturing between-event dependencies, which are critical for documents that describe multiple events.", "Code and evaluation scripts for the project are open-sourced at https://github.com/xinyadu/gtt.", "Assume we are given a set of m event types (T 1 , ..., T m ).", "Each event template contains a set of k roles (r 1 , ..., r k ).", "For a document consisting of n words x 1 , x 2 , ..., x n , the system is required to extract d templates, where d ≥ 0 (d is not given as input).", "Each template consists of k + 1 slots: the first slot represents the event type (one of T 1 , ..., T m ).", "Each of the remaining k slots corresponds to an event role (one of r 1 , ..., r k ).", "The system is required to fill in entities for the corresponding role, which may be filled in as null.", "Our framework is illustrated in Figure 2.", "First we transform the template filling task into a sequence generation problem.", "Then, we train the base model on the source-target sequence pairs, and apply the model to generate the sequence; finally the sequence is transformed back to structured templates.", "We first transform the task's input and output data into specialized source and target sequence pair encodings.", "As shown in Figure 2 and below, the source sequence consists of the words of the document (x 1 , x 2 , ..., x n ) prepended with the general set of tokens representing all event/template types (T 1 , ..., T m ), as well as a separator token denoting the boundary between event templates ([SEP_T]).", "We also add a classification token ([CLS]) and another separator token ([SEP]) at the beginning and end of this source sequence.", "[CLS] works as the start token; [SEP] denotes the boundary between REEs.", "[CLS] T 1 , ..., T m [SEP_T] x 1 , x 2 , 
..., x n [SEP] The target sequence consists of the concatenation of template extractions, separated by the separator token ([SEP_T]).", "For template i, the subsequence consists of its event type T (i) and its role-filler entity extractions <Role-filler Entities> (i): [CLS] T (1), <Role-filler Entities> (1) [SEP_T] T (2), <Role-filler Entities> (2) ... [SEP_T] T (i), <Role-filler Entities> (i) ...", "For the <Role-filler Entities> of template i, following Du et al. (2020), we use the concatenation of target entity extractions for each role, separated by the separator token ([SEP]).", "Each entity is represented with its first mention's beginning (b) and end (e) tokens: e 11b , e 11e , ... (a toy linearization sketch appears after this section).", "3.2 Base Model and Decoding Constraints Next we describe the base model as well as the special decoding constraints for template filling.", "BERT as Encoder and Decoder Our model extends the GRIT model for REE (Du et al., 2020).", "The base setup utilizes one BERT (Devlin et al., 2019) model for processing both the source and target token embeddings.", "To distinguish the encoder/decoder representations, it uses a partial causal attention mask on the decoder side (Du et al., 2020).", "The joint sequence of source token embeddings $(a_0, a_1, \ldots, a_{l_{src}})$ and target token embeddings $(b_0, b_1, \ldots, b_{l_{tgt}})$ is passed through BERT to obtain their contextualized representations: $\hat{a}_0, \hat{a}_1, \ldots, \hat{a}_{l_{src}}, \hat{b}_0, \ldots, \hat{b}_{l_{tgt}} = \text{BERT}(a_0, a_1, \ldots, a_{l_{src}}, b_0, \ldots, b_{l_{tgt}})$.", "Pointer Decoding For the final decoder layer, we replace word prediction with a simple pointer selection mechanism.", "For target time step $t$, we first calculate the dot products between $\hat{b}_t$ and the source representations: $c_0, c_1, \ldots, c_{l_{src}} = \hat{b}_t \cdot \hat{a}_0, \hat{b}_t \cdot \hat{a}_1, \ldots, \hat{b}_t \cdot \hat{a}_{l_{src}}$. Then we apply softmax to $c_0, c_1, \ldots, c_{l_{src}}$ to obtain the probabilities of pointing to each source token (which may be a word or an event type); test prediction is done with greedy decoding (a minimal sketch of one decoding step appears after this section).", "At each time step, argmax is applied to find the source token which has the highest probability.", "We also add several special decoding constraints for template filling: (1) a downweighting factor (0.01) applied to the probability of generating [SEP] and [SEP_T], in order to calibrate recall; (2) a decoding cutoff: stop when the model ends the k-th template (k = the maximum number of events in one document); (3) a constraint to ensure that the pointers for the start and end tokens of one entity are in order.", "We conduct evaluations on the MUC-4 dataset (1992).", "MUC-4 consists of 1,700 documents with associated templates.", "We follow prior work for the data split: 1,300 documents for training, 200 documents (TST1+TST2) as the development set and 200 documents (TST3+TST4) as the test set.", "We use the metric for template filling (Chinchor, 1992) and, as in previous work, map predicted templates to gold templates during evaluation so as to optimize scores.", "We follow content-based mapping restrictions, i.e., the event type of the template is considered essential for the mapping to occur.", "The content-based mapping restrictions were added to MUC-4 to prevent fortuitous mappings which occurred in MUC-3 (Chinchor, 1992).", "A missing template's slots are scored as missing; a spurious template's slots are scored as spurious.", "Note that in our work, since we do not extract the set fillers other than the event/template type, they do not affect the performance.", "As an ablation baseline, we employ a pipeline, GRIT-PIPELINE, that first uses the GRIT model for role-filler entity extraction, and then assigns event types to each of the entities as a multi-label classification problem.", "We 
assign types by transforming the problem to multi-class classification (MCC) (Spolaor et al., 2013).", "As there are 6 event types (i.e., kidnapping, attack, bombing, robbery, arson, forced work stoppage) in MUC-4, we use $2^6$ labels for the MCC problem.", "We also compare to end-to-end baselines that do not model between-event dependencies.", "DyGIE++ (Wadden et al., 2019) is a span-enumeration-based extractive model for information extraction.", "The model enumerates all the possible spans in the document and passes each representation through a classifier layer to predict whether the span represents a certain role-filler entity and what the role is.", "SEQTAGGING is a BERT-based sequence tagging model for extracting the role-filler entities.", "A role-filler entity can appear in templates of different event types (e.g., Zarate armed forces appears in both the attack and the bombing events).", "For both baselines, the prediction goal is multi-class classification.", "More specifically, we adapt the DyGIE++ output layer implementation to first predict the role-filler entity's role class, and then predict its event classes conditioned on the entity's role.", "Note that Chambers (2013) and Cheung et al. (2013) propose to do event schema induction with unsupervised learning.", "Given their unsupervised nature, their empirical performance is worse than that of supervised models (Patwardhan and Riloff, 2009).", "Thus we do not add these as comparisons.", "Results on the full test set are shown in Table 2.", "We report the micro-average performance (precision, recall and F1).", "We see that our framework substantially outperforms the baseline extraction models in precision, recall and F1, with approximately a 4% F1 increase over the end-to-end baselines.", "It outperforms the GRIT-PIPELINE system by around 3% F1 (the improvement is significant at p < 0.
05).", "Per-slot F1 scores are reported in Table 1.", "The results demonstrate that our framework more often predicts the correct event type, performs better on PERPIND and PERPORG, and achieves slightly worse performance than GRIT-PIPELINE on roles that appear later in the template (i.e., TARGET and VICTIM).", "We also found that DyGIE++ performs better on TARGET, mainly due to its high precision in role assignment for spans.", "Between-Event Dependencies We also show results (Table 3) on the subset of documents that contain more than one gold event.", "We see that the F1 scores of all systems drop substantially compared to the single/no-event case, demonstrating the difficulty of the task.", "When compared to the Full Test setting in Table 2, the baselines all increase in precision and drop substantially in recall, while our approach's precision and recall drop only a little.", "This change is understandable, as the baseline systems are more conservative and tend to predict fewer templates.", "As the number of gold templates increases, the fewer predicted templates have a better chance of being matched, but recall drops as well.", "How performance changes as E increases In Figure 3, we see that when the number of gold events in the document is smaller (E = 1, 2), our approach performs on par with the pipeline-based and DyGIE++ baselines.", "However, as E grows larger, the baselines' F1 drops significantly (e.g., by over 10% as E grows from 2 to 3).", "Qualitative Case Analysis Consider the input document (doc id TST3-MUC4-0080), which contains an attack and a bombing template.", "In the gold annotations, Farabundo Marti National Liberation Front acts as PERPORG in both events.", "Our model correctly extracts the two events and the PERPORG in each, while DyGIE++ only predicts the attack event with its PERPORG role entity correctly.", "Although GRIT-PIPELINE gets both events correct, it fails to extract this PERPORG entity for the second event.", "We revisit the classic NLP problem of template filling and propose an end-to-end learning framework called GTT.", "By modeling relations between events, our approach better captures dependencies across the document and performs substantially better on multi-event documents.", "We thank the anonymous reviewers for helpful feedback and suggestions.", "The work of XD and AMR was supported by NSF CAREER 2037519; that of XD and CC was supported in part by DARPA LwLL Grant FA8750-19-2-0039." ]
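The target-sequence encoding of Section 3.1 above (the toy linearization sketch referenced earlier) can be illustrated with a small helper that turns structured templates into a GTT-style target string, with [SEP_T] between templates and [SEP] between role segments. Note this sketch renders entities as strings for readability, whereas the actual model emits pointers to each first mention's beginning and end tokens; the role inventory and helper names are illustrative assumptions.

```python
# Ordered role inventory of MUC-4 templates (the event type fills the first slot).
ROLES = ["PerpInd", "PerpOrg", "Target", "Victim", "Weapon"]

def linearize_templates(templates):
    """Encode templates as a GTT-style target sequence (a sketch).

    Roles are identified by position: the entities for the k-th role appear in
    the k-th [SEP]-delimited segment of a template, so empty roles still emit
    a separator. Entities appear here as strings; the real model instead
    points to the begin/end tokens of each entity's first mention.
    """
    parts = []
    for t in templates:
        segment = [t["type"]]
        for role in ROLES:
            segment.append(" ".join(t.get(role, [])))  # one entity per role here
        parts.append(" [SEP] ".join(segment))
    return "[CLS] " + " [SEP_T] ".join(parts)

templates = [
    {"type": "Attack",  "PerpOrg": ["Zarate armed forces"],
     "Target": ["economic ministry"], "Weapon": ["dynamite sticks"]},
    {"type": "Bombing", "PerpOrg": ["Zarate armed forces"],
     "Target": ["government house"], "Weapon": ["bomb"]},
]
print(linearize_templates(templates))
```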
[ "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "objective", "objective", "method", "abstain", "other", "abstain", "method", "method", "result", "objective", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "method", "method", "other", "other", "other", "other", "other", "objective", "other", "other", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "other", "other" ]
[ "Amir Pouran Ben Veyseh 1 , Franck Dernoncourt 2 , Dejing Dou 1 and Thien Huu Nguyen 1,3 1 Department of Computer and Information Science, University of Oregon, Eugene, Oregon, USA 2 Adobe Research, San Jose, CA, USA 3 VinAI Research, Hanoi, Vietnam { apouranb, dou, thien } @cs.uoregon.edu [email protected]", "Abstract This paper studies the task of Relation Extraction (RE) that aims to identify the semantic relations between two entity mentions in text.", "In the deep learning models for RE, it has been beneficial to incorporate the syntactic structures from the dependency trees of the input sentences.", "In such models, the dependency trees are often used to directly structure the network architectures or to obtain the dependency relations between the word pairs to inject the syntactic information into the models via multi-task learning.", "The major problems with these approaches are the lack of generalization beyond the syntactic structures in the training data or the failure to capture the syntactic importance of the words for RE.", "In order to overcome these issues, we propose a novel deep learning model for RE that uses the dependency trees to extract the syntax-based importance scores for the words, serving as a tree representation to introduce syntactic information into the models with greater generalization.", "In particular, we leverage Ordered-Neuron Long-Short Term Memory Networks (ON-LSTM) to infer the model-based importance scores for RE for every word in the sentences that are then regulated to be consistent with the syntax-based scores to enable syntactic information injection.", "We perform extensive experiments to demonstrate the effectiveness of the proposed method, leading to the state-of-the-art performance on three RE benchmark datasets.", "One of the fundamental tasks in Information Extraction (IE) is Relation Extraction (RE) where the goal is to find the semantic relationships between two entity mentions in text.", "Due to its importance, RE has been studied extensively in the literature.", "The recent studies on RE has focused on deep learning to develop methods to automatically induce sentence representations from data (Zeng et al., 2014; Nguyen and Grishman, 2015a; Verga et al., 2018).", "A notable insight in these recent studies is that the syntactic trees of the input sentences (i.e., the dependency trees) can provide effective information for the deep learning models, leading to the state-of-the-art performance for RE recently (Xu et al., 2015; Guo et al., 2019; Tran et al., 2019).", "In particular, the previous deep learning models for RE has mostly exploited the syntactic trees to structure the network architectures according to the word connections presented in the trees (e.g., performing Graph Convolutional Neural Networks (GCN) over the dependency trees (Zhang et al., 2018)).", "Unfortunately, these models might not be able to generalize well as the tree structures of the training data might significantly differ from those in the test data (i.e., the models are overfit to the syntactic structures in the training data).", "For instance, in the cross-domain setting for RE, the domains for the training data and test data are dissimilar, often leading to a mismatch between the syntactic structures of the training data and test data.", "In order to overcome this issue, the overall strategy is to obtain a more general representation of the syntactic trees that can be used to inject the syntactic information into the deep learning models to achieve better 
generalization for RE.", "A general tree representation for RE is presented in (Veyseh et al., 2019), where the dependency trees are broken down into their sets of dependency relations (i.e., the edges) between the words in the sentences (called the edge-based representation).", "These dependency relations are then used in a multitask learning framework for RE that simultaneously predicts both the relation between the two entity mentions and the dependency connections between the pairs of words in the input sentences.", "Although the dependency connections might be less specific to the training data than the whole tree structures, the major limitation of the edge-based representation is that it only captures the pairwise (local) connections between the words and completely ignores the overall (global) importance of the words in the sentences for the RE problem.", "In particular, some words in a given sentence might involve more useful information for relation prediction in RE than the other words, and the dependency tree for this sentence can help to better identify those important words and assign higher importance scores for them (e.g., choosing the words along the shortest dependency paths between the two entity mentions).", "We expect that introducing such importance information for the words in the deep learning models might lead to improved performance for RE.", "Consequently, in this work, we propose to obtain an importance score for each word in the sentences from the dependency trees (called the syntax-based importance scores).", "These will serve as the general tree representation to incorporate the syntactic information into the deep learning models for RE.", "How can we employ the syntax-based importance scores in the deep learning models for RE?", "In this work, we first use the representation vectors for the words from the deep learning models to compute another importance score for each word (called the model-based importance scores).", "These model-based importance scores are expected to quantify the semantic information that a word contributes to successfully predicting the relationship between the input entity mentions.", "Afterward, we propose to inject the syntax-based importance scores into the deep learning models for RE by enforcing that the model-based importance scores are consistent with their syntactic counterparts (i.e., via the KL divergence).", "The motivation of the consistency enforcement is to promote the importance scores as the bridge through which the syntactic information can be transmitted to enrich the representation vectors in the deep learning models for RE.", "In order to implement this idea, we employ the Ordered-Neuron Long Short-Term Memory Networks (ON-LSTM) (Shen et al., 2019) to compute the model-based importance scores for the words in the sentences for RE.", "ON-LSTM extends the popular Long Short-Term Memory Networks (LSTM) by introducing two additional gates (i.e., the master forget and input gates) in the hidden vector computation.", "These new gates control how long each neuron in the hidden vectors should be activated across different time steps (words) in the sentence (i.e., higher-order neurons would be maintained for a longer time).", "Based on such controlled neurons, the model-based importance score for a word can be determined by the number of active neurons that the word possesses in the operation of ON-LSTM.", "To our knowledge, this is the first time ON-LSTM has been applied to RE in the literature.", "One of the issues in the original 
"One of the issues in the original ON-LSTM is that the master gates and the model-based importance score for each word are only conditioned on the word itself and the left context encoded in the previous hidden state.", "However, in order to infer the importance of a word in the overall sentence effectively, it is crucial to have a view over the entire sentence (i.e., including the context words on the right).", "To this end, instead of relying only on the current word, we propose to obtain an overall representation of the sentence that is used as the input to compute the master gates and the importance score for each word in the sentence.", "This would enrich the model-based importance scores with the context from the entire input sentences, potentially leading to improved RE performance of the model in this work.", "Finally, to further improve the representations learned by the deep learning models for RE, we introduce a new inductive bias to promote the similarity between the representation vectors for the overall sentences and the words along the shortest dependency paths between the two entity mentions.", "The intuition is that the relation between the two entity mentions of interest in a sentence for RE can be inferred from either the entire sentence or the shortest dependency path between the two entity mentions (due to the demonstrated ability of the shortest dependency path to capture the important context words for RE in the prior work (Bunescu and Mooney, 2005)).", "We thus expect that the representation vectors for the sentence and the dependency path should be similar (as both capture the semantic relation) and that explicitly exploiting such similarity can help the models to induce more effective representations for RE.", "Our extensive experiments on three benchmark datasets (i.e., ACE 2005, SPOUSE and SciERC) demonstrate the effectiveness of the proposed model for RE, leading to the state-of-the-art performance for these datasets.", "RE has been traditionally solved by the feature-based or kernel-based approaches (Zelenko et al., 2003; Zhou et al., 2005; Bunescu and Mooney, 2005; Sun et al., 2011; Chan and Roth, 2010; Nguyen and Grishman, 2014; Nguyen et al., 2015c).", "One of the issues in these approaches is the requirement for extensive feature or kernel engineering effort that hinders the generalization and applicability of the RE models.", "Recently, deep learning has been applied to address these problems for the traditional RE approaches, producing the state-of-the-art performance for RE.", "The typical network architectures for RE include the Convolutional Neural Networks (Zeng et al., 2014; Nguyen and Grishman, 2015a; dos Santos et al., 2015; Wang et al., 2016), Recurrent Neural Networks (Nguyen and Grishman, 2016; Zhou et al., 2016; Zhang et al., 2017; Nguyen et al., 2019a), and self-attention in Transformers (Verga et al., 2018).", "The syntactic information from the dependency trees has also been shown to be useful for the deep learning models for RE (Tai et al., 2015; Xu et al., 2015; Liu et al., 2015; Miwa and Bansal, 2016; Peng et al., 2017; Zhang et al., 2018; Guo et al., 2019; Tran et al., 2019; Song et al., 2019; Veyseh et al., 2019).", "However, these methods tend to generalize poorly to new syntactic structures due to the direct reliance on the syntactic trees (e.g., in different domains) or fail to exploit the syntax-based importance of the words for RE due to the sole focus on the edges of the dependency trees (Veyseh et al., 2019).",
"The RE problem can be formulated as a multi-class classification problem.", "Formally, given an input sentence W = w_1, w_2, ..., w_N where w_t is the t-th word in the sentence W of length N, and two entity mentions of interest at indexes s and o (1 ≤ s < o ≤ N), our goal is to predict the semantic relation between w_s and w_o in W.", "Similar to the previous work on deep learning for RE (Shi et al., 2018; Veyseh et al., 2019), we first transform each word w_t into a representation vector x_t using the concatenation of the three following vectors:", "(i) the pre-trained word embeddings of w_t,", "(ii) the position embedding vectors (to encode the relative distances of w_t to the two entity mentions of interest w_s and w_o (i.e., t − s and t − o)), and", "(iii) the entity type embeddings (i.e., the embeddings of the BIO labels for the words to capture the entity mentions present in W).", "This word-to-vector transformation converts the input sentence W into a sequence of representation vectors X = x_1, x_2, ..., x_N to be consumed by the subsequent neural computations of the proposed model.", "There are three major components in the RE model in this work, namely (1) the CEON-LSTM component (i.e., context-enriched ON-LSTM) to compute the model-based importance scores of the words w_t, (2) the syntax-model consistency component to enforce the similarity between the syntax-based and model-based importance scores, and (3) the similarity component between the representation vectors of the overall sentence and the shortest dependency path.", "The goal of the first component is to obtain a score for each word w_t that indicates the contextual importance of w_t with respect to the relation prediction between w_s and w_o in W.", "In this section, we first describe the ON-LSTM model used to achieve these importance scores (i.e., the model-based scores).", "A new model (called CEON-LSTM) that integrates the representation of the entire sentence into the cells of ON-LSTM will be presented afterward.",
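"To make this input encoding concrete, the following PyTorch-style sketch (our own illustration with assumed module names and dimensions, not the authors' released code) builds x_t by concatenating the three embedding types:",

```python
import torch
import torch.nn as nn

class REInputEncoder(nn.Module):
    """Builds x_t = [word embedding; position embeddings w.r.t. w_s, w_o; entity-type embedding]."""
    def __init__(self, vocab_size, n_entity_tags, word_dim=300, pos_dim=30, tag_dim=30, max_len=200):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)   # pre-trained (e.g., word2vec)
        # relative distances t - s and t - o are shifted by max_len to obtain non-negative indices
        self.pos_emb_s = nn.Embedding(2 * max_len, pos_dim)
        self.pos_emb_o = nn.Embedding(2 * max_len, pos_dim)
        self.tag_emb = nn.Embedding(n_entity_tags, tag_dim)  # BIO entity-type labels
        self.max_len = max_len

    def forward(self, word_ids, tag_ids, s, o):
        # word_ids, tag_ids: LongTensors of shape (N,); s, o: entity mention indexes
        t = torch.arange(word_ids.size(0))
        return torch.cat([
            self.word_emb(word_ids),
            self.pos_emb_s(t - s + self.max_len),  # encodes t - s
            self.pos_emb_o(t - o + self.max_len),  # encodes t - o
            self.tag_emb(tag_ids),
        ], dim=-1)  # X: (N, word_dim + 2 * pos_dim + tag_dim)
```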
"ON-LSTM: Long Short-Term Memory Networks (LSTM) (Hochreiter and Schmidhuber, 1997) have been widely used in Natural Language Processing (NLP) due to their natural mechanism to obtain abstract representations for a sequence of input vectors (Nguyen and Nguyen, 2018b, 2019).", "Given the input representation vector sequence X = x_1, x_2, ..., x_N, LSTM produces a sequence of hidden vectors H = h_1, h_2, ..., h_N using the following recurrent functions at the time step (word) w_t (assuming the zero vector for h_0):
f_t = σ(W_f x_t + U_f h_{t-1} + b_f)
i_t = σ(W_i x_t + U_i h_{t-1} + b_i)
o_t = σ(W_o x_t + U_o h_{t-1} + b_o)
ĉ_t = tanh(W_c x_t + U_c h_{t-1} + b_c)
c_t = f_t ∘ c_{t-1} + i_t ∘ ĉ_t
h_t = o_t ∘ tanh(c_t)    (1)
where f_t, i_t and o_t are called the forget, input and output gates (respectively), σ is the sigmoid function, and ∘ denotes element-wise multiplication.", "In order to compute the importance score for each word w_t, ON-LSTM introduces into the mechanism of LSTM two additional gates, i.e., the master forget gate f̃_t and the master input gate ĩ_t (Shen et al., 2019).", "These gates are computed and integrated into the LSTM cell as follows:
f̃_t = cummax(W_f̃ x_t + U_f̃ h_{t-1} + b_f̃)
ĩ_t = 1 − cummax(W_ĩ x_t + U_ĩ h_{t-1} + b_ĩ)
f̂_t = f̃_t ∘ (f_t ∘ ĩ_t + 1 − ĩ_t)
î_t = ĩ_t ∘ (i_t ∘ f̃_t + 1 − f̃_t)
c_t = f̂_t ∘ c_{t-1} + î_t ∘ ĉ_t    (2)
where cummax is an activation function defined as cummax(x) = cumsum(softmax(x)).", "The forget and input gates in LSTM (i.e., f_t and i_t) are different from the master forget and input gates in ON-LSTM (i.e., f̃_t and ĩ_t) as the gates in LSTM assume that the neurons/dimensions in their hidden vectors are equally important and that these neurons are active at every step (word) in the sentence.", "This is in contrast to the master gates in ON-LSTM that impose a hierarchy over the neurons in the hidden vectors and limit the activity of the neurons to only a portion of the words in the sentence (i.e., higher-ranking neurons would be active for more words in the sentence).", "Such hierarchy and activity limitation are achieved via the function cummax(x), which aggregates the softmax output of the input vector x along the dimensions.", "The output of cummax(x) can be seen as the expectation of some binary vector of the form (0, ..., 0, 1, ..., 1) (i.e., involving two consecutive segments: the 0's segment and the 1's segment).", "At one step, the 1's segment in the gate vectors represents the neurons that are activated at that step.", "In ON-LSTM, a word w_i is more contextually important than another word w_j if the master gates for w_i have more active neurons than those for w_j.", "Consequently, in order to compute the importance score for the word w_t, we can rely on the number of active neurons in the master gates, which can be estimated by the sum of the weights of the neurons in the master gates in ON-LSTM.", "Following (Shen et al., 2019), we employ the hidden vectors for the master forget gate in ON-LSTM to compute the importance scores for the words in this work.", "Specifically, let f̃_t = f̃_{t1}, f̃_{t2}, ..., f̃_{tD} be the weights for the neurons/dimensions of the master forget gate (i.e., D is the dimension of the gate vectors).", "The model-based importance score s^mod_t for the word w_t ∈ W is then obtained by: s^mod_t = 1 − Σ_{i=1..D} f̃_{ti}.", "For convenience, we also use H = h_1, h_2, ..., h_N to denote the hidden vectors returned from the application of ON-LSTM over the input representation vectors X.", "1 cumsum(u_1, u_2, ..., u_n) = (u'_1, u'_2, ..., u'_n) where u'_i = Σ_{j=1..i} u_j.",
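"As an illustration of how the master forget gate yields the model-based importance score, here is a small PyTorch-style sketch of the cummax activation and the score computation; it is a simplified reading of Equation 2 under our reconstruction above, not the reference implementation:",

```python
import torch
import torch.nn.functional as F

def cummax(x):
    """cummax(x) = cumsum(softmax(x)): a soft, monotone 0-to-1 ramp over the neurons."""
    return torch.cumsum(F.softmax(x, dim=-1), dim=-1)

def model_based_importance(master_forget_logits):
    """master_forget_logits: (N, D) pre-activations for the master forget gates
    of the N words. Returns s_mod with s_mod[t] = 1 - sum_i f_tilde[t, i]."""
    f_tilde = cummax(master_forget_logits)  # (N, D), entries in [0, 1]
    return 1.0 - f_tilde.sum(dim=-1)        # (N,) model-based importance scores
```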
"One limitation of the ON-LSTM model is that it only relies on the representation vector of the current word x_t and the hidden vector for the left context (encoded in h_{t-1}) to compute the master gate vectors, and thus the model-based importance score, for the word w_t.", "However, this score computation mechanism might not be sufficient for RE as the importance score for w_t might also depend on the context information on the right (e.g., the appearance of some word on the right might make w_t less important for the relation prediction between w_s and w_o).", "Consequently, in this work, we propose to first obtain a representation vector x'_t = g(x_1, x_2, ..., x_N) that has the context information about the entire sentence W (i.e., both the left and right context for the current word w_t).", "Afterward, x'_t will replace the input representation vector x_t in the computation for the master gates and importance score at step t of ON-LSTM (i.e., in the formulas for f̃_t and ĩ_t in Equation 2).", "In this way, the model-based importance score for w_t will be able to condition on the overall context in the input sentence.", "In this work, we obtain the representation vector x'_t for each step t of ON-LSTM based on the weighted sum of the transformed vectors of the input representation sequence x_1, x_2, ..., x_N: x'_t = Σ_i α_{ti} (W_x x_i + b_x).", "The weight α_{ti} for the term with x_i in this formula is computed by:
α_{ti} = exp((W_h h_{t-1} + b_h) · (W_x x_i + b_x)) / Σ_{j=1..N} exp((W_h h_{t-1} + b_h) · (W_x x_j + b_x))    (3)
where W_h, b_h, W_x and b_x are the learnable parameters.", "Note that in this formula, we use the ON-LSTM hidden vector h_{t-1} from the previous step as the query vector to compute the attention weight for each word.", "The rationale is to enrich the attention weights for the current step with the context information from the previous steps (i.e., encoded in h_{t-1}), leading to the contextualized input representation x'_t with richer information for the master gates and importance score computations in ON-LSTM.", "The proposed ON-LSTM with the enriched input vectors x'_t is called CEON-LSTM (i.e., Context-Enriched ON-LSTM) in this work.",
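"A compact sketch of this context-enrichment step (Equation 3) follows; it assumes unbatched inputs and hypothetical parameter names, and would be invoked once per time step inside the CEON-LSTM cell:",

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextEnricher(nn.Module):
    """Computes x'_t as an attention-weighted sum over the transformed sentence,
    using the previous ON-LSTM hidden state h_{t-1} as the query (Eq. 3)."""
    def __init__(self, input_dim, hidden_dim, attn_dim=200):
        super().__init__()
        self.W_h = nn.Linear(hidden_dim, attn_dim)  # query projection (W_h, b_h)
        self.W_x = nn.Linear(input_dim, attn_dim)   # key/value projection (W_x, b_x)

    def forward(self, h_prev, X):
        # h_prev: (hidden_dim,); X: (N, input_dim) input representation vectors
        values = self.W_x(X)                        # (N, attn_dim): W_x x_i + b_x
        query = self.W_h(h_prev)                    # (attn_dim,):   W_h h_{t-1} + b_h
        alpha = F.softmax(values @ query, dim=0)    # (N,) attention weights alpha_ti
        return alpha @ values                       # x'_t = sum_i alpha_ti (W_x x_i + b_x)
```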
"As mentioned in the introduction, the role of the model-based importance scores obtained from CEON-LSTM is to serve as the bridge to inject the information from the syntactic structures of W into the representation vectors of the deep learning models for RE.", "In particular, we first leverage the dependency tree of W to obtain another importance score s^syn_t for each word w_t ∈ W (i.e., the syntax-based importance score).", "Similar to the model-based scores, the syntax-based scores are expected to measure the contextual importance of w_t with respect to the relation prediction for w_s and w_o.", "Afterward, we introduce a constraint to encourage the consistency between the model-based and syntax-based importance scores (i.e., s^mod_t and s^syn_t) for the words via minimizing the KL divergence L_import between the normalized scores:
s̄^mod_1, ..., s̄^mod_N = softmax(s^mod_1, ..., s^mod_N)
s̄^syn_1, ..., s̄^syn_N = softmax(s^syn_1, ..., s^syn_N)
L_import = Σ_i s̄^mod_i log(s̄^mod_i / s̄^syn_i)    (4)", "The intuition is to exploit the consistency to supervise the model-based importance scores from the models with the syntax-based importance scores from the dependency trees.", "As the model-based importance scores are computed from the master gates with the active and inactive neurons in CEON-LSTM, this supervision allows the syntactic information to interfere directly with the internal computation/structure of the cells in CEON-LSTM, potentially generating representation vectors with better syntax-aware information for RE.", "To obtain the syntax-based importance scores, we take the motivation from the previous work on RE where the shortest dependency paths between the two entity mentions of interest have been shown to capture many important context words for RE.", "Specifically, for the sentence W, we first retrieve the shortest dependency path DP between the two entity mentions w_s and w_o, and the length T of the longest path between any pair of words in the dependency tree of W.", "The syntax-based importance score s^syn_t for the word w_t ∈ W is then computed as the difference between T and the length of the shortest path between w_t and some word in DP in the dependency tree (i.e., the words along DP will have the score of T).", "On the one hand, these syntax-based importance scores are able to capture the importance of the words that is customized for the relation prediction between w_s and w_o.", "This is better suited for RE than the direct use of the edges in the dependency trees in (Veyseh et al., 2019), which is agnostic to the entity mentions of interest and fails to encode the importance of the words for RE.", "On the other hand, the syntax-based importance scores s^syn_t represent a relaxed form of the original dependency tree that might have a better chance to generalize over different data and domains for RE than the prior work (i.e., the approaches that directly fit the models to the whole syntactic structures (Zhang et al., 2018) and run the risk of overfitting to the structures in the training data).",
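"The following sketch derives the syntax-based importance scores from a dependency parse represented as an undirected edge list, using networkx for the path computations; the function name and data layout are our own assumptions:",

```python
import networkx as nx

def syntax_importance_scores(edges, n_words, s, o):
    """edges: (head, dependent) pairs from the dependency parse of W (0-indexed).
    Returns s_syn[t] = T - dist(w_t, DP), where DP is the shortest dependency
    path between the entity mentions w_s and w_o, and T is the tree diameter."""
    tree = nx.Graph(edges)
    tree.add_nodes_from(range(n_words))
    dp = set(nx.shortest_path(tree, s, o))  # words along DP (they will score T)
    lengths = dict(nx.shortest_path_length(tree))
    T = max(l for d in lengths.values() for l in d.values())
    return [T - min(lengths[w][p] for p in dp) for w in range(n_words)]
```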
"In this component, we seek to further improve the representation vectors in the proposed deep learning model for RE by introducing a novel constraint to maximize the similarity between the representation vectors for the overall input sentence W and the words along the shortest dependency path DP (i.e., an inductive bias).", "The rationale for this bias is presented in the introduction.", "In order to implement this idea, we first obtain the representation vectors R_W and R_DP for the sentence W and the words along DP (respectively) by applying the max-pooling operation over the CEON-LSTM hidden vectors h_1, h_2, ..., h_N for the words in W and DP: R_W = max-pooling_{w_i ∈ W}{h_i} and R_DP = max-pooling_{w_i ∈ DP}{h_i}.", "In the next step, we promote the similarity between R_W and R_DP by explicitly minimizing their negative cosine similarity, i.e., adding the following term L_path into the overall loss function:
L_path = 1 − cos(R_W, R_DP)    (5)", "(We also tried the KL divergence and the mean square error for this constraint, but cosine similarity achieved better performance.)", "3.4 Prediction: Finally, in the prediction step, following the prior work (Veyseh et al., 2019), we employ the following vector V as the overall representation vector to predict the relation between w_s and w_o in W: V = [x_s, x_o, h_s, h_o, R_W].", "Note that V involves the information at different abstraction levels for W, i.e., the raw input level with x_s and x_o, the abstract representation level with h_s and h_o from CEON-LSTM, and the overall sentence vector R_W.", "In our model, V would be fed into a feed-forward neural network with a softmax layer in the end to estimate the probability distribution P(·|W, w_s, w_o) over the possible relations for W.", "The negative log-likelihood then serves as the loss function for the model: L_label = −log P(y|W, w_s, w_o) (y is the gold relation label for w_s and w_o in W).", "Eventually, the overall loss function of the model in this work is:
L = L_label + α L_import + β L_path    (6)
where α and β are trade-off parameters.", "The model is trained with shuffled mini-batching.",
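"Putting Sections 3.2-3.4 together, the overall objective of Equation 6 can be assembled as below; this is a schematic composition under our own naming conventions, with alpha and beta as the trade-off weights:",

```python
import torch.nn.functional as F

def overall_loss(logits, y, s_mod, s_syn, R_W, R_DP, alpha=1.0, beta=1.0):
    """L = L_label + alpha * L_import + beta * L_path (Eq. 6)."""
    l_label = F.cross_entropy(logits.unsqueeze(0), y.view(1))   # -log P(y | W, w_s, w_o)
    l_import = F.kl_div(F.log_softmax(s_syn, dim=-1),           # Eq. 4: KL(mod || syn)
                        F.softmax(s_mod, dim=-1), reduction="sum")
    l_path = 1.0 - F.cosine_similarity(R_W, R_DP, dim=0)        # Eq. 5
    return l_label + alpha * l_import + beta * l_path
```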
"We evaluate the models in this work using three benchmark datasets, i.e., ACE 2005, SPOUSE, and SciERC.", "For ACE 2005, similar to the previous work (Nguyen and Grishman, 2016; Fu et al., 2017; Shi et al., 2018; Veyseh et al., 2019), we use the dataset preprocessed and provided by (Yu et al., 2015) for compatible comparison.", "There are 6 different domains in this dataset (i.e., bc, bn, cts, nw, un, and wl), covering text from news, conversations and web blogs.", "Following the prior work, the union of the domains bn and nw (called news) is used as the training data (called the source domain); a half of the documents in bc is reserved for the development data, and the remainder (cts, wl and the other half of bc) serves as the test data (called the target domains).", "This data separation facilitates the evaluation of the cross-domain generalization of the models due to the domain difference of the training and test data.", "The SPOUSE dataset was recently introduced by (Hancock et al., 2018), involving 22,195 sentences for the training data, 2,796 sentences for the validation data, and 2,697 sentences for the test data.", "Each sentence in this dataset contains two marked person names (i.e., the entity mentions) and the goal is to identify whether the two people mentioned in the sentence are spouses.", "Finally, the SciERC dataset (Luan et al., 2018) annotates 500 scientific abstracts for the entity mentions along with the coreferences and relations between them.", "For RE, this dataset provides 3,219 sentences in the training data, 455 sentences in the validation data and 974 sentences in the test data.", "We fine-tune the hyper-parameters for the models in this work on the validation data of the ACE 2005 dataset.", "The best parameters suggested by this process include: 30 dimensions for the position embeddings and entity type embeddings, 200 hidden units for the CEON-LSTM model and all the other hidden vectors in the model (i.e., the hidden vectors in the final feed-forward neural network (with 2 layers) and the intermediate vectors in the weighted sum for x'_t), 1.0 for both loss trade-off parameters α and β, and 0.001 for the initial learning rate with the Adam optimizer.", "The batch size is set to 50.", "Finally, we use either the uncontextualized word2vec word embeddings (with 300 dimensions) or the hidden vectors in the last layer of the BERT base model (with 768 dimensions) (Devlin et al., 2019) to obtain the pre-trained word embeddings for the sentences.", "We find it better to keep the BERT parameters fixed in the experiments.", "Note that besides this section, we provide some additional analysis for the models in the Appendix.", "We first compare the proposed model (called CEON-LSTM) with the baselines on the popular ACE 2005 dataset.", "In particular, the following groups of RE models in the prior work on RE with the ACE 2005 dataset are chosen for comparison:", "(i) Feature-based models: These models hand-design linguistic features for RE, i.e., FCM, Hybrid FCM, LRFCM, and SVM (Yu et al., 2015; Hendrickx et al., 2010).", "(ii) Deep sequence-based models: These models employ deep learning architectures based on the sequential order of the words in the sentences for RE, i.e., log-linear, CNN, Bi-GRU, Forward GRU, Backward GRU (Nguyen and Grishman, 2016), and CNN+DANN (Fu et al., 2017).", "(iii) Deep structure-based models: These models use dependency trees either as the input features or as the graphs to structure the network architectures in the deep learning models.", "The state-of-the-art models of this type include: AGGCN (Attention Guided GCN) (Guo et al., 2019), SACNN (Segment-level Attention-based CNN) (Tran et al., 2019) and DRPC (the Dependency Relation Prediction and Control model) (Veyseh et al., 2019).", "DRPC has the best reported performance on ACE 2005.", "Note that we obtain the performance of these models on the considered datasets using the actual implementations released by the original papers.", "Most of the prior RE work on the ACE 2005 dataset uses the uncontextualized word embeddings (i.e., word2vec) for the initial word representation vectors.", "In order to achieve a fair comparison with the baselines, we first show the performance of the models (i.e., the F1 scores) on the ACE 2005 test datasets when word2vec is employed for the pre-trained word embeddings in Table 1.", "The first observation from the table is that the deep structure-based models (e.g., C-GCN, DRPC) are generally better than the deep sequence-based models (e.g., CNN, Bi-GRU) and the feature-based models with large performance gaps.", "This demonstrates the benefits of the syntactic structures that can provide useful information to improve the performance of the deep learning models for RE.", "We will thus focus on these deep structure-based models in the following experiments.", "Among all the models, we see that the proposed model CEON-LSTM is significantly better than all the baseline models over different test domains/datasets.", "In particular, CEON-LSTM is 1.38% and 3.1% better than DRPC and SACNN (respectively) on the average F1 scores over different test datasets.", "These performance improvements are significant with p < 0.01 and clearly demonstrate the effectiveness of the proposed CEON-LSTM model for RE.", "In order to further compare CEON-LSTM with the baselines, Table 2 presents the performance of the models when the words are represented by the contextualized word embeddings (i.e., BERT).",
"For this case, we also report the performance of the recent BERT-based model (i.e., Entity-Aware BERT (EA-BERT)) in (Wang et al., 2019) for RE on the ACE 2005 dataset.", "Comparing the models in Table 2 with their counterparts in Table 1, it is clear that the contextualized word embeddings can significantly improve the deep structure-based models for RE.", "More importantly, similar to the case with word2vec, we see that the proposed model CEON-LSTM still significantly outperforms all the baseline models with large performance gaps and p < 0.01, further testifying to the benefits of the CEON-LSTM model in this work.", "Finally, in order to demonstrate the generalization of the proposed model over the other datasets, we show the performance of the models on the two other datasets in this work (i.e., SPOUSE and SciERC) using either word2vec or BERT as the word embeddings in Table 3.", "The results clearly confirm the effectiveness of CEON-LSTM as it is significantly better than all the other models over different datasets and word embedding settings.", "The Effect of the Model Components: There are three major components in the proposed model: (1) the introduction of the overall sentence representation x'_t into the ON-LSTM cells (called SCG, Sentence Context for Gates), (2) the consistency constraint for the syntax-based and model-based importance scores (called SMC, Syntax-Model Consistency), and (3) the similarity constraint for the representation vectors of the overall sentence and the shortest dependency path (called SDPS, Sentence-Dependency Path Similarity).", "In order to evaluate the contribution of these components to the overall model CEON-LSTM, we incrementally remove these components from CEON-LSTM and evaluate the performance of the remaining model.", "Table 4 reports the performance of the models on the ACE 2005 development dataset.", "Table 4: Ablation study on the development set of ACE 2005.
System                   P      R      F1
CEON-LSTM (Full)         74.51  67.29  71.08
  − SCG                  74.00  66.98  70.45
  − SMC                  72.87  66.85  69.89
  − SDPS                 73.02  66.00  69.18
  − SCG − SMC            71.52  64.62  68.08
  − SCG − SDPS           70.33  64.22  67.17
  − SMC − SDPS           71.02  63.95  67.58
  − SCG − SMC − SDPS     70.51  63.01  66.98", "It is clear from the table that all the components are necessary for the proposed model as excluding any of them would hurt the performance significantly.", "It is also evident that removing more components results in larger performance drops, thus demonstrating the complementary nature of the three proposed components in this work.", "The Variants for CEON-LSTM: We study several variants of SCG, SMC, and SDPS in CEON-LSTM to demonstrate the effectiveness of the designed mechanisms.", "In particular, we consider the following alternatives for CEON-LSTM:", "(i) Bi-ON-LSTM: Instead of employing the attention-based representation vectors x'_t to capture the context of the entire input sentence for the model-based importance scores in SCG, we run two unidirectional ON-LSTM models (i.e., the forward and backward ON-LSTM) to compute the forward and backward importance scores for each word in W.", "The final model-based importance score for each word is then the average of the corresponding forward and backward scores.", "(ii) SA-ON-LSTM: In this method, instead of using the hidden vector h_{t-1} as the query vector to compute the attention weight α_{ti} in Equation 3 for SCG, we utilize the input representation vector x_t for w_t as the query vector (i.e., replace h_{t-1} with x_t in Equation 3).",
"Consequently, SA-ON-LSTM is basically a composed model where we first run the self-attention (SA) model (Vaswani et al., 2017) over X.", "The results are then fed into ON-LSTM to obtain the model-based importance scores s^mod_t.", "(iii) CE-LSTM: This variant aims to explore the effectiveness of ON-LSTM for our model.", "In CE-LSTM, we replace the ON-LSTM network with the usual LSTM model in CEON-LSTM.", "The SMC component is not included in this case as the LSTM model cannot infer the importance scores.", "(iv) EP-ON-LSTM: Before this work, the DRPC model in (Veyseh et al., 2019) had the state-of-the-art performance on ACE 2005.", "Both DRPC and CEON-LSTM apply a more general representation of the dependency trees in a deep learning model (i.e., avoiding the direct use of the original trees to improve the generalization).", "To illustrate the benefit of the importance score representation for SMC, EP-ON-LSTM replaces the importance score representation for the dependency trees in CEON-LSTM with the dependency edge representation in DRPC.", "In particular, we replace the term L_import in the overall loss function (i.e., Equation 6) with the dependency edge prediction loss (using the ON-LSTM hidden vectors) in DRPC for EP-ON-LSTM.", "(v) SP-CEON-LSTM: This model removes the SDPS component and includes the representation vector of the dependency path DP (i.e., R_DP) in the final representation V for relation prediction.", "We consider both retaining and excluding the sentence representation R_W in V in this case.", "This model seeks to show that the use of R_DP for the similarity encouragement with R_W is more effective than employing R_DP directly in V.", "Table 5 reports the performance of these CEON-LSTM variants on the ACE 2005 development dataset.", "Table 5: Models' performance on the development dataset of ACE 2005.
System                         P      R      F1
CEON-LSTM (proposed)           74.51  67.29  71.08
Bi-ON-LSTM                     72.65  67.17  69.28
SA-ON-LSTM                     73.21  67.31  70.13
CE-LSTM                        71.58  64.19  67.92
EP-ON-LSTM                     71.03  65.16  68.45
SP-CEON-LSTM (R_W in V)        73.58  66.92  70.13
SP-CEON-LSTM (R_W not in V)    72.94  65.21  69.51", "As we can see from the table, all the considered variants have significantly worse performance than CEON-LSTM (with p < 0.005).",
"This clearly helps to justify the designs of the components SCG, SMC and SDPS for CEON-LSTM in this work.", "Baseline for the Model-Based Importance Scores: One of the contributions in our work is to employ the gates in the cells of ON-LSTM to obtain the model-based importance scores that are then used to promote the consistency with the syntax-based importance scores (i.e., in the SMC component).", "In order to demonstrate the effectiveness of the master cell gates for obtaining the model-based importance scores, we evaluate a typical baseline where the model-based importance score s^mod_i for w_i ∈ W is computed directly from the hidden vector h_i of CEON-LSTM (i.e., by feeding h_i into a feed-forward neural network with a sigmoid activation function in the end).", "The model-based importance scores obtained in this way then replace the importance scores from the cell gates and are used in the SMC component of CEON-LSTM in the usual way (i.e., via the KL divergence in L_import).", "(Note that we tried alternatives to the KL divergence in L_import (i.e., the mean square error and the cosine similarity between the syntax-based and model-based importance scores), but the KL divergence produced the best results for both CEON-LSTM and HIS-CEON-LSTM on the development data.)", "The resulting model is called HIS-CEON-LSTM.", "Table 6 reports the performance of HIS-CEON-LSTM and the proposed model CEON-LSTM on the ACE 2005 development dataset.", "Table 6: Models' performance on the development dataset of ACE 2005.
System                 P      R      F1
CEON-LSTM (proposed)   74.51  67.29  71.08
HIS-CEON-LSTM          72.02  63.97  68.29", "It is clear from this table that the proposed model CEON-LSTM achieves significantly better performance than HIS-CEON-LSTM (with a large performance gap), thus testifying to the importance of the master gates for obtaining the model-based importance scores in CEON-LSTM.", "We introduce a new deep learning model for RE (i.e., CEON-LSTM) that features three major proposals.", "First, we represent the dependency trees via the syntax-based importance scores for the words in the input sentences for RE.", "Second, we propose to incorporate the overall sentence representation vectors into the cells of ON-LSTM, allowing it to compute the model-based importance scores more effectively.", "We also devise a novel mechanism to project the syntactic information into the computation of ON-LSTM via promoting the consistency between the syntax-based and model-based importance scores.", "Finally, we present a novel inductive bias for the deep learning models that exploits the similarity of the representation vectors for the whole input sentences and the shortest dependency paths between the two entity mentions for RE.", "Extensive experiments are conducted to demonstrate the benefits of the proposed model.", "We achieve the state-of-the-art performance on three datasets for RE.", "In the future, we plan to apply CEON-LSTM to other related NLP tasks (e.g., Event Extraction, Semantic Role Labeling) (Nguyen et al., 2016a; Nguyen and Grishman, 2018a).", "This research has been supported in part by Vingroup Innovation Foundation (VINIF) in project code VINIF.2019.DA18, the NSF grant CNS-1747798 to the IUCRC Center for Big Learning, and a gift from Adobe Research.", "This research is also based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via IARPA Contract No.
2019-19051600006 under the Better Extraction from Text Towards Enhanced Retrieval (BETTER) Program.", "The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, the Department of Defense, or the U.S. Government.", "The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.", "This document does not contain technology or technical data controlled under either the U.S. International Traffic in Arms Regulations or the U.S. Export Administration Regulations." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "abstain", "method", "objective", "abstain", "objective", "abstain", "method", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "result", "other", "method", "abstain", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "objective", "abstain", "result", "method", "other", "other", "other", "other", "other" ]
[ "While state-of-the-art neural network models continue to achieve lower perplexity scores on language modeling benchmarks, it remains unknown whether optimizing for broad-coverage predictive performance leads to human-like syntactic knowledge.", "Furthermore, existing work has not provided a clear picture about the model properties required to produce proper syntactic generalizations.", "We present a systematic evaluation of the syntactic knowledge of neural language models, testing 20 combinations of model types and data sizes on a set of 34 English-language syntactic test suites.", "We find substantial differences in syntactic generalization performance by model architecture, with sequential models underper-forming other architectures.", "Factorially manipulating model architecture and training dataset size (1M40M words), we find that variability in syntactic generalization performance is substantially greater by architecture than by dataset size for the corpora tested in our experiments.", "Our results also reveal a dissociation between perplexity and syntactic generalization performance.", "A growing body of work advocates that assessment of neural language models should include both information-theoretic metrics, such as perplexity, as well as targeted linguistic evaluation.", "Benchmarks such as GLUE (Wang et al., 2019a,b) have demonstrated that neural language models trained on naturalistic corpora for next-word prediction learn representations that can yield remarkable performance on many semantic tasks.", "Targeted syntactic evaluations have shown that these models also implicitly capture many syntactic generalizations , ranging from subjectverb agreement Materials and code can be found at https://github.", "to long-distance fillergap dependencies (Linzen et al., 2016; Marvin and Linzen, 2018; Futrell et al., 2018; Wilcox et al., 2019b).", "This paper aims to bring targeted evaluations of syntactic performance to scale, complementing similar developments in semantic evaluation (McCoy et al., 2019).", "Because the most widespread currency of evaluation for language models is perplexityhow well, on average, a model predicts a word in its context a primary focus of this paper is the relationship between a model's perplexity and its performance on targeted syntactic evaluations.", "As perplexity improves, can we expect more human-like syntactic generalization?", "How do training dataset size and model architecture jointly affect syntactic generalization?", "And what picture of models' syntactic generalization emerges when evaluation is brought to scale, across dozens of controlled syntactic tests?", "In this paper we offer initial answers to these questions, systematically assessing the syntactic generalization abilities of neural language models on 34 targeted test suites (33 adapted from previously published work, and 1 novel) covering a wide range of syntactic phenomena.", "Test suites are written using a standard format that allows for flexible predictions which more closely resemble those used in psycholinguistic studies, specifically allowing for predictions about interactions among multiple testing conditions.", "Performance on each test suite is reported as a Syntactic Generalization (SG) score.", "We group test suites into six syntactic circuits based on the linguistic representations needed to achieve high performance on each suite.", "We train four classes of neural models and one baseline n -gram model on four datasets derived from a newswire corpus, consisting of 1, 5, 
"While previous work has compared model architectures for a fixed dataset size (e.g. Wilcox et al., 2019b) and network sizes for a fixed architecture (e.g. van Schijndel et al., 2019), our controlled regime allows us to make an apples-to-apples comparison across model architectures on a range of sizes.", "In addition, we evaluate several off-the-shelf models which were trained on datasets ranging up to 2 billion tokens.", "Our results address the three questions posed above: First, for the range of model architectures and dataset sizes tested, we find a substantial dissociation between perplexity and SG score.", "Second, we find a larger effect of model inductive bias than training data size on SG score, a result that accords with van Schijndel et al. (2019).", "Models afforded explicit structural supervision during training outperform other models: One structurally supervised model is able to achieve the same SG scores as a purely sequence-based model trained on 100 times the number of tokens.", "Furthermore, several Transformer models achieve the same SG score as a Transformer trained on 200 times the amount of data.", "Third, we find that architectures have different relative advantages across types of syntactic tests, suggesting that the tested syntactic phenomena tap into different underlying processing capacities in the models.", "Standard language models are trained to predict the next token given a context of previous tokens.", "Language models are typically assessed by their perplexity, the inverse geometric mean of the joint probability of words w_1, ..., w_N in a held-out test corpus C:
PPL(C) = p(w_1, w_2, ..., w_N)^(−1/N)    (1)", "Models with improved perplexity have also been shown to better match various human behavioral measures, such as gaze duration during reading (Frank and Bod, 2011; Fossum and Levy, 2012; Goodkind and Bicknell, 2018; Wilcox et al., 2020).", "However, a broad-coverage metric such as perplexity may not be ideal for assessing human-like syntactic knowledge, for a variety of reasons.", "In principle, a sentence can appear with vanishingly low probability but still be grammatically well-formed, such as Colorless green ideas sleep furiously (Chomsky, 1957).", "While perplexity remains an integral part of language model evaluation, fine-grained linguistic assessment can provide both more challenging and more interpretable tests to evaluate neural models.", "Alternatively, a language model can be evaluated on its ability to make human-like generalizations for specific syntactic phenomena (Linzen et al., 2016; Lau et al., 2017; Gulordava et al., 2018).", "The targeted syntactic evaluation paradigm (Marvin and Linzen, 2018; Futrell et al., 2019) incorporates methods from psycholinguistic experiments, designing sentences which hold most lexical and syntactic features of each sentence constant while minimally varying features that determine grammaticality or surprise characteristics of the sentence.", "For example, given the two strings The keys to the cabinet are on the table and *The keys to the cabinet is on the table, a model that has learned the proper subject-verb number agreement rules for English should assign a higher probability to the grammatical plural verb in the first sentence than to the ungrammatical singular verb in the second (Linzen et al., 2016).",
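"As a concrete illustration of the two evaluation styles, the sketch below computes corpus perplexity from per-token log probabilities (Equation 1) and scores the agreement minimal pair above; logprob_fn stands in for whatever left-to-right language model is being assessed:",

```python
import math

def perplexity(token_logprobs):
    """PPL(C) = p(w_1 .. w_N) ** (-1 / N), from natural-log token probabilities."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def agreement_correct(logprob_fn):
    """logprob_fn(context, word) -> log p(word | context), supplied by the model."""
    context = "The keys to the cabinet"
    return logprob_fn(context, "are") > logprob_fn(context, "is")
```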
"Although some targeted syntactic evaluations, such as the example discussed above, involve simple comparisons of conditional probabilities of a word in its context, other evaluations are more complex.", "We can demonstrate this with an evaluation of models' garden-pathing behavior (Futrell et al., 2019).", "For example, the sentence The child kicked in the chaos found her way back home yields processing disruption for humans at the word found.", "This is because, up to right before that word, the part-of-speech-ambiguous kicked is preferentially interpreted as the main verb of the sentence, whereas it turns out to be a passive participle in a reduced relative clause modifying child.", "This garden-path disambiguation effect is ameliorated by replacing kicked with forgotten, which is not part-of-speech ambiguous (B below; Trueswell et al., 1994), or by using an unreduced relative clause (C below; Ferreira and Clifton, 1986).", "In probabilistic language models, these garden-path disambiguation effects are well captured by word negative log probabilities, or SURPRISALS (Hale, 2001): S(w|C) = −log_2 p(w|C), which are independently well-established to predict human incremental processing difficulty over several orders of magnitude in word probability (Smith and Levy, 2013).", "A targeted syntactic evaluation for garden-pathing is provided by comparing surprisals at the disambiguating word found in the set of four examples below (Futrell et al., 2019):", "(A) The child kicked in the chaos found ...", "(B) The child forgotten in the chaos found ...", "(C) The child who was kicked in the chaos found ...", "(D) The child who was forgotten in the chaos found ...", "Successful human-like generalization involves three criteria:", "(i) found should be less surprising (i.e., more probable) in B than in A;", "(ii) found should be more probable in C than in A;", "(iii) the C-D surprisal difference should be smaller than the A-B surprisal difference (a 2×2 interaction effect on surprisal) because the syntactic disambiguation effect of not reducing the relative clause was achieved by using a part-of-speech-unambiguous verb.", "We will use these controlled tests to help us describe and test for human-like syntactic knowledge in language models.", "The testing paradigm presented here differs in several crucial ways from recent, related syntactic assessments and provides complementary insights.",
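"The conjunctive success criterion for this 2×2 design can be encoded directly, as in the sketch below; the dictionary keys mirror our labels for conditions A-D, with surprisals assumed to be measured at the disambiguating region:",

```python
def garden_path_success(s):
    """s: dict mapping condition ('A'..'D') -> surprisal (bits) at 'found'."""
    return (s["B"] < s["A"]                             # (i)  ambiguous verb is harder
            and s["C"] < s["A"]                         # (ii) unreduced clause is easier
            and (s["C"] - s["D"]) < (s["A"] - s["B"]))  # (iii) 2x2 interaction

def suite_accuracy(items):
    """Accuracy = proportion of items whose surprisals satisfy the criterion."""
    return sum(garden_path_success(s) for s in items) / len(items)
```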
"Unlike Warstadt et al. (2019a), our approach does not involve fine-tuning, but rather assesses what syntactic knowledge is induced from the language modeling objective alone.", "The most closely related work is the Benchmark of Linguistic Minimal Pairs (Warstadt et al., 2020), which is a challenge set of automatically generated sentence pairs also designed to test language models on a large set of syntactic phenomena.", "Our approach differs in important ways: we compare critical sentence regions instead of full-sentence probabilities, and employ a 2×2 paradigm with a strict, multi-fold success criterion inspired by psycholinguistic methodology.", "This allows us to factor out as many confounds as possible, such as the lexical frequency of individual tokens and low-level n-gram statistics.", "We designed a controlled paradigm for systematically testing the relationship between two design choices (model class and dataset size) and two performance metrics (perplexity and syntactic generalization capacity).", "Section 3.1 describes the test suites collected for our evaluation, and Sections 3.2 and 3.3 describe the datasets and model classes investigated.", "We assemble a large number of test suites inspired by the methodology of experimental sentence-processing and psycholinguistic research.", "Each test suite contains a number of ITEMS (typically between 20 and 30), and each item appears in several CONDITIONS: across conditions, a given item will differ only according to a controlled manipulation designed to target a particular feature of grammatical knowledge.", "Each test suite contains at least one PREDICTION, which specifies inequalities between surprisal values at pairs of regions/conditions that should hold if a model has learned the appropriate syntactic generalization.", "We expect language models which have learned the appropriate syntactic generalizations from their input to satisfy these inequalities without further fine-tuning.", "We compute accuracy on a test suite as the proportion of items for which the model's behavior conforms to the prediction.", "Most of our test suites involve 2×2 designs and a success criterion consisting of a conjunction of inequalities across conditions, as in the garden-pathing example described in Section 2.2.", "Random baseline accuracy varies by test suite and is 25% overall.", "Most of these test suites and criteria are designed so that n-gram models cannot perform above chance for n = 5 (sometimes greater).", "Syntactic coverage: In order to assess the coverage of our test suites, we manually inspected the phenomena covered in Carnie (2012), a standard introductory syntax textbook.", "Of the 47 empirical phenomena reviewed in the summary sections at the end of each chapter, our tests target 16 (about 34%).", "These are evenly distributed across the whole range of subject matter, with tests targeting phenomena in 11 of the 15 chapters (about 73%).", "Modifiers: Five test suites include paired modifier versions, where extra syntactically irrelevant (but semantically plausible) content, such as a prepositional phrase or relative clause, is inserted before the critical region being measured.", "We use these paired test suites to evaluate models' stability to intervening content within individual syntactic tests.", "Circuits: The test suites are divided into 6 syntactic circuits, based on the type of algorithm required to successfully process each construction.", "We give a brief overview of each circuit below.",
"Agreement is a constraint on the feature values of two co-varying tokens: for example, the number feature of a verb must agree with the number feature of its upstream subject.", "We include 3 Subject-Verb Number Agreement suites from Marvin and Linzen (2018).", "Licensing occurs when a particular token must exist within the scope of an upstream licensor token.", "Scope is determined by the tree-structural properties of the sentence.", "Test suites include Negative Polarity Item Licensing (NPI) (4 suites) and Reflexive Pronoun Licensing (6 suites), both from Marvin and Linzen (2018).", "Garden-Path Effects are well-studied syntactic phenomena that result from tree-structural ambiguities that give rise to locally coherent but globally implausible syntactic parses.", "Garden-path test suites include Main Verb / Reduced Relative Clause (MVRR) (2 suites) and NP/Z Garden-paths (NPZ) (4 suites), both from Futrell et al. (2018).", "Gross Syntactic Expectation is a processor's expectation for large syntactic chunks such as verb phrases or sentences, which are often set up by subordinating conjunctions such as while, although and despite.", "Our tests for gross syntactic expectation include Subordination (4 suites) from Futrell et al. (2018).", "Center Embedding sentences are sentences recursively nested within each other.", "Subjects and verbs must match in a first-in-last-out order, meaning models must approximate a stack-like data structure in order to successfully process them.", "Our 2 suites of Center Embedding sentences come from the items presented in Wilcox et al. (2019a).", "Long-Distance Dependencies are co-variations between two tokens that span long distances in tree depth.", "Test suites include Filler-Gap Dependencies (FGD) (6 suites) from Wilcox et al. (2018) and Wilcox et al. (2019b), and 2 novel Cleft suites, described in detail below.", "Novel test suite: Cleft: We introduce one novel test suite that assesses models' ability to process pseudo-cleft constructions, which are used to put a particular syntactic constituent into focus via passive transformation.", "Consider Example (1):", "(1) a. What he did after coming in from the rain was eat a hot meal. [DO/VP]", "b. *What he devoured after coming in from the rain was eat a hot meal. [LEX/VP]", "c. *What he did after coming in from the rain was a hot meal. [DO/NP]", "d. What he devoured after coming in from the rain was a hot meal. [LEX/NP]",
"When this constituent is a verb, it must be replaced in the wh-clause that heads the sentence with the DO verb, as in (1a) above.", "However, when it is a noun, the lexical verb for which it serves as an object must be preserved, as in (1d).", "If models have properly learned the pseudo-cleft construction, then DO verbs should set up expectations for VPs (the region in bold should have a lower surprisal in (1a) than in (1b)) and lexicalized verbs should set up expectations for NPs (the region in bold should have a lower surprisal in (1d) than in (1c)).", "Corpora: We train and evaluate models on English newswire corpora of four different sizes, obtained by randomly sampling sections from the Brown Laboratory for Linguistic Information Processing 1987-89 Corpus Release 1 (BLLIP; Charniak et al., 2000).", "The corpora are sampled such that the training set of each corpus is a proper subset of each larger corpus.", "We call these four corpora BLLIP-XS (40K sentences, 1M tokens); BLLIP-SM (200K sentences, 5M tokens); BLLIP-MD (600K sentences, 14M tokens); and BLLIP-LG (2M sentences, 42M tokens).", "Table 1 summarizes statistics of the training set for each corpus.", "Table 1: Statistics of training set for each corpus size.
BLLIP sizes:      XS     SM     MD     LG
# sentences       40K    200K   600K   1.8M
# tokens          1M     4.8M   14M    42M
# non-UNK types   24K    57K    100K   170K
# UNK types       68     70     71     74", "To ensure consistency in perplexity evaluation across datasets, we report perplexity scores achieved by the models on a shared held-out test set.", "We additionally use a shared held-out validation set for tuning and early stopping.", "We use the NLTK implementation of the Penn Treebank tokenizer to process all datasets (Bird and Loper, 2004; Marcus et al., 1993).", "Out-of-vocabulary tokens: For each corpus, we designate a token as OOV if the token appears fewer than two times in the training set.", "Our larger training datasets thus contain larger vocabularies than our smaller training datasets.", "This allows larger-training-set models to learn richer word-specific information, but may also harm perplexity evaluation because they have vocabulary items that are guaranteed to not appear in the BLLIP-XS test set.", "This means that perplexity scores across training dataset sizes will not be strictly comparable: if a larger-training-set model does better than a smaller-training-set model, we can be confident that it has meaningfully lower perplexity, but the reverse is not necessarily the case.", "The exception to the above is GPT-2, which uses sub-words from byte-pair encoding and has no OOVs (see also Footnote 6).", "Unkification: We follow the convention used by the Berkeley parser (Petrov and Klein, 2007), which maps OOVs to UNK classes that preserve fine-grained information such as orthographic case distinctions and morphological suffixes (e.g. UNK-ed, UNK-ly).",
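"A simplified sketch of this unkification convention follows; the Berkeley parser's actual class inventory is richer than the handful of illustrative cases shown here:",

```python
def unkify(word, vocab):
    """Map an OOV word to an UNK class that preserves case and suffix cues."""
    if word in vocab:
        return word
    prefix = "UNK-C" if word[0].isupper() else "UNK"  # orthographic case distinction
    for suffix in ("ed", "ly", "ing", "s"):           # a few morphological suffixes
        if word.endswith(suffix):
            return f"{prefix}-{suffix}"
    return prefix
```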
"Before training, we verified that the UNK classes in the test and validation sets were all present in the training set.", "In order to study the effects of model inductive bias and dataset size, we trained a fleet of models with varying inductive biases on each corpus.", "Because many of our test suites exploit ambiguities that arise from incremental processing, we restrict evaluation to left-to-right language models; future work could involve evaluation of bidirectional models (Devlin et al., 2018; Yang et al., 2019) on an appropriate subset of our test suites, and/or adaptation of our suites for use with bidirectional models (Goldberg, 2019).", "Training ran until convergence of perplexity on a held-out validation set.", "Wherever possible, we trained multiple seeds of each model class and corpus size.", "We use the model sizes and training hyperparameters reported in the papers introducing each model (Table 2).", "The full parameter counts and perplexity scores for each model-corpus combination are given in Tables 3 and 4, respectively.", "Table 4: Perplexity averages achieved by each controlled model on each corpus.
BLLIP sizes:  XS      SM      MD      LG
LSTM          98.19   65.52   59.05   57.09
ON-LSTM       71.76   54.00   56.37   56.38
RNNG          122.46  86.72   71.12   69.57
GPT-2         529.90  183.10  37.04   32.14
n-gram        240.21  158.60  125.58  106.09", "LSTM: Our baseline neural model is a vanilla long short-term memory network (LSTM; Hochreiter and Schmidhuber, 1997) based on the boilerplate PyTorch implementation (Paszke et al., 2017).", "Ordered-Neurons: We consider the Ordered-Neurons LSTM architecture (ON-LSTM; Shen et al., 2019), which encodes an explicit bias towards modeling hierarchical structure.", "RNNG: Recurrent neural network grammars (RNNG; Dyer et al., 2016) model the joint probability of a sequence of words and its syntactic structure.", "RNNG requires labeled trees that contain complete constituency parses, which we produce for BLLIP sentences with an off-the-shelf constituency parser (Kitaev and Klein, 2018).", "To compute surprisals from RNNG, we use word-synchronous beam search (Stern et al., 2017) to approximate the conditional probability of the current word given the context.", "4 Due to computational constraints, we performed only minimal tuning past these recommended hyperparameters.", "5 While the BLLIP corpus already contains Treebank-style parses, we strip the terminals and re-parse in order to obtain more accurate, up-to-date syntactic parses.", "Transformer: Transformer models (Vaswani et al., 2017) have recently gained popularity in language processing tasks.", "We use GPT-2 (Radford et al., 2019) as a representative Transformer model and train it from scratch on our BLLIP corpora.", "n-gram: As a baseline, we consider a 5-gram model with modified Kneser-Ney smoothing.", "We also test five off-the-shelf models: GRNN, trained on 90M tokens from Wikipedia (Gulordava et al., 2018); JRNN, trained on 800M tokens from the 1 Billion Word Benchmark (Jozefowicz et al., 2016); Transformer-XL, trained on 103M tokens from WikiText-103 (Dai et al., 2019); and the pre-trained GPT-2 and GPT-2-XL, trained on 40GB of web text (Radford et al., 2019).", "These models are orders of magnitude larger than our controlled ones in parameter count and/or training set size.", "Figure 1 shows the average accuracy of all models on the complete set of SG test suites.", "Asterisks denote off-the-shelf models.", "All neural models achieve an SG score significantly greater than a random baseline (dashed line).",
"However, the range within neural models is notable, with the best-performing model (GPT-2-XL) scoring over twice as high as the worst-performing model (LSTM).", "Also notable are the controlled GPT-2 and RNNG models, which achieve comparable performance to Transformer-XL and JRNN, despite being trained on significantly smaller data sizes.", "Our GPT-2 code is based on nshepperd/gpt-2 .", "The model vocabulary consists of byte-pair encoded sub-words extracted from the GPT-2 pre-trained model, not from the BLLIP training corpora.", "To calculate GPT-2 perplexities, we divide the sum of all sub-word conditional log-probabilities by the total number of words in the corpus.", "We now return to the three major issues presented in Section 1.", "In 4.1 we present evidence that SG score is dissociated from perplexity.", "In 4.2 we argue that model architecture accounts for larger gains in SG score than amount of training data.", "And in 4.3 we show that this cross-architecture difference is due largely to variance on a handful of key test suites.", "Figure 2 shows the relationship between SG score and perplexity on the BLLIP test set across models and training set sizes.", "As expected, n -gram models never rise appreciably above chance in SG score.", "Among neural models, GPT-2 achieves both the worst (BLLIP-XS and BLLIP-SM ) and best (BLLIP-MD and BLLIP-LG ) performance; the impressive performance of these latter models comes with the caveat that the sub-words come from the pre-trained GPT-2 model, tacitly importing information from a larger training dataset (see further discussion in Section 4.5).", "For the remaining neural models, there is no simple relationship between perplexity and SG score, especially once training dataset size is controlled for (comparing points in Figure 2 of the same color).", "For example, there is a remarkable amount of variance in the SG score of models trained on BLLIP-LG not explained by perplexity.", "This suggests that targeted syntactic evaluation can reveal information that may be orthogonal to perplexity.", "In order to decouple the effects of model class and data scale from test suite difficulty, we represent a particular trained model's performance on each test suite as a delta relative to the average performance of all models on this test suite.", "Unless noted otherwise, the remainder of the figures in this section plot a score delta, aggregating these deltas within model classes or corpus types.", "Figure 3 tracks the influence of model class and data scale across the model types tested in our experiments, with SG score deltas on the y-axis.", "The left-hand panel shows the difference in SG score by model class.", "We find that model class clearly influences SG score: for example, the error bars (boot-strapped 95% confidence intervals of the mean) for RNNG and LSTM do not overlap.", "The right-hand panel shows the difference in SG score delta by training dataset, and shows a much more minor increase in mean SG score as training data increases.", "We tested the influence of these factors quantitatively using a linear mixed-effects regression model, predicting suite-level performance as a feature of model architecture and training dataset size (represented as log-number of words).", "Both features made statistically significant contributions to SG score (both p < 0 . 
However, predictor ablation indicates that architecture affects regression model fit more (AIC=581 when dataset size is ablated; AIC=574 when architecture is ablated).", "Beyond the above analysis, our GPT-2 results offer another striking example of the influence of model architecture relative to data scale.", "Footnote 7: n-grams and/or GPT-2 could arguably be expected to have qualitatively different sensitivity to training dataset size (the latter due to byte-pair encoding), so we repeated the analyses here and in Section 4.3 excluding both architectures individually as well as simultaneously.", "In all cases the same qualitative patterns described in the main text hold.", "Figure 2 shows that our controlled BLLIP-MD and BLLIP-LG GPT-2 models achieve roughly the same SG score as the pre-trained GPT-2 model, despite being trained on less than 1% of the data used by the pre-trained model.", "This suggests diminishing returns to training data scale for syntactic generalization performance.", "Figure 4 shows the breakdown at the circuit level by model architecture (left) and training dataset size (right).", "The right panel demonstrates little effect of dataset size on SG score delta within most circuits, except for Agreement, on which the models trained on our smallest dataset fare poorly.", "In the left panel we find substantial between-circuit differences across architectures.", "Linear mixed-effects analyses support this finding: interactions with circuit are significant for both training dataset size and model architecture, but stronger for the latter (AIC=654 and AIC=623 when size and architecture are respectively ablated).", "While model inductive biases separate clearly in performance on some circuits, they have little effect on performance on Licensing.", "This minimally suggests that Licensing taps into a distinct syntactic process within language models.", "One potential explanation for this is that the interactions tested by Licensing involve tracking two co-varying tokens where the downstream token is optional (see, e.g., Hu et al., 2020).", "We show the circuit-level breakdown of absolute SG scores for all models (including off-the-shelf) in Figure 5.", "In general, the models that obtain high SG scores on average (as in Figure 1) also perform well across circuits: pre-trained GPT-2 and GPT-2-XL outperform all other models on each circuit, including Licensing, on which JRNN, GRNN, and most of our custom-trained models perform particularly poorly.", "Figure 4: Controlled evaluation results, split across test suite circuits (x-axis circuits: Agreement, Center Embedding, Garden Path Effects, Gross Syntactic State, Licensing, Long Distance Dependencies; y-axis: SG score delta; left panel: LSTM, ON-LSTM, RNNG, GPT-2, n-gram; right panel: BLLIP-LG, BLLIP-MD, BLLIP-SM, BLLIP-XS).", "Again, we highlight the impressive performance of RNNG: it achieves comparable average performance to GRNN on all circuits, despite being trained on a fraction of the data size.", "We separately investigate the degree to which models' syntactic generalizations are robustly stored in memory.", "For five test suites (Center Embedding, Cleft, MVRR, NPZ-Ambiguous, NPZ-Object), we designed minimally edited versions where syntactically irrelevant intervening
content was inserted before the critical region.", "An ideal model should robustly represent syntactic features of its input across these modifier insertions.", "In Figure 6 we plot models' average scores on these five test suites (dark bars) and their minimally edited versions (light bars), evaluating how robust each model is to intervening content.", "Among models in our controlled experiments, we see that model class clearly influences the degree to which predictions are affected by intervening content (compare, e.g., the stability of RNNG to that of ON-LSTM).", "Some off-the-shelf models, such as GPT-2-XL, perform near ceiling on the original five test suites and are not affected at all by intervening content.", "The GPT-2 models trained and evaluated in this paper use a sub-word vocabulary learned by byte-pair encoding (BPE; Sennrich et al., 2016) to represent their inputs, while all other models represent and compute over word-level inputs.", "This byte-pair encoding was taken from the pre-trained GPT-2 model trained on a much larger corpus.", "The results reported for these models thus conflate a choice of model class (a deep Transformer architecture) and preprocessing standard (sub-word tokenization computed on a larger corpus).", "Some preliminary work suggests that sub-word tokenization is indeed responsible for much of the larger GPT-2 models' success: we find that GPT-2 models trained on word-level representations of BLLIP-LG and BLLIP-MD achieve good perplexity measures, but degrade sharply in SG score.", "Peculiarities of the GPT-2 training regime may be responsible for its particularly bad performance on the smaller corpora.", "Its sub-word vocabulary was held constant across training corpora, meaning that the model vocabulary size also remained constant across corpora, unlike the other models tested.", "The poor performance of GPT-2 models trained on smaller corpora may thus be due to overparameterization, and not due to fundamental problems with the model architecture at small data scales.", "We leave a thorough investigation of the role of sub-word tokenization to future work.", "This work addresses multiple open questions about syntactic evaluations and their relationship to other language model assessments.", "Our results dissociate model perplexity and performance in syntactic generalization tests, suggesting that the two metrics capture complementary features of language model knowledge.", "In a controlled evaluation of different model classes and datasets, we find that model architecture plays a more important role than training data scale in yielding correct syntactic generalizations.", "Our circuit-level analysis reveals consistent failure on Licensing but inconsistent behavior on other circuits, suggesting that different syntactic circuits make use of different underlying processing capacities.", "In addition to the insight these results provide about neural NLP systems, they also bear on questions central to cognitive science and linguistics, putting lower bounds on what syntactic knowledge can be acquired from string input alone.", "Targeted syntactic evaluation is just one in a series of complementary methods being developed to assess the learning outcomes of neural language processing models.", "Other methods include classifying sentences as grammatical or ungrammatical (Warstadt et al., 2019b), decoding syntactic features from a model's internal state (Belinkov et al., 2017; Giulianelli et al., 2018), or transfer learning to a strictly syntactic task such as
parsing or POS tagging (Hewitt and Manning, 2019).", "As each task brings an explicit set of assumptions, complementary assessment methods can collectively provide greater insight into models' learning outcomes.", "Although this paper, together with Warstadt et al. (2020), reports what is to our knowledge the largest-scale targeted syntactic evaluations to date, we emphasize that they are only first steps toward a comprehensive understanding of the syntactic capabilities of contemporary language models.", "This understanding will be further advanced by new targeted-evaluation test suites covering a still wider variety of syntactic phenomena, additional trained models with more varied hyperparameters and randomization seeds, and new architectural innovations.", "Humans develop extraordinary grammatical capabilities through exposure to natural linguistic input.", "It remains to be seen to just what extent contemporary artificial systems do the same.", "The authors would like to thank the anonymous reviewers and Samuel R. Bowman for their feedback, Miguel Ballesteros for advice and technical guidance, and Tristan Thrush for technical assistance.", "J.H. is supported by the NIH under award number T32NS105587 and an NSF Graduate Research Fellowship.", "J.G. is supported by an Open Philanthropy AI Fellowship.", "R.P.L. gratefully acknowledges support from the MIT-IBM Watson AI Lab, a Google Faculty Research Award, and a Newton Brain Science Award." ]
[ "abstain", "abstain", "method", "result", "result", "result", "abstain", "abstain", "other", "abstain", "abstain", "method", "result", "abstain", "abstain", "objective", "abstain", "abstain", "result", "method", "method", "method", "objective", "result", "abstain", "abstain", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "abstain", "other", "objective", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other" ]
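The record above describes renormalizing GPT-2's sub-word log-probabilities into a word-level perplexity so that the byte-pair model stays comparable with the word-level models. Below is a minimal Python sketch of that computation; the function and variable names are illustrative and not taken from the paper's released code.

import math

def word_level_perplexity(subword_logprobs, num_words):
    # Sum the natural-log conditional probabilities of every sub-word token,
    # then normalize by the number of *words* (not sub-words) before
    # exponentiating, as described in the record above.
    total_logprob = sum(subword_logprobs)
    return math.exp(-total_logprob / num_words)

# Hypothetical example: six sub-word tokens spanning four words.
print(word_level_perplexity([-2.3, -0.7, -1.9, -3.1, -0.2, -1.4], num_words=4))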
[ "The Transformer translation model employs residual connection and layer normalization to ease the optimization difficulties caused by its multi-layer encoder/decoder structure.", "Previous research shows that even with residual connection and layer normalization, deep Transformers still have difficulty in training, and particularly Transformer models with more than 12 encoder/decoder layers fail to converge.", "In this paper, we first empirically demonstrate that a simple modification made in the official implementation, which changes the computation order of residual connection and layer normalization, can significantly ease the optimization of deep Transformers.", "We then compare the subtle differences in computation order in considerable detail, and present a parameter initialization method that leverages the Lipschitz constraint on the initialization of Transformer parameters that effectively ensures training convergence.", "In contrast to findings in previous research, we further demonstrate that with Lipschitz parameter initialization, deep Transformers with the original computation order can converge, and obtain significant BLEU improvements with up to 24 layers.", "In contrast to previous research which focuses on deep encoders, our approach additionally enables Transformers to also benefit from deep decoders.", "Neural machine translation has achieved great success in the last few years (Bahdanau et al., 2015; Gehring et al., 2017; Vaswani et al., 2017).", "The Transformer (Vaswani et al., 2017), which has outperformed previous RNN/CNN based translation models (Bahdanau et al., 2015; Gehring et al., 2017), is based on multi-layer self-attention networks and can be trained very efficiently.", "Corresponding author.", "The multi-layer structure allows the Transformer to model complicated functions.", "Increasing the depth of models can increase their capacity but may also cause optimization difficulties (Mhaskar et al., 2017; Telgarsky, 2016; Eldan and Shamir, 2016; He et al., 2016; Bapna et al., 2018).", "In order to ease optimization, the Transformer employs residual connection and layer normalization techniques which have been proven useful in reducing optimization difficulties of deep neural networks for various tasks (He et al., 2016; Ba et al., 2016).", "However, even with residual connections and layer normalization, deep Transformers are still hard to train: the original Transformer (Vaswani et al., 2017) only contains 6 encoder/decoder layers.", "Bapna et al. (2018) show that Transformer models with more than 12 encoder layers fail to converge, and propose the Transparent Attention (TA) mechanism which combines outputs of all encoder layers into a weighted encoded representation.", "Wang et al. (2019) find that deep Transformers with proper use of layer normalization are able to converge and propose to aggregate previous layers' outputs for each layer.", "Wu et al. (2019) explore incrementally increasing the depth of the Transformer Big by freezing pre-trained shallow layers.", "Concurrent work closest to ours is Zhang et al.
(2019).", "They address the same issue, but propose a different layer-wise initialization approach to reduce the standard deviation.", "Our contributions are as follows: We empirically demonstrate that a simple modification made in the Transformer's official implementation (Vaswani et al., 2018) which changes the computation order of residual connection and layer normalization can effectively ease its optimization; We deeply analyze how the subtle difference of computation order affects convergence in deep Transformers, and propose to initialize deep Transformers under the Lipschitz constraint; In contrast to previous works, we empirically show that with proper parameter initialization, deep Transformers with the original computation order can converge; Our simple approach effectively ensures the convergence of deep Transformers with up to 24 layers, and achieves +1.50 and +0.92 BLEU improvements over the baseline on the WMT 14 English to German task and the WMT 15 Czech to English task; We further investigate deep decoders for the Transformer in addition to the deep encoders studied in previous works, and show that deep decoders can also benefit the Transformer.", "In this paper we focus on the convergence of the training of deep Transformers.", "To alleviate the training problem for the standard Transformer model, Layer Normalization (Ba et al., 2016) and Residual Connection (He et al., 2016) are adopted.", "The official implementation (Vaswani et al., 2018) of the Transformer uses a different computation order (Figure 1b) compared to the published version (Vaswani et al., 2017) (Figure 1a), since it (Figure 1b) seems better for harder-to-learn models.", "Even though several studies (Chen et al., 2018; Domhan, 2018) have mentioned this change, and although Wang et al. (2019) analyze the difference between the two computation orders during back-propagation, and Zhang et al. (2019) point out the effects of normalization in their work, how this modification impacts the performance of the Transformer, especially for deep Transformers, has not been deeply studied before.", "Here we present both empirical convergence experiments (Table 1) and a theoretical analysis of the effect of the interaction between layer normalization and residual connection (Table 2).", "In order to compare with Bapna et al.
(2018), we used the same datasets from the WMT 14 English to German task and the WMT 15 Czech to English task for our experiments.", "We applied joint Byte-Pair Encoding (BPE) (Sennrich et al., 2016) with 32k merge operations.", "We used the same setting as the Transformer base (Vaswani et al., 2017) except the number of warm-up steps was set to 8k.", "Parameters were initialized with Glorot Initialization (Glorot and Bengio, 2010) like in many other Transformer implementations (Klein et al., 2017; Hieber et al., 2017; Vaswani et al., 2018).", "We conducted experiments based on the Neutron implementation (Xu and Liu, 2019) of the Transformer translation model.", "Footnote 1: https://github.com/tensorflow/tensor2tensor/blob/v1.6.5/tensor2tensor/layers/common_hparams.py#L110-L112.", "Our experiments run on 2 GTX 1080 Ti GPUs, and a batch size of 25k target tokens is achieved through gradient accumulation of small batches.", "We used a beam size of 4 for decoding, and evaluated tokenized case-sensitive BLEU with the averaged model of the last 5 checkpoints saved with an interval of 1,500 training steps.", "Results of the two different computation orders are shown in Table 1, which shows that deep Transformers with the computation order of the official implementation (v2) have no convergence issue.", "Since the subtle change of computation order results in large differences in convergence, we further analyze the differences between the computation orders to investigate how they affect convergence.", "We conjecture that the convergence issue of deep Transformers is perhaps due to the fact that layer normalization over residual connections in Figure 1 (a) effectively reduces the impact of residual connections due to the subsequent normalization, in order to avoid a potential explosion of combined layer outputs (Chen et al., 2018), which is also studied by Wang et al. (2019); Zhang et al. (2019).", "We therefore investigate how the layer normalization and the residual connection are computed in the two computation orders, shown in Table 2.", "
Table 2 shows that the computation of residual connection in v1 is weighted by w compared to v2, and the residual connection of previous layers will be shrunk if w < 1.0, which makes it difficult for deep Transformers to converge.", "Since the diminished residual connections (Table 2) may cause the convergence issue of deep v1 Transformers, is it possible to constrain w ≥ 1.0?", "Given that w is initialized with 1, we suggest that the standard deviation of in_model + in_res should be constrained as follows: 0 < σ(in_model + in_res) ≤ 1.0, (1) in which case w will be greater than or at least equal to 1.0, and the residual connection of v1 will not be shrunk anymore.", "To achieve this goal, we can constrain elements of in_model + in_res to be in [a, b] and ensure that their standard deviation is smaller than 1.0.", "Let's define P(x) as any probability distribution of x between [a, b]: ∫_a^b P(x) dx = 1,", "then the standard deviation of x is: σ(x) = (∫_a^b P(x)(x − μ)² dx)^{1/2} ≤ b − a, where μ is the mean of x.", "Thus, as long as b − a ≤ 1 (the range of elements of the representation x), the requirements for the corresponding σ described in Equation 1 can be satisfied.", "To achieve this goal, we can simply constrain the range of elements of x to be smaller than 1 and initialize the sub-model before layer normalization to be a k-Lipschitz function, where k ≤ 1.", "Because if the function F of the sub-layer is a k-Lipschitz function, for inputs x, y ∈ [a, b], |F(x) − F(y)| < k|x − y| holds.", "Given that |x − y| ≤ b − a, we can get |F(x) − F(y)| < k(b − a); thus the range of the output of that sub-layer is constrained by making it a k-Lipschitz function with constrained input.", "The k-Lipschitz constraint can be satisfied effectively through weight clipping, and we empirically find that deep Transformers are only hard to train at the beginning and only applying a constraint to parameter initialization is sufficient, which is more efficient and can avoid a potential risk of weight clipping on performance.", "Zhang et al. (2019) also show that decreasing parameter variance at the initialization stage is sufficient for ensuring the convergence of deep Transformers, which is consistent with our observation.", "We use the training data described in Section 2 to examine the effectiveness of the proposed Lipschitz constrained parameter initialization approach.", "In practice, we initialize embedding matrices and weights of linear transformations with uniform distributions of [−e, +e] and [−l, +l] respectively.", "We use √(2 / (esize + vsize)) as e and √(1 / isize) as l, where esize, vsize and isize stand for the embedding size, the vocabulary size and the input dimension of the linear transformation respectively.", "Results for the two computation orders with the new parameter initialization method are shown in Table 3.", "v1-L indicates v1 with Lipschitz constrained parameter initialization, the same for v2-L.", "Table 3 shows that deep v1-L models do not suffer from convergence problems anymore with our new parameter initialization approach.", "It is also worth noting that unlike Zhang et al. (2019), our parameter initialization approach does not degrade the translation quality of the 6-layer Transformer, and the 12-layer Transformer with our approach already achieves performance comparable to the 20-layer Transformer in Zhang et al.
(2019) (shown in Table 1).", "Footnote 3: Note that the weight of the layer normalization cannot be clipped, otherwise residual connections will be more heavily shrunk.", "Footnote 4: To preserve the magnitude of the variance of the weights in the forward pass.", "While previous approaches (Bapna et al., 2018; Wang et al., 2019) only increase the depth of the encoder, we suggest that deep decoders should also be helpful.", "We analyzed the influence of deep encoders and decoders separately and results are shown in Table 4.", "Table 4 shows that the deep decoder can indeed benefit performance in addition to the deep encoder, especially on the Czech to English task.", "In contrast to previous works (Bapna et al., 2018; Wang et al., 2019; Wu et al., 2019) which show that deep Transformers with the computation order as in Vaswani et al. (2017) have difficulty in convergence, we show that deep Transformers with the original computation order can converge as long as proper parameter initialization is performed.", "We first investigate convergence differences between the published Transformer (Vaswani et al., 2017) and its official implementation (Vaswani et al., 2018), and compare the differences of computation orders between them.", "We conjecture that the convergence issue of deep Transformers arises because layer normalization sometimes shrinks residual connections; we support our conjecture with a theoretical analysis (Table 2), and propose a Lipschitz constrained parameter initialization approach for solving this problem.", "Our experiments show the effectiveness of our simple approach on the convergence of deep Transformers, which achieves significant improvements on the WMT 14 English to German and the WMT 15 Czech to English news translation tasks.", "We also study the effects of deep decoders in addition to deep encoders, extending previous works.", "We thank anonymous reviewers for their insightful comments.", "Hongfei Xu acknowledges the support of China Scholarship Council ([2018]3101, 201807040056).", "Deyi Xiong is supported by the National Natural Science Foundation of China (Grant No. 61861130364), the Natural Science Foundation of Tianjin (Grant No. 19JCZDJC31400) and the Royal Society (London) (NAF\R1\180122).", "Hongfei Xu, Josef van Genabith and Jingyi Zhang are supported by the German Federal Ministry of Education and Research (BMBF) under the funding code 01IW17001 (Deeplee)." ]
[ "abstain", "abstain", "objective", "method", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "method", "method", "method", "abstain", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "objective", "result", "objective", "other", "other", "other", "other" ]
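The record above states the initialization bounds exactly: embeddings are drawn uniformly from [−e, +e] with e = √(2 / (esize + vsize)), and linear weights from [−l, +l] with l = √(1 / isize). The PyTorch sketch below is one reading of those bounds, not the authors' released Neutron code; the zero-initialized bias is an added assumption.

import math
import torch

def init_embedding(embedding: torch.nn.Embedding) -> None:
    # weight shape is (vocabulary size, embedding size)
    vsize, esize = embedding.weight.shape
    e = math.sqrt(2.0 / (esize + vsize))
    torch.nn.init.uniform_(embedding.weight, -e, e)

def init_linear(linear: torch.nn.Linear) -> None:
    l = math.sqrt(1.0 / linear.in_features)  # input dimension of the transformation
    torch.nn.init.uniform_(linear.weight, -l, l)
    if linear.bias is not None:
        torch.nn.init.zeros_(linear.bias)    # assumption: biases start at zero

# Usage on a toy feed-forward block:
block = torch.nn.Sequential(torch.nn.Linear(512, 2048), torch.nn.ReLU(), torch.nn.Linear(2048, 512))
for module in block.modules():
    if isinstance(module, torch.nn.Linear):
        init_linear(module)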
[ "Zero-shot sequence labeling aims to build a sequence labeler without human-annotated datasets.", "One straightforward approach is utilizing existing systems (source models) to generate pseudo-labeled datasets and train a target sequence labeler accordingly.", "However, due to the gap between the source and the target languages/domains, this approach may fail to recover the true labels.", "In this paper, we propose a novel unified framework for zero-shot sequence labeling with minimum risk training and design a new decomposable risk function that models the relations between the predicted labels from the source models and the true labels.", "By making the risk function trainable, we draw a connection between minimum risk training and latent variable model learning.", "We propose a unified learning algorithm based on the expectation maximization (EM) algorithm.", "We extensively evaluate our proposed approaches on cross-lingual/domain sequence labeling tasks over twenty-one datasets.", "The results show that our approaches outperform state-of-the-art baseline systems.", "Sequence labeling is an important task in natural language processing.", "It has many applications such as Part-of-Speech Tagging (POS) (DeRose, 1988; Toutanova et al., 2003) and Named Entity Recognition (NER) (Ratinov and Roth, 2009; Ritter et al., 2011; Lample et al., 2016; Ma and Hovy, 2016; Hu et al., 2020).", "Approaches to sequence labeling are mostly based on supervised learning, which relies heavily on labeled data.", "However, the labeled data is generally expensive and hard to obtain (for low-resource languages/domains), which means that these supervised learning approaches fail in many cases.", "Learning knowledge from imperfect predictions from other rich-resource sources (such as cross-lingual, cross-domain transfer) (Yarowsky and Ngai, 2001; Guo et al., 2018; Huang et al., 2019; Hu et al., 2021) is a feasible and efficient way to tackle the low-resource problem.", "It transfers knowledge from rich-resource languages/domains to low-resource ones.", "One typical approach to this problem is utilizing existing systems to provide predicted results for the zero-shot datasets.", "However, due to the gap between the source and the target languages/domains, this approach may fail to recover the true labels.", "Several previous approaches try to alleviate this problem by relying heavily on cross-lingual information (e.g., parallel text (Wang and Manning, 2014; Ni et al., 2017)), labeled data in source languages (Chen et al., 2019), and prior domain knowledge (Yang and Eisenstein, 2015) for different kinds of zero-shot scenarios.", "However, these approaches are designed to be specific, and might not be generalizable to other kinds of settings where the required resources are expensive to obtain or not available due to data privacy (Wu et al., 2020).", "Instead, we want a learning framework that can address the zero-shot learning problem from a unified perspective.", "In this work, we consider two widely explored settings in which we have access to: 1) the imperfect hard predictions (Rahimi et al., 2019; Lan et al., 2020); 2) the imperfect soft predictions (Wu et al., 2020), produced by one or more source models on target unlabeled data, and propose two novel approaches.", "We start by introducing a novel approach based on the minimum risk training framework.", "We design a new decomposable risk function parameterized by a fixed matrix that models the relations between the noisy predictions from the source models and
the true labels.", "We then make the matrix trainable, which leads to further expressiveness and connects minimum risk training to learning latent variable models.", "We propose a learning algorithm based on the EM algorithm, which alternates between updating a posterior distribution and optimizing model parameters.", "To empirically evaluate our proposed approaches, we extensively conduct experiments on four sequence labeling tasks over twenty-one datasets.", "Our two proposed approaches, especially the latent variable model, outperform several strong baselines.", "Given a sentence x = x_1, ..., x_n, its word representations are extracted from the pre-trained embeddings and passed into a sentence encoder such as a BiLSTM, Convolutional Neural Networks (CNN) or multilingual BERT (Devlin et al., 2019) to obtain a sequence of contextual features.", "Without considering the dependencies between predicted labels, the Softmax layer computes the conditional probability as follows: P(y|x) = ∏_{i=1}^{n} P(y_i|x).", "Given the gold sequence y = y_1, ..., y_n, the general training objective is to minimize the negative log-likelihood of the sequence: J(θ) = −log P(y|x) = −∑_{i=1}^{n} log P(y_i|x).", "For simplicity, throughout this paper, we assume that all the sequence labelers are based on the Softmax method.", "Supervised models fail when labeled data are absent.", "Learning from imperfect predictions from rich-resource sources is a viable approach to tackle the problem.", "Generally speaking, there are two settings to obtain the imperfect predictions from: single source and multi source.", "The simplest single-source approach is to train a single-source model on one source language/domain and use the source model to directly predict labels on the target test data.", "We name this approach direct single-source transfer (DT).", "Another single-source approach is to use the predictions of the source model on a set of unlabeled target data to supervise the training of a target model.", "With imperfect hard predictions from the source model, the corresponding objective function is the cross-entropy loss between the imperfect hard predictions and the target model's soft predictions: J(θ) = −log P(ŷ|x) = −∑_{i=1}^{n} log P(ŷ_i|x), where ŷ denotes the pseudo label sequence of x predicted by the source model and ŷ_i is the pseudo label for position i.", "With imperfect soft predictions from the source model, the corresponding objective function is the KL-divergence (KL) or mean square error (MSE) loss between the imperfect soft predictions and the target model's soft predictions (knowledge distillation, KD) (Wu et al., 2020).", "For the multi-source setup, a simple approach contains the following two steps.", "The first step is to apply DT with each source language to produce predictions on unlabeled target data.", "The second step is to mix the predictions from all the source models and perform supervised learning of a target model on the mixed pseudo-labeled dataset.", "However, the mixed pseudo-labeled dataset can be very noisy because predictions from different source models may contradict each other.", "Similar to the single-source setting, a more effective way is aggregating the soft predictions from multiple sources and doing KD (Wu et al., 2020).", "In supervised learning, minimum risk training aims to minimize the expected error (risk) with respect to the conditional probability: J(θ) = E_{P(y|x)}[R(y*, y)] = ∑_{y∈Y(x)} P(y|x) R(y*, y),", "where R(y*, y) is the risk function that measures the distance between the gold
sequence y* and the candidate sequence y, and Y(x) denotes the collection of all the possible label sequences given the sentence x.", "The risk function can be defined in many ways depending on specific applications, such as the BLEU score in machine translation (Shen et al., 2016).", "However, in our setting, there are no gold labels to compute R(y*, y).", "Instead, we assume there are multiple pretrained source models which can be used to predict hard labels, and we define the risk function as R(ŷ, y) to measure the difference between the pseudo label sequence ŷ predicted by the source models and the candidate sequence y.", "The objective function becomes: J(θ) = E_{P(y|x)}[R(ŷ, y)] = ∑_{y∈Y(x)} P(y|x) R(ŷ, y).", "Conventional minimum risk training is intractable, which is mainly due to the combination of two reasons: first, the set of candidate label sequences Y(x) is exponential in size and intractable to enumerate; second, the risk function is hard to decompose (or indecomposable).", "To tackle the problem, we define the risk function as a negative probability −P(ŷ|y) that can be fully decomposed by position.", "The objective function becomes: J(θ) = ∑_{y∈Y(x)} P(y|x) R(ŷ, y) = −∑_{y∈Y(x)} P(y|x) P(ŷ|y) (1) = −∏_{i=1}^{n} ∑_{y_i} P(y_i|x) P(ŷ_i|y_i).", "We introduce a matrix Θ ∈ R^{K×K} to model P(ŷ_i|y_i), where K is the number of labels.", "Notice that Θ here is a fixed matrix that does not change in training.", "In general imperfect-prediction learning, it is often implicitly assumed that the prediction from a source model is generally better than uniformly selecting a candidate label at random.", "Given this prior knowledge, we require P(ŷ_i = k|y_i = k) > 1/K.", "Therefore, we empirically define the matrix Θ as: Θ_ij = τ if i = j, and (1 − τ)/(K − 1) if i ≠ j, where τ > 1/K is a hyper-parameter.", "In the implementation, for convenience, we multiply an identity matrix by a hyper-parameter and then apply the Softmax operation to every column to obtain the matrix Θ.", "To further explain Θ, we give an example from the perspective of prediction in Table 1.", "Given a sentence x = I cried, a label distribution P(y|x) for the sentence, a pseudo label sequence ŷ = {Pron, Adj} predicted by the source model, and two settings τ1 = 0.4 and τ2 = 1 for Θ(1) and Θ(2) respectively, we compute P(y_i|x) P(ŷ_i|y_i) as shown in the table.", "Since Θ(2) is an identity matrix, it predicts the label with the largest value at each position.", "It assigns the wrong label Adj to the word cried as a consequence.", "On the contrary, Θ(1) introduces some uncertainties by providing smoothing over the pseudo labels.", "As a result, it correctly predicts the word cried as Verb.", "From the perspective of training, which minimizes J(θ), if Θ is an identity matrix, then it is a supervised model with ŷ as the supervision signal; on the other hand, if Θ is a uniform matrix, then the supervision signal becomes random and training becomes meaningless.", "Extending to Leverage Soft Predictions: Previous work shows that the soft predictions from source models can provide more information than the hard predictions (Hinton et al., 2015; Wu et al., 2020).", "Our novel approach can also easily leverage this information by simply replacing the one-hot pseudo labels with soft probability distributions from source models.", "The training objective becomes: J(θ) = −∏_{i=1}^{n} ∑_{y_i} P(y_i|x) ∑_{ŷ_i} P_s(ŷ_i|x) P(ŷ_i|y_i), where P_s is the source model's soft prediction.", "For simplicity, in the rest of this section, we introduce our approaches based on the setup of using one-hot pseudo labels, but all the approaches can be extended to leverage soft predictions in a similar way.", "In this subsection, we instead use a trainable matrix Φ to model P(ŷ|y).", "We initialize Φ in the same way as Θ.", "Assuming that, conditioning on y, x and ŷ are independent of each other, we find that the non-negative term of Equation (1) is a conditional marginal probability defined by a latent variable model in which y is the latent variable.", "In latent variable model training, we generally optimize the negative conditional log-likelihood, and the objective function becomes: J(θ, Φ) = −∑_{i=1}^{n} log ∑_{y_i} P(y_i|x) P(ŷ_i|y_i, Φ).", "Interpolation: In practice, given a pre-defined hyper-parameter λ, we combine the fixed P(ŷ_i|y_i, Θ) with the trainable P(ŷ_i|y_i, Φ) to get a new probability: P(ŷ_i|y_i, Λ) = λ P(ŷ_i|y_i, Θ) + (1 − λ) P(ŷ_i|y_i, Φ),", "where λ ∈ [0, 1] is a hyper-parameter and Λ is the combined matrix.", "If λ = 1, it denotes the minimum risk training.", "Otherwise, it denotes the latent variable model.", "By modeling the joint distribution over the pseudo labels which are predicted by U source models on the target unlabeled data, we can easily extend our latent variable model to the multi-source setting.", "The objective function becomes: J(θ, Λ) = −∑_{i=1}^{n} log ∑_{y_i} P(y_i|x) ∏_{u=1}^{U} P(ŷ_i^{(u)}|y_i, Λ_u).", "Our overall architecture of the latent variable model is depicted in Figure 1.", "3.4 Optimization: In this section, we propose a unified optimization scheme, which is based on the EM algorithm (Dempster et al., 1977), to learn the parameters of the two proposed approaches.", "Footnote 1: Another approach is to perform direct gradient descent optimization, which we find yields weaker results.", "We have a discussion on that in the analysis section.", "The EM algorithm is widely applied to learn parameters in a large family of models with latent variables such as the Gaussian mixture models.", "It is an iterative approach that has two steps in every iteration, which are the E-step and the M-step.", "In the E-step, it optimizes a posterior distribution of the latent variables.", "In the M-step, it estimates the parameters of the latent variable model according to the posterior distribution.", "As the single-source setup can be seen as a special case, we focus on the multi-source setup to derive the equations.", "We first introduce Q(y) = ∏_i Q(y_i) as a distribution over the latent variable y, and then we derive the upper bound of J(θ, Λ) as follows: J(θ, Λ) = −∑_{i=1}^{n} log ∑_{y_i} P(y_i|x) ∏_{u=1}^{U} P(ŷ_i^{(u)}|y_i, Λ_u) = −∑_{i=1}^{n} log ∑_{y_i} Q(y_i) [P(y_i|x) ∏_{u=1}^{U} P(ŷ_i^{(u)}|y_i, Λ_u)] / Q(y_i) ≤ −∑_{i=1}^{n} ∑_{y_i} Q(y_i) log([P(y_i|x) ∏_{u=1}^{U} P(ŷ_i^{(u)}|y_i, Λ_u)] / Q(y_i)) (2) = −∑_{i=1}^{n} E_{Q(y_i)} log[P(y_i|x) ∏_{u=1}^{U} P(ŷ_i^{(u)}|y_i, Λ_u)] + C, where C is a residual term and Q(y_i) denotes the probability that Q assigns to the value y_i at position i.", "The inequation above is derived from Jensen's inequality.", "To make the bound tight for particular θ and Λ, we derive Q(y_i) as: Q(y_i) ∝ P(y_i|x) ∏_{u=1}^{U} P(ŷ_i^{(u)}|y_i, Λ_u). (3)", "We sketch our strategy of parameter update in the t-th iteration as follows: E-step, we compute Q(y_i) using parameters θ and Λ from the (t−1)-th iteration; M-step, we update parameters θ and Λ together using a gradient-based approach by minimizing the upper bound above.", "Q(y_i) is fixed in this step and hence we minimize −∑_{i=1}^{n} E_{Q(y_i)} log[P(y_i|x) ∏_{u=1}^{U} P(ŷ_i^{(u)}|y_i, Λ_u)].", "We repeat the two steps alternately until convergence.", "We give an overall process for the multi-source setup with unlabeled target data in Algorithm 1.", "Algorithm 1 Multi-source transfer with latent variable model. 1: Input: unlabeled dataset of target T, U pretrained source models {M = M(1), ..., M(U)}, U trainable matrices {Φ = Φ(1), ..., Φ(U)} and U fixed matrices {Θ = Θ(1), ..., Θ(U)}, hyper-parameters τ and λ, maximal iterations E for the EM algorithm.", "2: Initialize: initialize Θ and Φ with the same hyper-parameter τ.", "Initialize {Λ = Λ(1), ..., Λ(U)} using Θ, Φ and λ.", "Initialize an empty pseudo label list Ŷ, an upper bound loss l_m = +∞, and an overall loss l_e = +∞.", "3: for u = 1, ..., U do 4: Use M(u) to obtain the hard/soft label sequence of the unlabeled data T and append the predictions to the list of pseudo label sequences Ŷ.", "5: end for 6: Concatenate the unlabeled data T with all pseudo label collections Ŷ to form a new training dataset T'.", "For inference, we use Q(y) to obtain the predicted labels.", "Following Wu et al. (2020), the source models are previously trained on their corresponding training data.", "We use the BIO scheme for the CoNLL and OntoNotes NER tasks and Aspect Extraction.", "We run each model three times and report the average accuracy for the POS tagging task and F1-score for the other tasks.", "Cross-Lingual Sequence Labeling: We choose three tasks to conduct the cross-lingual sequence labeling task, which are POS tagging, NER, and Aspect Extraction.", "For the POS tagging task, we use Universal Dependencies treebanks (UD) v2.4 and randomly select five languages together with the English dataset.", "The whole datasets are English (En), Catalan (Ca), Indonesian (Id), Hindi (Hi), Finnish (Fi), and Russian (Ru).", "For the Aspect Extraction task, we select the restaurant domain over subtask 1 in the SemEval-2016 shared task (Pontiki et al., 2016).", "For the NER task, we evaluate our models on the CoNLL 2002 and 2003 shared tasks (Tjong Kim Sang, 2002; Tjong Kim Sang and De Meulder, 2003).", "Cross-Domain Sequence Labeling: We use the English portion of OntoNotes (v5) (Hovy et al., 2006), which contains six domains: broadcast conversation (bc), broadcast news (bn), magazine (mz), newswire (nw), and web (wb).", "Single-source Setup: The following approaches are applicable for the single-source setup.", "DT: we use the pre-trained source model to directly predict the pseudo labels on the target unlabeled data.", "Hard: we use the pseudo labels from DT on the target unlabeled data to train a new model.", "Hard-Cat: we apply DT with all the source languages/domains, mix the resulting pseudo labels from all the sources on the unlabeled target data, and train a new model.", "Hard-Vote: we do majority voting at the token level on the pseudo labels from DT with each source and train a new model.", "Footnote 4: https://universaldependencies.org/ (table header residue: CoNLL NER / Aspect Extraction columns: English, German, Dutch, Spanish, Avg.).", "Both Setups: The following approaches are applicable for both single-/multi-source setups. KD-re: to fairly compare with the KD approach (Wu et al., 2020) in the same settings (such as the source model's cross-lingual ability), we re-implement the KD approach and adapt it to all tasks.", "MRT: our minimum risk training approach with the fixed matrix Θ, with soft or hard predictions.", "LVM: our latent variable model with parameter Λ (containing the fixed matrix Θ and the trainable matrix Φ), with soft or hard predictions.", "We also provide the reported results from existing approaches for reference.", "Due to different experiment configuration reasons, directly comparing our approaches to their reported results is generally not fair.", "For the CoNLL NER tasks, we provide the reported results from Wu et al. (2020).", "For the cross-domain sequence labeling tasks, we provide the reported results from Lan et al.
(2020) are significantly worse.", "We speculate the reason is that they leverage poor embeddings and different encoders (BiLSTM-CRF).", "KD-re outperforms our approaches on Ca and Id of POS tagging task on the single-source setting, but its advantage is not statistically significant.", "We conduct the analysis on the multi-source setting with soft predictions from sources for its better performance.", "Big Data Performance We experiment with our two models and the KD-re baseline on big target training data on the POS tagging task.", "We ran-86.5 87.5 88.5 1000 10000 100000 Acc KD MRT LVM Figure 2: The multi-source performance of Ca datasets by varying different sizes on the POS tagging task.", "We randomly select 1000, 10000, and 100000 sentences to train these three approaches, evaluate on the UD test set for each of the three languages respectively, and show the results in Figure", "2. It shows that our latent variable model outperforms the other two approaches over all the settings.", "Though KD outperform MRT with less than 10000 sentences, but MRT has comparable result with enough unlabeled data.", "Besides, with more unlabeled data used for training, each model further gains a considerable boost.", "Our two proposed approaches can also be optimized directly by any gradient-based approach, such as the AdamW optimizer (Loshchilov and Hut-ter, 2018).", "We use the two proposed approaches to compare the performance of the direct gradient-based training strategy and the EM algorithm.", "We conduct the experiments on our two proposed approaches on CoNLL NER task on the multi-source setting.", "We show the results in Table", "5. It shows that the EM algorithm outperforms direct gradient-based training for our approaches, which is slightly different from previous findings (Berg-Kirkpatrick et al., 2010).", "Comparison to Hard EM In this part, we compare our optimization strategy (soft-EM) with the hard-EM approach.", "Instead of computing a dense vector for Q ( y i ) , hard-EM computes a one-hot vector.", "We conduct the experiments on our two proposed approaches on the CoNLL NER task on the multi-source setting.", "The results are shown in Table", "6. It shows that soft-EM gains slightly improvement over hard-EM on the MRT approach, but differs significantly from hard-EM on our LVM approach.", "Impact of Matrix We analyze the relation between the performance and different initialization of .", "We experiment with the MRT approach in the single-source setup with soft predictions on NER tasks and Figure 3 shows the results.", "The best value of is 2 for De and 3 for the others (resulting in = 0 . 43 and 0 . 67 respectively 6 ), which shows that the uncertainties introduced by a smooth can effectively boost the model's performance.", "On the other hand, setting to a nearly identity matrix with = 10 leads to worse scores.", "Cross-lingual/domain Sequence Labeling Recent works on cross-lingual transfer mainly have two scenarios: the single-source cross-lingual transfer (Yarowsky and Ngai, 2001; Wang and Manning, 2014; Huang et al., 2019) and the multi-source cross-lingual transfer (Tackstrom et al., 2012; Guo et al., 2018; Rahimi et al., 2019; Hu et al., 2021).", "Wu et al. (2020) propose a knowledge distillation approach to further leveraging unlabeled target data and achieve the state-of-the-art results.", "Hu et al. 
(2021) propose a multi-view framework to selectively transfer knowledge from multiple sources by utilizing a small amount of labeled dataset.", "Cross-domain adaption is widely studied (Steedman et al., 6 The CoNLL NER datasets have 11 labels (9 entity labels, a padding label and an ending label).", "2003).", "Existing works include bootstrapping approaches (Ruder and Plank, 2018), mixture-of-experts (Guo et al., 2018; Wright and Augenstein, 2020), and consensus network (Lan et al., 2020).", "Other previous work (Kim et al., 2017; Guo et al., 2018; Huang et al., 2019) utilized labeled data in the source domain to learn desired information.", "However, our proposed approaches do not require any source labeled data or parallel texts.", "Contextual Multilingual Embeddings Embeddings like mBERT (Devlin et al., 2019), XLM (CONNEAU and Lample, 2019) and XLM-R (Con-neau et al., 2020) which are trained on many languages, make great progress on cross-lingual learning for multiple NLP tasks.", "Recent works (Wu and Dredze, 2019; Pires et al., 2019) show the strong cross-lingual ability of the contextual multilingual embeddings.", "In this paper, we propose two approaches to the zero-shot sequence labeling problem.", "Our MRT approach uses a fixed matrix to model the relations between the predicted labels from the source models and the true labels.", "Our LVM approach uses trainable matrices to model these label relations.", "We extensively verify the effectiveness of our approaches on both single-source and multi-source transfer over both cross-lingual and cross-domain sequence labeling problems.", "Experiments show that MRT and LVM generally bring significant improvements over previous state-of-the-art approaches on twenty-one datasets.", "by Alibaba Group through Alibaba Innovative Research Program." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "objective", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "objective", "objective", "objective", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "other", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "objective", "method", "method", "result", "abstain", "other" ]
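The record above defines the fixed matrix by scaling an identity matrix with a hyper-parameter and applying a column-wise Softmax, and decomposes the risk by position as J(θ) = -prod_i sum_{y_i} P(y_i|x) P(ŷ_i|y_i). The NumPy sketch below illustrates both pieces under that reading; all names are ours, and a log-space implementation would be preferable for long sequences.

import numpy as np

def build_theta(num_labels: int, scale: float) -> np.ndarray:
    # Identity times the hyper-parameter, then Softmax over every column,
    # so column y gives a distribution over pseudo labels given true label y.
    logits = scale * np.eye(num_labels)
    exp = np.exp(logits)
    return exp / exp.sum(axis=0, keepdims=True)

def mrt_loss(p_model: np.ndarray, pseudo: np.ndarray, theta: np.ndarray) -> float:
    # p_model: (n, K) rows holding P(y_i|x); pseudo: (n,) hard pseudo labels.
    # theta[pseudo][i, y] = P(pseudo_i | y_i = y), so the row-wise dot product
    # is the per-position agreement term of the decomposed risk.
    per_position = (p_model * theta[pseudo]).sum(axis=1)
    return -float(np.prod(per_position))

theta = build_theta(num_labels=3, scale=2.0)            # diagonal is about 0.79 for K = 3
p_model = np.array([[0.7, 0.2, 0.1], [0.1, 0.6, 0.3]])  # toy two-token sentence
print(mrt_loss(p_model, np.array([0, 1]), theta))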
[ "While deep learning is a powerful tool for natural language processing (NLP) problems, successful solutions to these problems rely heavily on large amounts of annotated samples.", "However, manually annotating data is expensive and time-consuming.", "Active Learning (AL) strategies reduce the need for huge volumes of labeled data by iteratively selecting a small number of examples for manual annotation based on their estimated utility in training the given model.", "In this paper, we argue that since AL strategies choose examples independently, they may potentially select similar examples, all of which may not contribute significantly to the learning process.", "Our proposed approach, Active² Learning (A²L), actively adapts to the deep learning model being trained to eliminate such redundant examples chosen by an AL strategy.", "We show that A²L is widely applicable by using it in conjunction with several different AL strategies and NLP tasks.", "We empirically demonstrate that the proposed approach is further able to reduce the data requirements of state-of-the-art AL strategies by 3-25% on an absolute scale on multiple NLP tasks while achieving the same performance with virtually no additional computation overhead.", "Active Learning (AL) (Freund et al., 1997; McCallum and Nigam, 1998) reduces the need for large quantities of labeled data by intelligently selecting unlabeled examples for expert annotation in an iterative process.", "Many Natural Language Processing (NLP) tasks like sequence tagging (NER, POS) and Neural Machine Translation (NMT) are very data-intensive and require a meticulous, time-consuming, and costly annotation process.", "On the other hand, unlabeled data is practically unlimited.", "Due to this, many researchers have explored applications of active learning for NLP (Thompson et al., 1999; Figueroa et al., 2012).", "A general AL method proceeds as follows:", "(i) The partially trained model for a given task is used to (possibly incorrectly) annotate the unlabeled examples.", "(ii) An active learning strategy selects a subset of the newly labeled examples via a criterion that quantifies the perceived utility of examples in training the model.", "(iii) The experts verify/improve the annotations for the selected examples.", "(iv) These examples are added to the training set, and the process repeats.", "AL strategies differ in the criterion used in step (ii).", "We claim that all AL strategies select redundant examples in step (ii).", "If one example satisfies the selection criterion, then many other similar examples will also satisfy it (see the next paragraph for details).", "As the examples are selected independently, AL strategies redundantly choose all of these examples even though, in practice, it is enough to label only a few of them (ideally just one) for training the model.", "This leads to higher annotation costs, wastage of resources, and reduces the effectiveness of AL strategies.", "This paper addresses this problem by proposing a new approach called A²L (read as active-squared learning) that further reduces the redundancies of existing AL strategies.", "Any approach for eliminating redundant examples must have the following qualities:", "(i) The redundancy should be evaluated in the context of the trained model.", "(ii) The approach should apply to a wide variety of commonly used models in NLP.", "(iii) It should be compatible with several existing AL strategies.", "The first point merits more explanation.", "As a model is trained,
"As a model is trained, depending on the downstream task, it learns to focus on certain properties of the input.", "Examples that share these properties (for instance, the sentence structure) are similar from the model's perspective.", "If the model is confused about one such example, it will likely be confused about all of them.", "We refer to a similarity measure that is computed in the context of a model as a model-aware similarity (Section 3.1).", "(i) We propose a Siamese twin (Bromley et al., 1994; Mueller and Thyagarajan, 2016) based method for computing model-aware similarity to eliminate redundant examples chosen by an AL strategy.", "This Siamese network actively adapts itself to the underlying model as the training progresses.", "We then use clustering based on similarity scores to eliminate redundant examples.", "(ii) We develop a second, computationally more efficient approach that approximates the first one with a minimal drop in performance by avoiding the clustering step.", "Both of these approaches have the desirable properties mentioned above.", "(iii) We experiment with several AL strategies and NLP tasks to empirically demonstrate that our approaches are widely applicable and significantly reduce the data requirements of existing AL strategies while achieving the same performance.", "To the best of our knowledge, we are the first to identify the importance of model-aware similarity and exploit it to address the problem of redundancy in AL. 2 Related Work Active learning has a long and successful history in the field of machine learning (Dasgupta et al., 2009; Awasthi et al., 2017).", "However, as the learning models have become more complex, especially with the advent of deep learning, the known theoretical results for active learning are no longer applicable (Shen et al., 2018).", "This has prompted a diverse range of heuristics to adapt the active learning framework to deep learning models (Shen et al., 2018).", "Many AL strategies have been proposed (Sha and Saul, 2007; Haffari et al., 2009; Bloodgood and Callison-Burch, 2010; Blundell et al., 2015; Gal and Ghahramani, 2016a), however, since they choose the examples independently, the problem of redundancy (Section 1) applies to all.", "We experiment with various NLP tasks like named entity recognition (NER) (Nadeau and Sekine, 2007), part-of-speech tagging (POS) (Marcus et al., 1993), neural machine translation (NMT) (Hutchins, 2004; Nepveu et al., 2004; Bahdanau et al., 2014; Cho et al., 2014; Sutskever et al., 2014; Ortiz-Martínez, 2016) and so on (Landes et al., 1998; Tjong Kim Sang and Buchholz, 2000).", "The tasks chosen by us form the backbone of many practical problems and are known to be computationally expensive during both training and inference.", "Many deep learning models have recently advanced the state of the art for these tasks (Bahdanau et al., 2014; Lample et al., 2016; Siddhant and Lipton, 2018).", "Our proposed approach is compatible with any NLP model, provided it supports the usage of an AL strategy.", "Existing approaches have used model-independent similarity scores to promote diversity in the chosen examples.", 
"For instance, in Chen et al. (2015), the authors use cosine similarity to pre-calculate pairwise similarity between examples.", "We instead argue in favor of model-aware similarity scores and learn an expressive notion of similarity using neural networks.", "We compare our approach with a modified version of this baseline using cosine similarity on InferSent embeddings (Conneau et al., 2017).", "We use M to denote the model being trained for a given task.", "M has a module called encoder for encoding the input sentences.", "For instance, the encoder in M may be modeled by an LSTM (Hochreiter and Schmidhuber, 1997).", "A measure of similarity between examples is required to discover redundancy.", "The simplest solution is to compute the cosine similarity between input sentences (Chen et al., 2015; Shen et al., 2018) using, for instance, the InferSent encodings (Conneau et al., 2017).", "However, sentences that have a low cosine similarity may still be similar in the context of the downstream task.", "Model M has no incentive to distinguish among such examples.", "A good strategy is to label a diverse set of sentences from the perspective of the model.", "For example, it is unnecessary to label sentences that use different verb forms but are otherwise similar if the task is agnostic to the tense of the sentence.", "A straightforward extension of cosine similarity to the encodings generated by model M achieves this.", "However, a simplistic approach like this would likely be incapable of discovering complex similarity patterns in the data.", "Next, we describe two approaches that use more expressive model-aware similarity measures.", "In this approach, we use a Siamese twin network (Bromley et al., 1994) to compute the pairwise similarity between encodings obtained from model M.", "A Siamese twin network consists of an encoder (called the Siamese encoder) that feeds on the output of model M's encoder.", "The outputs of the Siamese encoder are used for computing the similarity between each pair of examples a and b as: sim(a, b) = exp(−||o_a − o_b||²), (1) where o_a and o_b are the outputs of the Siamese encoder for sentences a and b respectively. [Algorithm 1 (Active² Learning). Data: task dataset D1; auxiliary similarity dataset D2. Input: D ← 2% of D1; D̄ ← D1 \ D (unlabeled data). Output: labeled data. Initialization: M ← TRAIN(D); MA²L ← TRAIN(M(D2)); for i ← 1 to l do: S ← AL(D̄) (top 2% confused samples); if Model-Aware Siamese: for each pair (s_m, s_n) in S do S[m, n] ← MA²L(s_m, s_n); R ← CLUSTER(S); else (Integrated Clustering): R ← MA²L(S); R ← ANNOTATE(R); D ← D ∪ R; M ← RETRAIN(D).]", "Let N denote the number of examples chosen by an AL strategy.", "We use the Siamese network to compute the entries of an N × N similarity matrix S where the entry S_ab = sim(a, b).", "We then use the spectral clustering algorithm (Ng et al., 2002) on the similarity matrix S to group similar examples.", "A fixed number of examples from each cluster are added to the training dataset after annotation by experts.", "We train the Siamese encoder to predict the similarity between sentences from the SICK (Sentences Involving Compositional Knowledge) dataset (Marelli et al., 2014) using mean squared error.", "This dataset contains pairs of sentences with manually annotated similarity scores.", "The sentences are encoded using the encoder in M and then passed on to the Siamese encoder for computing similarities.", "The encoder in M is kept fixed while training the Siamese encoder.", 
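The Model-Aware Siamese selection step above lends itself to a compact sketch. The following is a minimal illustration, not the authors' released implementation: it assumes the Siamese-encoder outputs have already been computed as row vectors, applies equation (1) to build the similarity matrix, and groups examples with scikit-learn's spectral clustering; the cluster count, the examples-per-cluster budget, and the pick-first-members rule are illustrative choices.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def similarity_matrix(outputs: np.ndarray) -> np.ndarray:
    """Equation (1): sim(a, b) = exp(-||o_a - o_b||^2).

    `outputs` holds the Siamese-encoder output for each of the N
    examples chosen by the AL strategy, one row per example.
    """
    # Pairwise squared Euclidean distances, computed once for all pairs:
    # O(N^2) matrix entries but only O(N) encoder forward passes.
    sq_norms = (outputs ** 2).sum(axis=1)
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * outputs @ outputs.T
    return np.exp(-np.maximum(sq_dists, 0.0))

def select_representatives(outputs, n_clusters=10, per_cluster=1, seed=0):
    """Cluster the AL-selected candidates and keep a fixed number per cluster."""
    S = similarity_matrix(outputs)
    labels = SpectralClustering(
        n_clusters=n_clusters, affinity="precomputed", random_state=seed
    ).fit_predict(S)
    keep = []
    for c in range(n_clusters):
        members = np.where(labels == c)[0]
        keep.extend(members[:per_cluster].tolist())  # send these for annotation
    return keep

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    encodings = rng.normal(size=(40, 128))  # stand-in Siamese outputs
    print(select_representatives(encodings, n_clusters=5, per_cluster=2))
```

Computing all N encoder outputs once before forming the matrix keeps the encoder cost linear in N, which matches the efficiency note that follows.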
"The trained Siamese encoder is then used for computing similarity between sentences selected by an AL strategy for the given NLP task as described above.", "As M is trained over time, the distribution of its encoder output changes, and hence we periodically retrain the Siamese network to sustain its model-awareness.", "The number of clusters and the number of examples drawn from each cluster are user-specified hyper-parameters.", "The similarity computation can be done efficiently by computing the output of the Siamese encoder for all N examples before evaluating equation 1, instead of running the Siamese encoder O(N²) times.", "The clustering algorithm runs in O(N³) time.", "For an AL strategy to be useful, it should select a small number of examples to benefit from interactive and intelligent labeling.", "We expect N to be small for most practical problems, in which case the computational complexity added by our approach would only be a small fraction of the overall computational complexity of training the model with active learning (see Figure 1).", "While the approach described in Section 3.2 works well for small to moderate values of N, it suffers from a computational bottleneck when N is large.", "We integrate the clustering step into the similarity computation step to remedy this (see Figure 1) and call the resultant approach the Integrated Clustering Model (Int Model).", "Here, the output of model M's encoder is fed to a clustering neural network C that has K output units with the softmax activation function.", "These units correspond to the K clusters, and each example is directly assigned to one of the clusters based on the softmax output.", "To train the network C, we choose a pair of similar examples (say a and b) and randomly select a negative example (say c).", "We experimented with both the SICK and Quora Pairs (footnote 3) datasets.", "All examples are encoded via the encoder of model M and then passed to network C.", "The unit with the highest probability value for a is treated as the ground-truth class for b.", "Minimizing the objective given below maximizes the probability of b belonging to its ground truth class while minimizing the probability of c belonging to the same class: L(a, b, c) = −λ1 log p^b_{i_a} − λ2 log(1 − p^c_{i_a}) + λ3 Σ_{k=1}^{K} p^b_k log p^b_k .", "(2) Here λ1, λ2, and λ3 are user-specified hyperparameters, p^x_j is the softmax output of the j-th unit for example x, j = 1, 2, ..., K, x = a, b, c, and i_a = argmax_{j ∈ {1, 2, ..., K}} p^a_j.", "The third term encourages the utilization of all the K units across examples in the dataset.", 
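A sketch of the Int Model's training objective, following equation (2) as reconstructed above. The hyperparameter symbols λ1-λ3 were garbled in the source, so both the names and the example values here are assumptions; K and the random softmax inputs are placeholders for the clustering network's actual outputs.

```python
import torch
import torch.nn.functional as F

def clustering_loss(p_a, p_b, p_c, lam1=1.0, lam2=1.0, lam3=0.1):
    """Equation (2) for one (anchor a, positive b, negative c) triple.

    p_a, p_b, p_c: softmax outputs of the clustering network C over the
    K cluster units for examples a, b, and c (1-D tensors of length K).
    """
    i_a = torch.argmax(p_a)                         # a's most likely cluster
    term1 = -lam1 * torch.log(p_b[i_a])             # pull b into a's cluster
    term2 = -lam2 * torch.log(1.0 - p_c[i_a])       # push c out of that cluster
    term3 = lam3 * torch.sum(p_b * torch.log(p_b))  # third term over the K units
    return term1 + term2 + term3

# Toy usage with random softmax vectors standing in for network C's outputs.
K = 8
p = [F.softmax(torch.randn(K), dim=0) for _ in range(3)]
print(clustering_loss(*p).item())
```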
"As before, a trained network C is used for clustering examples chosen by an AL strategy, and we select a fixed number of examples from each cluster for manual annotation.", "It is important to note that:", "(i) These methods are not AL strategies.", "Rather, they can be used in conjunction with any existing AL strategy.", "Moreover, given a suitable Siamese encoder or clustering network C, they apply to any model M.", "(ii) Our methods compute model-aware similarity since the input to the Siamese or the clustering network is encoded using the model M.", "The proposed networks also adapt to the underlying model as the training progresses.", "Algorithm 1 describes our general approach called Active² Learning.", "We establish the effectiveness of our approaches by demonstrating that they:", "(i) work well across a variety of NLP tasks and models,", "(ii) are compatible with several popular AL strategies, and", "(iii) further reduce the data requirements of existing AL strategies, while achieving the same performance.", "In particular, we experiment with two broad categories of NLP tasks:", "(a) Sequence Tagging", "(b) Neural Machine Translation.", "Table 1 lists these tasks and information about the corresponding datasets (including the two auxiliary datasets for training the Siamese network (Section 3.2)) used in our experiments.", "We begin by describing the AL strategies for the two kinds of NLP tasks.", "Margin-based strategy: Let s(y) = P(Y = y | X = x) be the score assigned by a model M with parameters θ to output y for a given example x. Margin is defined as the difference in scores obtained by the best scoring output y and the second best scoring output y′, i.e.: M_margin = max_y s(y) − max_{y′ ≠ y_max} s(y′), (3) where y_max = argmax_y s(y).", "The strategy selects examples for which M_margin ≤ τ1, where τ1 is a hyper-parameter.", "We use Viterbi's algorithm (Ryan and Nudd, 1993) to compute the scores s(y).", "(Footnote 1) Codes for the experiments are available at the following github link: https://github.com/parag1604/A2L.", "Entropy-based strategy: All the NLP tasks that we consider require the model M to produce an output for each token in the sentence.", "Let x be an input sentence that contains n(x) tokens and define s_j = max_{o ∈ O} P(y_j = o | X = x) to be the probability of the most likely output for the j-th token in x.", "Here O is the set of all possible outputs and y_j is the output corresponding to the j-th token in x.", "We define the normalized entropy score as: M_entropy = −(1/n(x)) Σ_{j=1}^{n(x)} s_j(y) log s_j(y).", "A length normalization n(x) is added to avoid bias due to the example length as it may be undesirable to annotate longer length examples (Claveau and Kijak, 2017).", "The strategy selects examples with M_entropy ≥ τ2, where τ2 is a hyper-parameter.", 
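The margin and entropy scores just defined reduce to a few lines of code. The toy values and the thresholds below are hypothetical, and the threshold symbols τ1 and τ2 are reconstructions of garbled characters in the source; a real tagger would obtain s(y) from Viterbi decoding as described above.

```python
import math

def margin_score(scores):
    """Equation (3): gap between the best and second-best output scores.
    `scores` maps each candidate output y to s(y); a low margin means the
    model is uncertain, so low-margin examples are selected for labeling."""
    top2 = sorted(scores.values(), reverse=True)[:2]
    return top2[0] - top2[1]

def entropy_score(token_probs):
    """Length-normalized entropy score over per-token maxima.
    `token_probs` holds s_j = max_o P(y_j = o | x) for each of n(x) tokens."""
    n = len(token_probs)
    return -sum(s * math.log(s) for s in token_probs) / n

# Toy usage with hypothetical thresholds tau1 and tau2.
tau1, tau2 = 0.2, 0.05
print(margin_score({"y1": 0.55, "y2": 0.40, "y3": 0.05}) <= tau1)  # select?
print(entropy_score([0.9, 0.8, 0.95, 0.6]) >= tau2)                # select?
```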
"Bayesian Active Learning by Disagreement (BALD): Due to stochasticity, models that use dropout (Srivastava et al., 2014) produce a different output each time they are executed.", "BALD (Houlsby et al., 2011) exploits this variability in the predicted output to compute model uncertainty.", "Let y^(t) denote the best scoring output for x in the t-th forward pass, and let N be the number of forward passes with a fixed dropout rate, then: M_bald = 1 − count(mode(y^(1), ..., y^(N))) / N.", "Here the mode(.) operation finds the output which is repeated most often among y^(1), ..., y^(N), and the count(.) operation counts the number of times this output was encountered.", "This strategy selects examples with M_bald ≥ τ3 (hyper-parameter).", "Least Confidence (LC): This strategy estimates the uncertainty of a trained model on a source sentence x by calculating the conditional probability of the prediction y conditioned on the source sentence (Lewis and Catlett, 1994).", "Coverage Sampling (CS): A translation model is said to cover the source sentence if it translates all of its tokens.", "Coverage is estimated by mapping a particular source token to its appropriate target token, without which the model may suffer from under-translation or over-translation issues (Tu et al., 2016).", "Peris and Casacuberta (2018) proposed to use translation coverage as a measure of uncertainty by: M_CS = Σ_{j=1}^{n(x)} log(min(Σ_{i=1}^{n(y)} α_{i,j}, 1)) / n(x). (7) Here α_{i,j} denotes the attention probability calculated by the model for the j-th source word in predicting the i-th target word.", "It can be noted that the coverage score will be 0 for samples for which the model almost fully covers the source sentences.", "Attention Distraction Sampling (ADS): Peris and Casacuberta (2018) claimed that in translating an uncertain sample, the model's attention mechanism would be distracted (dispersed throughout the sentence).", "Such samples yield attention probability distribution with light tails (e.g., uniform distribution), which can be obtained by taking the kurtosis of the attention weights for each target token y_i.", "where 1/n(x) is the mean of the distribution of the attention weights (for a target word) over the source words.", "The kurtosis value will be lower for distributions with light tails, so the average of the negative [Figure 1: Comparison of time taken for one data selection step in the NMT task by the Model-Aware (MA) Siamese and Integrated Clustering (Int) Model across different AL strategies.]", "kurtosis values for all words in the target sentence is used as the distraction score.", 
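Looping back to the BALD score defined above: since it is just mode-counting over stochastic forward passes, it can be sketched in a few lines. The fake predictor below stands in for a dropout-enabled model; only the mode/count arithmetic follows the formula for M_bald, and the pass count is an illustrative choice.

```python
import random
from collections import Counter

def bald_score(model_forward, x, n_passes=10):
    """M_bald = 1 - count(mode(y_1..y_N)) / N over N stochastic passes.

    `model_forward` is any callable that runs the model with dropout left
    *on* and returns the best-scoring output for x (e.g., a tag sequence
    serialized as a tuple so it is hashable).
    """
    outputs = [model_forward(x) for _ in range(n_passes)]
    most_common_count = Counter(outputs).most_common(1)[0][1]
    return 1.0 - most_common_count / n_passes

# Toy usage with a fake stochastic predictor standing in for MC dropout.
random.seed(0)
fake = lambda x: ("PER", "O") if random.random() < 0.7 else ("O", "O")
print(bald_score(fake, "some sentence"))  # higher = more disagreement
```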
"For sequence tagging, we use two kinds of architectures: a CNN-BiLSTM-CRF model (CNN for character-level encoding and BiLSTM for word-level encoding) and a BiLSTM-BiLSTM-CRF model (BiLSTM for both character-level and word-level encoding) (Lample et al. (2016); Siddhant", "and Lipton (2018)).", "For the translation task, we use an LSTM based encoder-decoder architecture with Bahdanau attention (Bahdanau et al., 2014).", "These models were chosen for their performance and ease of implementation.", "The Siamese network used for model-aware similarity computation (Section 3.2) consists of two bidirectional LSTM (BiLSTM) encoders.", "We pass each sentence in the pair from the SICK dataset to model M and feed the resulting encodings to the Siamese BiLSTM encoder.", "The output is a concatenation of terminal hidden states of the forward and backward LSTMs, which is used to compute the similarity score using (1).", "As noted before, we keep model M fixed while training the Siamese encoders and use the trained Siamese encoders for computing similarity between examples chosen by an AL strategy.", "We maintain the model-awareness by retraining the Siamese network after every 10 iterations.", "The architecture of the clustering model C (Section 3.3) is similar to that of the Siamese encoder.", "Additionally, it has a linear layer with a softmax activation function that maps the concatenation of terminal hidden states of the forward and backward LSTMs to K units, where K is the number of clusters.", "To assign an input example to a cluster, we first pass it through the encoder in M and feed the resulting encodings to the clustering model C.", "The example is assigned to the cluster with the highest softmax output.", "This network is also retrained after every 10 iterations to retain model-awareness.", "The initial data splits used for training the model M were set at 2% of randomly sampled data for Sequence Tagging (20% for NMT).", "These are in accordance with the splitting techniques used in the existing literature on AL (Siddhant and Lipton, 2018; Liu et al., 2018).", "The model is then used to provide input to train the Siamese/Clustering network using the SICK/Quora Pairs.", "At each iteration, we gradually add another 2% of data for sequence tagging (5% for NMT) by retrieving low confidence examples using an AL strategy, followed by clustering to extract the most representative examples.", "We average the results over five independent runs with randomly chosen initial splits.", "[Hyperparameters details in Appendix A].", "We claim that A²L mitigates the redundancies in the existing AL strategies by working in conjunction with them.", "We validate our claims by comparing our approaches with three baselines that highlight the importance of various components.", "Cosine: Clustering is done based on cosine similarity between last output encodings (corresponding to sentence length) from the encoder in M.", "Although this similarity computation is model-aware, it is simplistic and shows the benefit of using a more expressive similarity measure.", "None: In this baseline, we use the AL strategy without applying Active² Learning to remove redundant examples.", "This validates our claim about redundancy in examples chosen by AL strategies.", "Random: No active learning is used, and random examples are selected at each time.", "We perform ablation studies to demonstrate the utility of model-awareness using these baselines:", "Infersent: Clustering is done based on cosine similarity between sentence embeddings (Chen et al., 2015) obtained from a pre-trained InferSent model (Conneau et al., 2017).", "This similarity computation is not model-aware and shows the utility of model-aware similarity computation.", 
"Iso Siamese: To show that the Siamese network alone is not sufficient and model-awareness is needed, in this baseline, we train the Siamese network by directly using GloVe embeddings of the words as input rather than using output from (footnote 5: We process the dataset to use only those sentences which are present in at least 5 other pairs.", "We retrieve 16000 sets, each with a source sentence and 5 other samples (comprising both positive and negative labels).", "An additional 1000 sets were generated for evaluation.)", "the model M's encoder.", "This similarity, which is not model-aware, is then used for clustering.", "Figure 2 compares the performance of our methods with baselines.", "It shows the test-set metric on the y-axis against the percentage of training data used on the x-axis for all tasks.", "See Figures 7 and 8 in the Appendix for additional results.", "1. As shown in Figure 2, our approach consistently outperforms all baselines on all tasks.", "Note that one should observe how fast the performance increases with the addition of training data (and not just the final performance) as we are trying to evaluate the effect of adding new examples.", "Our ablation studies in Figure 3 show the utility of using model-aware similarity.", "2. In sequence tagging, we match the performance obtained by training on the full dataset using only a smaller fraction of the data (3-25% less data as compared to state-of-the-art AL strategies) (Table 2).", "On a large dataset in the NMT task (Europarl), A²L takes 4300 sentences fewer than the Least Confidence AL strategy to reach a BLEU score of 12.", "3. While comparing different AL strategies is not our motive, Figure 2 also demonstrates that one can achieve performance comparable to a complex AL strategy like BALD, using simple AL strategies like margin and entropy, by using the proposed A²L framework.", "4. Additionally, from Figure 1, it can be observed that for one step of data selection:", "(i) The proposed MA Siamese model adds minimal overhead to the overall AL pipeline since it takes less than 5 additional seconds (1/12 of the time taken for ALS);", "(ii) By approximating the clustering step, the Integrated Clustering (Int) Model further reduces the overhead down to 2 seconds.", "However, owing to this approximation, MA Siamese is observed to perform slightly better than the Int Model (Fig 3).", "A comparison of training time for various stages of the A²L pipeline is provided in Figure 4. 
We wish to state that our approach should be evaluated not in terms of the gain in the F1 score but [Figure 5 (best viewed in color): Qualitative case study to convey the notion of redundancy and the model-aware similarity.]", "in terms of the reduction in data required to achieve the same (3-25% on multiple datasets).", "More importantly, this improvement comes at a negligible computation overhead cost.", "The reported improvements are not relative with respect to any baseline but represent an absolute value and are very significant in the context of similar performance improvements reported in the literature.", "In Figure 5, we provide a qualitative case study that demonstrates the problem of redundancy.", "This paper shows that one can further reduce the data requirements of Active Learning strategies", "by proposing a new method, A²L, which uses a model-aware-similarity computation.", "We empirically demonstrated that our proposed approaches consistently perform well across many tasks and AL strategies.", "We compared the performance of our approach with strong baselines to ensure that the role of each component is properly understood.", "This work was funded by British Telecom India Research Center project on Advanced Chatbot." ]
[ "abstain", "abstain", "abstain", "objective", "objective", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "objective", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "other", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "objective", "method", "other" ]
[ "Knowledge is now starting to power neural dialogue agents.", "At the same time, the risk of misinformation and disinformation from dialogue agents also rises.", "Verifying the veracity of information from formal sources are widely studied in computational fact checking.", "In this work, we ask: How robust are fact checking systems on claims in colloquial style?", "We aim to open up new discussions in the intersection of fact verification and dialogue safety.", "In order to investigate how fact checking systems behave on colloquial claims, we transfer the styles of claims from FEVER (Thorne et al., 2018) into colloquialism.", "We find that existing fact checking systems that perform well on claims in formal style significantly degenerate on colloquial claims with the same semantics.", "Especially, we show that document retrieval is the weakest spot in the system even vulnerable to filler words, such as yeah and you know .", "The document recall of WikiAPI retriever (Hanselowski et al., 2018) which is 90.0% on FEVER, drops to 72.2% on the colloquial claims.", "We compare the characteristics of colloquial claims to those of claims in formal style, and demonstrate the challenging issues in them.", "Recently, knowledge has been starting to power neural dialogue agents (Moghe et al., 2018; Zhou et al., 2018b; Ghazvininejad et al., 2018; Qin et al., 2019; Gopalakrishnan et al., 2019), being equipped with Wikipedia (Dinan et al., 2019b), news (Gopalakrishnan et al., 2019), domain spe-cific knowledge-base (Eric and Manning, 2017), and commonsense (Zhou et al., 2018a; Young et al., 2018; Wu et al., 2020).", "However, the use of knowledge inevitably put dialogue agents in new jeopardy.", "For example, recent workshop on safety for conversational AI (Dinan et al., 2020b) introduced Equal contribution an example of such risk: Bickmore et al. (2018) asked participants to query conversational agents for advice in situations where medical information is needed.", "Then, internist and pharmacist judged the actions that the participants would take based on the advice.", "Assessments revealed that agents often deliver incorrect medical information that may cause lethal consequences.", "A bigger threat may be the abuse of dialogue agents to deliberately distribute disinformation.", "What would happen if knowledge-powered agents are tweaked to massively generate false claims on online communities?", "The impact of such fake news can be critical as they quickly spread through social media (Shu et al., 2017).", "The chatbot Tay's shut down due to malicious attempts show the imminent danger of abuse (Wolf et al., 2017).", "Verifying the integrity of a given piece of information has been studied in the field of computational fact checking.", "Thorne et al. (2018) introduce an annotated dataset FEVER for fact checking based on Wikipedia.", "Augenstein et al. (2019) collect claims on fact checking websites and release the MultiFC dataset.", "Jiang et al. (2020) collect a dataset requiring many-hop evidence extraction from Wikipedia.", "Wadden et al. (2020) collect a dataset of scientific claims to be verified.", "Most claims of existing datasets are taken from formal texts, such as news, academic papers, and Wikipedia.", "These claims tend to be concise and structured: Beautiful was number two on the Billboard Hot 100 in 2003 .", "On the other hand, claims or information that we encounter in dialogues are more unstructured and informal: The song Beautiful is great! 
"On the other hand, claims or information that we encounter in dialogues are more unstructured and informal: The song Beautiful is great! It even reached number two on the Hot 100 in 2003, you know?", "For improving the applicability of fact checking systems, they must also be robust for verifying the claims in dialogues.", "Unfortunately, threats regarding misinformation and disinformation from dialogue agents remain understudied.", "Research on dialogue safety mainly has focused on making dialogue agents robust to adversarial attacks (Dinan et al., 2019a), and preventing dialogue agents from generating offensive or biased responses (Henderson et al., 2018; Sap et al., 2019; Xu et al., 2020).", "In this work, we aim to investigate how fact checking systems behave when verifying claims in dialogue style, rather than claims from news outlets, scientific articles, or Wikipedia.", "Colloquial claims are different in several aspects compared to claims from formal sources.", "(i) They tend to also include filler words, casual comments, or personal feelings which do not require verification.", "(ii) Since claims in colloquial language are less precise than formal claims, correctly using the context in claims becomes important to disambiguate them.", "We demonstrate that these features make existing fact checking systems have difficulties in verifying colloquial claims.", "We use English datasets for the investigation in this work.", "Our major contributions of this work can be outlined as follows: (1) We open up new discussions in the intersection of fact verification and dialogue safety: how to verify claims in colloquial language, compared to previous works that solely focus on the claims in formal style (e.g., news, academic papers, Wikipedia).", "(2) For this study, we curate colloquial claims by transferring the styles of claims in the existing fact checking dataset of FEVER (Thorne et al., 2018).", "For style transfer, we finetune a pretrained dialogue model with a knowledge-grounded dialogue dataset and apply additional filtering to compensate for the quality of output.", "(3) We show that the existing fact checking systems that perform well on claims in formal style significantly degenerate on colloquial ones with the same semantics.", "We analyze the performance drop and show document retrieval is the weakest spot in the system.", "(4) We identify the challenging characteristics of colloquial claims:", "(i) they often involve expressions that are not verifiable (e.g., filler words or personal feelings) and", "(ii) they include ambiguity inside the claim that necessitates better understanding of the context.", "We release the code and the curated colloquial claims set.", "[Table 1 example] FEVER (Thorne et al., 2018) Claim: The iPhone 4 is a dial telephone.", "FEVER (Thorne et al., 2018) is a fact checking benchmark dataset based on Wikipedia.", "Its fact checking pipeline has become one of the standards followed by many (Hanselowski et al., 2018; Nie et al., 2019; Zhou et al., 2019; Liu et al., 2020; Zhong et al., 2020; Jiang et al., 2020).", "The pipeline comprises three stages: document retrieval, evidence selection, and claim verification.", "For a given claim to be verified, the system first retrieves the related documents from the pool.", "Next, among the returned documents, the system selects the most suitable sentences for evidence.", "Finally, based on the evidence sentences the system classifies the claim's veracity with three classes: SUPPORTED, REFUTED (contradicted by the evidence), and NOTENOUGHINFO (cannot be determined by the evidence).", "An example from FEVER is shown in Table 1.", 
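To make the three-stage pipeline concrete, here is a toy skeleton. Everything below, the word-overlap scoring, the stub verifier, and the tiny document pool, is an illustrative placeholder, not the implementation of any system cited here.

```python
def retrieve_documents(claim, doc_pool, k=5):
    """Stage 1: return the k document titles sharing the most words with the claim."""
    claim_words = set(claim.lower().split())
    scored = sorted(doc_pool.items(),
                    key=lambda kv: -len(claim_words & set(kv[1].lower().split())))
    return [title for title, _ in scored[:k]]

def select_evidence(claim, titles, doc_pool, k=5):
    """Stage 2: pick the k candidate evidence sentences, scored the same way."""
    sents = [s for t in titles for s in doc_pool[t].split(". ")]
    claim_words = set(claim.lower().split())
    sents.sort(key=lambda s: -len(claim_words & set(s.lower().split())))
    return sents[:k]

def verify(claim, evidence):
    """Stage 3: classify the claim against the evidence (stub).
    A real system uses a trained classifier over (claim, evidence) pairs."""
    return "NOTENOUGHINFO"

doc_pool = {"iPhone 4": "The iPhone 4 is a smartphone designed by Apple Inc."}
claim = "The iPhone 4 is a dial telephone."
titles = retrieve_documents(claim, doc_pool)
print(verify(claim, select_evidence(claim, titles, doc_pool)))
```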
"2.2 Wizard of Wikipedia The Wizard of Wikipedia (WoW) (Dinan et al., 2019b) may be the closest dialogue dataset to existing fact checking datasets.", "It is a knowledge-based open-domain dialogue dataset involving two speakers discussing a given topic.", "An example is presented in Table 1. One speaker (referred to as the apprentice) is eager to learn about the topic, while the other speaker (the wizard) delivers knowledge-grounded responses based on both dialogue context and Wikipedia documents for the topic.", "In this dataset, the gold knowledge sentence from Wikipedia is provided for each wizard's response.", "Hence, we can regard the gold knowledge sentence as the evidence for the Wizard's response.", "However, WoW only provides pairs of (knowledge sentence, grounded response), hence those responses are all SUPPORTED by Wikipedia.", "There are no REFUTED or NOTENOUGHINFO responses in the dataset.", "Such a limitation makes it difficult to directly adopt WoW as a fact checking dialogue dataset.", "Nonetheless, its knowledge-grounding property makes it a useful resource for training dialogue models to generate colloquial utterances grounded on claims.", "Our goal is to curate colloquial claims by transferring the style of each claim sentence in the FEVER dataset (footnote 1) into colloquial style.", "We first finetune a dialog model with the WoW dataset so that it learns to transfer knowledge sentences from Wikipedia into conversational utterances (section 3.1).", "We then apply the finetuned model to transfer each claim in FEVER (sourced from Wikipedia) into colloquial style, and perform a filtering process to warrant the integrity of this style transfer (section 3.2).", "Figure 1 overviews the whole pipeline of style transfer.", "We first finetune BART-large (Lewis et al., 2020) to generate the wizard's response given only the corresponding knowledge sentence from WoW, without the dialogue context.", "Take the example in Table 1: when the knowledge sentence is given as Hershey's headquarters are in Hershey, Pennsylvania, BART is finetuned to generate the wizard's response I love Hershey too! Do you know that Hershey's HQ is actually in Hershey?", "We exclude the dialogue context during fine-tuning in order to enforce the dialogue model to exclusively focus on knowledge contents.", "The finetuned BART shows a low perplexity of 10.51 on WoW's validation set.", "This indicates that BART can generate information-grounded utterances when given knowledge sentences.", "Then, we apply the finetuned BART to transfer each claim in FEVER to a colloquial one.", "Our expectation is that since claims in FEVER are based on Wikipedia too and similar to knowledge sentences in WoW in many aspects, the finetuned model may be able to produce utterances while preserving the semantics of claims from FEVER.", "(Footnote 1) We verified that FEVER is released under a Creative Commons (CC BY-SA 3.0) license.", "However, naively using the generated claims as is has several issues, including", "(i) copy-and-paste,", "(ii) pronoun overwrite,", "(iii) semantic discrepancy, and", "(iv) lack of colloquialism.", "We carefully mitigate these issues through a filtering pipeline.", "We first oversample n colloquial candidates Q_i = {q_{i,j}}_{j=1}^{468} per claim c_i in FEVER, using BART through Nucleus Sampling (Holtzman et al., 2020) (p = 0.95).", 
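A sketch of the oversampling step with Hugging Face Transformers. It loads the generic facebook/bart-large checkpoint rather than the authors' WoW-finetuned model (which is not assumed to be available here), and it draws only a handful of candidates instead of the hundreds used in the paper.

```python
from transformers import BartForConditionalGeneration, BartTokenizer

tok = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

def oversample_candidates(claim: str, n: int = 8):
    """Draw n colloquial rewrites of a FEVER claim with nucleus sampling."""
    inputs = tok(claim, return_tensors="pt")
    out = model.generate(
        **inputs,
        do_sample=True,           # sample instead of beam search
        top_p=0.95,               # nucleus sampling, p = 0.95 as in the paper
        max_length=64,
        num_return_sequences=n,   # oversample many candidates per claim
    )
    return [tok.decode(ids, skip_special_tokens=True) for ids in out]

print(oversample_candidates("Tetris has sold millions of physical copies."))
```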
95 ).", "Preventing Copy-Paste .", "We observe the dialogue model sometimes simply copies the input claim as output.", "Since copy-pasted candidates are not colloquial, we remove the ones whose F1 scores are higher than 0.9, in respect to the original claim.", "Preserving Named Entities .", "Utterances in dialogues tend to refer entities with pronouns rather than their original word.", "As a result, we observe that dialogue models also convert entities in claims to pronouns.", "For example, given the input claim Tetris has sold millions of physical copies , BART outputs Yeah it's fun even today, no wonder it sold millions of physical copies .", "Since there are no previous contexts for claims in FEVER, it is not possible to recognize that pronoun it is referring to tetris .", "In order to preserve the entities, we leverage the named entity recognition (NER) module from Stanza (Qi et al., 2020), which shows 88.8 F1-score on OntoNotes (Weischedel et al., 2013) test set.", "We extract a set of named entities E ci from claim c i , and compare it with the named entity set E qi,j of each q i,j in Q i .", "We remove candidates with less than two matching named entities.", "For claims with single named entity, we remove candidates having no named entities.", "Preserving Semantic Equivalence .", "It is well known that neural dialogue models lack consistency (Li et al., 2016) and can hallucinate irrelevant content (Roller et al., 2020).", "As a result, there can be semantic difference between the original FEVER claim and the generated one.", "To preserve the original semantics, we leverage natural language inference (NLI), which is a task of determining whether a hypothesis sentence can be inferred from the given premise sentence.", "The hypothesis sentence is classified into three categories: ENTAILMENT (true), CONTRADICTION (false), and NEUTRAL (undetermined).", "A sound colloquial claim should be entailed by the original Fortress Yes, Robin I'm certain Ah the game He was the Oversampled candidate claims FEVER claim Team Fortress2 developmentwas led by Robin Walker.", "claim and it also must not contradict the original.", "Suppose the original claim is Apple Inc. 
"Preserving Semantic Equivalence.", "It is well known that neural dialogue models lack consistency (Li et al., 2016) and can hallucinate irrelevant content (Roller et al., 2020).", "As a result, there can be semantic difference between the original FEVER claim and the generated one.", "To preserve the original semantics, we leverage natural language inference (NLI), which is a task of determining whether a hypothesis sentence can be inferred from the given premise sentence.", "The hypothesis sentence is classified into three categories: ENTAILMENT (true), CONTRADICTION (false), and NEUTRAL (undetermined).", "A sound colloquial claim should be entailed by the original [Figure 1 residue: pipeline example showing the FEVER claim Team Fortress 2 development was led by Robin Walker. with its oversampled candidate claims.]", "claim and it also must not contradict the original.", "Suppose the original claim is Apple Inc. designed and manufactured iPhone 4 and the generated claim is I heard Apple is also famous for designing the iMac computer.", "This claim is removed because designing iMac cannot be inferred from the fact Apple manufactured iPhone 4.", "We conduct bidirectional NLI between the original claim and the generated one using RoBERTa (Liu et al., 2019) trained on MNLI (Williams et al., 2018).", "The RoBERTa model shows 90.59% accuracy on the MNLI validation set.", "For each candidate q_{i,j}, we conduct NLI(c_i, q_{i,j}) and NLI(q_{i,j}, c_i) with the original claim c_i.", "We only preserve the candidates that result in ENTAILMENT for the former and do not result in CONTRADICTION for the latter.", 
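The bidirectional NLI check maps to a short function. This sketch uses the off-the-shelf roberta-large-mnli checkpoint from Hugging Face rather than the authors' own finetuned model, so the accuracy quoted above does not necessarily apply to it.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")
model.eval()

@torch.no_grad()
def nli_label(premise: str, hypothesis: str) -> str:
    inputs = tok(premise, hypothesis, return_tensors="pt", truncation=True)
    pred = model(**inputs).logits.argmax(-1).item()
    return model.config.id2label[pred]  # CONTRADICTION / NEUTRAL / ENTAILMENT

def passes_nli_filter(original: str, candidate: str) -> bool:
    """Keep the candidate iff NLI(original -> candidate) is ENTAILMENT
    and NLI(candidate -> original) is not CONTRADICTION."""
    return (nli_label(original, candidate) == "ENTAILMENT"
            and nli_label(candidate, original) != "CONTRADICTION")

print(passes_nli_filter("Apple Inc. designed and manufactured iPhone 4.",
                        "I heard Apple is also famous for designing the iMac."))
```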
"Ensuring Colloquialism.", "Although the candidates are generated by a dialogue model, they may still resemble the style of the original claims, rather than colloquial style.", "To ensure colloquialism, we select the top-k candidate claims which are most difficult to discriminate from responses in Wizard of Wikipedia (WoW) (Dinan et al., 2019b), through an iterative adversarial filtering method AFLITE (Sakaguchi et al., 2020; Bras et al., 2020).", "We first embed the candidates with RoBERTa and train an ensemble of binary linear classifiers to determine for each candidate whether it is from WoW or our colloquial claims.", "We eliminate candidates that are easily classified as our colloquial claims after each iteration.", "We continue the iteration until k candidates remain in each Q_i.", "We set k = 3.", "Since only candidates that are hard to discriminate from WoW responses survive, they resemble the styles of dialogue utterances.", "We defer the detailed algorithm for adversarial filtering to the Appendix.", "Filtering Statistics.", "Table 2 shows the average survival rate of candidates after each filtering step.", "We observe that the NER and NLI filters effectively remove large amounts of candidates.", "On average, 29 out of 486 candidates survive after the NLI filtering stage.", "Then, adversarial filtering is used for selecting the top-k candidates among the remainders.", "Figure 2 shows the recall for our colloquial claims by the binary classifiers used in AFLITE.", "As only candidate claims indistinguishable from the WoW responses survive, the recall drops after each iteration.", "We also compare the qualitative traits of candidates before and after the filtering in Section 4.2.", "Finally, we manually check all SUPPORTED and REFUTED instances in the test set of our Colloquial Claims dataset.", "Three human annotators choose the best suitable claim for each colloquial claim set (|Q_i| ≤ k) for the given label and evidence.", "If there are no suitable claims in the set, we recover the set before top-k selection.", "As a last resort, we let annotators rewrite the colloquial claim when no eligible candidate exists.", "The proportion that requires manual rewriting is less than 1% of 5,615. [Table 3: Statistics of the Colloquial Claims compared to FEVER (Thorne et al., 2018). #Claims (Train / Valid / Test) and #Words per claim: FEVER 145.4K / 10K / 10K, 8.2 words; Colloquial Claims 410.0K / 28.9K / 8.4K, 11.1 words.]", "We first discuss the characteristics of our Colloquial Claims with quantitative analysis, compared to FEVER (Thorne et al., 2018) and Wizard of Wikipedia (WoW) (Dinan et al., 2019b).", "Diverse Claims.", "We provide basic statistics of our Colloquial Claims in Table 3. In FEVER, only a single claim exists per evidence set, whereas our Colloquial Claims provide up to three claims.", "As a result, the number of data instances of our dataset is larger than FEVER.", "Due to the wordy nature of colloquial language, our transferred claims are longer and more diverse in length than those in FEVER.", "Figure 3 plots the density of the claim sentence lengths of FEVER and our dataset.", "Colloquial Style.", "The claims in our Colloquial Claims have similar styles to the utterances in dialogues.", "Following Yang et al. (2020), we gauge the style of sentences by measuring the perplexity with a pretrained DialoGPT (Zhang et al., 2019).", "The perplexity of the sentence becomes high if its style is far from a dialogue.", "Table 4 compares the perplexity of responses from WoW, claims from FEVER and our Colloquial Claims.", "The perplexity of claims in FEVER is high, whereas our Colloquial Claims have closer perplexity to WoW.", "Table 5 also compares the top-20 frequent tokens in the claims from FEVER and our dataset.", "The most frequent tokens in FEVER's claims are mostly fact-related words, such as american, released, and born.", "On the other hand, the claims in our Colloquial Claims also have tokens that frequently appear in conversations, such as know, actually, like, and oh.", 
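The DialoGPT perplexity probe described above can be reproduced roughly as follows; the choice of the medium-size checkpoint is an assumption, and exponentiating a single mean token-level NLL is used as the perplexity.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")
model.eval()

@torch.no_grad()
def perplexity(sentence: str) -> float:
    """Perplexity of a sentence under DialoGPT; higher = less dialogue-like."""
    ids = tok(sentence, return_tensors="pt").input_ids
    loss = model(ids, labels=ids).loss  # mean token-level negative log-likelihood
    return torch.exp(loss).item()

print(perplexity("The iPhone 4 is a dial telephone."))                 # formal
print(perplexity("Oh yeah, I heard the iPhone 4 is a dial phone!"))    # colloquial
```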
.", "We compare them with responses from WoW and FEVER on humanness.", "We also conduct NLI on the claims from our Colloquial Claims and FEVER to evaluate the label mappings.", "Users are instructed to classify claims into three veracity labels given the gold evidence: SUPPORTED , REFUTED , NOTENOUGHINFO .", "Table 6 summarizes the averaged humanness and human NLI scores.", "Since the responses in WoW are from real dialogues, we can observe they have the highest humanness score.", "Interestingly, our generated claims are evaluated to be better than human-generated claims in FEVER, in terms of humanness.", "We suspect that this is due to the colloquialism of our generated claims.", "The survived claims have more accurate label mappings with the evidence, compared to removed candidates.", "It is thanks to the bidirectional NLI filter that removes the candidate claims that are semantically different from the original claims.", "Table 7 shows some examples comparing our generated claims to the original FEVER claims.", "We conduct experiments on our curated colloquial claims to see how they impact existing fact checking systems.", "Datasets .", "FEVER (Thorne et al., 2018) consists of three steps of fact checking pipeline: document retrieval, evidence selection, and claim verification.", "Based on selected evidence, the claims are classified into three classes of veracity: SUPPORTED , REFUTED , NOTENOUGHINFO .", "The Colloquial Claims is our generated dataset based on FEVER with claims in the colloquial style.", "Metrics .", "FEVER fact checking uses two performance scores: label accuracy and FEVER-score.", "Label accuracy is the claim verification performance of the fact checking system.", "The FEVERscore is a more complicated evaluation regarding the whole pipeline.", "Following the FEVER challenge 2 , a claim verification is evaluated as correct if the system retrieves at least one complete set of ground-truth evidence sentences and also classifies 2 https://fever.ai/2018/task.html FEVER : Google Search displays movie showtimes.", "the claim correctly.", "For the evidence sentences, we evaluate the first 5 sentences retrieved from the system.", "We also report the recall for retrieved documents and selected evidence sentences.", "We run experiments on six combinations of the fact-checking system according to the steps.", "For each dataset evaluation, we finetuned the system on the respective dataset.", "Document Retrieval .", "We test three types of approaches: (1) oracle, (2) term-matching, and (3) similarity search with dense representation.", "First, the oracle always returns five evidence sentences including the gold evidence.", "Second, the WikiAPI 3 , following Hanselowski et al. 
"First, the oracle always returns five evidence sentences including the gold evidence.", "Second, the WikiAPI (footnote 3), following Hanselowski et al. (2018), retrieves Wikipedia documents by matching words in the claim through a python library.", "Third, Dense Passage Retrieval (DPR) (Karpukhin et al., 2020) retrieves documents via similarity search with BERT embeddings trained by metric learning.", "Claim Verification.", "We test two approaches: (1) BERT and (2) CorefBERT (Ye et al., 2020), which is one of the best performing methods on FEVER.", "CorefBERT pretrains BERT to better capture the coreference information in text.", "We also apply a kernel graph attention network (KGAT) (Liu et al., 2020) on BERT and CorefBERT for fine-grained attention using evidence graphs.", "More details can be found in the Appendix.", "Table 8 compares the performance of fact checking systems on FEVER and our Colloquial Claims.", "Both label accuracy and FEVER-score significantly decrease for all systems on our Colloquial Claims, compared to FEVER.", "The WikiAPI+BERT+KGAT(CorefBERT) system performs on par with the best performing models for FEVER with a label accuracy of 73.8%.", "However, it degenerates on the colloquial dataset with a label accuracy of 60.9%.", "We remind that our Colloquial Claims shares the same document pool, annotated evidence sentences, and similar semantics with claims from FEVER.", "Thus, it is the difference in the claim's style that makes the fact checking systems fatally degenerate.", "The WikiAPI, used in many fact checking systems (Hanselowski et al., 2018; Chernyavskiy and Ilvovsky, 2019; Stammbach and Neumann, 2019; Zhou et al., 2019; Liu et al., 2020), shows superior performance to DPR on the FEVER dataset, with document recall of 90.0%.", "On Colloquial Claims, however, it crashes down to 72.2%.", "Meanwhile, the DPR shows more robust document retrieval on Colloquial Claims than WikiAPI.", "Apart from document retrieval and evidence selection, we can also observe performance decrease in the systems with evidence oracles.", "This indicates that claim verification is also more difficult on Colloquial Claims.", "We analyze the causes of degeneration in document retrieval and claim verification in relation to the colloquial traits.", "We compare three document retrieval methods along with the oracle: WikiAPI, DrQA (Chen et al., 2017), and Dense Passage Retrieval (DPR).", "DrQA is another variation of term-matching method based on TF-IDF.", "Table 9 shows the titles of the ten most retrieved documents by each retriever.", "Filler Words Unnecessary for Fact Checking.", "In colloquial language, claims are not always composed of factual remarks requiring verification.", "Filler words (e.g., I see, yeah, like) are also frequently mixed in the utterances, as shown in Table 5.", "Hence, our Colloquial Claims requires systems to partition the parts that affect veracity from the ones that do not.", "However, Table 9 shows that word-matching retrieval systems, such as WikiAPI and DrQA, are vulnerable to those insignificant parts.", "They naively retrieve filler word related documents very frequently.", "Minding the Context.", "Considering the context inside the sentence is essential for verifying colloquial claims.", "Lexical variation and polysemy are common in colloquial language.", "Such variations and ambiguity are tolerable because common context flows in the utterance.", 
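A minimal sketch of dense retrieval in the DPR style, using the public NQ-trained DPR checkpoints from Hugging Face. The paper's DPR retriever is trained for FEVER documents, so this is an illustration of the mechanism only; it also brute-forces inner products over a small list instead of using an ANN index.

```python
import torch
from transformers import (
    DPRContextEncoder, DPRContextEncoderTokenizer,
    DPRQuestionEncoder, DPRQuestionEncoderTokenizer,
)

ctx_tok = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
ctx_enc = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
q_tok = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
q_enc = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base")

@torch.no_grad()
def retrieve(claim: str, passages: list, k: int = 5):
    """Rank passages by inner product between dense claim and passage embeddings."""
    d = ctx_enc(**ctx_tok(passages, return_tensors="pt",
                          padding=True, truncation=True)).pooler_output
    q = q_enc(**q_tok(claim, return_tensors="pt", truncation=True)).pooler_output
    scores = (q @ d.T).squeeze(0)
    top = torch.topk(scores, k=min(k, len(passages))).indices.tolist()
    return [passages[i] for i in top]

passages = ["Fox Broadcasting Company is an American TV network.",
            "The fox is a small omnivorous mammal."]
print(retrieve("He was with Fox at one point, you know?", passages, k=1))
```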
"For example, in the colloquial claim Niko Coster-Waldau is also the host of the show. He was with Fox at one point., it is easy to see that the word Fox stands for Fox Broadcasting Company based on the context. [Table 9: Comparison of the titles of the top-10 retrieved documents between oracle, WikiAPI, DrQA and DPR. Oracle: Pakistan; Pocahontas; SpongeBob; Far from the Madding Crowd; Samsung; Two and a Half Men; Elizabeth of York; Ice-T; Spiderman; Sausage Party. WikiAPI: It; I; You; Yes (band); Yes (album); He; That; They; There There (novel); HES. DrQA: Heroes of Russia; Yeah Yeah; Yea; H*** Yeah; Yea (football club); Stefanie Drootin; Minor League; Video Games; Google Search; Google Apps. DPR: Minor League; Beverly Hillibillies; Ed and Lorraine Warren; Benjamin Franklin; Yin and Yang; Hunger Games (film); Sausage Party; Ice-T; Mormons; Burj Khalifa.]", "However, it is well known that simple term-matching methods cannot capture such context (Karpukhin et al., 2020).", "Thus, we observe that systems instead simply retrieve the document of fox.", "Also, Table 9 shows another example of contextless retrieval.", "The documents Yes (band), There There (novel), and Yea (football club) are naively retrieved by the systems, due to simple filler words in colloquial claims.", "Overcoming the Colloquial Traits.", "Methods based on TF-IDF or word-matching are good at recognizing core keywords, but suffer at capturing the rich semantics of context.", "On the other hand, the DPR, a similarity search method based on dense embeddings, shows promising results.", "Results in Table 9 illustrate that DPR is able to ignore the context-irrelevant entities and focus more on fact-related entities.", "Compared to other retrieval methods, the ten most retrieved documents from DPR do not contain any filler words.", "Since filler words are irrelevant to the veracity of colloquial claims, the DPR learns their insignificance.", "Therefore, dense representation can be important for making fact-checking systems robust on claims in dialogues.", "checking datasets (Thorne et al., 2018; Baly et al., 2018; Augenstein et al., 2019; Jiang et al., 2020; Wadden et al., 2020; Chen et al., 2020).", "Recent works deploy adversarial attacks against fact checking systems (Thorne et al., 2019a,b; Niewinski et al., 2019; Atanasova et al., 2020b) and attempt to improve the system through generation (Atanasova et al., 2020a; Goyal and Durrett, 2020; Fan et al., 2020).", "Existing works tend to focus on verifying news or Wikipedia.", "However, verifying facts is not limited to such formal texts.", "Compared to previous works, we focus on verifying claims in the dialogue domain, which more closely resembles daily life situations.", "A special case of fact verification is rumour detection.", "Its goal is to determine the veracity of rumours from social media (Li et al., 2019).", "The rumour is classified based on the reactions of chained messages (Gorrell et al., 2019).", "The procedure and characteristics of rumour detection are quite different from the fact checking pipeline (Gorrell et al., 2019).", "In our task, we verify the claims based on factuality from the related documents, rather than stances of the comments.", "Safety in Open-domain Dialogue.", "Recently, much work has studied safety issues of machine dialogue agents in several aspects.", "Wulczyn et al. (2017) attempt to detect personal attacks in Wikipedia talk pages.", "Henderson et al. (2018) note the axes of bias, adversarial examples, privacy and safety, and propose that the community should aim to provide conditional safety guarantees.", 
"Khatri et al. (2018) train a sensitive language detector to evaluate the utterances in a chatbot dataset.", "Dinan et al. (2019a) propose a framework for dialogue agents to be robust to malicious human attacks.", "Other works have attempted to mitigate biases, such as gender bias (Dinan et al., 2020a) and racial bias (Sap et al., 2019).", "Recently, Tran et al. (2020) modify BERT (Devlin et al., 2019) to detect hate speech.", "Xu et al. (2020) introduce a method to distill safety standards into the generative dialogue agent.", "Previous works cover a wide range of dialogue safety, yet the risks of disinformation and misinformation remain understudied.", "In this work, we extend dialogue safety to cover verification of responses with false information.", "This work aimed to open up new discussions in the intersection of fact checking and dialogue safety.", "In order to study how existing fact checking systems behave on claims in dialogues, we curate colloquial claims by transferring the styles of claims in FEVER (Thorne et al., 2018) to colloquialism.", "We leverage BART (Lewis et al., 2020) and Wizard of Wikipedia (WoW) (Dinan et al., 2019b).", "We finetune BART to generate the wizard's responses with knowledge sentences from WoW.", "Then, we input FEVER claims to generate claim-grounded utterances.", "We oversample candidate claims and apply filters to compensate for quality.", "We showed that existing fact checking systems well-performing on FEVER degenerate on colloquial claims.", "We found that the document retriever is the weakest spot in the system, which is even vulnerable to filler words.", "We compared the characteristic differences between claims in formal style and ones in colloquialism.", "An important future direction will be building a dialogue dataset for fact checking.", "We would like to thank Jinseo Jeong and Myeongjang Pyeon for their valuable comments.", "We also thank the anonymous reviewers for their thoughtful suggestions on this work.", "This research was supported by Samsung Research Funding Center of Samsung Electronics under project number SRFC-IT2101-01.", "Gunhee Kim is the corresponding author." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "result", "result", "abstain", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "method", "objective", "abstain", "method", "result", "result", "method", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "objective", "method", "abstain", "method", "method", "method", "result", "result", "method", "abstain", "other", "other", "other", "other" ]
[ "Continual learning has become increasingly important as it enables NLP models to constantly learn and gain knowledge over time.", "Previous continual learning methods are mainly designed to preserve knowledge from previous tasks, without much emphasis on how to well generalize models to new tasks.", "In this work, we propose an information disentanglement based regularization method for continual learning on text classification.", "Our proposed method first disentangles text hidden spaces into representations that are generic to all tasks and representations specific to each individual task, and further regularizes these representations differently to better constrain the knowledge required to generalize.", "We also introduce two simple auxiliary tasks: next sentence prediction and task-id prediction, for learning better generic and specific representation spaces.", "Experiments conducted on large-scale benchmarks demonstrate the effectiveness of our method in continual text classification tasks with various sequences and lengths over state-of-the-art baselines.", "We have publicly released our code at https: //github.com/GT-SALT/IDBR .", "Computational systems in real world scenarios face changing environment frequently, and thus are often required to learn continually from dynamic streams of data building on what was learnt before (Biesialska et al., 2020).", "For example, a tweeter classifier needs to deal with trending topics which are constantly emerging.", "While being an intrinsic nature of human to continually acquire and transfer knowledge throughout lifespans, most machine learning models often suffer from catastrophic forgetting : when learning on new tasks, models dramatically and rapidly forget knowledge from previous tasks (McCloskey and Cohen, 1989).", "As a result, Continual Learning (CL) (Ring, 1998; Thrun, Equal contribution. 
1998) has received more attention recently as it can enable models to perform positive transfer (Perkins et al., 1992) as well as remember previously seen tasks.", "A growing body of research has been conducted to equip neural networks with the ability of continual learning abilities (Kirkpatrick et al., 2017; Lopez-Paz and Ranzato, 2017; Aljundi et al., 2018).", "Existing continual learning methods on NLP tasks can be broadly categorized into two classes: purely replay based methods (de Masson d'Autume et al., 2019; Sun et al., 2019) where examples from previous tasks are stored and re-trained during the learning of the new task to retain old information, and regularization based methods (Wang et al., 2019; Han et al., 2020) where constraints are added on model parameters to prevent them from changing too much while learning new tasks.", "The former usually stores an extensive amount of data from old tasks (de Masson d'Autume et al., 2019) or trains language models based on task identifiers to generate sufficient examples (Sun et al., 2019), which significantly increases memory costs and training time.", "While the latter utilizes previous examples efficiently via the constraints added on text hidden space or model parameters, it generally views them as equally important and regularize them to the same extent (Wang et al., 2019; Han et al., 2020), making it hard for models to differentiate informative representation that needs to be retained from ones that need a large degree of updates.", "However, we argue that when learning new tasks, task generic information and task specific information should be treated differently, as these generic representation might function consistently while task specific representations might need to be changed significantly.", "To this end, we propose an information disentanglement based regularization method for continual learning on text classification.", "Specifically, we first disentangle the text hidden representation space (e.g., the output representation of BERT (De-vlin et al., 2019)) into a task generic space and a task specific space using two auxiliary tasks: next sentence prediction for learning task generic information and task identifier prediction for learning task specific representations.", "When training on new tasks, we constrain the task generic representation to be relatively stable and representations of task specific aspects to be more flexible.", "To further alleviate catastrophic forgetting without much increases of memory and training time, we propose to augment our regularization-based methods by storing and replaying only a small amount of representative examples (e.g., 1% samples selected by memory selection rules like K-Means (MacQueen et al., 1967)).", "To sum up, our contributions are threefold: We propose an information disentanglement based regularization method for continual text classification, to better learn and constrain task generic and task specific knowledge.", "We augment the regularization approach with a memory selection rule that requires only a small amount of replaying examples.", "Extensive experiments conducted on five benchmark datasets demonstrate the effectiveness of our proposed methods compared to state-of-the-art baselines.", "Continual Learning Existing continual learning research can be broadly divided into four categories:", "(i) replay-based method, which remind models of information from seen tasks via experience replay (de Masson d'Autume et al., 2019), distillation (Rebuffi et al., 2017), 
"(ii) regularization-based methods, which constrain the model's output (Li and Hoiem, 2018), hidden space (Rannen et al., 2017), or parameters (Lopez-Paz and Ranzato, 2017; Zenke et al., 2017; Aljundi et al., 2018) from changing too much to retain learned knowledge;", "(iii) architecture-based methods, where different tasks are associated with different components of the overall model to directly minimize the interference between new tasks and old tasks (Rusu et al., 2016; Mallya and Lazebnik, 2018);", "(iv) meta-learning-based methods, which directly optimize the knowledge transfer among tasks (Riemer et al., 2019; Obamuyide and Vlachos, 2019), or learn robust data representations (Javed and White, 2019; Holla et al., 2020; Wang et al., 2020) to alleviate forgetting.", "Among these different approaches, replay-based methods and regularization-based methods have been widely applied to NLP tasks to enable large pre-trained models (Devlin et al., 2019; Radford et al., 2019) to continually acquire novel world knowledge from streams of textual data without forgetting the already learned knowledge.", "For instance, replaying examples has shown promising performance for text classification (de Masson d'Autume et al., 2019; Sun et al., 2019; Holla et al., 2020), relation extraction (Wang et al., 2019) and question answering (de Masson d'Autume et al., 2019; Sun et al., 2019; Wang et al., 2020).", "However, these methods often suffer from large memory costs or considerable training time, due to the requirements of storing an extensive amount of texts (de Masson d'Autume et al., 2019) or training language models to generate a sufficient number of examples (Sun et al., 2019).", "Recently, regularization-based methods (Wang et al., 2019; Han et al., 2020) have also been applied to directly constrain knowledge deposited in model parameters without abundant rehearsal examples.", "Despite better efficiency compared to replay-based methods, current regularization-based approaches often fail to generalize well to new tasks, as they treat and constrain all the information equally and thus limit the needed updates for parameters that are specific to different tasks.", "To overcome these limitations, we propose to first distinguish hidden spaces that need to be retained from those that need to be updated substantially through information disentanglement, and then regularize different spaces separately, to better remember previous knowledge as well as transfer to new tasks.", "In addition, we enhance our regularization method by replaying only a limited amount of examples selected by K-Means as the memory selection rule.", "Textual Information Disentanglement Our work is related to information disentanglement for text data, which has been extensively explored in generation tasks like style transfer (Fu et al., 2017; Zhao et al., 2018; Romanov et al., 2019; Li et al., 2020), where text hidden representations are often disentangled into sentiment (Fu et al., 2017; John et al., 2019), content (Romanov et al., 2019; Bao et al., 2019) and syntax (Bao et al., 2019) information through supervised learning from pre-defined labels (John et al., 2019) or unsupervised learning with adversarial training (Fu et al., 2017; Li et al., 2020).", "Building on these prior works, we differentiate the task generic space from the task specific space via supervision from two simple yet effective auxiliary tasks: next sentence prediction and task identifier prediction.", "Related Learning Paradigms There exist some other learning paradigms that also deal with multiple tasks, such as multi-task learning (Yu et al., 2020) and transfer learning (Houlsby et al., 2019; Pfeiffer et al., 2021).", "However, neither fits the scenario of learning multiple tasks sequentially.", "The former could be adapted to dynamic environments by storing all seen training data and retraining the model after the arrival of new tasks, which greatly decreases efficiency and is impractical in deployment.", "The latter only focuses on the target tasks and ignores catastrophic forgetting on the source tasks.", "A more thorough discussion can be found in Biesialska et al. (2020).", "In this work, we focus on continual learning for a sequence of text classification tasks {T_1, ..., T_n}, where we learn a model f_\theta(.), with \theta a set of parameters shared by all tasks; each task T_i contains a different set of sentence-label training pairs (x^i_{1:m}, y^i_{1:m}).", "After learning all tasks in the sequence, we seek to minimize the generalization error on all tasks (Biesialska et al., 2020): R(f) = \sum_{i=1}^{n} E_{(x^i, y^i) \sim T_i} [ L(f(x^i), y^i) ].", "We use two commonly-used techniques for this problem setting in our proposed model:", "Regularization : in order to preserve knowledge stored in the model, regularization is a constraint added to the model output (Li and Hoiem, 2018), hidden space (Zenke et al., 2017) or parameters (Lopez-Paz and Ranzato, 2017; Zenke et al., 2017; Aljundi et al., 2018) to prevent them from changing too much while learning new tasks.", "Replay : when learning new tasks, Experience Replay (Rebuffi et al., 2017) is commonly used to recover knowledge from previous tasks, where a memory buffer is first adopted to store seen examples from previous tasks and then the stored data is replayed with the training set for the current task.", "Formally, after training on task t-1 (t >= 2), \gamma |S_{t-1}| examples are randomly sampled from the (t-1)-th training set S_{t-1} into the memory buffer M, where 0 <= \gamma <= 1 is the store ratio.", "Data from M is then merged with the t-th training set S_t when learning task t.", "In continual learning, the model needs to adapt to new tasks quickly while maintaining the ability to recover information from previous tasks, hence not all information stored in the hidden representation space should be treated equally.", "In previous work like style transfer (John et al., 2019) and controlled text generation (Hu et al., 2017), certain information (such as content and syntax) is extracted and shared among different categories, and other information (such as style and polarity) is manipulated for each specific category.", "Similarly, in our continual learning scenario, there is shared knowledge among different tasks, while the model needs to learn and maintain specific knowledge for each individual task in the learning process.", "This key observation motivates us to propose an information-disentanglement based regularization for continual text classification to retain shared knowledge while adapting specific knowledge to streams of tasks (Section 4.1).", "We also incorporate a small set of representative replay samples to alleviate catastrophic forgetting (Section 4.3).", "Our model architecture is shown in Figure 1.",
"4.1 Information Disentanglement (ID) This section describes how to disentangle sentence representations into a task generic space and a task specific space, and how separate regularizations are imposed on them for continual text classification.", "Formally, for a given sentence x, we first use a multi-layer encoder B(.), e.g., BERT (Devlin et al., 2019), to get the hidden representation r, which contains both task generic and task specific information.", "Then we introduce two disentanglement networks G(.) and S(.) to extract the generic representation g and the specific representation s from r.", "For new tasks, we learn the classifiers by utilizing information from both spaces, and we allow different spaces to change to different extents to best retain knowledge from previous tasks.", "Task Generic Space Task generic space is the hidden space containing information generic to different tasks in a task sequence.", "When switching from one task to another, the generic information should roughly remain the same, e.g., syntactic knowledge should not change too much across the learning process of a sequence of tasks.", "To extract task generic information g from the hidden representation r, we leverage the next sentence prediction task (Devlin et al., 2019) to learn the generic information extractor G(.).", "More specifically, we insert a [SEP] token into each training example during tokenization to form a sequence pair labeled IsNext, and switch the first sequence and the second sequence to form a sentence pair labeled NotNext.", "In order to distinguish IsNext pairs from NotNext pairs, the extractor G(.) needs to learn the context dependencies between two segments, which is beneficial for understanding every example and generic to any individual task.", "The loss for the next-sentence predictor f_{nsp} built on the generic feature extractor G(.) is: L_{nsp} = E_{x \sim S_t \cup M} [ L(f_{nsp}(G(B(x))), 0) + L(f_{nsp}(G(B(x'))), 1) ], where L is the cross entropy loss, x' denotes the switched NotNext pair constructed from x, M is the memory buffer and S_t is the t-th training set.", "Task Specific Space Models also need task specific information to perform well on each task.", "For example, in sentiment classification, words like good or bad can be very informative, but they might not generalize well for tasks like topic classification.", "Thus we employ a simple task-identifier prediction task on the task specific representation s, which means that for any given example we want to distinguish which task this example belongs to.", "This simple auxiliary setup encourages s to embed different information for different tasks.", "The loss for the task-identifier predictor f_{task} is: L_{task} = E_{(x, z) \sim S_t \cup M} L(f_{task}(S(B(x))), z), where z is the corresponding task id for x.", "Text Classification To adapt to the t-th task, we combine the task generic representation g = G(B(x)) and the task specific representation s = S(B(x)) to perform text classification, where we minimize the cross entropy loss: L_{cls} = E_{(x, y) \sim S_t \cup M} L(f_{cls}(g \oplus s), y).", "Here y is the corresponding class label for x, \oplus denotes concatenation, and f_{cls}(.) is the class predictor.", "To further prevent severe distortion when training on new tasks, we employ regularization on both the generic representations g and the specific representations s.", "Different from previous approaches (Li and Hoiem, 2018; Wang et al., 2019), which treat all the spaces equally, we allow regularization to different extents on g and s, as knowledge in different spaces should be preserved separately to encourage both more positive transfer and less forgetting.", "Specifically, before training all the modules on task t, we first compute the generic representations and specific representations of all sentences x from the training set S_t of the current task t and the memory buffer M_t.", "Using the trained B_{t-1}(.), G_{t-1}(.) and S_{t-1}(.) from the previous task t-1, for each example x we calculate the generic representation as G_{t-1}(B_{t-1}(x)) and the specific representation as S_{t-1}(B_{t-1}(x)) to hoard the knowledge from previous models.", "The computed generic and specific representations are saved.", "During the learning from training pairs of task t, we impose two regularization losses separately: L_{greg} = E_{x \sim S_t \cup M_t} || G_{t-1}(B_{t-1}(x)) - G(B(x)) ||^2 and L_{sreg} = E_{x \sim S_t \cup M_t} || S_{t-1}(B_{t-1}(x)) - S(B(x)) ||^2.", "4.3 Memory Selection Rule Since we only store a small number of examples as a way to balance the replay against the extra memory cost and training time, we need to carefully select them in order to utilize the memory buffer M efficiently.", "If two stored examples are very similar, then storing only one of them could possibly achieve similar results in the future.", "Thus, the stored examples should be as diverse and representative as possible.", "To this end, after training on the t-th task, we employ K-means (MacQueen et al., 1967) to cluster all the examples from the current training set S_t: for each x \in S_t, we utilize its embedding B(x) as the input feature to conduct K-means.", "We set the number of clusters to \gamma |S_t| and only select the example closest to each cluster's centroid, following Wang et al. (2019); Han et al. (2020).", "We can write the final objective for continual learning on text classification as the following: L = L_{cls} + L_{nsp} + L_{task} + \lambda_g L_{greg} + \lambda_s L_{sreg}. (1)", "We set the coefficients of the first three loss terms to 1 for simplicity and only introduce two coefficients to tune: \lambda_g and \lambda_s.", "In practice, L_{task} and L_{cls} are also conducted on each generated NotNext example x', and L_{greg} and L_{sreg} are only optimized starting from the second task.", "The full information disentanglement based regularization (IDBR) algorithm is shown in Algorithm 1.", "Algorithm 1 IDBR. Input: training sets {S_1, ..., S_n}, replay frequency, store ratio \gamma, coefficients \lambda_g, \lambda_s. Output: optimal models B, G, S, f_{nsp}, f_{task}, f_{cls}. Initialize the memory buffer M = {}, initialize B using pretrained BERT, and initialize G, S, f_{nsp}, f_{task}, f_{cls}. For t = 1, ..., n: if t >= 2, store G(B(x)) and S(B(x)) for all x \in S_t \cup M; then, for each batch of S_t, optimize L in Equation 1, and whenever the step count is a multiple of the replay frequency, sample t-1 batches from M and optimize L in Equation 1 on them (replay); otherwise (t = 1, no regularization on the first task), for each batch of S_t optimize L = L_{cls} + L_{nsp} + L_{task}. After each task, compute the centroids C = K-Means(S_t, n_clusters = \gamma |S_t|), select C' = {examples closest to the centroids C}, and add them to the memory: M <- M \cup C'. Finally, return B, G, S, f_{nsp}, f_{task}, f_{cls}.",
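To make Equation 1 concrete, here is a rough PyTorch sketch of the disentanglement heads and the combined loss. It follows the description above (one linear layer plus Tanh per extractor, 128-dimensional outputs, concatenated features for classification), but all names, default dimensions, and the loss signature are illustrative assumptions rather than the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IDBRHead(nn.Module):
    """Disentanglement head: generic/specific extractors plus predictors."""
    def __init__(self, hidden=768, feat=128, n_tasks=5, n_classes=33):
        super().__init__()
        self.G = nn.Sequential(nn.Linear(hidden, feat), nn.Tanh())  # generic
        self.S = nn.Sequential(nn.Linear(hidden, feat), nn.Tanh())  # specific
        self.f_nsp = nn.Linear(feat, 2)           # IsNext / NotNext
        self.f_task = nn.Linear(feat, n_tasks)    # task-id predictor
        self.f_cls = nn.Linear(2 * feat, n_classes)

    def forward(self, r):
        g, s = self.G(r), self.S(r)
        return g, s, self.f_nsp(g), self.f_task(s), self.f_cls(torch.cat([g, s], dim=-1))

def idbr_loss(head, r, nsp_y, task_y, cls_y, g_old=None, s_old=None,
              lam_g=0.25, lam_s=0.2):
    g, s, nsp_logits, task_logits, cls_logits = head(r)
    loss = (F.cross_entropy(cls_logits, cls_y)
            + F.cross_entropy(nsp_logits, nsp_y)
            + F.cross_entropy(task_logits, task_y))
    if g_old is not None:  # regularization only from the 2nd task on
        loss = loss + lam_g * (g - g_old).pow(2).sum(-1).mean()
        loss = loss + lam_s * (s - s_old).pow(2).sum(-1).mean()
    return loss
```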
"5 Experiment 5.1 Datasets Following MBPA++ (de Masson d'Autume et al., 2019), we use five text classification datasets (Zhang et al., 2015; Chen et al., 2020) to evaluate our methods, including AG News (news classification), Yelp (sentiment analysis), DBPedia (Wikipedia article classification), Amazon (sentiment analysis), and Yahoo! Answer (Q&A classification).", "A summary of the datasets is shown in Table 1.", "We merge the label spaces of Amazon and Yelp considering their domain similarity, with 33 classes in total.", "Table 1: Dataset statistics used for Setting (Sampled). Dataset / Class / Type / Train / Test: AGNews / 4 / News / 8000 / 7600; Yelp / 5 / Sentiment / 10000 / 7600; Amazon / 5 / Sentiment / 10000 / 7600; DBPedia / 14 / Wikipedia / 28000 / 7600; Yahoo / 10 / Q&A / 20000 / 7600.", "Due to resource limitations, for most of our experiments we create a reduced dataset by randomly sampling 2000 training examples and 2000 validation examples per class for every task.", "See Table 1 for the train/test size of each dataset.", "We name this setting Setting (Sampled).", "We tune all the hyperparameters on the basis of Setting (Sampled).", "Beyond that, to compare with the previous state of the art, we also conduct experiments on the same training set and test set as MbPA++ (de Masson d'Autume et al., 2019) and LAMOL (Sun et al., 2019), which contain 115,000 training examples and 7,600 test examples for each task.", "For every task, we randomly hold out 500 examples per class from the training examples for validation purposes.", "We name the latter Setting (Full).", "During training, we evaluate our model on validation sets from all seen tasks, following Kirkpatrick et al. (2017).", "Our experiments are mainly conducted on the task sequences shown in Table 2.", "To minimize the effect of task order and task sequence length on the results, we examine both length-3 task sequences and length-5 task sequences in various orders.", "The first 3 task sequences are cyclic shifts of ag -> yelp -> yahoo, which are three classification tasks in different domains (news classification, sentiment analysis, Q&A classification).", "The last four length-5 task sequences follow de Masson d'Autume et al. (2019).", "We compare our proposed model with the following baselines in our experiments:", "Replay (Wang et al., 2019; de Masson d'Autume et al., 2019): a finetuned model augmented with an episodic memory, which replays examples from old tasks while learning new tasks.", "Regularization : on top of Replay, an L2 regularization term is added on the hidden state of the classifier following BERT.", "MBPA++ (de Masson d'Autume et al., 2019): augments the BERT model with an episodic memory module and stores all seen examples.", "MBPA++ performs experience replay at training time, and uses K-nearest neighbors to select examples for local adaptation at test time.", "LAMOL (Sun et al., 2019): trains a language model that simultaneously learns to solve the tasks and generate training samples; the latter is for generating pseudo samples used in experience replay.", "Here the text classification is performed in Q&A format.", "Multi-task Learning (MTL): the model is trained on all tasks simultaneously, which can be considered as an upper bound for continual learning methods since it has access to data from all tasks at the same time.", "We use pretrained BERT-base-uncased from HuggingFace Transformers (Wolf et al., 2020) as our base feature extractor.", "The task generic encoder and the task specific encoder are both one linear layer followed by a Tanh activation function, and their output sizes are both 128 dimensions.", "The predictors built on the encoders are all one linear layer followed by a softmax activation function.", "We use a batch size of 8 and a maximum sequence length of 256 (using the first 256 tokens if an example is longer).", "We use AdamW (Loshchilov and Hutter, 2019) as the optimizer.", "For all modules except the task id predictor, we set the learning rate lr = 3e-5; for the task id predictor, we set its learning rate lr_{task} = 5e-4.", "The weight decay for all parameters is 0.01.", "For experience replay, we set the store ratio \gamma = 0.01, i.e., we store 1% of seen examples into the episodic memory module.",
"Besides, we set the replay frequency to 10, which means we do experience replay once every ten steps.", "For information disentanglement, we mainly tune the coefficients of the regularization losses.", "For batches from the memory buffer M, we set \lambda_g to 2.5 and select the best \lambda_s from {1.5, 2.0, 2.5}.", "For batches from the current training set S, we set \lambda_g to 0.25 and select the best \lambda_s from {0.15, 0.20, 0.25}.", "We evaluate models after training on all tasks and report their average accuracies on all test sets as our metric.", "Table 3 summarizes our results in Setting (Sampled).", "While continual finetuning suffered from severe forgetting, experience replay with 1% stored examples achieves promising results, which demonstrates the importance of experience replay for continual learning in NLP.", "Beyond that, simple regularization turns out to be a robust method on the basis of experience replay, showing consistent improvements on all 6 orders.", "Our proposed Information Disentanglement Based Regularization (IDBR) further improves over regularization consistently under all circumstances.", "Table 4 compares IDBR with the previous state of the art, MBPA++ and LAMOL, in Setting (Full).", "Note that although we use the same training/testing data, there are some inherent differences between our setting and those of previous state-of-the-art methods.", "Despite the fact that MBPA++ applies local adaptation at test time, IDBR still outperforms it by a clear margin.", "We achieve comparable results with LAMOL, even though LAMOL requires task identifiers during inference, which makes its prediction task easier.", "Comparing the results of length-3 sequences and length-5 sequences in Table 3, we found that the gap between IDBR and multi-task learning became bigger when the length of the task sequence changed from 3 to 5.", "To better understand how IDBR gradually forgot, we followed Chaudhry et al. (2018) to measure the forgetting F_k after training on task k as follows: F_k = E_{j = 1...k-1} [ f^k_j ], with f^k_j = max_{l \in \{1...k-1\}} a_{l,j} - a_{k,j}, where a_{l,j} is the model's accuracy on task j after being trained on task l.", "On orders 4, 5 and 6, we calculate the forgetting every time after IDBR was trained on a new task and summarize the results in Table 5.",
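This forgetting measure has a direct implementation. A minimal sketch follows, assuming per-task accuracies a_{l,j} are logged in a lower-triangular matrix after each training stage; the function name and toy numbers are illustrative.

```python
import numpy as np

def forgetting(acc):
    """acc[l, j]: accuracy on task j after training on task l (l >= j).
    Returns F_k for each k >= 2: mean over past tasks of the drop from the
    best accuracy ever reached on that task."""
    n = acc.shape[0]
    F = {}
    for k in range(1, n):  # tasks are 0-indexed here
        drops = [acc[:k, j].max() - acc[k, j] for j in range(k)]
        F[k + 1] = float(np.mean(drops))
    return F

# e.g. a 3-task run: rows = after training task l, cols = accuracy on task j
acc = np.array([[0.90, 0.00, 0.00],
                [0.85, 0.88, 0.00],
                [0.80, 0.84, 0.91]])
print(forgetting(acc))  # forgetting after the 2nd and 3rd tasks
```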
"For continual learning, we hypothesize that the model is prone to suffer from more severe forgetting as the task sequence becomes longer.", "We found that although there was a sizable drop after training on the 3rd task, IDBR maintained stable performance as the length of the task sequence increased; especially after training on the 4th and 5th tasks, the forgetting increment was relatively small, which demonstrates the robustness of IDBR.", "To study whether our task generic encoder G tends to learn more generic information and the task specific encoder S captures more task specific information, we used t-SNE (van der Maaten and Hinton, 2008) to visualize the two hidden spaces of IDBR, using the final model trained on order 2; the results are shown in Figure 2, where Figure 2a visualizes the task generic space and Figure 2b visualizes the task specific space.", "We observe that, compared with the task specific space, generic features from different tasks were more mixed, which demonstrates that next sentence prediction helped the task generic space to be more task-agnostic than the task specific space, which was induced to learn separated representations for different tasks.", "Considering that we only employed two simple auxiliary tasks, the effect of information disentanglement was noticeable.",
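A visualization like Figure 2 can be reproduced with off-the-shelf t-SNE. Here is a hedged sketch using scikit-learn and matplotlib, with random arrays standing in for the real G(B(x)) and S(B(x)) features; all names and shapes are illustrative.

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_space(feats, task_ids, title):
    xy = TSNE(n_components=2, init="pca", random_state=0).fit_transform(feats)
    for t in np.unique(task_ids):
        m = task_ids == t
        plt.scatter(xy[m, 0], xy[m, 1], s=4, label=f"task {t}")
    plt.title(title); plt.legend()

# Stand-ins for (N, 128) generic / specific features and task labels in {0..4}.
g_feats = np.random.randn(500, 128); s_feats = np.random.randn(500, 128)
task_ids = np.random.randint(0, 5, 500)
plt.subplot(1, 2, 1); plot_space(g_feats, task_ids, "generic space")
plt.subplot(1, 2, 2); plot_space(s_feats, task_ids, "specific space")
plt.show()
```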
"Effect of Disentanglement In order to demonstrate that each module of our information disentanglement helps the learning process, we performed an ablation study on the two auxiliary tasks using order 5 as a case study.", "The results are summarized in Table 6.", "We found that both task-id prediction and next sentence prediction contribute to the final performance.", "Furthermore, the performance gain was much larger when combining these two auxiliary tasks together.", "Intuitively, the model needs both tasks to disentangle the representation well, since it is easy for the model to ignore one of the spaces if the constraint is not imposed appropriately.", "The results show that the two tasks are likely complementary to each other in helping the model learn better disentangled representations.", "Impact of Regularization To study the effect of regularization on the task generic hidden space g and the task specific hidden space s, we performed an ablation study which only applied regularization on g or s, and compared the results with regularization on both in Table 7.", "Table 7: Comparison among using regularization on the task specific space only, the task generic space only, and both of them. Model / Order 4 / Order 5 / Order 6 / Avg: Reg only on s / 72.05 / 72.54 / 72.61 / 72.40; Reg only on g / 72.01 / 72.98 / 72.73 / 72.57; Reg on both / 72.63 / 73.72 / 73.23 / 73.19.", "We found that regularization on both spaces results in a much better performance than regularization on only one of them, which demonstrates the necessity of both regularizers.", "While we may expect to give the specific space more tolerance for changing, we found that no regularization on it would lead to severe forgetting of previously learnt good task specific embeddings; hence it is necessary to add a regularizer over this space as well.", "Beyond that, we also observed that under most circumstances, adding regularization on the task generic space g results in a more significant gain than adding regularization on the task specific space s, consistent with our intuition that the task generic space changes less across tasks and thus preserving it better helps more in alleviating catastrophic forgetting.", "Impact of K-Means To demonstrate our hypothesis that when the memory budget is limited, selecting the most representative subset of examples is vital to the success of continual learning, we performed an ablation study on orders 1, 2 and 3 using IDBR with and without K-Means.", "The results are shown in Table 8.", "Table 8: Comparison between different selection rules: selecting stored examples randomly or by K-Means. Rules / Order 1 / Order 2 / Order 3 / Average: Random / 71.52 / 72.60 / 73.03 / 72.38; K-Means / 71.80 / 72.72 / 73.08 / 72.53.", "From the table, we found that using K-Means helps boost the overall performance.", "Specifically, the improvement brought by K-Means was larger on the more challenging orders, i.e., orders on which IDBR had worse performance.", "This is because for these challenging orders the forgetting is more severe and the model needs more examples from previous tasks to help it retain previous knowledge.", "Thus, under the same memory budget constraint, diversity across saved examples helps the model better recover knowledge learned from previous tasks.", "In this work, we introduce an information disentanglement based regularization (IDBR) method for continual text classification, where we disentangle the hidden space into a task generic space and a task specific space and further regularize them differently.", "We also leverage K-Means as the memory selection rule to help the model benefit from the augmented episodic memory module.", "Experiments conducted on five benchmark datasets demonstrate that IDBR achieves better performance than previous state-of-the-art baselines on sequences of text classification tasks with various orders and lengths.", "We believe the proposed approach can be extended to continual learning for other NLP tasks such as sequence generation and sequence labeling as well, and plan to explore them in the future.", "We would like to thank the anonymous reviewers for their helpful comments, and the members of the Georgia Tech SALT group for their feedback." ]
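A compact sketch of the K-Means memory selection rule from Section 4.3 above: cluster the sentence embeddings into gamma * |S_t| groups and keep the example nearest each centroid. Function names and the random stand-in embeddings are illustrative, not the paper's code.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_memory(embeddings, gamma=0.01):
    """Pick the ~gamma * |S_t| examples closest to the K-Means centroids."""
    n_store = max(1, int(gamma * len(embeddings)))
    km = KMeans(n_clusters=n_store, n_init=10, random_state=0).fit(embeddings)
    chosen = []
    for c in km.cluster_centers_:
        chosen.append(int(np.argmin(np.linalg.norm(embeddings - c, axis=1))))
    return sorted(set(chosen))

emb = np.random.randn(2000, 768)        # stand-in for B(x) features
print(select_memory(emb, gamma=0.01))   # indices of ~20 representative examples
```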
[ "abstain", "abstain", "objective", "objective", "abstain", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "objective", "objective", "abstain", "objective", "abstain", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "other", "method", "other", "other", "other", "other", "other", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "method", "objective", "abstain", "objective", "other" ]
[ "Multi-task Learning methods have achieved significant progress in text classification.", "However, existing methods assume that multi-task text classification problems are convex multiobjective optimization problems, which is unrealistic in real-world applications.", "To address this issue, this paper presents a novel Tchebycheff procedure to optimize the multitask classification problems without any convex assumption.", "The extensive experiments back up our theoretical analysis and validate the superiority of our proposals.", "Multi-task Learning (MTL) aims to learn multiple related tasks simultaneously, and obtain better performance than learning each task independently by setting inductive bias across tasks.", "(Caruana, 1993; Bakker and Heskes, 2003; Ben-David and Schuller, 2003; Ando and Zhang, 2005).", "It has achieved great success in various applications ranging from computer vision (Kendall et al., 2018) to text classification (Liu et al., 2016, 2017; Xiao et al., 2018).", "Existing MTL methods for text classification, usually set up the inductive bias across tasks by designing a parameterized hypothesis class that shares some parameters across tasks (e.g. shares some hidden layers in a Neural Network), and cast the multi-task text classification problem as a multiobjective optimization problem.", "L 1 -metric method is one of the most popular strategies for solving the multi-objective optimization problem.", "Specifically, it learns the parameters by minimizing a weighted linear combination of per-task losses.", "And this method is able to find an arbitrary Pareto optimal solution in the Pareto set if the problem is convex.", "Unfortunately, for a non-convex problem, this *Corresponding author.", "method excludes many Pareto optimal solutions from its search scope.", "To illustrate the issue, it is instructive to consider a 2-tasks learning case shown as Figure 1. 
"From Figure 1, we can see that for a non-convex problem, the Pareto points located at the concave part of the Pareto front are unachievable.", "According to the uniform convergence properties of MTL (Baxter, 2000), the exclusion of Pareto optimal solutions may degrade the generalization performance of multi-task text classification.", "To address the non-convexity problem, this paper proposes a novel Tchebycheff procedure to improve the performance of multi-task text classification.", "To validate the superiority of the proposed method, we conduct experiments on two classical text classification problems: sentiment analysis on reviews (Blitzer et al., 2007) and topic classification on news (Lang, 1995).", "The results show that our proposed method converges and outperforms several state-of-the-art multi-task text classification methods.", "The family of Pareto optimality methods, including L_1-metric methods (weighted sum methods) (Maurer et al., 2016; Chen et al., 2018; Kendall et al., 2018) and the multiple-gradient descent algorithm (MGDA) (Sener and Koltun, 2018), has become one of the most prevalent Multi-task Learning (MTL) strategies.", "In multi-task text classification, L_1-metric methods are widely used (Liu et al., 2016, 2017; Xiao et al., 2018; Yadav et al., 2018).", "However, for non-convex problems, the L_1-metric methods are likely to exclude the optimal hypothesis from the hypothesis class.", "To handle the non-convex case, MGDA leverages the Karush-Kuhn-Tucker conditions and provides Pareto stationary points as solutions.", "However, Pareto stationarity is not sufficient for Pareto optimality.", "A novel MTL method that can achieve Pareto optimality without any convexity assumption is necessary to compensate for the disadvantages of the L_1 metric and MGDA.", "In this paper, a novel Tchebycheff procedure is proposed to achieve Pareto optimality without any convexity assumption.", "Consider a multi-task learning problem with T tasks over an input space X and a collection of task spaces {Y_t}_{t=1}^{T}.", "There is also a parametric hypothesis h = {f_t}_{t=1}^{T} \circ g = {f_t(g(x; \theta^{sh}); \theta^t)}_{t=1}^{T} : X -> {Y_t}_{t=1}^{T} for the tasks, where \theta^{sh} represents the parameters shared between tasks, \theta^t represents the task-specific parameters, and g(.; \theta^{sh}) : X -> R^K is the feature map used across different tasks.", "K is the dimension of the representation space.", "The functions g(.; \theta^{sh}) : X -> R^K and f_t(.; \theta^t) : R^K -> Y_t are chosen from the respective hypothesis classes G and F, and h is in the hypothesis class H.", "The choice of representation and specialized predictors is based on the data observed for all the tasks.", "The data takes the form of a multi-sample D = {D_t}_{t=1}^{T}, with D_t = (X_t, Y_t) and (X_t, Y_t) = {x_{ti}, y_{ti}}_{i=1}^{n_t} \sim P_t^{n_t}.", "The task-specific training loss is denoted by L_t(f_t(g(X_t; \theta^{sh}); \theta^t), Y_t) : Y_t x Y_t -> R_+.", "Correspondingly, the empirical loss of task t is defined as \hat{L}_t(\theta^{sh}, \theta^t) = (1/n_t) \sum_{i=1}^{n_t} L_t(f_t(g(x_{ti}; \theta^{sh}); \theta^t), y_{ti}).", "We also denote the transpose of a vector/matrix by the superscript \top, and logarithms to base 2 by log.", "MTL can be formulated as a multi-objective optimization problem that optimizes a collection of possibly conflicting objectives (Sener and Koltun, 2018).", "We formulate the optimization objective of MTL as a vector-valued loss L: min_{\theta^{sh}; \theta^1, ..., \theta^T} L(\theta^{sh}; \theta^1, ..., \theta^T), (1) where L(\theta^{sh}; \theta^1, ..., \theta^T) = (\hat{L}_1(\theta^{sh}, \theta^1), ..., \hat{L}_T(\theta^{sh}, \theta^T))^\top.", "The goal of multi-objective optimization is to achieve (weak) Pareto optimality.", "Definition 1 (Pareto optimality for MTL).", "Pareto optimality for MTL is defined as follows:", "(i) A solution \bar{\theta} dominates a solution \theta if \hat{L}_t(\bar{\theta}^{sh}, \bar{\theta}^t) <= \hat{L}_t(\theta^{sh}, \theta^t) for all tasks t and L(\bar{\theta}^{sh}; \bar{\theta}^1, ..., \bar{\theta}^T) \neq L(\theta^{sh}; \theta^1, ..., \theta^T).", "(ii) A solution \theta^* is called Pareto optimal if there exists no solution that dominates \theta^*.", "Definition 2 (Weak Pareto optimality for MTL).", "A solution \theta^* is weakly Pareto optimal if there does not exist another solution \theta such that \hat{L}_t(\theta^{sh}, \theta^t) < \hat{L}_t(\theta^{*sh}, \theta^{*t}) for all tasks t.", "The set of (weak) Pareto optimal solutions represents different trade-offs between tasks.", "The Pareto optimal set is a subset of the weakly Pareto optimal set.", "The global criterion is a standard technique for finding (weak) Pareto optimality, which optimizes all tasks together by minimizing a weighted L_p objective as in (2): min_{\theta^{sh}; \theta^1, ..., \theta^T} ( \sum_{t=1}^{T} w_t (\Delta L_t)^p )^{1/p}, (2) where 1 <= p <= \infty, \Delta L_t = | \hat{L}_t(\theta^{sh}, \theta^t) - l_t |, w_t >= 0 and \sum_{t=1}^{T} w_t = 1.", "l_t is the ideal empirical loss of training task t.", "p = 1, 2 or \infty are widely used choices.", "The L_\infty metric is a Tchebycheff metric.", "The state-of-the-art multi-task text classification methods use the L_1 metric.", "Non-convex Multi-objective Optimization : the L_\infty metric can find every Pareto optimal solution without any convexity assumption.", "By contrast, the L_1 metric excludes some Pareto optimal solutions when the problem is non-convex (Miettinen, 1998).", "This can be interpreted geometrically in the two-dimensional case shown in Figure 2.", "From Figure 2, we can see that Pareto optimality is achieved at the point of tangency between the Pareto front and the level surface formulated by the L_p metric.", "The L_1 metric cannot be tangent to the Pareto optimal points located at the concave part of the Pareto front.", "In practice, most multi-task text classification problems are non-convex multi-objective problems, especially when Deep Neural Networks are involved.", "According to the uniform convergence properties of MTL (Baxter, 2000), the exclusion of Pareto optimal solutions will lead to degenerated performance.", "Therefore, we use the L_\infty metric to boost the performance.", "Weak Pareto optimality : the solution of an L_\infty-metric objective is weakly Pareto optimal.", "Figure 2 provides a geometrical interpretation.", "Empirical risk combinations formulate the upper bound of the generalization error of MTL (Baxter, 2000).", "Figure 4: An adversarial hard parameter sharing network model.", "The weakly Pareto optimal set, which contains more candidate empirical risk combinations than the Pareto optimal set, can achieve a lower generalization error than the Pareto optimal set.", "Therefore, this paper presents the use of the L_\infty metric to improve the performance of multi-task text classification.", "Many multi-task neural network models can be used in multi-task text classification, such as hard parameter sharing networks (Caruana, 1997) and soft parameter sharing networks (Liu et al., 2017; Xiao et al., 2018).", "This paper adopts a hard parameter sharing network model, because it has the lowest computational cost among these models.", "Original hard parameter sharing network : A hard parameter sharing network learns multiple related tasks simultaneously by sharing the hidden layers across all tasks, while keeping task-specific output layers for each task, as shown in Figure 3.",
"The shared layers can be formulated by any feature extractor (e.g., long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) or TextCNN (Kim, 2014)), while the task-specific output layers are task dependent.", "In multi-task classification, the task-specific layers are usually formulated by fully connected layers ending with a softmax function.", "Adversarial hard parameter sharing network : Cutting-edge work (Liu et al., 2017) shows that adding an adversarial module to an MTL model can improve the performance.", "We extend the original hard parameter sharing network with an adversarial module, as shown in Figure 4.", "The adversarial module is essentially a task discriminator in the representation space, which discriminates which task a sample x belongs to, and can be formulated as (3).", "To boost the performance on non-convex problems, we use the Tchebycheff (L_\infty) metric to formulate the optimization objective.", "The scales of the empirical risks for different tasks can vary significantly.", "To normalize the scales, we divide each empirical risk in the MTL model by the empirical risk of learning the corresponding task independently, which typically has a similar scale.", "That is, we define the weight w_t in (2) as w_t = 1 / l_t^*, (4) where l_t^* is the empirical risk of learning task t independently.", "In practice, we set l_t^* to be the training loss obtained when task t is trained independently and achieves the highest validation accuracy.", "In the ERM (Empirical Risk Minimization) paradigm, it is reasonable to assume that the minimum empirical loss of each task equals 0, that is, l_t = 0 in (2).", "Furthermore, the empirical losses are non-negative.", "This paper presents the Tchebycheff loss for multi-task text classification as (5): max_{t \in \{1, ..., T\}} w_t \hat{L}_t(\theta^{sh}, \theta^t). (5)", "Algorithm 1: Tchebycheff Procedure. Input: data D_t = (X_t, Y_t) and the number of training epochs N_e.", "Algorithm 2: Adversarial Tchebycheff Procedure. Input: data D_t = (X_t, Y_t), the number of training epochs N_e, and the discriminator threshold.", "Initialization: train each task t independently, get l_t^* (the loss corresponding to the highest validation accuracy), and initialize \theta^{sh}_0 with the hidden layers of task 1.", "For i = 1 to N_e: train the discriminator with \theta^{sh}_{i-1} and get \hat{L}^i_D; if \hat{L}^i_D is at most the threshold, select t^* = arg max_t { w_1 \hat{L}_1(\theta^{sh}_{i-1}, \theta^1), ..., w_T \hat{L}_T(\theta^{sh}_{i-1}, \theta^T) } and update \theta^{sh}_i, \theta^{t^*}_i = arg min_{\theta^{sh}, \theta^{t^*}} \hat{L}_{t^*}(\theta^{sh}, \theta^{t^*}); otherwise, update \theta^{sh}_i = arg min_{\theta^{sh}} \hat{L}_D.", "After the loop, for t = 1 to T, set \theta^t_{N_e} = arg min_{\theta^t} \hat{L}_t(\theta^{sh}_{N_e}, \theta^t), and return \theta^{sh}_{N_e}, \theta^1_{N_e}, ..., \theta^T_{N_e}.",
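A sketch of the core Tchebycheff update from Algorithm 1: each step optimizes only the task whose weighted empirical loss is currently largest. The model.task_loss(t, batch) hook is a hypothetical interface, not an API from the paper's code, and the function is a simplification of the procedure described above.

```python
import torch

def tchebycheff_step(model, batches, weights, optimizer):
    """One update: pick t* = argmax_t w_t * L_t, then descend on that task only."""
    with torch.no_grad():
        losses = [w * model.task_loss(t, b)
                  for t, (w, b) in enumerate(zip(weights, batches))]
        t_star = int(torch.stack(losses).argmax())
    optimizer.zero_grad()
    loss = weights[t_star] * model.task_loss(t_star, batches[t_star])
    loss.backward()
    optimizer.step()
    return t_star, float(loss)
```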
"4.3 Tchebycheff Loss for Adversarial MTL The empirical loss of the discriminator can be formulated as (6), where 1[y_i = t] is the indicator function, which equals 1 when y_i = t and 0 otherwise.", "In the adversarial MTL setting, we add the loss of the discriminator to the Tchebycheff loss.", "In the Tchebycheff procedure, we optimize \theta^{sh} with the discriminator when \hat{L}_D exceeds a threshold hyperparameter.", "(7) is the resulting Tchebycheff loss for adversarial MTL.", "By minimizing the Tchebycheff loss (5) or (7), we can learn an (adversarial) hard parameter sharing network model.", "The training process of the model is defined as an (adversarial) Tchebycheff procedure, which is formulated as Algorithm 1 (Algorithm 2 for the adversarial model).", "The networks are trained with backpropagation.", "In the adversarial Tchebycheff procedure, the discriminator is trained by using a gradient reversal layer (Ganin and Lempitsky, 2015).", "The computational cost of training a hard parameter sharing network model with the Tchebycheff procedure is higher than training it with an L_1 metric.", "The extra cost comes from the process of selecting the task with the maximum loss.", "However, it can easily be reduced by computing the loss of each task in parallel.", "In this section, we first conduct a synthetic experiment to validate our theoretical analysis.", "Then, we perform experimental studies on two real-world applications: sentiment analysis and topic classification.", "The implementation is based on PyTorch (Paszke et al., 2019).", "The code can be found in the supplementary materials.", "Two 2-objective optimization problems, problems 1 and 2, are introduced to evaluate the performance of the L_1 metric method and the L_\infty metric method.", "Problem 1 is a convex 2-objective optimization problem, while problem 2 is a non-convex 2-objective optimization problem.", "Problem 1: min_{x_1, x_2} (x_1, x_2)^\top s.t. x_2 >= 1/x_1, x_1 >= 0, x_2 >= 0.", "Let w_1 \in {0.01, 0.02, 0.03, ..., 0.99, 1} and w_2 = 1 - w_1.", "We solve problem 1 by using the L_1 metric method (minimizing w_1 x_1 + w_2 x_2) and the L_\infty metric method (minimizing max(w_1 x_1, w_2 x_2)), respectively.", "The results are shown in Figure 5.", "Then, we compare the L_1 metric method with the L_\infty metric method in solving the non-convex problem 2.", "Figure 6 shows the results.", "The experimental results verify the superiority of the L_\infty metric method at handling the non-convex case.",
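The synthetic comparison can be reproduced with a simple grid search over candidate Pareto points; a sketch below, assuming the reconstructed constraint of problem 1 (the front x_2 = 1/x_1). All names are illustrative.

```python
import numpy as np

x1 = np.linspace(0.1, 10, 2000)
front = np.stack([x1, 1.0 / x1], axis=1)  # candidate Pareto points of problem 1

sols_l1, sols_inf = [], []
for w1 in np.arange(0.01, 1.01, 0.01):
    w = np.array([w1, 1.0 - w1])
    sols_l1.append(front[np.argmin(front @ w)])                   # L1 metric
    sols_inf.append(front[np.argmin(np.max(front * w, axis=1))])  # Tchebycheff
# the collected solutions reproduce the sweeps plotted in Figures 5 and 6
print(len(np.unique(np.round(sols_l1, 3), axis=0)),
      len(np.unique(np.round(sols_inf, 3), axis=0)))
```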
"Sentiment Analysis.", "We evaluate our algorithm on product reviews from Amazon.", "The dataset (Blitzer et al., 2007) contains product reviews from 14 domains: apparel, baby, books, camera & photo, DVDs, electronics, health & personal care, kitchen appliances, magazines, music, software, sports & outdoors, toys & games, and video.", "We consider each domain as a binary classification task.", "Reviews with rating > 3 are labeled positive, and those with rating < 3 are labeled negative.", "Reviews with rating = 3 are discarded, as their sentiments are ambiguous and hard to predict.", "The data is randomly split into 70% training, 10% testing, and 20% validation partitions.", "Topic Classification.", "We select 16 newsgroups from the 20 Newsgroups dataset, which is a collection of approximately 20,000 newsgroup documents.", "We formulate the 16 newsgroups into four 4-class classification tasks (shown in Table 1).", "The data is randomly split into 60% training, 20% testing, and 20% validation partitions.", "We implement our (adversarial) Tchebycheff procedure via a deep MTL network with a hard parameter sharing strategy (Caruana, 1997).", "As shown in Figures 3 and 4, all tasks have task-specific output layers and share the feature map layers.", "In the adversarial Tchebycheff procedure, an extra adversarial module is added to the deep MTL network.", "In our experiments, TextCNN (Kim, 2014) is used to build the feature extraction module.", "The TextCNN is structured with 3 parallel convolutional layers with kernel sizes of 3, 5, and 7, respectively.", "The extracted feature representations are then concatenated and classified by the task-specific output module, which has one fully-connected layer.", "The adversarial module is built with one fully connected layer whose output size equals the number of tasks.", "It is noteworthy that the adversarial module connects to the shared layers via a gradient reversal layer (Ganin and Lempitsky, 2015).", "The gradient reversal layer multiplies the gradient by -1 during backpropagation, which optimizes the adversarial loss function (6).", "We train the deep MTL network model according to Algorithms 1 and 2, respectively.", "We set the discriminator threshold to 2.5 and 1 for sentiment analysis and topic classification, respectively.", "The learning rates are 1e-4 and 3e-4 for sentiment analysis and topic classification, respectively.", "We use the Adam optimizer (Kingma and Ba, 2015) and train for 3000 epochs on both sentiment analysis and topic classification.", "The batch size is 256.", "We use dropout with a probability of 0.5 for both the adversarial module and all task-specific output modules.",
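The gradient reversal layer mentioned above is only a few lines in PyTorch. This is a standard sketch of the technique from Ganin and Lempitsky (2015), not the authors' exact code.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; multiplies the gradient by -1 backward."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

def grad_reverse(x):
    return GradReverse.apply(x)

# usage in the adversarial module, with hypothetical names:
# logits = discriminator(grad_reverse(shared_features))
```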
"We compare our proposed methods with baselines and some state-of-the-art methods:", "(i) Single Task: solving tasks independently;", "(ii) Uniform Scaling: minimizing a uniformly weighted sum of loss functions;", "(iii) MGDA: using the MGDA-UB method proposed by Sener and Koltun (2018);", "(iv) Adversarial MTRL: using the adversarial MTL framework proposed by Liu et al. (2017).", "We report results over 10 runs by plotting the classification accuracy of each classification task for sentiment analysis and topic classification in Figures 7 and 8, respectively.", "Figures 7 and 8 visually compare the classification accuracy of all the methods.", "The numerical results validate that the proposed (adversarial) Tchebycheff procedure outperforms the state-of-the-art methods.", "To verify the convergence of the proposed (adversarial) Tchebycheff procedure, we plot the curves of the training loss for each task and the discriminator in Figure 9 for topic classification.", "The (adversarial) Tchebycheff procedure obtains similar convergence curves on sentiment analysis.", "The results verify that our method converges rapidly.", "Figure 9: Convergence curves of the task-specific losses achieved by the adversarial Tchebycheff procedure for topic classification.", "From Figure 9, we can see that the adversarial module only works in the first 500 epochs.", "The dynamics of the selected tasks are visualized as color maps, as shown in Figures 10 and 11.", "In the color maps, each task has a specific color and each epoch is colored by the task with the maximum loss.", "Here, we display the color maps for sentiment analysis.", "Figures 10 and 11 show that the (adversarial) Tchebycheff procedure is a dynamic procedure, which changes its optimization objective according to its strategy (the L_\infty metric) in each epoch and finally achieves better performance.", "Table 2: Average training time (seconds/epoch) comparison between the Uniform Scaling method (uniform), MGDA (Sener and Koltun, 2018), Adversarial MTRL (adv MTRL) (Liu et al., 2017), the Tchebycheff procedure (TP), the adversarial Tchebycheff procedure (adv TP), the Multi-processing Tchebycheff procedure (MTP-TP) and the adversarial Multi-processing Tchebycheff procedure (adv MTP-TP).", "The procedure is totally different from existing methods, which optimize all tasks together.", "We run the code on a server with a 2.2GHz Intel CPU and a single NVIDIA GeForce RTX 2080Ti GPU.", "The results for the average training time per epoch of the (adversarial) Tchebycheff procedure (TP) are shown in Table 2.", "From Table 2, we can see that the (adversarial) Tchebycheff procedure is slower than the Uniform Scaling method and Adversarial MTRL (Liu et al., 2017).", "In the adversarial Tchebycheff procedure, optimizing the adversarial task (4.5s per epoch for sentiment analysis and 2.1s per epoch for topic classification) is more time-consuming than optimizing a single task (3.5s per epoch for sentiment analysis and 1.5s per epoch for topic classification).", "However, the adversarial module is optimized for fewer than 100 epochs.", "The extra computational cost resulting from the adversarial training can therefore be ignored.", "We are able to accelerate the (adversarial) Tchebycheff procedure with multi-processing.", "In the multi-processing (adversarial) Tchebycheff procedure, we accelerate the task selection by computing the loss of each task in different processes.", "We implement this using the multiprocessing package in PyTorch.", "From Table 2, we can see that the multi-processing (adversarial) Tchebycheff procedure outperforms MGDA and Adversarial MTRL.", "Most multi-task text classification problems are non-convex multi-objective optimization problems.", "However, existing methods ignore the non-convexity and solve these problems using convex optimization methods.", "To address this issue, this paper presents an (adversarial) Tchebycheff procedure for multi-task text classification without any convexity assumption.", "Numerical experiments show that our proposed methods converge and outperform state-of-the-art methods.", "In the Tchebycheff procedure, we choose the weight for each task according to the empirical risk of learning the corresponding task independently.", "Obtaining the empirical risk is a little laborious.",
"In the future, it would be fruitful to develop a novel weighting strategy for the Tchebycheff procedure.", "This work is supported by the National Natural Science Foundation of China under Grants 61976161 and 61822113, and the Fundamental Research Funds for the Central Universities under Grant 41300082." ]
[ "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "objective", "method", "abstain", "abstain", "other" ]
[ "This paper presents a tree-structured neural topic model, which has a topic distribution over a tree with an infinite number of branches.", "Our model parameterizes an unbounded ancestral and fraternal topic distribution by applying doubly-recurrent neural networks.", "With the help of autoencoding variational Bayes, our model improves data scalability and achieves competitive performance when inducing latent topics and tree structures, as compared to a prior tree-structured topic model (Blei et al., 2010).", "This work extends the tree-structured topic model such that it can be incorporated with neural models for downstream tasks.", "Probabilistic topic models, such as latent Dirichlet allocation ( LDA ; Blei et al., 2003), are applied to numerous tasks including document modeling and information retrieval.", "Recently, Srivastava and Sutton (2017); Miao et al. (2017) have applied the autoencoding variational Bayes ( AEVB ; Kingma and Welling, 2014; Rezende et al., 2014) framework to basic topic models such as LDA.", "AEVB improves data scalability in conventional models.", "The limitation of the basic topic models is that they induce topics as flat structures, not organizing them into coherent groups or hierarchies.", "Tree-structured topic models (Griffiths et al., 2004), which detect the latent tree structure of topics, can overcome this limitation.", "These models induce a tree with an infinite number of nodes and assign a generic topic to the root and more detailed topics to the leaf nodes.", "In Figure 1, we show an example of topics induced by our model.", "Such characteristics are preferable for several downstream tasks, such as document retrieval (Weninger et al., 2012), aspect-based sentiment analysis (Kim et al., 2013) and extractive summarization (Celikyilmaz Root Carry Purchase Cover 1: quality months zipper time back 11: sleeve inside inch protection nice 111: bottom cover top plastic scratches 112: color cover mac keyboard love 12: perfect quality price bought size 121: item return receive amazon money 122: price recommend buy perfectlove 13: pockets carry strap shoulder compartment 131: big laptops tablet description hp 132: books school carry bags back Figure 1: Topics inferred by our tree-structured topic model from Amazon reviews of laptop bags. The five most frequent words are shown and manually labeled. 
and Hakkani-Tur, 2010), because they provide succinct information from multiple viewpoints.", "For instance, in the case of document retrieval of product reviews, some users are interested in the general opinions about bag covers, while others pay more attention to specific topics such as the hardness or color of the covers.", "The tree structure can navigate users to the documents with desirable granularity.", "However, it is difficult to use tree-structured topic models with neural models for downstream tasks.", "While neural models require a large amount of data for training, conventional inference algorithms, such as collapsed Gibbs sampling (Blei et al., 2010) or mean-field approximation (Wang and Blei, 2009), have data scalability issues.", "It is also desirable to optimize the tree structure for downstream tasks by jointly updating the neural model parameters and posteriors of a topic model.", "To overcome these challenges, we propose a tree-structured neural topic model ( TSNTM ), which is parameterized by neural networks and is trained using AEVB.", "While prior works have applied AEVB to flat topic models, it is not straightforward to parameterize the unbounded ancestral and fraternal topic distribution.", "In this paper, we provide a solution to this by applying doubly-recurrent neural networks ( DRNN ; Alvarez-Melis and Jaakkola, 2017), which have two recurrent structures over respectively the ancestors and siblings.", "Experimental results show that the TSNTM achieves competitive performance against a prior work (Blei et al., 2010) when inducing latent topics and tree structures.", "The TSNTM scales to larger datasets and allows for end-to-end training with neural models of several tasks such as aspect-based sentiment analysis (Esmaeili et al., 2019) and abstractive summarization (Wang et al., 2019).", "Following the pioneering work of tree-structured topic models by Griffiths et al. (2004), several extended models have been proposed (Ghahramani et al., 2010; Zavitsanos et al., 2011; Kim et al., 2012; Ahmed et al., 2013; Paisley et al., 2014).", "Our model is based on the modeling assumption of Wang and Blei (2009); Blei et al. (2010), while parameterizing a topic distribution with AEVB.", "In the context of applying AEVB to flat document or topic modeling (Miao et al., 2016; Srivastava and Sutton, 2017; Ding et al., 2018), Miao et al. (2017) proposed a model, which is closely related to ours, by applying recurrent neural networks ( RNN ) to parameterize an unbounded flat topic distribution.", "Our work infers the topic distributions over an infinite tree with a DRNN, which enables us to induce latent tree structures.", "Goyal et al. (2017) used a tree-structured topic model (Wang and Blei, 2009) with a variational autoencoder ( VAE ) to represent video frames as a tree.", "However, their approach is limited to smaller datasets.", "In fact, they used only 1,241 videos (corre-sponding to documents) for training and separately updated the VAE parameters and the posteriors of the topic model by mean-field approximation.", "This motivates us to propose the TSNTM, which scales to larger datasets and allows for end-to-end training with neural models for downstream tasks.", "We present the generative process of documents and the posterior inference by our model.", "As shown in Figure 2, we draw a path from the root to a leaf node and a level for each word.", "The word is drawn from the multinomial distribution assigned to the topic specified by the path and level.", "1. 
"1. For each document index d ∈ {1, . . . , D}: Draw a Gaussian vector: x_d ∼ N(μ_0, σ_0^2) (1); Obtain a path distribution: π_d = f_π(x_d) (2); Obtain a level distribution: θ_d = f_θ(x_d) (3). [Figure 2: Sampling process of a topic for each word, illustrating a path c_{d,n} and a level z_{d,n} sampled over an example tree for each word w_{d,n}.]", "2. For each word index n ∈ {1, . . . , N_d} in d: Draw a path: c_{d,n} ∼ Mult(π_d) (4); Draw a level: z_{d,n} ∼ Mult(θ_d) (5); Draw a word: w_{d,n} ∼ Mult(φ_{c_{d,n}[z_{d,n}]}) (6), where φ_{c_{d,n}[z_{d,n}]} ∈ Δ^{V−1} is the word distribution assigned to the topic c_{d,n}[z_{d,n}].", "While Wang and Blei (2009); Blei et al. (2010) draw a path for each document, this constrains a document to be generated from only the topics in the path.", "Hence, we draw a path for each word, enabling a document to be generated from all topics over a tree.", "Wang and Blei (2009) draw a path and a level distribution via the tree-based stick-breaking construction given by (7) and (8): ν_k ∼ Beta(1, α), π_k = π_{par(k)} ν_k ∏_{j=1}^{k−1} (1 − ν_j) (7); η_l ∼ Beta(1, γ), θ_l = η_l ∏_{j=1}^{l−1} (1 − η_j) (8).", "Here, k ∈ {1, . . . , K} and par(k) denote the k-th topic and its parent, respectively.", "l ∈ {1, . . . , L} denotes the l-th level.", "See Appendix A.1 for more details.", "In contrast, we introduce neural architectures, f_π and f_θ, to transform a Gaussian sample into a topic distribution, allowing for posterior inference with AEVB.", "Specifically, we apply a DRNN to parameterize the path distribution over the tree.", "A DRNN is a neural network decoder for generating tree-structured objects from encoded representations (Alvarez-Melis and Jaakkola, 2017).", "A DRNN consists of two RNNs over the ancestors and the siblings, respectively (see Appendix A.2).", "We assume that these two recurrent structures can parameterize the unbounded ancestral and fraternal path distribution conditioned on a Gaussian sample x, using a finite number of parameters.", "h_k = tanh(W_p h_{par(k)} + W_s h_{k−1}) (9), where h_{par(k)} and h_{k−1} are the hidden states of the parent and the previous sibling of the k-th topic, respectively.", "We alternate the breaking proportions ν in (7) and obtain the path distribution π as: ν_k = sigmoid(h_k^⊤ x) (10).", "Moreover, we parameterize the unbounded level distribution θ by passing a Gaussian vector through an RNN and alternating the breaking proportions η in (8) as: h_l = tanh(W h_{l−1}) (11); η_l = sigmoid(h_l^⊤ x) (12).",
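To make the parameterization in (7)-(12) concrete, the following is a minimal sketch of how the doubly-recurrent hidden states and the gated stick-breaking proportions could be computed; this is not the authors' implementation, and the tree encoding (parent and elder-sibling maps), tensor shapes, and root handling are our own assumptions.

```python
import torch
import torch.nn as nn

class PathDistribution(nn.Module):
    """Sketch of Eqs. (9)-(10) and (7): a DRNN over ancestors/siblings
    produces one hidden state per topic, which gates tree-based
    stick-breaking proportions conditioned on a Gaussian sample x."""

    def __init__(self, dim):
        super().__init__()
        self.W_p = nn.Linear(dim, dim, bias=False)  # ancestral recurrence
        self.W_s = nn.Linear(dim, dim, bias=False)  # fraternal recurrence
        self.h_root = nn.Parameter(torch.randn(dim))

    def forward(self, x, parent, elder_siblings):
        # parent[k]: parent topic id of k; elder_siblings[k]: ids of k's
        # preceding siblings (assumed structures; root id = 0, and topic
        # ids are numbered so that parents/siblings precede k).
        h = {0: self.h_root}
        for k in sorted(parent):
            prev = elder_siblings[k][-1] if elder_siblings[k] else None
            h_sib = h[prev] if prev is not None else torch.zeros_like(h[0])
            h[k] = torch.tanh(self.W_p(h[parent[k]]) + self.W_s(h_sib))  # Eq. (9)
        nu = {k: torch.sigmoid(h[k] @ x) for k in parent}                # Eq. (10)
        pi = {0: x.new_tensor(1.0)}
        for k in sorted(parent):                                        # Eq. (7)
            remainder = x.new_tensor(1.0)
            for j in elder_siblings[k]:
                remainder = remainder * (1 - nu[j])
            pi[k] = pi[parent[k]] * nu[k] * remainder  # mass reaching topic k
        return pi
```

The level distribution of (8), (11), and (12) can be produced analogously, replacing the DRNN with a plain RNN unrolled over levels.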
"3.2 Parameterizing Word Distribution: Next, we explain the word distribution assigned to each topic.", "We introduce the embeddings of the k-th topic, t_k ∈ R^H, and of the words, U ∈ R^{V×H}, to obtain the word distribution φ_k ∈ Δ^{V−1} by (13).", "φ_k = softmax(U t_k^⊤ / τ^{1/l}) (13), where τ^{1/l} is a temperature value that produces a sparser probability distribution over words as the level l gets deeper (Hinton et al., 2014).", "As the number of topics is unbounded, the word distributions must be generated dynamically.", "Hence, we introduce another DRNN to generate topic embeddings as t_k = DRNN(t_{par(k)}, t_{k−1}).", "Several neural topic models (Xie et al., 2015; Miao et al., 2017; He et al., 2017) have introduced a diversity regularizer to eliminate redundancy in the topics.", "While they force all topics to be orthogonal, this is not suitable for tree-structured topic models, which admit correlation between a parent and its children.", "Hence, we introduce a tree-specific diversity regularizer with t_{ki} = t_i − t_k as: ∑_{k ∉ Leaf} ∑_{i,j ∈ Chi(k): i ≠ j} ( t_{ki}^⊤ t_{kj} / (‖t_{ki}‖ ‖t_{kj}‖) − 1 )^2 (14)", "where Leaf and Chi(k) denote the set of topics with no children and the children of the k-th topic, respectively.", "By adding this regularizer to the variational objective, each child topic becomes orthogonal from the viewpoint of its parent, while parent-child correlations are still allowed.", "Under our proposed probabilistic model, the likelihood of a document is given by (15):", "p(w_d | μ_0, σ_0^2, Φ) = ∫_{π,θ} { ∏_n ∑_{c_n, z_n} p(w_n | φ_{c_n[z_n]}) p(c_n | π) p(z_n | θ) } p(π, θ | μ_0, σ_0^2) dπ dθ = ∫_{π,θ} { ∏_n (Φ θ̃)_{w_n} } p(π, θ | μ_0, σ_0^2) dπ dθ (15)", "where θ̃ ∈ Δ^{K−1} is the topic distribution over all topics, derived as θ̃_k = θ_{l(k)} ∑_{c: k ∈ c} π_c.", "As the latent variables c_n and z_n are integrated out in (15), the evidence lower bound for the document log-likelihood is derived as: L_d = E_{q(π,θ|w_d)} [ ∑_n log (Φ θ̃)_{w_n} ] − KL[ q(π, θ | w_d) || p(π, θ | μ_0, σ_0^2) ] (16), where q(π, θ | w_d) is the variational distribution approximating the posteriors.", "Following the AEVB framework, we introduce multi-layer perceptrons (MLP) f_μ and f_{σ^2} for transforming the bag-of-words vector w_d into the variational Gaussian distribution.", "The variational distribution of the posteriors is rewritten as: q(π, θ | w_d) = q(f_π(x), f_θ(x) | w_d) = N(x | f_μ(w_d), f_{σ^2}(w_d)) (17).", "We sample π and θ from q(π, θ | w_d) by sampling ε ∼ N(0, I) and computing x = f_μ(w_d) + ε f_{σ^2}(w_d).", "The prior, p(π, θ | μ_0, σ_0^2), is likewise rewritten as N(x | μ_0, σ_0^2).", "To sum up, the evidence lower bound is approximated with the sampled topic distribution as: L_d ≈ ∑_n log (Φ θ̃)_{w_n} − KL[ N(x | f_μ(w_d), f_{σ^2}(w_d)) || N(x | μ_0, σ_0^2) ] (18). 3.4 Dynamically Updating the Tree Structure: To allow an unbounded tree structure, we introduce two heuristic rules for adding and pruning branches.", "We compute the proportion of the words in topic k as p_k = ( ∑_{d=1}^{D} N_d θ̃_{d,k} ) / ( ∑_{d=1}^{D} N_d ).", "For each non-leaf topic k, if p_k is more than a threshold, a child is added to refine the topic.", "For each topic k, if the cumulative proportion of topics over its descendants, ∑_{j ∈ Des(k)} p_j, is less than a threshold, the k-th topic and its descendants are removed (Des(k) denotes the set containing topic k and its descendants).", "We also remove topics with no children at the bottom level.",
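As a rough illustration of these two rules, here is a sketch in Python; the tree representation (a parent map) and the return convention are our own assumptions, not the authors' released code.

```python
def update_tree(parent, doc_lengths, topic_props, threshold=0.05):
    """Sketch of the grow/prune heuristics of Section 3.4.
    parent: maps each non-root topic id to its parent id (root = 0).
    topic_props[d][k]: posterior topic proportions of document d."""
    total = sum(doc_lengths)
    topics = set(parent) | {0}
    # p_k = (sum_d N_d * theta-tilde_{d,k}) / (sum_d N_d)
    p = {k: sum(n * props.get(k, 0.0)
                for n, props in zip(doc_lengths, topic_props)) / total
         for k in topics}
    children = {k: [c for c in parent if parent[c] == k] for k in topics}

    def descendants(k):
        out = {k}
        for c in children[k]:
            out |= descendants(c)
        return out

    # Rule 1: refine heavy non-leaf topics by adding a child.
    to_add = [k for k in topics if children[k] and p[k] > threshold]
    # Rule 2: prune subtrees whose cumulative proportion is too small.
    to_remove = [k for k in topics if k != 0
                 and sum(p[j] for j in descendants(k)) < threshold]
    return to_add, to_remove  # the caller grows/prunes branches accordingly
```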
"In our experiments, we use the 20NewsGroups and the Amazon product reviews.", "The 20NewsGroups is a collection of 20 different news groups containing 11,258 training and 7,487 testing documents.²", "For the Amazon product reviews, we use the domain of Laptop Bags provided by Angelidis and Lapata (2018), with 31,943 training, 385 validation and 416 testing documents.³", "We use the provided test documents in our evaluations, while randomly splitting the remainder of the documents into training and validation sets.", "As baselines, we use a tree-structured topic model based on the nested Chinese restaurant process (nCRP) with collapsed Gibbs sampling (Blei et al., 2010).", "In addition, we use a flat neural topic model, i.e., the recurrent stick-breaking process (RSB), which constructs the unbounded flat topic distribution via an RNN (Miao et al., 2017).", "For the TSNTM and RSB, we use 256-dimensional word embeddings, a one-hidden-layer MLP with 256 hidden units, and a one-layer RNN with 256 hidden units to construct the variational parameters.", "We set the hyper-parameters of the Gaussian prior distribution, μ_0 and σ_0^2, as a zero mean vector and a unit variance vector with 32 dimensions, respectively.", "We train the model using AdaGrad (Duchi et al., 2011) with a learning rate of 10^{-2}, an initial accumulator value of 10^{-1}, and a batch size of 64.", "We grow and prune the tree with a threshold of 0.05 (Section 3.4)", "and set the temperature of (13) to τ = 10 (Section 3.2).⁴", "Regarding the nCRP-based model, we set the nCRP parameter to 0.01,", "the GEM parameters to 10 and m = 0.5, and the Dirichlet parameter to 5.", "The hyperparameters of each model are tuned based on the perplexity on the validation set of the Amazon product reviews.", "We fix the number of levels in the tree to 3, with an initial number of branches of 3 for both the second and third levels.", "² For direct comparison against Miao et al. (2017), we use the training/testing splits and the vocabulary provided at https://github.com/akashgit/autoencoding_vi_for_topic_models .", "³ https://github.com/stangelid/oposum ⁴ The code to reproduce the results is available at https://github.com/misonuma/tsntm .", "Several works (Chang et al., 2009; Newman et al., 2010) have pointed out that perplexity is not suitable for evaluating topic interpretability.", "Meanwhile, Lau et al. (2014) showed that the normalized pointwise mutual information (NPMI) between all pairs of words in each topic closely corresponds to the ranking of topic interpretability by human annotators.", "Thus, we use NPMI instead of perplexity as the primary evaluation measure, following Srivastava and Sutton (2017); Ding et al. (2018).", "Table 1 shows the average NPMI of the topics induced by each model.", "Our model is competitive with the nCRP-based model and the RSB on each dataset.", "This indicates that our model can induce topics as interpretable as those of the other models.", "As a note, we also show the average perplexity over the documents for each model in Table 2.", "For the AEVB-based models (RSB and TSNTM), we calculate an upper bound on the perplexity using the ELBO, following Miao et al. (2017); Srivastava and Sutton (2017).", "For the nCRP-based model, in contrast, we estimate it by sampling the posteriors with collapsed Gibbs sampling.", "Even though it is difficult to compare them directly, the perplexity of the nCRP-based model is lower than that of the AEVB-based models.", "This tendency corresponds to the results of Srivastava and Sutton (2017); Ding et al. (2018), which report that models with collapsed Gibbs sampling achieve the lowest perplexity in comparison with AEVB-based models.", "In addition, Ding et al. (2018) also report that there is a trade-off between perplexity and NPMI.", "Therefore, it is natural that our model is competitive with the other models with regard to NPMI, while there is a significant difference in achieved perplexity.", "To evaluate the characteristics of the tree structure, we adopt two metrics, topic specialization and hierarchical affinity, following Kim et al. 
(2012).", "Topic specialization: an important characteristic of the tree structure is that the most general topic is assigned to the root, while topics become more specific toward the leaves.", "To quantify this characteristic, we measure the specialization score as the cosine similarity of the word distribution between each topic and the entire corpus.", "As the entire corpus is regarded as the most general topic, more specific topics have lower similarity scores.", "Figure 3 presents the average topic specialization scores for each level.", "While the root of the nCRP is more general than that of our model, the tendency is roughly similar for both models.", "Hierarchical affinity: it is preferable that a parent topic is more similar to its children than to the topics descended from other parents.", "To verify this property, for each parent in the second level, we calculate the average cosine similarity of the word distribution to its children and to non-children, respectively.", "Figure 4 shows the average cosine similarity over the topics.", "While the nCRP-based model induces child topics only slightly more similar to their parents, our model infers child topics with more similarity to their parent topics.", "Moreover, the lower scores of the TSNTM also indicate that it induces more diverse topics than the nCRP-based model.", "To evaluate how our model scales with the size of the datasets, we measure the training time until convergence for various numbers of documents.", "We randomly sample several numbers of documents (1,000, 2,000, 4,000, 8,000, 16,000, and all) from the training set of the Amazon product reviews and measure the training time for each number of documents.", "The training is stopped when the perplexity on the validation set has not improved for 10 consecutive iterations over the entire batches.", "We measure the time needed to sample the posteriors or update the model parameters, excluding the time needed to compute the perplexity.⁵", "As shown in Figure 5, as the number of documents increases, the training time of our model does not change considerably, whereas that of the nCRP increases significantly.", "Our model can be trained approximately 15 times faster than the nCRP-based model with 32,000 documents.", "We proposed a novel tree-structured topic model, the TSNTM, which parameterizes the topic distribution", "over an infinite tree by a DRNN.", "Experimental results demonstrated that the TSNTM achieves competitive performance when inducing latent topics and their tree structures, as compared to a prior tree-structured topic model (Blei et al., 2010).", "With the help of AEVB, the TSNTM can be trained approximately 15 times faster than the nCRP-based model and scales to larger datasets.", "This allows the tree-structured topic model to be integrated with recent neural models for downstream tasks, such as aspect-based sentiment analysis (Esmaeili et al., 2019) and abstractive summarization (Wang et al., 2019).", "By incorporating our model instead of flat topic models, such systems can provide information from multiple viewpoints at the desired granularity.", "We would like to thank the anonymous reviewers for their valuable feedback.", "This work was supported by JST ACT-X Grant Number JPMJAX1904 and CREST Grant Number JPMJCR1513, Japan.", "⁵ All computation times are measured on the same machine with a Xeon E5-2683-v4 (2.1 GHz, 16 cores) CPU and a single GeForce GTX 1080 (8GB) GPU." ]
[ "method", "method", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "other", "other", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "other", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "other", "other", "other" ]
[ "Knowledge graphs (KG) have become increasingly important to endow modern recommender systems with the ability to generate traceable reasoning paths to explain the recommendation process.", "However, prior research rarely considers the faithfulness of the derived explanations to justify the decision-making process.", "To the best of our knowledge, this is the first work that models and evaluates faithfully explainable recommendation under the framework of KG reasoning.", "Specifically, we propose neural logic reasoning for explainable recommendation (LOGER) by drawing on interpretable logical rules to guide the path-reasoning process for explanation generation.", "We experiment on three large-scale datasets in the e-commerce domain, demonstrating the effectiveness of our method in delivering high-quality recommendations as well as ascertaining the faithfulness of the derived explanation.", "Compared with traditional recommender systems (RS), explainable recommendation is not only capable of providing high-quality recommendation results but also offers personalized and intuitive explanations (Zhang and Chen, 2020).", "Incorporating a knowledge graph (KG) into recommender systems has become increasingly popular, since KG reasoning is able to generate explainable paths connecting users to relevant target item entities.", "At the same time, there is increasing demand for systems to ascertain the faithfulness of the generated explanation, i.e., assess whether it faithfully reflects the reasoning process of the model and is consistent with the historic user behavior.", "However, previous work has largely neglected faithfulness in KG-enhanced explainable recommendation (Xian et al., 2020a; Fu et al., 2020a).", "A number of studies (Lakkaraju et al., 2019; ter Hoeve et al., 2018; Wu and Mooney, 2018) argue that Equal contribution faithful explanations should also be personalized and gain the capability to reflect the personalized user historic behavior.", "However, to the best of our knowledge, none of the existing explainable recommendation models based on KGs have considered faithfulness in the explainable reasoning process and its evaluation on the generated explainable paths.", "For instance, PGPR (Xian et al., 2019; Zhao et al., 2020) infers explainable paths over the KG without considering personalized user behavior, and its prediction on next potential entities is merely based on the overall knowledge-driven rewards.", "CAFE (Xian et al., 2020b) builds user module profiles to guide the path inference procedure.", "However, as illustrated in Subramanian et al. 
(2020), such neural module networks only implicitly abstract the reasoning process and do not explicitly consider the faithfulness of explanations.", "In this paper, we propose a new KG-enhanced recommendation model called LOGER to produce faithfully explainable recommendations via neural logic reasoning.", "To fully account for the heterogeneous information and rules about users and items in the KG, we leverage an interpretable neural logic model for logical reasoning, enhanced by a general graph encoder that learns KG representations to capture semantic aspects of entities and relations.", "These two components are trained iteratively via the EM algorithm, marrying the interpretability of logical rules with the expressiveness of KG embeddings.", "Subsequently, the learned rule weights are leveraged to guide the path reasoning to generate faithful explanations.", "The derived logical rules are expected to be consistent with historic user behavior, and the resulting paths to genuinely reflect the decision-making process in KG reasoning.", "We experiment on three large-scale datasets for e-commerce recommendation that cover rich user behavior patterns.", "The results demonstrate the superior recommendation performance achieved by our model compared to the state-of-the-art baselines, while ascertaining the faithfulness of the generated path-based explanations.", "The contributions of this paper are threefold.", "We highlight the significance of considering faithfulness in explainable recommendation.", "We propose a novel approach that incorporates interpretable logical rules into KG path reasoning for recommendation and explanation generation.", "We experiment on three large-scale datasets, showing promising recommendation performance as well as faithful path-based explanations.", "A knowledge graph (KG) for recommendation is defined as G = {(e_h, r, e_t) | e_h, e_t ∈ E, r ∈ R}, where E denotes the entity set, consisting of the sets of users U, items I, and other entities, while R denotes the relation set.", "Each triplet (e_h, r, e_t) represents a fact indicating that head entity e_h interacts with tail entity e_t via relation r.", "In recommendation tasks, we are particularly interested in user-item interactions {(u, r_ui, v) | u ∈ U, r_ui ∈ R, v ∈ I}, with the special relation r_ui meaning 'purchase' in e-commerce or 'like' in movie recommendation.", "The problem of KG reasoning for explainable recommendation is formulated as follows.", "Given an incomplete KG G with missing user-item interactions, for every user u ∈ U, the goal is to select a set of items as recommendations {v | (u, r_ui, v) ∉ G, v ∈ I}, along with a set of paths as explanations connecting each pair of the user and a predicted item.", "The key challenge is to not only guarantee the recommendation quality with the rich information in the KG, but also generate faithful explanations that reflect the actual decision-making process of the recommendation model and are consistent with historic user behavior.", "We introduce the novel neural LOGic Explainable Recommender (LOGER) for producing faithfully explainable recommendations with a KG.", "As illustrated in Fig. 
1, it consists of three components:", "(i) a KG encoder for learning embeddings of KG entities and relations to capture their semantics,", "(ii) a neural logic model for conducting interpretable logical reasoning to make recommendations, and", "(iii) a rule-guided path reasoner for generating faithfully explainable paths.", "Both the KG encoder and the neural logic model are trained iteratively via the EM algorithm (Neal and Hinton, 1998) so that they mutually benefit in making recommendations via logical reasoning.", "Additionally, personalized rule importance scores are derived for every user and leveraged to guide the path reasoning for faithful explanation generation.", "Let X_hrt be a binary random variable indicating whether a triplet (e_h, r, e_t) is true or not, X_G = {X_hrt | (e_h, r, e_t) ∈ G} be a random variable regarding all observed triplets in the KG G, and X_H = {X_hrt | (e_h, r, e_t) ∈ H} be a random variable over the hidden user-item interactions in H = {(u, r_ui, v) | u ∈ U, v ∈ I, (u, r_ui, v) ∉ G}.", "The KG encoder is generally defined as a triplet-wise function f_θ : E × R × E → [0, 1], parametrized by θ, that maps each triplet to a real-valued score.", "For any triplet (e_h, r, e_t) ∈ G ∪ H, we can interpret its truth probabilistically via the KG encoder f_θ as q(X_hrt | θ) = Bernoulli(X_hrt | f_θ(e_h, r, e_t)).", "The KG encoder f_θ can be instantiated with any existing KG embedding (Ji et al., 2020) or graph neural network (Wu et al., 2020) model.", "We focus on composition rules for user-item interactions, i.e., r_ui is a composition of relations r_1, . . . , r_j if (u, r_1, e_1) ∧ · · · ∧ (e_{j−1}, r_j, v) ⇒ (u, r_ui, v) for all u ∈ U, v ∈ I, e_1, . . . , e_{j−1} ∈ E.", "Given a set of logical rules L mined from the KG, the goal of this component is, for every user u ∈ U, to emit a set of personalized rule importance scores y_u = {y_{u,l}}_{l ∈ L} that capture the historic user behavior.", "To achieve this, we build upon Markov Logic Networks (Qu and Tang, 2019), an interpretable probabilistic logic reasoning method that models the joint distribution of all triplets via a set of logical rules L, i.e., p(X_G, X_H | w) = (1/Z) exp( ∑_{l ∈ L} w_l n_l ), where w = {w_l}_{l ∈ L} with w_l being the global weight of rule l ∈ L, and n_l denotes the number of true groundings of rule l over the observed and hidden triplets.", "Accordingly, we define the personalized rule importance score as y_{u,l} = w_l n_l(u) / ∑_{l′ ∈ L} n_{l′}(u), where n_l(u) is the number of groundings of rule l over the observed triplets in {(u, r_ui, v) ∈ G}.", "However, it is intractable to directly maximize the log-likelihood of the observed triplets to learn the global weights w, i.e., max_w log p(X_G | w).", "Instead, we employ the EM algorithm to iteratively optimize the objective and acquire the optimal global weights.", "E-Step: we introduce a mean-field variational distribution q(X_H | θ) ≈ ∏_{(e_h, r, e_t) ∈ H} q(X_hrt | θ) over the hidden user-item interactions in H.", "The goal of the E-step is to estimate q(X_H | θ) by minimizing the KL divergence between q(X_H | θ) and the posterior distribution p(X_H | X_G, w) with fixed w.", "For each triplet (e_h, r, e_t) ∈ H, we denote by L_hrt the set of rules associated with the triplet and by G_hrt the corresponding groundings of all logical rules in L_hrt.", "Following Qu and Tang (2019), the optimal q(X_H | θ) can be achieved under the fixed-point condition, i.e., q(X_hrt | θ) ≈ p(X_hrt | X_{G_hrt}, w), for all (e_h, r, e_t) ∈ H.",
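The personalized rule importance scores defined above admit a very direct implementation; the following is a small sketch under assumed interfaces (the grounding counter is a hypothetical callable, not the authors' API).

```python
def rule_importance_scores(user_triplets, ground_count, global_weights):
    """Sketch of y_{u,l} = w_l * n_l(u) / sum_{l'} n_{l'}(u).

    user_triplets: the observed (u, r_ui, v) interactions of one user.
    ground_count[l]: assumed callable returning n_l(u), the number of
    groundings of rule l over those triplets.
    global_weights[l]: the global rule weight w_l from the M-step.
    """
    n = {l: ground_count[l](user_triplets) for l in global_weights}
    z = sum(n.values())
    if z == 0:
        return {l: 0.0 for l in global_weights}  # no grounded rules
    return {l: global_weights[l] * n[l] / z for l in global_weights}
```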
"Here, q(X_hrt | θ) is approximated by the KG encoder f_θ, and p(X_hrt | X_{G_hrt}, w) can be estimated with the global weights w of the rules in L_hrt from the last iteration: p(X_hrt = 1 | X_{G_hrt}, w) = σ( ∑_{l ∈ L_hrt} w_l / |L_hrt| ), (1) where σ(·) is the sigmoid function.", "In other words, if a hidden triplet (e_h, r, e_t) is asserted to be true by the rules (e.g., p(X_hrt = 1 | X_{G_hrt}, w) > 0.5),", "the probability q(X_hrt = 1 | θ) given by the KG encoder is also expected to be high.", "Therefore, to learn the parameters θ, we aim to maximize the log-likelihood over all observed triplets in G and the plausibly true hidden triplets in H⁺ = {(e_h, r, e_t) | p(X_hrt = 1 | X_{G_hrt}, w) ≥ λ}, which leads to the objective ℓ(θ) = ∑_{(e_h, r, e_t) ∈ G ∪ H⁺} log q(X_hrt = 1 | θ), (2) where λ is a hyperparameter.", "M-Step: the goal of the M-step is to learn the global rule weights w by maximizing the log-likelihood E_{q(X_H)}[log p(X_G, X_H; w)] given a fixed θ from the E-step.", "Since the log-likelihood term models the joint distribution over all triplets, which is hard to compute for a large KG, we approximate it with the pseudo-likelihood (Besag, 1975): ℓ_PL(w) = ∑_{(e_h, r, e_t) ∈ G ∪ H} E_{q(X_H | θ)}[log p(X_hrt | X_{G_hrt}, w)].", "Then, we can invoke gradient ascent to acquire the optimal w, with the gradient defined as: ∇_{w_l} ℓ_PL(w) = ∑_{(e_h, r, e_t) ∈ G} (1 − p_hrt) / |L_hrt| + ∑_{(e_h, r, e_t) ∈ H} (q(X_hrt = 1 | θ) − p_hrt) / |L_hrt|, (3) where p_hrt = p(X_hrt = 1 | X_{G_hrt}, w).", "Once the optimal global weights are acquired, we can make a recommendation by calculating the ranking score of a user u ∈ U and an item v ∈ I as q(X_urv | θ) + β · p(X_urv = 1 | X_{G_urv}, w), where r = r_ui and β ∈ ℝ is a hyperparameter.", "We draw on the KG encoder f_θ and the personalized rule importance scores y_u from the last two steps to generate explainable paths for every user u.", "Specifically, we train an LSTM-based path reasoning network that takes the start user embedding as input and predicts a sequence of entities and relations to form a path.", "For every user u, we restrict the reasoner to generate paths that follow the rules with the largest scores in y_u.", "The details of the path reasoner and the path reasoning procedure are described in the Appendix.",
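To make the EM updates above concrete, here is a minimal sketch of Eq. (1) and the pseudo-likelihood gradient of Eq. (3); the interfaces (rules_of, q_prob) are our own assumptions standing in for the model's components.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def rule_posterior(rule_ids, w):
    # Eq. (1): p(X_hrt = 1 | X_{G_hrt}, w) = sigmoid(mean rule weight)
    return sigmoid(sum(w[l] for l in rule_ids) / len(rule_ids))

def mstep_gradient(observed, hidden, rules_of, w, q_prob):
    """Eq. (3): gradient of the pseudo-likelihood w.r.t. each w_l.

    observed / hidden: triplets in G and H.
    rules_of(t): rules L_hrt touching triplet t (assumed interface).
    q_prob(t): encoder probability q(X_hrt = 1 | theta).
    """
    grad = {l: 0.0 for l in w}
    for t in observed:
        rules = rules_of(t)
        p = rule_posterior(rules, w)
        for l in rules:
            grad[l] += (1.0 - p) / len(rules)
    for t in hidden:
        rules = rules_of(t)
        p = rule_posterior(rules, w)
        for l in rules:
            grad[l] += (q_prob(t) - p) / len(rules)
    return grad  # gradient ascent: w[l] += lr * grad[l]
```

Gradient ascent on these quantities yields the updated global weights used in the next E-step and in the final ranking score.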
"Dataset: we experiment on three domain-specific e-commerce datasets from Amazon, namely Cellphones, Grocery, and Automotive.", "Two requirements led to the selection of these categories in our experiments.", "First, the constructed KG should contain rich user behavior patterns, e.g., user-mentioned features or preferred styles.", "This is the major difference from most existing work (Zhao et al., 2019), which only extends knowledge on the item side.", "Second, the KGs are assumed to be large-scale.", "We select several large subsets from Fu et al. (2020b), where the constructed KG can be regarded as an updated version of those of Ai et al. (2019), based on the Amazon review dataset (Ni et al., 2019).", "These three datasets are the ones that satisfy both of the aforementioned requirements.", "Statistical details of the datasets are provided in the Appendix.", "Baselines & Metrics: we consider several state-of-the-art baselines in the following experiments.", "CKE (Zhang et al., 2016) uses semantic representations derived from TransR (Lin et al., 2015) to enhance the matrix factorization process.", "RippleNet (Wang et al., 2018) is a hybrid method combining regularization and path formats, augmenting user representations with a memory-network-like approach.", "PGPR (Xian et al., 2019) designed a policy-guided graph search algorithm for recommendation over KGs.", "HeteroEmbed (Ai et al., 2018) aims to learn the embeddings of a heterogeneous graph including users, items, and relations for recommendation.", "KGAT (Wang et al., 2019) explicitly models higher-order KG connectivity and learns node representations by propagating the embeddings of neighbors, with their importance discriminated by an attention mechanism.", "We adopted the same metrics as Ai et al. (2018) to evaluate the recommendation performance of all models: Precision, Recall, Normalized Discounted Cumulative Gain (NDCG), and Hit Rate (HR).", "We first evaluate the recommendation quality of our model.", "The results of all methods across all three datasets are reported in Table 1. In general, our method significantly outperforms all state-of-the-art baselines on all metrics.", "Taking Cellphones as an example, our method achieves an improvement of 6.01% in NDCG against the best baseline (underlined), and an improvement of 5.82% in Hits@10.", "Similar trends can be observed on the other benchmarks as well.", "Note that both our model and HeteroEmbed adopt TransE for KG representation learning, yet our model achieves better performance, mainly attributable to the iterative learning of the graph encoder and the neural logic model.", "Measuring Faithfulness: inspired by previous work (Maaten and Hinton, 2008; Serrano and Smith, 2019; Subramanian et al., 2020), we define faithfulness in terms of the Jensen-Shannon (JS) divergence between rule-related distributions from the training and test sets.", "Specifically, we randomly sample 50 users from the training set.", "For each user u, we further sample around 1,000 paths between the user and the connected item nodes, and calculate the rule distribution over these paths, denoted by F(u).", "We compare the proposed LOGER with two baselines, PGPR and KGAT, each of which is used to generate 20 explainable paths for every selected user in the test phase.", "Similarly, we can calculate the rule distribution over these 20 paths, denoted by Q_f(u).", "The two JS scores are then defined as JS(F(u) || Q_f(u)) and JS(F(u) || Q_w(u)).", "Here, Q_w(u) is the rule weight distribution derived from the personalized rule importance scores of our method or the path weights of the baselines.", "Smaller values of the two JS scores correspond to better faithfulness of the explainable paths.", "This faithfulness evaluation is motivated by the consistency of the explainable paths with respect to the user's historic behavior.",
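Computing these faithfulness scores is straightforward; a minimal sketch follows, where the use of SciPy is our own choice rather than a detail from the paper.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def js_score(F_u, Q_u):
    """JS divergence between a user's training-time rule distribution
    F(u) and a test-time rule distribution (Q_f(u) or Q_w(u)), both
    given as arrays over the same rule set; smaller values indicate
    more faithful explanations. SciPy's jensenshannon returns the
    square root of the JS divergence, hence the squaring."""
    return jensenshannon(np.asarray(F_u), np.asarray(Q_u)) ** 2
```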
"User Study: additionally, we conduct a user study to evaluate the faithfulness of the explainable paths.", "We display 50 sampled KG paths leading from one user towards purchased items in the training set to represent examples of the user's historical behavior.", "For comparison, we also present 10 explainable paths generated by each of the three methods for the same user on the test set.", "We ask 20 human subjects to rank these methods based on whether the generated paths are consistent with those from the training set.", "[Table 1: Recommendation quality of all methods on the three datasets, reported as Precision / Recall / NDCG / HR; bold numbers in the original indicate the best results. CKE: Cellphones 0.0360 / 0.1760 / 0.1847 / 0.3067, Grocery 0.0612 / 0.2528 / 0.3070 / 0.4511, Automotive 0.0458 / 0.1871 / 0.2257 / 0.3621. RippleNet: Cellphones 0.0419 / 0.2141 / 0.2177 / 0.3715, Grocery 0.0591 / 0.2682 / 0.2858 / 0.4800, Automotive not reported. PGPR: Cellphones 0.0462 / 0.2148 / 0.2366 / 0.3801, Grocery 0.0649 / 0.2710 / 0.3174 / 0.4926, Automotive 0.0589 / 0.2315 / 0.2804 / 0.4409. KGAT: Cellphones 0.0476 / 0.2274 / 0.2365 / 0.3835, Grocery 0.0702 / 0.2916 / 0.3381 / 0.5020, Automotive 0.0601 / 0.2500 / 0.2859 / 0.4514. HeteroEmbed: Cellphones 0.0527 / 0.2543 / 0.2626 / 0.4226, Grocery 0.0785 / 0.3316 / 0.3701 / 0.5572, Automotive 0.0695 / 0.2923 / 0.3314 / 0.5082. LOGER: Cellphones 0.0622 / 0.2977 / 0.3227 / 0.4808, Grocery 0.0906 / 0.3754 / 0.4370 / 0.6121, Automotive 0.0743 / 0.3091 / 0.3653 / 0.5346.]", "[Table 2: JS scores and average ranks of the explainable paths obtained by the three methods; bold numbers indicate the best results.]", "Then, we calculate the average ranking scores (Avg. Rank) by averaging the ranks given by each human tester to each method.", "Results: the results on the Cellphones and Grocery datasets are reported in Table 2. We observe that our method, LOGER, achieves the lowest JS scores and average ranking score, which reveals the effectiveness of our model in producing more faithful explanations, both in the quantitative measurements and in the user study.", "We further study how the hidden triplets used in training the KG encoder (Eq. 2) influence the recommendation performance.", "We experiment on the Cellphones data under different sizes of the hidden triplet set H⁺.", "We choose sizes of {10, 20, 30, 40, 50} and keep all other settings unchanged.", "The results are plotted in Fig. 2, including our model (red circles) and the best baseline, HeteroEmbed (blue crosses).", "We find that our model consistently outperforms the baseline on all metrics under different numbers of hidden triplets.", "Better recommendation performance can be achieved with more hidden triplets included in training the KG encoder, because more candidate items enhance the capability of our model to discern logical rules of good quality and hence benefit the recommendation prediction.", "In this paper, we propose LOGER for faithfully explainable recommendation, which generates explainable paths based on personalized rule importance scores via neural logic reasoning that adequately captures historic user behavior.", "We experiment on three large-scale datasets for e-commerce recommendation, showing the superior recommendation quality of LOGER as well as the faithfulness of the generated explanations, both quantitatively and qualitatively.", "We hope to encourage future work that values explainability and, in particular, the faithfulness of explanations.", "Our code is available at https://github.com/orcax/LOGER .", "We thank the reviewers for their valuable feedback and suggestions.", "This work was supported in part by NSF IIS-1910154 and IIS-2007907.", "Any opinions, findings, conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsors." ]
[ "abstain", "abstain", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "result", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "result", "objective", "result", "method", "other", "other", "other", "other" ]
[ "We consider the problem of learning to simplify medical texts.", "This is important because most reliable, up-to-date information in biomedicine is dense with jargon and thus practically inaccessible to the lay audience.", "Furthermore, manual simplification does not scale to the rapidly growing body of biomedical literature, motivating the need for automated approaches.", "Unfortunately, there are no large-scale resources available for this task.", "In this work we introduce a new corpus of parallel texts in English comprising technical and lay summaries of all published evidence pertaining to different clinical topics.", "We then propose a new metric based on likelihood scores from a masked language model pretrained on scientific texts.", "We show that this automated measure better differentiates between technical and lay summaries than existing heuristics.", "We introduce and evaluate baseline encoder-decoder Transformer models for simplification and propose a novel augmentation to these in which we explicitly penalize the decoder for producing jargon' terms; we find that this yields improvements over baselines in terms of readability.", "The need for accessible medical information has never been greater.", "A Pew Research survey of American's online health habits in 2013 revealed that one in three American adults have gone online to figure out a medical condition (Fox and Duggan, 2013).", "Given the rise of medical misinformation on the internet (Ioannidis et al., 2017), accessibility has become an increasingly urgent issue (World Health Organization, 2013; Armstrong and Naylor, 2019).", "However, sources that provide accurate and up-to-date information, including scientific papers and systematic reviews (Chalmers et al., 1995), are often effectively inaccessible to most readers because they are highly technical and laden with terminology (Damay et al., 2006).", "Technical abstract: Analysis showed a higher rate of weight gain in the high-volume feeds group: mean difference 6.20 g/kg/d (95% confidence interval 2.71 to 9.69).", "There was no increase in the risk of feed intolerance or necrotising enterocolitis with high-volume feeds, but 95% confidence intervals around these estimates were wide.", "Plain-language summary: Very low birth weight infants who receive more milk than standard volumes gain weight more quickly during their hospital stay.", "We found no evidence suggesting that giving infants high volumes of milk causes feeding or gut problems, but this finding is not certain.", "One potential solution to this problem is text simplification , i.e., editing documents such that they are accessible to a wider audience, while preserving the key information that they contain.", "Although manual simplification is too expensive to feasibly apply at scale, automatic text simplification (Sid-dharthan, 2014; Alva-Manchego et al., 2020) provides a potential means of rendering a large volume of specialist knowledge more accessible.", "Large-scale data-driven simplification systems have mostly been trained on Wikipedia (Zhu et al., 2010; Woodsend and Lapata, 2011; Coster and Kauchak, 2011) and news (Xu et al., 2015), and focus on sentence simplification (Wubben et al., 2012; Wang et al., 2016; Xu et al., 2016; Zhang and Lapata, 2017; Kriz et al., 2019; Dong et al., 2019; Alva-Manchego et al., 2020); on the other hand, medical text simplification is resource poor.", "Recent work has involved constructing sentence-aligned data automatically using monolingual text alignment methods (Adduru 
et al., 2018; Van den Bercken et al., 2019), but this process is noisy and constrains the task to sentence-level simplification.", "In this work we explore new data and modern conditional text generation models (Lewis et al., 2020) to simplify medical documents.", "We introduce a dataset of paired (technical, simplified) texts derived from the Cochrane Database of Systematic Reviews, which comprises evidence syntheses on a wide range of clinical topics.", "Critically, each review includes a plain-language summary (PLS) written by the authors.", "PLS are written directly from the full reviews with their own structure and guidelines; they are not simplified versions of the corresponding technical abstracts of the reviews, nor are they summaries of the abstracts.", "However, we observe that portions of the PLS can be considered simplifications of analogous sections in the abstracts; that is, they contain roughly the same content but involve simplification operations such as paraphrasing, word/sentence deletion, and summarization.", "We heuristically derive 4459 such pairs of sections (or paragraphs) of technical-plain English bitexts.", "We provide an excerpt of the dataset we have constructed in Table 1.", "This data allows us to explore the characteristics of simplified versions of technical medical texts.", "We show that the differences in traditional readability metrics, such as Flesch-Kincaid (Kincaid et al., 1975) and the Automated Readability Index (Senter and Smith, 1967), are small.", "Instead, the differences are better captured using large-scale pretrained masked language models; this reveals that there is more to the language difference than shallow cues such as sentence and word lengths, which traditional readability metrics focus on.", "We present baseline methods for automatic text simplification over this data and perform analyses that highlight the challenges of this important simplification task.", "We find that when naively fine-tuned for the task, existing encoder-decoder models such as BART (Lewis et al., 2020) tend to prefer deletion over paraphrasing or explaining, and are prone to generating technical words.", "We propose a new approach to try to mitigate the latter issue by imposing a variant of unlikelihood loss (Welleck et al., 2019) that explicitly penalizes the decoder for producing 'technical' tokens.", "We show that this yields improvements in terms of readability with only a minor tradeoff in content quality.", "In sum, this work takes a step towards paragraph-level simplification of medical texts by: (1) introducing a sizable new dataset, (2) proposing and validating a new masked language model (MLM)-based metric for scoring the technicality of texts, (3) analyzing and understanding the style of plain language in this important domain, and (4) presenting baselines that exploit a variant of unlikelihood training to explicitly penalize models for producing jargon.", "We release our code and data at https://github.com/AshOlogn/Paragraph-level-Simplification-of-Medical-Texts .", "Recent efforts on data-driven text simplification methods have tended to rely on two resources: the Wikipedia-Simple Wikipedia aligned corpus (Zhu et al., 2010; Woodsend and Lapata, 2011; Coster and Kauchak, 2011) and the Newsela simplification corpus (Xu et al., 2015).", "Yet, there is an urgent need to simplify medical texts due to health literacy levels (World Health Organization, 2013).", "However, due to a lack of resources with which to train model-based simplification 
systems in this domain, past work has tended to focus on lexical simplification (Damay et al., 2006; Kandula et al., 2010; Abrahamsson et al., 2014; Mukherjee et al., 2017).", "Recently, Adduru et al. (2018) and Van den Bercken et al. (2019) introduced sentence-aligned corpora at the scale of thousands of sentence pairs.", "In contrast to our corpus, these datasets were automatically derived using paraphrase mining or monolingual alignment processes.", "Furthermore, as these are exclusively sentence corpora, they limit the set of potential approaches to just those that operate over sentences.", "Grabar and Cardon (2018) created a simplification corpus for medical texts in French, in which a small subset of the text pairs are manually sentence-aligned, resulting in 663 sentence pairs, 112 of which are also from Cochrane.", "With respect to modeling, recent work has focused on sentence simplification, treating it as a monolingual machine translation task (Wubben et al., 2012; Wang et al., 2016; Xu et al., 2016) using encoder-decoder models (Zhang and Lapata, 2017; Kriz et al., 2019; Dong et al., 2019).", "In the medical domain, existing systems tend to adopt lexical and syntactic simplification (Damay et al., 2006; Kandula et al., 2010; Llanos et al., 2016).", "Research on document simplification has been sparse; to the best of our knowledge, the few prior works on this in English have focused on analysis (Petersen and Ostendorf, 2007), sentence deletion (Woodsend and Lapata, 2011; Zhong et al., 2020), and localized explanation generation (Srikanth and Li, 2020).", "This work proposes and evaluates an encoder-decoder model for paragraph-level simplification.", "We compiled a dataset of technical abstracts of biomedical systematic reviews and corresponding PLS from the Cochrane Database of Systematic Reviews, which comprises thousands of evidence synopses (where authors provide an overview of all published evidence relevant to a particular clinical question or topic).", "The PLS are written by review authors; Cochrane's PLS standards (Cochrane, 2013) recommend that the PLS should be written in plain English that can be understood by most readers without a university education.", "PLS are not parallel with every sentence in the abstract; on the contrary, they are structured heterogeneously (Kadic et al., 2016).", "To derive the dataset, we scraped the online interface to the database for articles containing PLS, extracting the raw text of the technical abstracts and PLS for those that we identified.", "In this way we obtained 7820 pairs after removing problematic links (e.g., HTTP 404 errors).", "We also excluded reviews with atypical formatting that would have required extensive manual inspection.", "On average, PLS are shorter than abstracts (Table 2, 'raw').", "They contain sections different from those in the abstracts, emphasize different content, and sometimes contain information not in the abstract.", "We divided documents into those that are split into sections with subheadings and those without (henceforth, long-form summaries); 56% of the data are long-form.", "For the sectioned PLS, headers are quite different from those found in the abstracts.", "The latter adhere to one of the following two formats:", "1. Background, Objectives, Search Methods, Selection Criteria, Data Collection and Analysis, Main Results, Authors' Conclusions", "2. 
Background, Objectives, Methods, Main Results, Authors' Conclusions. In contrast, PLS contain a variety of headings, with the most common ones being: background, study characteristics, key results, review question, quality of the evidence, search date, quality of evidence, and conclusions. Others include questions such as 'What was the aim of this review?'", "and 'How up-to-date was the review?'", "Manual inspection revealed that the results, discussion, and conclusion sections of abstracts and summaries tended to occur in parallel.", "This motivated us to extract aligned subsets of abstracts and summaries to compose our dataset.", "More specifically, we determined the approximate location of the section describing studies and results in each text and kept everything from that point forward.", "Therefore, in the abstracts we kept the text from the Main Results section onward.", "For the sectioned PLS, we kept every section after and including the first one that contained one of the following substrings: find, found, evidence, tell us, study characteristic.", "For the long-form PLS, we found the first paragraph containing any of the following words within the first couple of sentences and included that and all subsequent paragraphs: journal, study, studies, trial.", "We keep one-paragraph PLS in their entirety.", "We also exclude instances where the PLS and abstracts are drastically different in length, keeping only instances where the length ratio between the two falls between 0.2 and 1.3.", "Our final dataset comprises 4459 pairs of technical abstracts and PLS, all containing at most 1024 tokens (so that they can be fed into the BART model in their entirety).",
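A compact sketch of these alignment and filtering heuristics is given below; the function signatures and data structures are our own assumptions (the released code may differ).

```python
SECTIONED_CUES = ("find", "found", "evidence", "tell us", "study characteristic")
LONGFORM_CUES = ("journal", "study", "studies", "trial")

def extract_pls_tail(sections=None, paragraphs=None):
    """Keep the aligned portion of a PLS, per the rules described above.
    sections: list of section texts (sectioned PLS); paragraphs: list of
    paragraphs (long-form PLS)."""
    if sections is not None:
        # Keep everything from the first cue-bearing section onward.
        for i, section in enumerate(sections):
            if any(cue in section.lower() for cue in SECTIONED_CUES):
                return sections[i:]
        return []
    # Long-form: keep from the first paragraph whose opening sentences
    # mention one of the cue words.
    for i, para in enumerate(paragraphs):
        head = " ".join(para.split(". ")[:2]).lower()
        if any(cue in head for cue in LONGFORM_CUES):
            return paragraphs[i:]
    return []

def keep_pair(abstract, pls, lo=0.2, hi=1.3):
    # Drop pairs whose PLS/abstract length ratio is extreme.
    ratio = len(pls.split()) / max(1, len(abstract.split()))
    return lo <= ratio <= hi
```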
"Readability metrics.", "Designing metrics that reliably capture readability remains an open topic of research.", "In recent years, a host of metrics have been developed that use a wide variety of linguistic features to assess readability in a supervised manner.", "For example, Kate et al. (2010) developed a metric based on syntactic, semantic, and language model-based features, and Vajjala and Lucic (2018) developed a new readability corpus, on which they trained support vector machines to predict text readability.", "For this medical text simplification task, however, we considered two established heuristics-based readability metrics, due to clear domain differences between our Cochrane corpus and those used to train supervised readability metrics: the Flesch-Kincaid score (Kincaid et al., 1975) and the Automated Readability Index (ARI) (Senter and Smith, 1967), which estimate the educational maturity (grade level) required to comprehend a text.", "[Table 3: Means and standard deviations of readability scores calculated over abstracts and PLS. Flesch-Kincaid: abstracts 14.4 ± 2.3, PLS 12.9 ± 2.4; ARI: abstracts 15.5 ± 2.8, PLS 14.9 ± 3.0.]", "These metrics rely on a combination of shallow cues, most notably the lengths of words, sentences, and documents.", "Table 3 reports the mean grade levels of abstracts and PLS calculated via the above metrics.", "There are small but statistically significant (p < 0.01, paired t-test) differences between the abstract and PLS distributions, especially for Flesch-Kincaid.", "For instance, the largest difference in mean grade level (1.5) is achieved by Flesch-Kincaid, and the difference is only 0.6 with ARI.", "By contrast, a 3-5 grade-level difference was shown on the Wikipedia and Britannica simplification datasets (Li and Nenkova, 2015).", "The high grade levels suggested by standard readability metrics confirm prior studies highlighting that these 'plain language' summaries of medical systematic reviews remain at higher reading levels than those of average US adults (Karacic et al., 2019).", "Masked language models.", "Despite the small differences in readability metrics, PLS do qualitatively seem easier to understand (see Table 1 for an example).", "This suggests that existing measures are incomplete.", "We propose adopting modern masked language models, namely BERT (Devlin et al., 2019), as another means of scoring the 'technicality' of text.", "In particular, when such models are trained on specialized or technical language (e.g., scientific articles), we would expect the likelihoods subsequently assigned to 'jargon' tokens to be relatively high compared to a model trained over general lay corpora, as in the original BERT model (Devlin et al., 2019).", "Capitalizing on this intuition, we consider two large-scale pre-trained masked language models: (1) BERT (Devlin et al., 2019), trained on BooksCorpus (Zhu et al., 2015) and English Wikipedia; and (2) SciBERT (Beltagy et al., 2019), trained on a sample of 1.14 million technical papers from Semantic Scholar (Ammar et al., 2018), mostly biomedical and computer science articles.", "Inspired by the original training objective for these models, we compute a probability score for a document by splitting it into sentences, masking 10 subsets of 15% of the tokens in each sentence (exempting [CLS] and [SEP]), computing the likelihoods of the original tokens in the distributions output by the model at each masked position, and averaging these probabilities over all the masked subsets and sentences in the document.", "The details are shown in Algorithm 1.", "[Algorithm 1: used to compute a probability score for a text document D given a masked language model M.]", "The output of the model returned by a call to FORWARD is a matrix in which each row maps to a distribution over all tokens in the vocabulary.", "The APPEND function adds a value to the end of a list.", "Figure 1 depicts the distributions of probabilities output by general BERT and SciBERT for the abstracts and PLS in our dataset.", "[Figure 1: BERT (left) vs. SciBERT (right) probabilities of technical abstracts (blue) and PLS (red).]", "Both masked LMs induce distributions over instances from the respective sets that are clearly different.", "For example, SciBERT (which yields sharper differences) outputs higher likelihoods for tokens comprising the technical abstracts than for those in the plain-language versions, as we might expect given that it is pretrained on technical literature.", "A paired t-test confirms that these observed differences between the abstracts and PLS distributions are statistically significant (with p < 0.01).",
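A sketch of Algorithm 1 in Python, using the HuggingFace transformers API, could look as follows; the batching, seeding, and exact checkpoint name are simplifications and assumptions on our part.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

def masked_probability_score(sentences,
                             model_name="allenai/scibert_scivocab_uncased",
                             n_subsets=10, mask_frac=0.15):
    """Mask 10 random 15% subsets of each sentence's tokens and average
    the model's probabilities of the original tokens (Algorithm 1)."""
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForMaskedLM.from_pretrained(model_name).eval()
    scores = []
    for sent in sentences:
        ids = tok(sent, return_tensors="pt", truncation=True)["input_ids"][0]
        candidates = torch.arange(1, len(ids) - 1)  # skip [CLS]/[SEP]
        if len(candidates) == 0:
            continue
        k = max(1, int(mask_frac * len(candidates)))
        for _ in range(n_subsets):
            pos = candidates[torch.randperm(len(candidates))[:k]]
            masked = ids.clone()
            masked[pos] = tok.mask_token_id
            with torch.no_grad():
                logits = model(masked.unsqueeze(0)).logits[0]
            probs = torch.softmax(logits[pos], dim=-1)
            # Probability of each original token at its masked position.
            scores += probs[torch.arange(len(pos)), ids[pos]].tolist()
    return sum(scores) / len(scores)
```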
"Which metric discriminates better?", "To better determine how well the proposed masked probability outputs discriminate between technical abstracts and PLS, we plot receiver operating characteristic (ROC) curves for the outputs of BERT, SciBERT, Flesch-Kincaid, and ARI, coding technical and PLS abstracts as 0 and 1, respectively.", "The SciBERT curve has a higher AUC score (0.70) than the general BERT curve (0.66), indicating that it is better at discriminating between plain-language and technical abstracts.", "For this reason, we use the SciBERT masked probabilities when analyzing the texts generated by our models.", "The AUC score for SciBERT is also higher than that for Flesch-Kincaid, indicating that simplicity in PLS can be better captured by probabilistic means than by surface-level linguistic cues, and that it is more appropriately viewed as a stylistic difference rather than one of readability.", "This echoes the arguments made by early investigators of readability metrics that these measures do not replace more subtle linguistic characteristics, e.g., style (Klare, 1963; Chall, 1958).", "We next investigate lexical differences between technical abstracts and PLS.", "In prior work, Gledhill et al. (2019) performed extensive lexical analysis on this corpus by comparing the relative frequencies of different part-of-speech n-grams found in the abstracts and PLS.", "Here, we analyze the weights from a logistic regression model that classifies whether a text is a technical abstract or a PLS (coding the latter as y = 1); the weights learned by the model can be conveniently incorporated into the loss function we use to train our simplification model (Section 4.2).", "We represent texts as normalized bag-of-words frequency vectors (with a feature for each token in the BART vocabulary).", "We performed 5-fold cross-validation on the data and observed an average accuracy of 92.7%, indicating that even this relatively simple model is capable of accurately distinguishing technical abstracts from PLS.", "We also evaluated this model on the train-validation split described in Section 4.3.", "The model achieves a very high AUC score of 0.99, indicating that it almost perfectly separates abstracts from PLS.", "To better understand which kinds of tokens are most associated with technical abstracts and PLS, we examined the tokens with the highest-magnitude learned weights in the model, with the most negative weights corresponding to tokens indicative of technical abstracts and the most positive ones indicative of PLS.", "These notable tokens are displayed in Table 4.", "From this table it is clear that numerical tokens and those related to statistical analysis, like bias and CI (confidence interval), are most indicative of abstracts.", "The tokens indicative of PLS are less illuminating and merely reflect common phrases included in PLS, such as 'In this review' and 'We searched scientific databases'.", "In Section 4, we use this model as a discriminator along with our transformer encoder-decoder model during training to penalize the generation of tokens that are indicative of technical abstracts.", "Our baseline simplification model is BART (Lewis et al., 2020), an encoder-decoder architecture in which both components are transformers (Vaswani et al., 2017).", "The decoder is auto-regressive, making it a natural fit for generation tasks.", "BART has been shown to achieve strong performance on 
text summarization, specifically on the CNN/Daily Mail (Hermann et al., 2015) and XSum (Narayan et al., 2018) datasets.", "We initialize our model with BART parameters estimated via fine-tuning on the XSum (Narayan et al., 2018) dataset, as provided by HuggingFace's Model Hub (Wolf et al., 2019).", "We then fine-tune these models on our corpus.¹", "In the decoding step, we use nucleus sampling (Holtzman et al., 2019): at each step of token generation, the next token is sampled from a probability distribution constructed by removing the 'tail' of probability mass from BART's output distribution and then renormalizing.", "This strategy mitigates the awkward repetition typical of greedy methods like beam search while still avoiding incoherence by truncating the unlikely tail of the original model distribution.", "As an additional mechanism to encourage simple terminology in the PLS generated by our model, we propose a new method in which we explicitly penalize the model for producing seemingly technical words via unlikelihood training (Welleck et al., 2019; Li et al., 2020).", "The idea is to add a term to the objective that encourages the model to decrease the probability mass assigned to some set of tokens S.", "This is realized by adding a term to the (log) loss: UL = −∑_{j=1}^{|S|} log(1 − p_θ(s_j | y_<t, x)), where x is the technical abstract input to the encoder, y_<t is the prefix of the target summary y input to the decoder at time t, and p_θ(s_j | y_<t, x) is the probability assigned to token s_j in the distribution output by BART (with model parameters θ) at time t.", "¹ We also considered starting from a checkpoint corresponding to training over CNN/Daily Mail, but preliminary manual examination of model outputs suggested that starting from XSum yielded higher-quality outputs.", "This expression is referred to as the Unlikelihood Loss (UL).", "The UL term is weighted by a positive constant α and added to the typical log-likelihood objective.", "We construct S by collecting tokens with negative weights from a bag-of-words logistic regression model trained to classify whether a document is simple (1) or complex (0), for which negatively weighted tokens are indicative of complex language.", "We then softmax the absolute values of these weights so that they sum to 1 and the tokens most indicative of technical abstracts (i.e., those with the most negative weights initially) contribute the most to this sum.", "We consider three variants of this procedure.", "(1) We classify whether a document is a PLS or an abstract (Section 3.3).", "(2) We use external data, namely the Newsela corpus (Xu et al., 2015), and train a model to distinguish between documents of reading levels 0 and 3.²", "(3) We train two different models for the previous tasks and then sum the weight vectors before applying a softmax to derive token penalties.", "Let w_j denote the learned logistic regression weight for token s_j ∈ S.", "The final weight w′_j used in the unlikelihood loss function is: w′_j = exp(|w_j| / T) / ∑_{i=1}^{|S|} exp(|w_i| / T), (1) where T is the temperature of the softmax.", "A modification we make to the unlikelihood loss function is that we only apply the loss for a given token s_j if the probability distribution output at position t indicates that s_j would be emitted, that is, if s_j = argmax_{v ∈ V} p_θ(v | y_<t), where V denotes BART's token vocabulary.", "Denoting an indicator function for this event by 1_{s_j,t}, our final unlikelihood loss term L(p_θ, S, y) is: −∑_{t=1}^{|y|} ∑_{j=1}^{|S|} 1_{s_j,t} w′_j log(1 − p_θ(s_j | y_<t)). (2)",
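The weighted, gated unlikelihood term of Eqs. (1)-(2) can be sketched as follows; the dictionary-of-weights interface and numerical clamping are our own assumptions, not the paper's released code.

```python
import torch

def token_penalties(lr_weights, vocab_size, T=2.0):
    """Eq. (1): softmax the magnitudes of the negative logistic-regression
    weights (tokens indicative of technical text). lr_weights is assumed
    to map token id -> learned weight."""
    ids = [i for i, w in lr_weights.items() if w < 0]
    mags = torch.tensor([abs(lr_weights[i]) for i in ids])
    pen = torch.zeros(vocab_size)
    pen[torch.tensor(ids)] = torch.softmax(mags / T, dim=-1)
    return pen

def unlikelihood_loss(logits, penalties):
    """Eq. (2), gated: logits has shape (seq_len, vocab); penalties is
    w' over the vocabulary, zero outside S. A token only incurs a
    penalty at positions where it is the decoder's argmax."""
    probs = torch.softmax(logits, dim=-1)             # p(v | y_<t)
    top = probs.argmax(dim=-1)                        # would-be output token
    p_top = probs.gather(1, top.unsqueeze(1)).squeeze(1)
    w_top = penalties[top]                            # zero if top not in S
    return -(w_top * torch.log1p(-p_top.clamp(max=1 - 1e-6))).sum()

# Total loss (sketch): cross-entropy + alpha * unlikelihood_loss(...),
# with alpha = 100 and T = 2 in the authors' best-performing setting.
```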
"4.3 Experimental setup. Data.", "We split our dataset of 4459 abstract-PLS pairs so that 3568 reviews are in the training set, 411 in the validation set, and 480 in the test set.", "We experimented with hyperparameters by manually inspecting a subset of the validation set and report results on the entire test set.", "(Footnote 2: Five-fold evaluation showed that the model achieved > 90% accuracy.)", "We also experimented with the Simple Wikipedia/Wikipedia dataset (Zhu et al., 2010), but this model was not effective in early experiments.", "Hyperparameters.", "For nucleus sampling, we use a top-p value of 0.9.", "In the unlikelihood training procedure, we experimented with different values of $\lambda$ in our total loss function (1, 10, 10^3, 10^6) on the validation set and different temperatures $T$ in the softmax step (1, 2, 5, 10).", "Based on manual examination of the generated texts in the validation set, we determined that ($T$ = 2, $\lambda$ = 100) yields the most coherent and high-quality simplifications, so we only report results for this case.", "All models are fine-tuned on our dataset for 1 epoch with a batch size of 1 and a learning rate that starts at 3e-5 and decreases linearly to 0 over the course of training.", "For the optimizer, we used AdamW with $\epsilon$ = 1e-8 (Kingma and Ba, 2015; Loshchilov and Hutter, 2019).", "In this section we comment on the generated texts' readability, quality of summarization and simplification, stylistic fidelity with the PLS, and overall coherence and simplicity based on human examination.", "In the results tables, we indicate whether lower or higher scores for the metrics reported are better with ↓ and ↑ symbols, respectively.", "Table 5 reports the mean readability scores achieved under different training settings.", "Results generated via models trained with the proposed UL objective achieve significantly lower Flesch-Kincaid scores than those achieved by both the technical abstracts and reference PLS, whereas the model trained without UL produced texts with a higher reading level than the PLS.", "Rather surprisingly, the UL-Newsela and UL-Both settings, both of which use the Newsela dataset to produce unlikelihood weights, did not yield a decrease in estimated grade levels.", "We suspect that this could be attributed to the difference in domains, that is, the tokens contributed by the Newsela classifier are not generated frequently enough to have a noticeable impact during unlikelihood training.", "These results suggest that: (1) BART is capable of performing simplification of medical texts such that outputs enjoy reduced reading levels compared to those of the technical abstracts; (2) the proposed use of UL to explicitly penalize the model for outputting jargon allows for the generation of text with even greater readability than the reference PLS.", "Table 5: Flesch-Kincaid (FK), ARI, and SciBERT masked probability scores for generated PLS. Abstracts: FK 14.42, ARI 15.58, SciBERT 0.57; PLS: 13.11, 15.08, 0.53; No UL: 13.44, 15.09, 0.55; UL-Cochrane: 11.97, 13.73, 0.55; UL-Newsela: 12.51, 14.15, 0.54; UL-Both: 12.26, 14.04, 0.54.", "The reading levels of even the simplified outputs, however, are at the late-high school/early college levels.", "This could reflect the relatively small differences in readability scores between abstracts and PLS in general (Section 3.2).", "In Section 3.2 we showed that SciBERT masked probability scores are more useful as a discriminator between technical abstracts and PLS than the standard readability metrics, which use surface-level cues like word and sentence counts.",
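For reference, the Flesch-Kincaid and ARI scores reported in Table 5 can be computed with standard tooling; below is a minimal sketch assuming the third-party `textstat` package and a hypothetical list `generated_pls` of model outputs.

```python
# A small sketch (assuming the `textstat` package) of the surface-level
# readability metrics in Table 5; `generated_pls` is a hypothetical list.
import textstat

generated_pls = ["..."]  # hypothetical: generated summaries

fk_scores = [textstat.flesch_kincaid_grade(t) for t in generated_pls]
ari_scores = [textstat.automated_readability_index(t) for t in generated_pls]

print("mean FK :", sum(fk_scores) / len(fk_scores))
print("mean ARI:", sum(ari_scores) / len(ari_scores))
```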
"Experiments by Jawahar et al. (2019) suggest that BERT-style masked language models encode a wide array of syntactic and semantic features of language, which they then employ for downstream tasks.", "For this reason, we use SciBERT masked probability scores as our notion of style, with lower scores corresponding to simpler, less technical language.", "To explore the extent to which the generated summaries stylistically resemble the PLS, we computed the average of the SciBERT masked probability scores of the generated texts for each model.", "The results are shown in Table 5 along with the readability scores.", "We see that every model produces text with significantly lower probability scores than the abstracts, which suggests that they successfully convert input abstracts into less-technical summaries.", "Though the average scores are higher than that of the PLS, this difference is not statistically significant, so we can consider the outputs of the models to be stylistically on par with the target PLS.", "We report SARI (Xu et al., 2016), a standard edit-based metric for text simplification, and BLEU (Papineni et al., 2002), a precision-based method for machine translation that is also often reported for simplification systems.", "Xu et al. (2016) showed that SARI correlates better with human evaluation for simplification tasks, focusing more on simplicity, while BLEU is stronger with respect to meaning and grammar.", "Table 6: ROUGE, BLEU, and SARI scores for generated PLS. No UL: R1 0.40, R2 0.15, RL 0.37, BLEU 0.44, SARI 0.38; UL-Cochrane: 0.38, 0.14, 0.36, 0.39, 0.40; UL-Newsela: 0.39, 0.15, 0.37, 0.43, 0.39; UL-Both: 0.38, 0.14, 0.37, 0.40, 0.39.", "Finally we report the F1 versions of ROUGE-1, ROUGE-2, and ROUGE-L (Lin, 2004), which are the standard metrics typically used for summarization tasks.", "Table 6 shows the mean ROUGE, BLEU, and SARI scores.", "While UL models yielded small but significantly better SARI scores, the opposite is true for the ROUGE and BLEU measures.", "Despite the lack of clear patterns in these scores, there are clear qualitative differences between the different models' outputs, which are expounded upon in Section 5.4.",
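The SciBERT masked probability scoring used above can be sketched as follows: each token is masked in turn, and the probability the masked LM assigns to the original token is averaged over the text. This is an illustrative, unoptimized version (single short input, no batching), assuming the `allenai/scibert_scivocab_uncased` checkpoint and the HuggingFace `transformers` library.

```python
# Illustrative sketch of masked-probability scoring with SciBERT:
# mask each token in turn and average the probability of the true token.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

name = "allenai/scibert_scivocab_uncased"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForMaskedLM.from_pretrained(name).eval()

def masked_probability(text: str) -> float:
    ids = tok(text, return_tensors="pt", truncation=True)["input_ids"][0]
    probs = []
    for i in range(1, len(ids) - 1):  # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tok.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        probs.append(torch.softmax(logits, dim=-1)[ids[i]].item())
    return sum(probs) / len(probs)

# Lower average scores correspond to less 'expected' language under the
# technical-domain LM, i.e., simpler, less technical text.
```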
abstractive?", "Although not re-flected in the automatic evaluation metrics above, the increase in readability of UL models led us to suspect that UL models are more abstractive than extractive, namely, they contain more paraphrases.", "To determine the degree to which the outputs directly copy content from the technical abstracts, we computed the fraction of n -grams in the output PLS that also occur in the abstract (without considering repetition).", "These results are shown in Table 7.", "We observe that the introduction of UL clearly decreases n -gram overlap, and the difference becomes more marked as n increases.", "The use of Cochrane weights (those from the logistic regression model trained to discriminate between technical abstracts and PLS) likely reduces n -gram overlap because the tokens most penalized in UL training are those used to represent numerical data, e.g., statistics and confidence intervals.", "Penalizing these tokens discourages the regurgitation of numerical details from the technical abstract.", "The use of Newsela weights does not have the same effect, again likely due to the domain difference between the tokens penalized during unlikelihood training and those generated by the model.", "None N=1 N=2 N=3 N=4 PLS 0.56 0.29 0.19 0.14 No-UL 0.95 0.89 0.84 0.79 UL-Cochrane 0.84 0.67 0.57 0.49 UL-Newsela 0.92 0.81 0.73 0.66 UL-Both 0.89 0.76 0.67 0.59 Table 7: % of n -grams in reference/generated PLS that are also in the abstracts.", "of the model settings, however, achieve n -gram overlap scores nearly as low as the reference PLS, indicating that the generated summaries remain considerably more extractive than human-written PLS.", "We manually examined the outputs generated by our models on a random sample of 40 technical abstracts from the test split of our dataset.", "While reading these outputs, we made special note of text length, readability and coherence, the presence of hallucinated information not found in the corresponding abstract, and artifacts such as repetition and misspelled words.", "Our examination demonstrated that the generated texts were all significantly shorter than their respective abstracts and also shorter than the reference PLS.", "Furthermore, the models trained with Cochrane weights (UL-Cochrane' and UL-Both') produced shorter texts on average than the models trained without UL or with Newsela weights.", "This observation is supported by the results in Table 9, which displays the average number of tokens and sentences in the summaries generated under different training settings.", "One explanation for why UL with Cochrane weights produces shorter summaries is that training with these weights discourages the copying of statistics from the original abstract, a phenomenon exemplified in Appendix A, Table 10.", "Another trend that we noticed was that higher values produce shorter, more readable summaries at the expense of information completeness.", "Training with a high also increases the likelihood of hallucination, misspelling, and repetition.", "These drawbacks greatly impacted coherence for 1000 .", "These observations suggest a tradeoff between complet-ness of information and conciseness as is varied in the training process.", "Hallucination: The evidence is up-to-date as of February 2016.", "We found seven studies, involving 1839 participants, that compared home-based treatment with hospital-based care for venous thromboembolism.", "Misspelling: The review authors provided no information on other important outcomes, including 
"Table 8 (examples of artifacts in generated PLS). Hallucination: 'The evidence is up-to-date as of February 2016. We found seven studies, involving 1839 participants, that compared home-based treatment with hospital-based care for venous thromboembolism.'", "Misspelling: 'The review authors provided no information on other important outcomes, including gastro-oesophageal reflux, aspiration pneumonia, necrotise enterulitis...'", "Repetition: 'However, we were not able to combine their results because of the small number and small number of people in the included studies.'", "The most common hallucination we observed was the generation of a statement of the form 'The evidence is current to [month] [year]'.", "The reason for this is that many PLS contain such a statement of currency not found in the technical abstracts, so models learn to include such a statement even if it cannot be factually deduced from the abstract.", "Another observation is that the most commonly misspelled words are names of medications and diseases.", "Table 8 provides examples of the various kinds of artifacts found in the generated PLS.", "The presence of these artifacts suggests that in practice, generated texts should be reviewed before being used.", "In this work we considered the important task of medical text simplification.", "We derived a new resource for this task made up of technical abstracts summarizing medical evidence paired with plain language versions of the same; we have made this data publicly available to facilitate further research.", "We proposed a new masked language model (MLM)-based measure of the technicality of text, which quantifies technicality by calculating the likelihood of tokens in the input text with respect to a transformer-based MLM trained on a technical corpus.", "We demonstrated that this metric better discriminated technical abstracts from PLS than more traditional notions of readability.", "We also proposed modifying the training objective by incorporating an explicit penalty for the production of 'jargon' terms.", "We found that this method can improve model outputs (i.e., can increase simplicity and the abstractiveness of summaries) according to the metrics considered.", "This paper presents a dataset from the Cochrane library; this comprises only the freely available portion of the information on Cochrane (abstracts that are readily available to all).", "No annotators other than the authors of this paper are involved in the manual inspection of this data.", "In addition, the Cochrane data in itself, and our collection and inspection of it, does not involve any personally identifiable information.", "The baseline models presented involve simplifying medical texts.", "Inconsistencies (e.g., hallucinations) of the generated PLS with respect to the original review are artifacts discussed in Section 5.4.", "This can lead to misinformed readers.", "Therefore, the outputs of the proposed systems should always be manually examined before being used.", "This work was supported in part by the National Institutes of Health (NIH), grant R01-LM012086, and the National Science Foundation (NSF), grant IIS-1850153.", "We acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing HPC resources that have contributed to the research results reported within this paper." ]
[ "method", "abstain", "abstain", "abstain", "objective", "objective", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "objective", "result", "abstain", "result", "result", "abstain", "result", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "method", "objective", "method", "other", "other", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "objective", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other" ]
[ "How can language technology address the diverse situations of the world's languages?", "In one view, languages exist on a resource continuum and the challenge is to scale existing solutions, bringing under-resourced languages into the high-resource world.", "In another view, presented here, the world's language ecology includes standardised languages, local languages, and contact languages.", "These are often subsumed under the label of under-resourced languages' even though they have distinct functions and prospects.", "I explore this position and propose some ecologically-aware language technology agendas.", "This paper is about the world's local languages by which I mean small, primarily-oral languages, often Indigenous or endangered, including the original and emerging languages of Africa, Asia, Australia, the Americas, the Pacific, and the minority languages of Europe.", "Local languages are often called under-resourced because they lack what is required for creating speech and language technologies (Krauwer, 2003).", "Some have been called acutely under-resourced, because they are spoken by few people and are rarely written down (Jimer-son and Prud'hommeaux, 2018).", "From here, it is a small step down to zero expert resources and the zero resource scenario (Dunbar et al., 2017).", "I depict this situation in Figure 1.", "In the middle we have standardised languages, including high-resource' languages (e.g. English, Spanish, Mandarin, and Arabic), and under-resourced' languages where there are community aspirations for language technologies, and where commercial, or social, or political resources are being leveraged to create the missing language resources (e.g. Irish, Zulu).", "I represent these languages with hard boundaries in Figure 1 to remind us that standardisation delimits languages.", "With standardisation comes Figure 1: The Central-Peripheral Model: fully-translatable high-resource languages occupy the centre (large dark circles), surrounded by standardised but less translatable under-resourced languages (smaller cir-cles), and outside the resource horizon of the global information society (dotted circle), we have unstandardised languages: under-resourced languages going out to acutely under-resourced languages.", "writing (Joseph, 1987), along with a standardised orthography, written literature, formal education, widespread literacy, and mass media.", "Figure 1 represents what I believe to be the mindset of people who are working in low-resource sce-narios' and seeking one-size-fits-all solutions.", "The vision of Language Technology for All' (LT4All) is to expand the resource horizon and deliver language technologies like machine translation and speech recognition to all languages.", "The hope is that, where political will and economic incentive have failed, technological mastery will succeed in delivering digital language equality.", "Regardless of what one thinks about such prospects, I believe that this agenda is misguided because it does not address the ecology of the world's languages.", "In this paper I describe a multipolar view of language ecology.", "I call on researchers working on local languages to make a local turn , working from the ground up with speakers to identify new opportunities for language technologies.", "In the central-peripheral model (Fig. 
"In language after language, we problematise complex socio-political situations purely in terms of missing data, and we prioritise solutions that target this shortcoming.", "I will refer to this as 'poverty-conscious language technology'.", "Poverty-conscious language technology views the high-resource language situation as normative.", "It sets up language technologists as the ones who will come to the rescue of deficient languages.", "This position is a form of Eurocentrism, a colonial world-view centred on Western civilisation.", "It is marked by several beliefs and values which I illustrate here.", "Efficiency.", "The goal of the DARPA LORELEI program was to develop methods that apply to languages of any type from any language family, eliminating the need to tailor specific technologies to a narrow set of input languages (Tzoukermann et al., 2021).", "The architects of this scheme sought to capture public imagination with the scale of their vision: 'Tool kit would work for every language (all 7,000 of them)' (McCaney, 2015).", "Language equality.", "In the present context, this is the belief that languages are equally deserving of technology, that language technology is for all languages.", "It is reflected in the label 'Machine Translation For All' (https://sigul-2022.ilc.cnr.it/mt4all-shared-task/), and in a manual to help every language digitize and 'share equally in the benefits of a connected digital world, ensuring that no language is left behind' (https://translationcommons.org/impact/language-digitization/).", "Technologisation.", "The computer is presented as a neutral tool for manipulating data and implementing and testing theories (Garvin, 1963; Lawler and Aristar Dry, 1998; Bird, 1999; Hanke, 2017; Barnbrook, 2022).", "We provide computational tools to support language documentation, since 'documentation as language salvation has become the operative metaphor used by language experts' (Perley, 2012).", "We might aim to help society directly, allocating our technical capabilities for maximal social good (Jin et al., 2021), using 'language technology [as] the key to achieve full digital language equality in the new multilingual and interconnected world' (Steurs, 2021).", "Observe that it is us who will empower marginalised communities by introducing our disruptive language technologies (Joshi et al., 2019), while unwittingly reinforcing the central-peripheral model (cf. Schelenz and Pawelec, 2022).", "Scriptism.", "There is a position that writing is a more ideal form of linguistic representation than speech ('scriptism', Harris, 1980).", "It appears in the belief that saving languages involves reducing them to writing (Moore, 2006; Kornai, 2013; Anderson et al., 2019).", "It appears in the impulse to standardise the writing of indigenous languages so that we can apply language technologies to them (e.g. Mager et al., 2018).",
Mager et al., 2018).", "It appears when labels such as Machine Translation for All' and Euro-pean Language Equality' are used in ways that exclude oral languages.", "General-purpose solutions.", "A field linguist documented the request of a speech researcher for his data: The scenario was that nothing was known about the language, and the data set consisted solely of audio recordings of sentences plus translations into other languages.", "Thus, the challenge was to automate all the following tasks:", "(i) establishing the phoneme inventory,", "(ii) generating phoneme-level alignments for the audio data,", "(iii) training an acoustic model, and", "(iv) identifying words and their pronunciation in the target language.", "In short, the aim was to make a language accessible for speech technology by only using audio recordings and written translations, bypassing the need for transcriptions, pronunciation dictionaries, and even phoneme set definitions.", "From the point of view of computer science, this ambitious objective was much more interesting than the creation of a high-quality automatic speech recognition tool (Michaud et al., 2018, 400f).", "In the foreground here is the speech technologist and their skill in tackling an artificial problem, for which they need the lin-guist's data.", "They show little interest in delivering locally meaningful products.", "This is a widespread situation, where we apply our savoir faire and do more with less (e.g. Bird et al., 2014; Kempton and Moore, 2014; Vetter et al., 2016; Dunbar et al., 2017; Mller et al., 2017).", "These beliefs and values underlying the central-peripheral model contain unhelpful assumptions about language ecology (cf. Haugen, 1972; Calvet, 2006; Lewis and Simons, 2016, 63ff).", "One assumption is the monolingual mindset'.", "At least half of the world's population speaks more than one language, employing different languages 7818", "(a) The centre is ringed by culture areas', or zones of translatability', each containing local languages, and having a linguistic overlap with the centre due to historical contact and mass media.", "(b) A message originating in French is translated into English (step 0), after which local linguistic expertise takes over, in expressing the message in spoken form in the contact language (Aboriginal English, step 1), and interpreting it into local languages (steps 2 and 3); many paths exist thanks to the rich language ecology; local expertise does most of the work (see Sec. 4.2).", "People who belong to a predominantly monolingual culture are not used to seeing the world in this [multi-lingual] way, because their mindset has been established through centuries of being part of a dominant culture, in which other people learn your language and you do not learn theirs. It is notable that the nations which are most monolingual in ability and attitude are those with a history of major colonial or religious expansion (Crystal, 2000, 45).", "In reality, many speech communities have a repertoire of languages, each one playing a different role in the local linguistic ecosystem.", "A common situation is to have high' and low' prestige varieties (Fishman, 2001), also known as vehicular and vernacular languages, one for participation in commerce and education and one for participation in the local lifeworld.", "Another assumption is that written culture is normative.", "Fully literate persons can only with great difficulty imagine what a primary oral culture is like... 
"This assumption does harm: There is an urgent need to forefront the cultural divide between Aboriginal oral cultures and western literate cultures. The divide is disempowering Aboriginal people because literacy is argued to be a 'passport to success' in the dominant culture... Aboriginal people talk of reviving languages by returning to how the old people passed on the knowledge and the languages, on country and through the spoken word (Kimberley Language Resource Centre, 2010).", "A third assumption concerns the powerlessness of people whose languages are under threat, of speakers relegated to the role of 'unwitting casualties victimised by processes greater than themselves' (Perley, 2012).", "Yet language shift is inevitable, and we can observe the agency of many Indigenous communities who bring epistemic resources including grammatical distinctions and lexical items from an ancestral language into a new language (Dickson, 2015; Ponsonnet, 2019).", "The model in Figure 1 is an instance of the central-peripheral model that dominates most technocratic thinking about technology, media, and culture (Srinivasan, 2017).", "The main parameter is the quantity of language resources, and whether they are sufficient to bring a language over the line into the highly-connected, global information society.", "The language we use gives us away: 'for all' projects an agenda on the world; 'resource' presumes machine-readability; 'expert' means a specific type of western expertise; 'language' in 'language resource' implies the ideology of language as data; 'scaling' in 'scaling up the current language technologies for the rich diversity of human languages' assumes that we have already identified the technological solutions.", "As an alternative to the central-peripheral model, consider the multipolar model shown in Figure 2(a).", "The centre contains the standardised languages, i.e., major international languages that are fully translatable.", "It is ringed by less well-resourced languages, with differing strength of connection to the centre.", "Some of these are regional spoken varieties of standardised languages, which include 'contact languages' (also known as trade languages, vehicular languages, or languages of wider communication).", "Contact languages connect people to other linguistic regions (cf. Fishman, 1998; Crystal, 2003).",
"These regions are indicated using grey ovals in Figure 2(a).", "What are these regions?", "In linguistic ecology, 'one begins not with a particular language but with a particular area, not with selective attention to a few languages but with comprehensive attention to all the languages in the area' (Voegelin and Voegelin, 1964, 2).", "This is a notion from linguistic anthropology known as a 'culture area' (Newman, 1971).", "Each culture area contains many local languages, usually languages with primary orality.", "Translation between these languages is facilitated by a shared geography, culture, and lifeworld, plus a long history of language contact, and so we might also refer to these as 'zones of translatability'.", "I avoid the term 'high-resource' when referring to the centre of the multipolar model as this valorises a particular state of a language.", "It reifies our technological commodities as attributes of a language.", "The notion of 'standardised' language is pre-existing, suggests standardised orthography, and an institutionally delimited, prestige variety.", "It reminds us of the existence of complexities and compromises (Ferguson, 1962; Joseph, 1987).", "The terms 'under-resourced' and 'low-resource' conflate would-be standardised languages with those having purely local functions. (Footnote 3: This is not to say that there are not languages having both aspirations, e.g., contact languages and languages undergoing development.)", "The term 'low-resource language' is a barrier to understanding. It is applied to languages like Tamil, with 75 million speakers, most of them literate in the language, and a history of written texts that goes back thousands of years. It is ridiculous to use the same term to describe the 'biggest' Indigenous language in Canada, Cree, with 75,000 speakers and few written texts (Kuhn, 2022, 89).", "Writing is key to differentiating the two. (Footnote 4: This is made explicit in the Sustainable Use Model, where 'sustainable literacy' is distinguished from 'sustainable orality' (Lewis and Simons, 2016).)", "I advocate limiting the scope of 'under-resourced' and 'low-resource' labels to just the would-be standardised languages.", "I propose that the community deprecate labels like 'acutely under-resourced' because they are a myopic way to view the linguistic creation of oral cultures.", "I further propose that we retire the sense of 'zero resource scenario' when referring to local languages (as distinct from child language acquisition).", "The 'local' descriptor might also supersede others such as 'heritage', 'indigenous', 'endangered', 'threatened', or 'unwritten', which may be seen as valorising, pejorative, or Eurocentric (cf. Grinevald and Pivot, 2013).", "The 'local' descriptor is apt in reminding us of the local lifeworld and culture area.", "In observing three primary linguistic spaces, I do not seek to confine a given language to one of the three spaces.", "Local languages have diasporas, such as the Nahuatl, Quechua and Hawaiian communities in New York (Kaufman and Perlin, 2018).", "Regional spoken varieties of a single language may have markedly different functions in different culture areas, e.g. Spanish in Mexico vs New Mexico (cf. Lewis and Simons, 2016, 46).",
"Language development efforts may bring local languages into the centre without compromising their local functions.", "Even the term 'local language' is problematic insofar as it seems to individuate bounded, homogeneous varieties.", "If the boundary of a language is unclear, it is not because Western science has not finished its job, but because human languages are not bounded codes in the first place (Dobrin et al., 2009).", "Diversity within a single language is sometimes problematised as deficit: 'lack of an orthographic normalization... large dialectal variation, and missing standardization' (Mager et al., 2018, 57), yet this diversity within a language is the natural state and only a problem for those who would seek to scale technologies built on the assumptions of a standardised language.", "Finally, the 'zero resource scenario' builds in another Eurocentrism which needs to be rooted out.", "It is the positivist position that we arrive at true knowledge by induction, generalising over cases.", "When we look at local languages and ask what we can be sure of having for our general purpose models, the answer is raw speech with translations, i.e., the zero resource scenario.", "This is a lowest-common-denominator approach, and it inevitably brings us back to poverty-conscious language technology.", "Researchers in the centre need new ways of learning technology lessons concerning local languages (cf. Sec. 5).", "The multipolar model presents an opportunity to consider the agenda of language technology in three primary linguistic spaces.", "The first space is the global information society, with its standardised and would-be standardised languages (Sec. 4.1).", "The second space consists of the culture areas, their local languages, and primary orality (Sec. 4.2).", "The third space is where the first and second spaces intersect.", "Here we have contact languages, along with local languages undergoing active development (Sec. 4.3).", "We consider each of these in turn.", "From the centre, we want to continue to expand the reach of language technologies to more languages, to serve the purposes of economic integration (Rivera Pastor et al., 2018).", "This is a version of the original agenda of under-resourced language processing (cf. Fig. 1), restricted to languages with a realistic prospect of standardisation.",
"For example, the goal of the European Language Equality Project is to enable all [European] languages, 'regardless of their specific circumstances, to realize their full potential, supporting them in achieving full digital equality in the coming decade' (Gaspari et al., 2021, 2).", "We can therefore chart the progress of an individual language such as Irish towards digital language equality (Lynn, 2022).", "Let us consider language technology in the context of a humanitarian crisis.", "When it comes to messages like 'tsunami warning, move to higher ground', there is global reach through standardised languages alone.", "We may just need translation between standardised languages (step 0 in Fig. 2(b)).", "From here on, we can rely on the expertise of speakers of regional varieties who thanks to historical contact and mass media readily understand the standardised language (step 1).", "Some people are highly mobile in this intercultural space, and thanks to their command of both local languages and contact languages, serve as connectors.", "They can interpret broadcast messages into the local lifeworld (step 2), where there is further expertise to take it to speakers of other varieties (step 3).", "Conversely, when a speaker of a local language delivers information in a crisis situation, they will often use a contact language.", "They will not be hampered by the lack of language technology in their local language, but by the lack of support for their variety of the contact language (e.g. Lewis, 2010; Lewis et al., 2011; Anastasopoulos et al., 2020).", "This situation can arise even when the person is speaking a major language like English, simply because local spoken varieties of English are still not well supported (cf. Koenecke et al., 2020; Markl and Lai, 2021).", "There is an opportunity here: support for contact languages, including creoles and regional spoken varieties of standardised languages, and their rendering into non-standardised orthography, is a promising pathway for widespread language-technology-enabled participation in the global information society.", "Communications beyond these standardised languages and contact languages do not require LT4All, because there is local expertise in bridging lifeworlds and in interpreting between contact languages and local languages.", "There is still the risk that broadcast messages may be misunderstood or even cause harm.", "The need may not be for one-shot translation of a fixed message, but for dialogue and two-way education (Sec. 4.3).", "Dialogue reduces the chance of messages which, while trivial to translate, are not context aware: e.g., the instruction to Australian Aboriginal people living in overcrowded housing to 'stand apart from each other' instead of a more locally aware instruction to 'stay in your family groups'; or the instruction to villagers in Flores to run to higher ground where they would only be killed by landslides. (Footnote 5: https://indosasters.org/2017/08/20/a-critical-reflection-on-running-to-higher-ground-narrativemyth-and-reality-in-tsunami-warning-and-response/)", "This points to opportunities in the intercultural space (Sec. 4.3) and to the importance of working with local experts (Sec. 5.2).",
5.2).", "Much computational work already exists for local languages and is being brought together by the ACL SIG for Endangered Languages (SIGEL) and the ISCA SIG in Under-resourced Languages (SIGUL), including workshops on Computational Methods in the Study of Endangered Languages, Spoken Language Technologies for Under-Resourced Languages, Collaboration and Computing for Under-Resourced Languages, and NLP for Indigenous Languages of the Americas.", "It includes tasks associated with such topics as computer-supported collaborative language documentation, and NLP for polysynthetic languages (e.g. Hanke, 2017; Lane et al., 2022).", "It includes support for wider participation in NLP (e.g. Nekoto et al., 2020; Mirza-5 https://indosasters.org/2017/08/20/a-critical-reflection-on-running-to-higher-ground-narrativemyth-and-reality-in-tsunami-warning-and-response/ 7821", "(a) Indigenous research as the intersection between knowledge practices (following Christie, 2006).", "khalov et al., 2021), and language resources with the prospect of connecting local languages in ways that are not mediated by standardised pivot languages (e.g. Madonsela et al., 2016).", "This work varies in the degree to which it is locally conceived.", "In some places there is institutional support for developing a local language, including standardis-ing an orthography, teaching literacy, and translating literature to and from a standardised language (e.g. Zulu, Haitian Creole).", "Language development may shift a language into the overlap between a culture area and the global information society.", "A promising approach for work with speakers of a local language is offered by constructivism (e.g. Charmaz, 2014).", "A set of methods which have been successful in Arnhem Land is known as Ground-Up .", "It has grown from the observation that Indigenous knowledge is local and performed, and it employs methods that are emergent and situated.", "Early applications of Ground-Up methods involved content management and health communication (Cass et al., 2002; Verran et al., 2007; Lowell et al., 2021).", "In the language space, we can work from the ground up to explore the ecology of local speech varieties (cf. Haugen's ecological questions', Haugen, 1972, 65).", "we can explore the language ideology, the practices that support (and draw support from) local languages, and the country itself as a language resource.", "From this place we can seek new opportunities for language technologies.", "Perhaps this will still lead to such agendas as economic participation and multilingual information access.", "However, where I work in Arnhem Land, people tend to see language as coupled with identity, culture, ancestors, and country.", "They do not tend to see language as data, or language as lexico-grammatical code.", "Our conversations about learning centre on human learning not machine learning.", "When it comes to working with technology, people prefer culturally meaningful work to passive participation in a Western process (cf. Le Ferrand et al., 2022).", "Many people are passiomate about intergenerational transmission of knowledge, and do not obsess about getting everything transcribed and translated.", "Apart from the Inuit, no Indigenous community we've spoken with has shown much interest in machine translation (MT) between their ancestral language and English (or French, in Quebec). 
"An approach to technology engagements in culture areas is suggested by work on codesign (e.g. Verran and Christie, 2007; Verran et al., 2007; Bidwell et al., 2008; Bidwell and Browning, 2010; Winschiers-Theophilus et al., 2010; Brereton et al., 2013; Winschiers-Theophilus and Bidwell, 2013; Brereton et al., 2014; Soro et al., 2016; Taylor et al., 2018, 2019).", "We could apply such methods to the study of language technologies in culture areas.", "The third space is a hybrid place, an intersection of worlds.", "It has been discussed under such headings as the 'contact zone', the 'recognition space', the 'intercultural space', the 'arena', and the 'research interface' (Bhabha, 2012; Pratt, 1991; Somerville and Perkins, 2003; Taylor, 2008; Hunt et al., 2008; Jasper and Duyvendak, 2015; Ryder et al., 2020).", "One framing is Indigenous research (Fig. 3(a)), defined as 'that part of an Indigenous knowledge tradition which is recognisable or legible from a Western research perspective... [or conversely] as that part of the Western academic research tradition which is at the same time conceived, shaped, governed and understood within Indigenous knowledge traditions. The area in the middle of the diagram is Indigenous research because it fulfils the criteria for both Indigenous knowledge production and academic research' (Christie, 2006, 80).", "In an Australian Aboriginal community, I need pretexts for sitting with local people, and this comes from the established activities of a ranger program and a school.", "Here there are opportunities for computer assisted language learning and for spoken document retrieval from an archive of untranscribed media.", "There may be other opportunities for technology to augment traditional learning processes (Harris, 1984; Trudgen, 2012, 200ff), and for computer supported cooperative work that privileges local languages and knowledge systems (Christie and Verran, 2014; Carew et al., 2015; Hanke, 2017; Bettinson and Bird, 2021b,a).", "Part of the dynamic of working together on a language resource is the diverse meanings this may have for participants (cf. Star and Griesemer, 1989; Star, 2010).",
"The externally-driven, telic work of compiling a dictionary may sit alongside local people's atelic, day-to-day participation in exploring the meanings of words with elders and visiting the places where the associated stories can be told.", "The resulting bound volume or mobile app might be a learning resource to one person and an emblem of prestige to another.", "The perspective I have articulated in this paper has arisen from living and working in a Kunwinjku-speaking community situated in Arnhem Land, Aboriginal country in the far north of Australia (cf. Fig. 2(b)).", "Here my attempts to pursue my Eurocentric practices in data collection have foundered.", "Over a period of several years, and with the patient guidance of many local people, I, a western-educated middle-aged white male, have learnt about the local lifeworld, glimpsed local expertise, and borne witness to systemic injustice.", "I recount two experiences with transcription and translation, practices where the agendas of language technology and language documentation fortuitously align.", "From my centralised perspective, the task of rendering speech into text, and the task of translating that text into another language, are disjoint.", "The technologies of speech recognition and machine translation are similarly distinct.", "However, I found that matters were different at the local level.", "The task of working together on a recording and deciding what was said turns out to be a two-way practice that merges transcription and translation (Sec. 5.1).", "The task of working together on an emergency broadcast to interpret it into a local language turns out not to be conventional one-shot translation of a fixed message but a two-way practice of understanding the true stories (Sec. 5.2).", "I recount these experiences to reveal the contingent, situated nature of work in a third space, and to suggest that a suitable way to learn lessons from such stories is not induction to lowest common denominator scenarios leading to one-size-fits-all solutions, but abduction to deeper accounts of speech communities and language technologies.", "Transcription for 'acutely under-resourced' languages has depended on recruiting participants to transcribe speech recordings and to provide phrase-aligned translations into a standardised language, a practice that focusses on surface forms, quantity, and efficiency.", "Yet transcriptional practices on the ground are far from mechanical, and there is no simple ground-truth transcription (Hermes and Engman, 2017; Himmelmann, 2018; Bird, 2020).", "On numerous occasions, I have found that there is no local interest in the tedious work of rendering speech recordings into text.", "When I look at local people's 'transcriptions' I see a practice akin to note-taking or inscription.", "People write down...", "Figure 5 (Yolŋu elders' response): In Arnhem Land, Yolŋu [Aboriginal] people live in extended family groups with traditional authority structures.", "When Balanda [Westerners] don't understand or respect our way of governance, they often come up with ways of dealing with problems that undermine the authority of our Elders and their ways of keeping people and places safe.", "When trying to spread the word in Yolŋu communities, the Balanda authorities told everyone to 'wash their hands' and 'stand apart from each other'.", "This way of sharing the story had the effect of by-passing the Elders, and of prioritising ways to keep ourselves safe as individuals.", "It cared for the 'biomedical body' threatened by the virus, but not the 'Yolŋu body' which includes our family and clan groups.",
"They picked one person to take the news to the people in the community, but this did not involve negotiating among ourselves what the right story for Yolŋu should be, and the best way for it to be shared.", "There are ways we can work together, beginning with the authority of Elders, to understand the true stories of this virus.", "We have traditional ways of doing that sort of work and sharing out the right responsibilities to the right people.", "We know the right ways to keep our relationships strong, including our relations to other clan groups and to our homelands.", "When we are able to remain connected with each other and our places, this is how we remain healthy.", "In the process, we discuss the form and meaning of key words and phrases.", "Here is where local interests intersect with a newcomer's need to expand their vocabulary and improve their ability to recognise words in connected speech.", "How can we privilege local interests and expertise, and flip this transcriptional practice from a deficit scenario to a strength scenario?", "My answer is 'sparse transcription', schematised in Figure 4.", "We give up the slavish left-to-right phoneme-level transcription practice, and instead prioritise our agency in identifying words of interest and discussing their significance.", "Thus, on the top left of Figure 4 we have a sparse transcription, where some tokens of lexical items have been found, manually or automatically.", "We are not concerned about narrow transcription of those items, only with identifying tokens of a lexeme in connected speech.", "On the right we have the practice of working together where local experts and western learners clarify the meaning of words, and enlarge the lexicon.", "Through multiple iterations, the transcription of a corpus gets denser.", "Our ability to automatically spot topic words and to retrieve relevant spoken documents improves.", "Following the onset of the COVID-19 pandemic, the Australian government broadcast 'simplistic directives about behaviour change' (Lowell et al., 2021, 172).", "The assumption seems to be that all knowledge lies in the centre, and when it comes to reaching communities who speak other languages, it is a question of translation.", "To us in the language technology community, the government's approach presents a golden opportunity.", "We could obtain funding, collect a parallel corpus, and build a translation system.", "We would measure success in terms of the quantity of data collected and the performance of the system on gold translations.", "Over time, we would bring another language into the centre.", "However, in our success we would have missed the point: this is not a translation problem.", "Consider the response of some local elders to the government's communication strategy (Fig. 5).",
5).", "The elders touch on many issues.", "What is the utility of an instruction to self-isolate or what came across in Yol N u as stand apart from each other in communities with chronic overcrowding?", "6 Where is the sense in transmitting messages through a person who is not locally recognised as a knowledge authority, a practice which harms the Yol N u body?", "A likely response in the language technology community would be to collect more data and build a better system.", "Yet how would we hope to learn, via the mere exercise of matching words or phrases in one language with those of another (Du-ranti, 1997, 154), that Yol N u have a different metaphysics for an apparently simple term like body'?", "Our approach to translation works best when there is a shared lifeworld, where lexicalised concepts, metaphors and tropes line up across languages, i.e., within a zone of translatability", "(Fig.2(a)).", "The government's practice of COVID communication is more Eurocentrism, and a consequence of the West's view of itself as the centre of legitimate knowledge, [and of] science as the all-embracing method for gaining an understanding of the world (Smith, 2012).", "The Yol N u elders delivered a sophisticated response to the government's simplistic directives.", "They identified metaphysical issues, and asked to work together to understand the true stories.", "This practice has been called two-way learning in Australia (Harris, 1990), cf.", "two-eyed seeing in Canada (Wright et al., 2019).", "6 https://www.creativespirits.info/ aboriginalculture/land/overcrowded-houses 7824 In a more culturally aware approach, Balanda [Western] educators discussed with the Yol N u participants how to explain each concept in their own language as it was introduced. This triggered active and collaborative engagement in the learning process and provided opportunities for misunderstandings to be revealed and repaired... This strategy of continual collaborative interpreting of each new concept introduced by the Balanda educators, as well as Yol N u sharing their knowledge, facilitated a more in-depth understanding than passive listening to an explanation in English (Lowell et al., 2021, 171).", "This suggests a new opportunity for language technology, not how to improve translation for under-resourced' languages, but how to support people to work together in a third space, and to navigate a metaphysical divide (Fig.", "3(b)).", "The field of language technology has placed the world's languages on a spectrum according to the available machine-readable resources, a self-serving position that I have called poverty conscious language technology.", "Our category of under-resourced' languages conflates the qualitatively different situations of local languages and would-be standardised languages.", "Our talk of technology for languages of any type and of language technology for all betrays our Eurocentrism.", "When we speak of acutely under-resourced' languages and zero expert resources' we commit an epistemic injustice.", "I have described a multipolar model which respects local language ecologies with their orality and multilingualism, and I have articulated implications for the agenda of language technology.", "I have suggested ways that we can take a local turn and work with local speech communities from the ground up.", "We still need to be on guard for the colonial impulse in its many guises (cf. 
"We still need to properly theorise language technology development outside the space of standardised languages.", "The result of this program, I hope, will be language technologies that address the distinct opportunities presented in three high-resource scenarios: the global information society with its standardised languages, the culture areas with their local languages, and their intersection in third spaces with their contact languages and local language development activities.", "I acknowledge the Bininj people of the Kuwarddewardde 'Stone Country' and thank them for welcoming me into their community.", "Karrimurrng-rayekmen kunwok!", "I am particularly grateful to Dean Yibarbuk and Lois Nadjamerrek and the Warddeken Rangers for their support.", "My work in Arnhem Land has been approved by traditional owners, the board of Warddeken Land Management, a research permit from the Northern Land Council, and a human research ethics protocol approved by Charles Darwin University.", "An earlier version of this material was presented in a keynote address at the 2021 Conference on Empirical Methods in Natural Language Processing, and I am grateful to several participants for their feedback.", "I thank the following people for discussions that have helped my thinking, and for their feedback on earlier versions of the material presented here: Antonios Anastasopoulos, Laurent Besacier, Mat Bettinson, Michael Christie, Éric Le Ferrand, William Lewis, Teresa Lynn, Helen Verran, Fei Xia, and several anonymous reviewers.", "This work was supported by a grant from the Australian Research Council." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other" ]
[ "Although multi-document summarization (MDS) of the biomedical literature is a highly valuable task that has recently attracted substantial interest, evaluation of the quality of biomedical summaries lacks consistency and transparency.", "In this paper, using systematic reviews as an example of biomedical MDS, we examine the summaries generated by two current models in order to understand the deficiencies of existing evaluation approaches in the context of the challenges that arise in the MDS task.", "Based on this analysis, we propose a new approach to human evaluation and identify several challenges that must be overcome to develop effective biomedical MDS systems.", "With the number of biomedical publications doubling every two years (Cios et al., 2019), it is difficult for medical professionals to incorporate new, often contradictory, evidence into their daily work, as it would require appraising, comparing and synthesising the outcomes of multiple primary studies (Sackett and Rosenberg, 1996).", "Systematic reviews, which aggregate such evidence from multiple clinical trials, provide only a partial solution, as they are very time-consuming to write and thus can be unavailable for newer clinical questions or quickly become outdated.", "In this context, the ability to automatically summarize evidence from multiple studies is of high practical importance.", "The task, however, is more challenging than general multi-document summarization (MDS), as the summaries must correctly draw conclusions based on often contradictory studies, and aggregate details such as groups of patients or names and doses of treatments, in addition to dealing with often-cited difficulties posed by biomedical text such as complex lexical and semantic relationships between concepts (Plaza et al., 2011).", "Though recent approaches to biomedical summarization acknowledge the additional challenges of the task and try to incorporate some domain-specific knowledge to deal with them (Wallace et al., 2021; Shah et al., 2021; DeYoung et al., 2021), we still lack a solid understanding of how well current models capture such knowledge, how useful the generated summaries are, or how to measure progress.", "In this paper, we propose a systematic approach to human evaluation of biomedical summaries, and apply it to analyse the summaries generated by two state-of-art systems.", "We examine the common errors in generated summaries and the correlation of automatic metrics such as ROUGE (Lin, 2004) with our evaluation results.", "We choose summarization models proposed by DeYoung et al. 
(2021), as they not only demonstrate the abilities of end-to-end neural models, but also incorporate domain-specific knowledge such as entity prompts.", "The contributions of this paper are as follows: (1) We propose a new approach to human evaluation of biomedical summaries based on binary categorical ratings, which ensures that the results are interpretable, reliable, and easily reproducible by non-expert annotators.", "(2) We show that current approaches to summarization suffer from excessive copying from the prompt and an inability to aggregate important details from primary studies.", "(3) We show that automatic metrics such as ROUGE cannot reliably distinguish between factual and erroneous summaries.", "(4) We suggest several reasons which may explain the poor summarization performance, and show that it is necessary to redefine our approaches to biomedical MDS.", "Though our focus is on the biomedical field, we raise some issues common to cross-domain summarization, and propose a consistent approach to human evaluation and error classification which can easily be transferred to other domains.", "Although the importance of MDS in the biomedical domain was recognized around 20 years ago, with studies such as McKeown et al. (1998) and Becher et al. (2002) defining some requirements and operations specific to biomedical summarization (e.g. the ability to resolve contradicting statements), until recently there have been few end-to-end systems (e.g. PERSIVAL (Elhadad et al., 2005)) due to the complexity of the task.", "In the last few years, apart from several shared tasks and challenges dedicated to multi-answer biomedical summarization, including MEDIQA 2021 (Ben Abacha et al., 2021) and BIOASQ (Nentidis et al., 2021), several major threads of research have emerged.", "Wallace et al. (2021) and DeYoung et al. (2021) incorporate entity- and discourse-level prompts into their end-to-end neural summarization models.", "Shah et al. (2021) revived the idea of symbolic MDS (Radev and McKeown, 1998) by combining a deterministic content plan with a pre-trained language model.", "Here, we are particularly interested in the model by DeYoung et al. 
(2021) as it reflects the setting of summarization systems in the wild: their input is all clinical trials cited by a systematic review, rather than a sample of trials which the review was based on (Wallace et al., 2021) or a curated list of trials relevant to the summary (Shah et al., 2021).", "In terms of evaluation metrics, there has been a growing awareness of the inability of ROUGE to reflect the factual accuracy of summaries, so some other automatic metrics, including inference-based (Maynez et al., 2020) and question-answering-based methods (Chen et al., 2018; Wang et al., 2020), have been proposed.", "There have also been attempts to make the human evaluation more objective and systematic by defining linguistically grounded error categories and evaluation criteria (Huang et al., 2020; Pagnoni et al., 2021).", "In the biomedical domain, although there are some new automatic measures proposed, such as Aggregation Cognisance (Shah et al., 2021), which measures the ability of the model to recognize if the input texts are in agreement or contradiction, and EI (DeYoung et al., 2021), which reflects the alignment of summaries in terms of the direction of their findings, human evaluation has primarily been based on the Likert scale (Wallace et al., 2021; Shah et al., 2021), making it difficult to reproduce and interpret.", "In this work, we aim to close this gap by establishing a more reliable, grounded and objective human evaluation framework, and applying it by assessing the summaries generated by the state-of-the-art MDS system of DeYoung et al. (2021).", "The models we evaluate were trained on a large-scale dataset comprising 20K systematic reviews and 470K primary studies (DeYoung et al., 2021).", "The conclusions, taken from the abstract of the review, are the target for the summarization.", "The input consists of a prompt in the form of the Background section of the systematic review, and the abstracts of up to 25 studies cited in the review.", "1 As the prompt ( Background ) describes the review's objective, the task is similar to query-based summarization, but with a highly detailed prompt.", "We use the two summarization models explored in DeYoung et al. 
(2021): BART (Lewis et al., 2020) and LongFormer (Beltagy et al., 2020).", "Both models are similar in architecture but differ in their approach to handling long input sequences: for LongFormer (LED henceforth) Background is concatenated with all studies and encoded together before feeding to the decoder, while for BART each study is concatenated with Background and encoded separately; then their encodings are concatenated together and fed to the decoder.", "To adapt the models to the biomedical domain, the authors decorate the inputs by adding special tags around PICO (Richardson et al., 1995) elements, namely <pop>, <int>, <out> , and also by marking the different sections such as Background .", "We sampled 100 reviews each from test summaries generated by BART- and LED-based models.", "To evaluate them in a more systematic manner, we define the following quality dimensions which capture both factuality and fluency.", "Though factual errors are often attributed to hallucinations (where the model generates entities not present in the source), they can also be due to other reasons, such as omission of important details, incorrect order of tokens, or inappropriate syntactic relations between them.", "Rather than classify the factuality errors by reason, however, we treat the 1 If the list of references contains more than 25 studies, it is truncated to the first 25.", "summaries as a combination of important biomedical entities and the relations between them, and define the quality dimensions related to them as follows.", "The PICO ( Patient/problem, Intervention, Comparison, Outcome) scheme captures the most important entities for answering biomedical questions (Richardson et al., 1995), such as Does acupuncture ( intervention ) help to decrease intra-ocular pressure ( outcome ) in patients with glaucoma ( patient )?.", "We consider a generated summary to be correct from the point of view of PICO when it mentions the same patient population, intervention and outcome (in the same lexical form or paraphrased) as the original summary.", "2 When doing so, we apply strict restrictions regarding the semantic hierarchy of PICO concepts in the generated and target summaries: if one of the concepts is a hypernym of another (for example, acetaminophen and analgesics ), we consider it to be a factual error, as the findings of clinical trials should not be generalized or narrowed to other intervention types, patient groups, or outcomes.", "Note that though the PICO schema is more applicable to treatment trials, we apply these categories more broadly, as there are also clinical trials related to diagnostics, risk factors, biomarkers, etc. 3 Direction correctness Lehman et al. (2019) defined three directions of the intervention's effect with regards to the outcome: significantly increases , significantly decreases and no significant difference .", "We keep this three-way classification, but redefine it as positive effect , negative effect , or no effect , which allows us to judge 2 Following Nye et al. (2018) and DeYoung et al. 
(2021), we omit the Comparison (alternative intervention), as it is usually a no-treatment or placebo control which is implied rather than mentioned explicitly.", "Based on the sample we examined, Comparison was explicitly mentioned only in around 20% of systematic reviews' abstracts.", "3 For example, in a study examining risk factors influencing poor response to a treatment, such risk factors as young age , rather than the treatment itself, are interventions, while the therapy response is the outcome.", "In the sample we analysed, 78% of reviews were synthesising the results of treatment interventions including surgical, medical, nursing and alternative, such as music or acupuncture; among the rest, the majority (12%) were etiology studies with such interventions as risk factors.", "The remaining 10% of studies had unique combinations of interventions and outcomes.", "For example, in prognosis studies or studies of patients' experiences, a disease itself serves as an intervention.", "based on the semantics and sentiment orientation of the expression rather than the surface form.", "As an example, consider the following: Generated : NIV is associated with an improvement in mortality.", "Target : NIV had great advantage ... in reducing mortality.", "If we follow the classification proposed by Lehman et al. (2019), these summaries have different directions in relation to mortality ( improvement shows the direction of increases , while reducing has the direction of decreases ), thus the generated summary would be erroneously considered wrong.", "The proposed classification of positive / negative / no effect avoids that, capturing the semantic orientation rather than the literal meaning, similar to aspect-based sentiment analysis (Liu, 2012).", "It also more naturally extends to situations where the intervention does not directly affect the outcomes (so that no increase or decrease is possible), such as when we talk about the effectiveness of a diagnostic method, and to other clinical question types.", "For example, we assign the positive label if the review identifies the optimal intervention ( Which intervention works best? ), negative if it shows the most undesired intervention ( What are the most important risk factors? ), and no effect if such interventions cannot be identified.", "As a linguistic category, modality reflects the possibility of a proposition (i.e. X might increase Y vs X increases Y ), but here we define it in a more pragmatic way to denote how certain we are of the available evidence and thus how strong our claim is.", "In particular, we define the following levels of certainty: strong claim , moderate claim , and weak claim .", "There are also two labels for statements where the author cannot draw any conclusions based on the evidence available to them ( no evidence ) or when the statement is descriptive and does not contain any claims regarding the direction of effect ( no claim ).", "Below we briefly describe the ways the modality is expressed: Strong claim : these claims are modified by strengthening expressions such as remarkably or considerably : MSC infiltrations ... [lead] to an overall remarkable improvement .", "The author can also directly appeal to the quality of available evidence: High-quality evidence indicates that diet ... can reduce the risk of excessive GWG .", "Moderate claim : this is usually an unmodified proposition, such as Warming-up before an operative procedure improves a trainee's ... 
performance .", "Weak claim : such statements can be hedged in multiple ways, including modal verbs (e.g. may ), introductory clauses ( It appears that ... ), or adverbs ( likely ).", "However, the author can directly comment on the reliability of evidence ( There is initial evidence supporting the effectiveness ) or discrepancy of the results ( denosumab ... has shown a positive but variable histological response ).", "No evidence : there is either no primary evidence regarding the clinical question, or no conclusions can be drawn from it on account of its low quality or conflicting results.", "These statements are usually introduced by such clauses as There is insufficient evidence to support ... .", "No claim : a summary can mention the clinical question, but make no statements regarding the effect of the intervention: [This] is the first systematic review to assess the effect of inhaled steroids on growth in children with asthma.", ".", "It should be noted that modality is different from statistical significance of an intervention's effect, which is captured by direction .", "For example, even if a clinical trial has a statistically significant effect, we can be uncertain of its results due to bias in the cohort, e.g. a small sample size.", "In the case of MDS, even if each of the underlying studies has shown a significant effect, their direction can be contradictory, which results in the no evidence judgement.", "On the other hand, we can be very certain that an intervention does not have any effect ( There is ... strong evidence of no significant difference between acupuncture and sham acupuncture ).", "Probably the most important distinction to make here is between cases where we have no evidence ( There is insufficient evidence to determine whether ... LCPUFA improves ... 
growth of preterm infants ) vs where we have enough evidence to state that there is no effect ( no clear long-term benefits or harms were demonstrated for preterm infants receiving LCPUFA ).", "4 The reason we include modality as a separate evaluation aspect is that it reflects the quality of the evidence and its potential usefulness to the medical professionals; thus, if primary studies report that a treatment may work, we do not want their summary to assert that the treatment works .", "Likewise, if it is impossible to aggregate the evidence with any certainty, the summary must state that the current evidence is insufficient rather than draw a particular conclusion.", "In this respect, modality is related to the newly-introduced category of scientific ignorance (Boguslav et al., 2021) as it helps to evaluate the state of our knowledge regarding a particular clinical question.", "5 Though based on our examples, modality can seem to be a category specific only to the biomedical domain, we believe that it is also important for other summarization domains where facts, rather than opinions, are involved, such as news or scientific articles, so it can be a valuable dimension of evaluation for summaries in general.", "Errors in this category can make it difficult to read and understand the summary, but do not affect its meaning.", "This category includes morphology and syntax mistakes, such as incorrect verb form or clause structure, but also lexical mistakes (incorrect word choice) leading to grammar errors.", "For example, a phrase the is instead of there is would be classified as a grammar rather than lexical error.", "This category is for spelling mistakes which do not affect grammar and meaning.", "Neural summarization systems commonly generate repetitive content, which can affect fluency to", "4 One simple test to distinguish them is that we can add a modality -modifying expression on top of the no effect statement ( Long-chain omega-3 probably has ... no effect on new neurocognitive outcomes ), while it is impossible to do this for no evidence or no claim propositions which already express the modality.", "5 How exact such evaluation can be and how well it correlates with objective measures of evidence quality such as risk of bias is still an open question.", "Despite this, we believe that modality is a useful linguistic category reflecting the author's subjective evaluation of the evidence quality.", "the point of unintelligibility.", "Here, repetitions are regarded as a fluency mistake only when they do not make the sentence factually or grammatically incorrect.", "The first author of the paper (main annotator) judged each pair of target and generated summaries as correct or wrong based on the categories outlined above.", "6 To be considered valid, the summary must be correct across all these dimensions; to be considered useful or factually correct, it must be aligned with the target summary in the first three dimensions ( PICO , Direction , and Modality ).", "Although it might seem that some errors are worse than others (e.g. 
completely mixing up the interventions can seem to be a more severe mistake than mentioning a more generic concept), we treat the errors as binary.", "The reason behind this is twofold: first, it allows us to decompose the complex task of human evaluation into a series of pairwise yes/no decisions and thus make it easier and more objective (similar to what is already a standard practice in human evaluation of biomedical machine translation (Jimeno Yepes et al., 2017)); second, we argue that the minor errors are more dangerous in practice: while a completely irrelevant answer is likely to be spotted as incorrect by a medical professional, a tiny mistake in the summary can go unnoticed and thus the conclusions can be applied to a different situation than intended or with a different degree of certainty.", "To assess the robustness of our evaluation criteria, we asked five external annotators, one of whom was a medical professional, to evaluate the quality of 40 generated summaries.", "The details of the evaluation process together with the annotation instructions and metrics used can be found in Appendix A. Table 1 presents the average agreement between each of the five external annotators and the main annotator (in terms of percentage of agreement and Gwet's AC1), as well as Fleiss' κ for all six annotators.", "In general, we found high agreement of the external annotators with the main annotator, and substantial agreement between all annotators, which is remarkable considering the difficulty of the task 6 In cases where the target review contained several statements, while the generated summary had only one proposition (53% of the cases), we matched it to the closest statement in the target summary; if we required a perfect multi-proposition to multi-proposition match, the results would have been much poorer.", "and the size of the rater group.", "Most of the mistakes were not systematic, though some annotators struggled to differentiate between no evidence and no effect statements.", "Despite some discrepancy in the category-level annotation, when we apply Boolean AND to the first three categories to determine if a summary is factually correct ( PICO ∧ Direction ∧ Modality ), the results are highly reliable, with almost perfect agreement with the main annotator and strong agreement among all annotators, which shows that our method can be used to robustly evaluate the usability of summaries.", "As shown in Table 2, less than 5% of generated summaries did not have any errors; even if we disregard the fluency errors, only around 10% of summaries are factually correct and thus usable.", "Overall, the generated summaries are quite fluent, with surprisingly low redundancy; it is the factual accuracy, especially in terms of PICO and modality, that is problematic.", "In the following sections we provide more detailed statistics and some typical errors for these categories; some examples of incorrectly generated summaries and their errors can be found in Appendix B. 
5.1.1 PICO Among the PICO categories, Intervention is the most problematic, while Patient is usually generated correctly (Table 3).", "Below we outline some typical PICO errors: More narrow concepts in the generated summary, usually copied from the primary studies: women with pre-eclampsia instead of women as Patient, robocat instead of companion-type robots as Intervention, preventing HPV 16/18 instead of preventing HPV as Outcome.", "More generic concepts in the generated summary, usually copied from the Background .", "For example, the generated summary mentions topical agents , while the review deals specifically with their innovative reformulation ; the review is about a particular drug ( nedocromil sodium ) while the generated summary mentions the drug category ( inhaled corticosteroids ).", "examines the effect of constipation on physical and mental well-being .", "In some cases, the elements are correct, but the relation between them is reversed: a review studies whether depressive symptoms lead to sleep disturbances , while the generated summary is about the effect of insomnia on depression .", "Hallucinated elements : surprisingly, some incorrect PICO elements have the same stem as the correct ones: developing countries instead of developed countries and congenital hypothyroxinaemia instead of congenital hypothyroidism , which seems to be due to generating a more prominent candidate continuation in a multi-token entity.", "We calculate the direction accuracy only for the samples where the consistency of direction can be reliably determined, that is, where neither of the two summaries has a no evidence or no claim modality.", "Remarkably, if we keep the direction separate from modality, the performance for this category is quite good, which shows that getting the semantic orientation of the proposition right is relatively easy if the model is certain enough to make a statement.", "However, the confusion matrix for this category (Figure 1)", "shows that both the high accuracy of this category and the highest number of mistakes can be attributed to the overwhelming presence of findings with the positive direction in the data.", "Therefore, the easiness of this dimension is not because the models learn to correctly capture the direction of primary studies, but rather because the default positive direction is most often correct due to the skewed distribution of directions in the data. Figure 1: Direction of the generated vs target summaries. BART confusion matrix (rows: generated direction; columns: target direction, positive/negative/no change): positive 72.6%/3.2%/14.5%; negative 3.2%/1.6%/0.0%; no change 0.0%/0.0%/4.8%. LongFormer: positive 69.1%/0.0%/25.5%; negative 3.6%/1.8%/0.0%; no change 0.0%/0.0%/0.0%.", "In contrast to the previous category, the models produce more varied content in terms of Modality , which reflects a less skewed distribution in the data (see Figure 2).", "Though there is still a clear majority category ( moderate claim ), most of the errors are not due to generating too many moderate claims.", "In fact, for both BART and LED the most common problem is generating no evidence sentences instead of moderate and weak claims; for LED, there is also a good proportion of errors due to not making any claim at all.", "Interestingly, the number of times when the adjacent categories were mixed up (weak ↔ moderate, moderate ↔ strong) is lower than the number of mistakes due to confusing the quite distinct categories of no evidence/no claim and moderate evidence .", "Thus, even though the models sometimes 
correctly pick up cues showing weakness of evidence or its moderate quality, they often give up on trying to make any conclusion.", "This is especially true for LED, which generates substantially more no claim summaries than BART. Figure 2: Modality of generated vs target summaries.", "The mistakes in these categories are quite uniform in the sense that they seem to be an artefact of tokenization and decoding.", "For example, the vast majority of spelling errors are due to incorrect merging of subwords including the article The at the beginning of a sentence, for example TheCLUSIONS instead of The CONCLUSIONS .", "The grammar mistakes are also usually caused by incorrect token The at the initial position: The is insufficient evidence , though some other errors occur at this position: There systematic review of strategies .", "Contrary to our expectations, the amount of repetitions was small, so it is difficult to make conclusions regarding their patterns.", "However, there was a tendency to include prominent tokens, often paraphrased, both in the outcome and patient slots, which sometimes led to redundancy: acupuncture for LBP in patients with chronic low back pain . 5.2 A closer look at the output: How much is copied from the Background ?", "As the evaluation results in the previous section were discouraging, we found it necessary to examine the way summaries were generated.", "Upon further analysis, the majority (91% for BART and 85% for LED) of the generated summaries are very similar in content to the Background section of the systematic review, which is supposed to contain a prompt for the model rather than the content to be actually summarized.", "More specifically, they copy the objectives or hypothesis sentence with varying degrees of paraphrasing.", "A typical example of such copying is provided in Table 4; though some paraphrasing is present, the generated summaries do not contain any information which cannot be inferred from the objectives sentence.", "Worst of all, they do not answer the question but rather restate it ( no claim ).", "To check whether this tendency is present in generated summaries in general, we calculated the unigram overlap (ROUGE-1), bigram overlap (ROUGE-2) and the longest common subsequence overlap (ROUGE-L) between them and two reference texts: the target summaries and the Background text, for all samples in the test set.", "As can be seen from Table 5, the generated summaries are much closer to the Background section than to the Target summaries; high ROUGE-2 and ROUGE-L scores against the Background also reflect the tendency to copy longer sequences literally.", "Only a third of the examined summaries (34% for BART and 30% for LED) included any details taken from the primary studies that were meant to be summarized rather than from the prompt ( Background ).", "Though this in itself is concerning, it is even more striking that for only 4 of the BART summaries and 2 of the LED ones did the model manage to copy some useful information from the studies, whereas in the majority of cases copying from studies actually caused mistakes.", "These mistakes can be divided into two roughly equal groups: (1) the entity copied from the studies was too narrow, which means that there was no aggregation of entities across studies which examined different groups of patients, interventions or outcomes; 7 and (2) an entity unrelated to the clinical question but frequently mentioned in the studies was copied.", "8 We hypothesize that such inability to synthesize the information from the input 
studies together with the intensive copying from the prompt can be explained by the over-reliance on the Background (preamble) due to the higher-weighted global attention set on it (DeYoung et al., 2021).", "Though hallucinations are a widely known issue with neural abstractive summarization, in the data we analysed less than 4% of summaries had incorrect details which could not be attributed to either the prompt or the included studies.", "7 More specifically, this can be due to adding an adjective modifier ( primiparous women instead of women ) or copying one of the concept's hyponyms ( robocat instead of companion-type robots ).", "8 For example, the purpose of one review was to identify dry eye symptoms rated as most uncomfortable, but as the majority of primary studies mentioned artificial tears for treating this condition, this concept was included in the generated summaries.", "Around 68% of the analysed summaries are prepended by standard phrases such as This systematic review suggests ... .", "To check how widespread such phrases are in generated summaries in general, we also calculate their frequency in the whole test set: There is insufficient evidence to support ... occurs in 25% of BART and 19% of LED summaries; and The results of this systematic review suggest ... in 15% of BART and 14% of LED summaries.", "As was shown above in Section 5.1.3, LED makes more no claim statements than BART: 12% of LED summaries begin with The is the first systematic review , while only 2% of BART summaries do so.", "Overall, at least 55% of all summaries have the canned phrases we identified, which means that the models learned to identify and fluently reproduce some important elements of scientific style and discourse.", "Though we used ROUGE to determine the amount of lexical overlap and copying in Section 5.2 above, we do not consider it to be a reliable metric for quality estimation, especially in terms of factuality, as it does not correlate with any of the factuality dimensions we examined or with factual accuracy in general.", "To determine whether the factually correct summaries had higher ROUGE scores than incorrect ones, we performed a series of Student's t-tests comparing summaries with correct and incorrect PICO, direction and modality, as well as summaries with no mistakes in any of these categories versus summaries with at least one mistake.", "There was no statistically significant difference in terms of ROUGE-1, ROUGE-2, and ROUGE-L scores between correct and incorrect summaries in all of these tests for both BART and LED.", "9 As an example, the distribution of ROUGE-1 scores for generated BART summaries with correct vs incorrect PICO elements, direction and modality, as well as for factually correct and wrong summaries, is shown in Figure 3.", "In this section we point out some issues which could explain the poor performance of the summarization systems in terms of generating conclusions in the manner of systematic reviews, and show how they relate to the principles underlying the aggregation of medical evidence.", "We present these as challenges to be tackled in MDS system development.", "A large number of reviews (53% in the analysed subset) had multiple propositions, that is, sets of PICO elements and relationships between them.", "For example, a review can study the effects of a drug in terms of different outcomes, and each of these outcomes can have a different direction and modality.", "As a result, we are dealing with multi-aspect summarization, and it can be difficult for the model to 
correctly identify and reproduce several sets of prominent entities and relationships.", "Primary studies are rarely, if ever, conducted for all possible groups of patients, drugs in a particular class, or outcomes.", "Thus to answer a clinical question, we need to aggregate across such entities.", "For example, if a systematic review studies the effects of counselling on breastfeeding rates across the globe, and the majority of underlying studies mention developing countries while others refer to 9 We performed the same experiments with BERTScore (Zhang et al., 2020), and though it was marginally able to differentiate between the summaries with correct and incorrect PICO, it could not capture the direction or the modality of the claim, so overall the results were statistically insignificant.", "specific locations such as Baltimore , the generated summary can have a narrower Patient group ( developing countries ) than it should.", "Similarly, if primary studies examine the effects of different types of HPV vaccine (HPV-6, 11, 18, etc.) for different groups of patients, we would need to aggregate across them to be able to make conclusions about the effectiveness of HPV vaccines at large.", "In many cases, the primary studies are not considering exactly the same question that the review needs to answer.", "For example, the review may be about the effects of depression on sleep quality, while the underlying studies examine the effects of disrupted sleep on depression.", "Sometimes the answer needs to be inferred based on prior knowledge.", "One of the reviews, for example, explored the risks of mortality due to salmeterol, while the studies included in it did not even mention mortality but rather examined potentially lethal side effects.", "While the majority of clinical questions (80% in the analysed subset) are in the yes/no form (Does the intervention A have an effect on the outcome B?), and the model can answer them by rephrasing the question, some questions require more difficult operations.", "For example, a clinical question might ask which strategy is more effective for preventing asthma (which requires comparing interventions), what education methods exist to manage hyperphosphatemia (which requires listing different interventions), or even why behavioral interventions work (which requires reasoning about various aspects of interventions).", "In the analysed subset, 11% of the reviews required ranking multiple alternatives which could be compared head-to-head or with the control, or choosing the best treatment options; in 4% the study's purpose was to list the known interventions, risk factors or even research questions; several studies compared the costs of the treatment with its benefits or the expectations of the patients with their actual experiences.", "In this research, we attempted to draw attention to the importance of factuality in biomedical MDS, and demonstrated that the current models are still unreliable in this respect.", "Moreover, we showed that they fail to pick up and aggregate important details from multiple documents, excessively relying on the prompt.", "To support our analysis, we established a simple and reproducible human evaluation benchmark which reflects aspects of quality important for biomedical MDS but can be translated into other domains.", "Finally, we showed that the progress in biomedical MDS will be limited unless we acknowledge the domain-specific challenges of the task and work towards overcoming them.", "Though we focused our efforts on a 
particular domain, we hope that this work prompts taking a closer look at the summarization results in other areas, as only objective evaluation of what the models are capable of and prone to do will allow us to improve them.", "Done right, biomedical MDS can significantly facilitate the practice of evidence-based medicine; done wrong, however, it creates a risk of misinterpretation of evidence and subsequent malpractice.", "For this reason, we argue that the factual accuracy of biomedical summaries should be decided on a rigid yes/no scale, and only the summaries matching in all details and intents should be considered factually correct and thus useful.", "In this paper, we show that we still have a long way to go before biomedical summarization systems can be reliably used and trusted, and highlight the importance of robust human evaluation in this domain.", "The authors would like to thank Rahmad Mahendra, Seungsu Oh, Yiyuan Pu, Simon Šuster, and Hung-Thinh Truong for their contribution to annotation and discussions.", "This research was conducted by the Australian Research Council Training Centre in Cognitive Computing for Medical Technologies (project number ICI70200030) and funded by the Australian Government." ]
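The binary, per-dimension rating scheme and the agreement analysis described above translate directly into code. Below is a minimal Python sketch, not the authors' released tooling: all names are illustrative, `factually_correct` applies the Boolean AND over the three factuality dimensions, and `gwet_ac1` implements the standard two-rater, two-category form of Gwet's AC1 used to compare each external annotator against the main annotator.

```python
from typing import List

def factually_correct(pico_ok: bool, direction_ok: bool, modality_ok: bool) -> bool:
    """A summary counts as factually correct only if all three dimensions match."""
    return pico_ok and direction_ok and modality_ok

def gwet_ac1(rater_a: List[int], rater_b: List[int]) -> float:
    """Gwet's AC1 chance-corrected agreement for two raters and binary (0/1) ratings."""
    assert rater_a and len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items on which the two raters agree.
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Average marginal proportion of the "1" category across both raters.
    pi = (sum(rater_a) + sum(rater_b)) / (2 * n)
    # Chance agreement for two categories; this is at most 0.5,
    # so the denominator below never vanishes.
    p_exp = 2 * pi * (1 - pi)
    return (p_obs - p_exp) / (1 - p_exp)
```

Because every judgement is a yes/no decision, the same routine can score agreement per dimension or on the aggregated verdict, which is how both the category-level and the overall numbers in Table 1 can be produced.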
[ "abstain", "method", "objective", "abstain", "abstain", "abstain", "abstain", "method", "objective", "result", "abstain", "objective", "result", "result", "result", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "method", "result", "objective", "abstain", "method", "abstain", "other", "other" ]
[ "Recurrent neural network language models (RNNLM) form a valuable foundation for many NLP systems, but training the models can be computationally expensive, and may take days to train on a large corpus.", "We explore a technique that uses large corpus n-gram statistics as a regularizer for training a neural network LM on a smaller corpus.", "In experiments with the Billion-Word and Wikitext corpora, we show that the technique is effective, and more time-efficient than simply training on a larger sequential corpus.", "We also introduce new strategies for selecting the most informative n-grams, and show that these boost efficiency.", "Recurrent neural network models of language (RNNLMs) form a foundation for many natural language processing systems.", "However, the networks can be expensive to train: training a single model over several million tokens can take hours, and searching through the large hyperparameter space of RNNLMs often entails training and testing hundreds of different models.", "This makes it burdensome to experiment with new RNNLM architectures on large corpora, or to train RNNLMs for new textual domains.", "RNNLMs are typically trained on sequential text.", "In this paper, we investigate how to efficiently augment the training of RNNLMs by regularizing the models to match n-gram statistics taken from a much larger corpus.", "The motivation is that large-corpus n-gram statistics may be informative to an RNNLM trained on a smaller sequential corpus, but unlike RNNLM training, n-gram statistics are inexpensive to compute even over large corpora.", "Moreover, the statistics only need to be computed once and can be re-used for training many different smaller-corpus RNNLMs.", "Naively, regularizing an RNNLM to match a given set of n-gram statistics is non-trivial, because the marginal probabilities that n-gram statistics represent are not parameters of the RNNLM.", "In recent work, Noraset et al. 
(2018) showed that it was possible to regularize an RNNLM to match given n-gram statistics by training the network, when started from a zero state, to match each n-gram probability.", "However, the regularization approach in that work had tractability limitations: the time cost of the regularization was sufficiently high that using it was inferior to simply training the RNNLM on more sequential text.", "In this paper, we present an efficient n-gram regularization technique and show that the technique can improve RNNLM training.", "Our method has three distinctions from previous work that provide efficiency.", "First, we prioritize regularizing only the n-grams that are most likely to improve the model, by focusing on cases where the RNNLM's sequential training corpus diverges significantly from the n-gram statistics.", "Secondly, we regularize the entire output softmax of the RNN to match given conditional n-gram statistics, which means we can impose a large number of statistical constraints using only one softmax evaluation.", "Finally, we use an ensemble of multiple loss functions in our regularizer, which provides an additional boost.", "In experiments, we show how n-gram regularization with these enhancements results in better models using the same amount of training time, compared to standard sequential training.", "We also plan to release our code base to the research community.", "1 2 Methods RNNLMs are trained to optimize a loss function $L_d$, which is defined as the average negative log-likelihood of a training corpus.", "We regularize the RNNLM to match n-gram statistics by introducing another penalty term $L_p$ to the loss function that captures how well the model matches n-gram statistics, giving a combined loss $L$: $L = L_d + \lambda L_p$, where $\lambda$ is a hyperparameter to control the regularization strength. 1 https://github.com/yangyiben/Conditional-N-gram-Regularization", "We use the term large corpus to refer to the text utilized to compute the n-gram statistics.", "We expect the large corpus to be multiple orders of magnitude larger than the small corpus utilized for computing the RNN's sequential loss $L_d$.", "In the rest of this section, we first define what we mean by conditional n-gram statistics, and then present our regularization methods.", "An order-$N$ n-gram is a sequence of $N$ words.", "For a given corpus $c$, we denote the $k$-th distinct order-$N$ n-gram $w_{k1}, w_{k2}, \ldots, w_{kN}$ as $w^N_k$, and denote the corresponding order $N-1$ n-gram formed by the first $N-1$ words as $w^{N-1}_k$.", "Here, in order to eliminate ambiguity, the notation $N-1$ is exclusively used to represent the prefix.", "For instance, $w^3_k$ is the $k$-th trigram, while $w^{4-1}_k$ is the trigram prefix of the $k$-th 4-gram.", "We define conditional n-gram statistics as the empirical conditional probabilities of observing each next word $w_{kN}$ given the previous $N-1$ gram $w^{N-1}_k$ for all $k$s: $\hat{P}(w_{kN} \mid w^{N-1}_k) = \frac{\mathrm{count}(w^N_k)}{\mathrm{count}(w^{N-1}_k)}, \; \forall w^N_k \in \Omega(c)$, where $w_{kN}$ and $w^{N-1}_k$ are the $N$-th word and the corresponding previous $N-1$ gram of $w^N_k$ respectively, and $\Omega(c)$ is the set of all unique $w^N_k$s contained in the corpus $c$.", "For a given RNNLM, the model conditional probability for some n-gram $w^N_k$ is defined as: $P(w_{kN} \mid w^{N-1}_k) = \mathbb{E}_h[P(w_{kN} \mid w^{N-1}_k, h)]$, where $h$ is the model's hidden state prior to encountering the $N$-gram.", "However, it is difficult to express that expectation in terms of the model parameters, so we adopt the approach from Noraset et al. 
(2018), which has shown preliminary evidence of forming an effective regularizer: $\mathbb{E}_h[P(w_{kN} \mid w^{N-1}_k, h)] \approx P(w_{kN} \mid w^{N-1}_k, h_0)$, where $h_0$ is a zero hidden state.", "We propose three forms of regularization loss functions that penalize the divergence of the model's conditional probabilities from the conditional n-gram statistics.", "The first is a squared penalty, $L^{sq}_p = \frac{1}{\|R\|} \sum_{w^N_k \in R} \big(\hat{P}(w_{kN} \mid w^{N-1}_k) - P(w_{kN} \mid w^{N-1}_k)\big)^2$, where $R$ is a set of n-grams $w^N_k$ to regularize, and $\hat{P}$ and $P$ are conditional n-gram statistics and model conditional probabilities as defined in Section 2.1.", "This penalty is similar to that of (Noraset et al., 2018).", "However, instead of computing the loss with multiple forward passes for different $w^N_k$s that have the same $w^{N-1}_k$, we propose to only perform one forward pass for each $w^{N-1}_k$, and regularize all subsequent $N$-th words.", "This makes our loss much more computationally efficient.", "As this penalty only accounts for differences in point-wise probabilities for specific words $w_N$, it can be used for partially specified distributions where we only know the desired probabilities for some $N$-th words in a given context, but not the entire distribution.", "The second is a KL penalty over the full next-word distributions: $P_{w^{N-1}_k} = \hat{P}(w \mid w^{N-1}_k)$, $Q_{w^{N-1}_k} = P(w \mid w^{N-1}_k)$, $L^{KL}_p = \frac{1}{\|R\|} \sum_{w^{N-1}_k \in R} D_{KL}(P_{w^{N-1}_k} \,\|\, Q_{w^{N-1}_k})$,", "where here $R$ is a set of prefixes $w^{N-1}_k$ to regularize.", "This penalty regularizes all possible subsequent words $w_N$, thus it only works for fully-specified reference distributions.", "Because the above two penalty functions differ, we hypothesize that they may be complementary and propose a combined penalty $L^{c}_p = L^{sq}_p + L^{KL}_p$.", "Note that we do one forward pass for each unique $w^{N-1}_k$, so only the number of distinct prefixes $w^{N-1}_k$ will significantly affect the computational", "cost of our regularization methods.", "Naively regularizing all unique prefixes in a large corpus usually requires a large number of forward passes, which could be expensive.", "We hypothesize that some prefixes are more useful than others, so we attempt to select the ones that will improve the model the most.", "We propose to select prefixes $w^{N-1}_k$ that maximize the Expected Log-likelihood Change (ELC), defined as: $\mathrm{ELC}(w^{N-1}_k) = \sum_{w^N_k} \hat{P}(w^N_k) \log \frac{\hat{P}(w_{kN} \mid w^{N-1}_k)}{P(w_{kN} \mid w^{N-1}_k)}$.", "Ideally, $P$ would reflect the statistics of the RNNLM, updated during training, but these are expensive to obtain.", "Thus we propose to train an inexpensive n-gram model (Chen and Goodman, 1999) on the sequential corpus to serve as $P$, and we use that to select a fixed set of n-grams to regularize.", "We now present our experiments measuring the effectiveness of conditional n-gram regularization.", "We experiment on a medium-size (2 layers with 650 hidden states) LSTM language model (Zaremba et al., 2014) over two corpora: Wikitext (Merity et al., 2016) and Google Billion-Word (Chelba et al., 2013) (1B).", "We adopt weight tying (Inan et al., 2016) and variational dropout (Gal and Ghahramani, 2016).", "All models are trained by SGD for 30 epochs with batch size 64 and truncated backpropagation (Mikolov et al., 2011) with 35 time steps.", "The learning rate starts at 20 and then is reduced to 5 at epoch 20.", "For the 1B corpus, we follow the same procedure in Yang et al. 
(2017) to generate training, validation and test sets, except that we use only the top 50K vocabulary terms.", "For the Wikitext corpus, we adopt the Wikitext-2 vocabulary.", "All of the RNNLM sequential training sets are small subsets sampled from the Wikitext-103 and 1B training corpora.", "For each dataset, we use the whole training set as the large corpus for building our reference conditional n-gram statistics.", "In this study, we only consider conditional trigram regularization for all experiments.", "The regularization takes additional time during training.", "To enable a fair experiment, we equalize the training time between regularized models and the baselines, by providing the baselines with more sequential training data than the regularized models.", "The three proposed penalties are almost equally fast, thus they can be compared against the same baseline.", "Unlike RNNs, there are no hyperparameter settings or decisions involved in computing the n-gram counts, so this can be done once and re-used across the many RNN training runs that adequate hyperparameter search for RNNs often entails.", "Moreover, counting n-grams is fast.", "We estimate that it takes about one minute to obtain n-gram statistics from the Wikitext-103 training corpus, for example.", "Thus, we set up our experiments to equalize neural network training time, and we ignore the small one-time cost of computing n-gram statistics.", "We fix the number of bigrams per batch to be 500, and employ the proposed strategy to select the top X most informative bigrams ( X depends on the number of batches in the sequential data).", "Finally, instead of tuning the regularization strength hyperparameter for each setting, we fix $\lambda$ to be 1.0, 0.75 and 0.5 for the three sequential data sizes, based on the heuristic that larger sequential data may need less regularization.", "More carefully tuning the regularization strength might yield somewhat better results for our methods.", "Also, using different regularization strengths in the combined penalty might further improve results.", "In Table 1, we compare the performance of our proposed methods against equal-time controlled baselines under different token sizes for both the Wikitext and 1B data sets.", "In the table, the numbers in the column headings indicate the token count of the sequential corpus used to train the regularized methods.", "The baselines train on larger corpora, to ensure an equal-time setting as described above.", "All regularized models outperform their baseline counterparts for all token sizes.", "Among them, the models regularized by the combined penalty consistently perform best.", "This illustrates that conditional n-gram regularization is effective at incorporating large-corpus statistics to improve an RNNLM trained on a relatively small corpus.", "Performing regularization using the combined selection strategy yields more accurate models compared to simply training on a larger sequential corpus.", "In Figure 1, we plot validation perplexities after each training epoch for models trained on the 5M-token 1B corpus, against an equal-time baseline.", "The plot shows that the relative performance of the methods remains similar across training epochs.", "In Table 2, we demonstrate how the number of regularized bigrams affects the performance.", "Here, the regularized models always train on Wikitext-2 as the sequential corpus, and the baseline trains on larger corpora as the fraction of bigrams increases.", "The regularized model achieves 66 test perplexity on 
Wikitext-2 corpus, which is about 2.7 points worse than a state-of-the-art mixture-of-softmaxes model (Yang et al., 2017), even though our model has fewer parameters (25M vs. 35M).", "Including more bigrams helps lower the perplexity, while it also demands extra computational time.", "ELC performs best when using less than 20% of the bigrams.", "Regularizing randomly selected n-grams does not outperform the equal-time baseline, indicating that not all n-grams are equally useful.", "In order to be time efficient, it is important to select informative n-grams, and ELC is an effective way to do so. Table 2: Test perplexities of RNNLMs trained on Wikitext-2 regularized with different numbers of bigrams using the combined penalty, by % of total bigrams (baseline / random / ELC): 0%: 86 / 86 / 86; 5%: 76 / 83 / 69; 10%: 71 / 77 / 67; 20%: 66 / 71 / 67; 40%: 61 / 68 / 66.", "In Table 3, we consider ensembling a standard RNN with a KN-smoothed trigram model.", "This achieves a ppl of 65, but requires 51M more parameters, whereas our regularization with the n-grams achieves most of the perplexity improvement at the cost of zero additional parameters.", "Further, somewhat surprisingly, we find that an ensemble of our regularized RNNLM with the n-gram model achieves a much better perplexity of 59.", "Another possible way of efficiently utilizing a large corpus would involve training a Word2vec model on the large corpus, and using the pre-trained embeddings within an RNNLM trained on a small corpus.", "This approach can utilize larger corpora since training a Word2vec model is much faster than training an RNNLM.", "However, in our preliminary experiments with this approach, we did not observe any improvement when using word embeddings trained on a large corpus.", "Further experiments with variants of this approach are an item of future work.", "Chelba et al. (2017) trained large-order n-gram models using a recurrent neural network trained over limited context to produce the conditional probability estimates.", "Our regularizer is trained in a similar way, but by contrast we are focused on how the regularizer can be used in concert with standard sequential RNNLM training to improve the training procedure.", "We introduce n-gram selection techniques and distinct loss functions that increase the effectiveness of the combined training.", "Ganchev et al. (2010) present a posterior regularization method for restricting the posterior distributions of probabilistic models with latent variables to obey predefined constraints using the EM algorithm.", "This approach shares our goal of imposing constraints on probabilistic models, but we focus on RNNLMs, which do not estimate distributions over latent state variables and are not trained using EM.", "Finally, Mikolov et al. (2011), Jozefowicz et al. (2016) and Chelba et al. 
(2013) trained ensembles of RNNLMs and KN-smoothed n-gram models, and showed that one can obtain a better model when ensembling RNNLMs with n-gram models.", "Our experiments show that compared to ensemble methods, conditional n-gram regularization achieves similar results at the cost of zero additional parameters, and can perform even better when combined with ensembling.", "In this paper, we have proposed methods that utilize large-corpus n-gram statistics to regularize RNNLMs trained on a smaller corpus.", "Our experiments demonstrate that the proposed regularization penalties are effective in improving model performance, and can be more time-efficient than training RNNLMs on a larger sequential corpus.", "Selecting informative n-grams is shown to be important.", "In future work, we would like to obtain a better theoretical understanding of why starting the RNNLM from a zero state forms an effective n-gram regularizer.", "We would also like to extend our regularization approach to BiLSTMs (Peters et al., 2017) and Transformers (Radford et al., 2018; Devlin et al., 2018).", "This work was supported in part by NSF Grant IIS-1351029 and the Allen Institute for Artificial Intelligence." ]
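The combined penalty described in Section 2 of the paper above can be sketched in a few lines of PyTorch. This is an illustrative re-implementation under stated assumptions rather than the authors' released code: `lm` is assumed to map a batch of (N-1)-gram prefix token ids, decoded from a zero hidden state, to next-word logits, and `ref_dists` is assumed to hold fully-specified empirical conditional distributions from the large corpus (for the partially-specified case, the squared term would be restricted to the known words).

```python
import torch
import torch.nn.functional as F

def conditional_ngram_penalty(lm, prefix_ids: torch.Tensor,
                              ref_dists: torch.Tensor) -> torch.Tensor:
    """Combined penalty L_sq + L_KL over a batch of (N-1)-gram prefixes."""
    logits = lm(prefix_ids)                # one forward pass per unique prefix
    log_q = F.log_softmax(logits, dim=-1)  # model log P(. | prefix, h_0)
    q = log_q.exp()
    # Squared penalty on point-wise next-word probabilities.
    l_sq = ((ref_dists - q) ** 2).sum(dim=-1).mean()
    # KL(P_hat || Q), summed over the vocabulary and averaged over prefixes,
    # i.e. (1/|R|) * sum of the per-prefix D_KL terms.
    l_kl = F.kl_div(log_q, ref_dists, reduction="batchmean")
    return l_sq + l_kl

# Training step sketch: total loss = sequential NLL + lambda * penalty, e.g.
# loss = nll + 1.0 * conditional_ngram_penalty(lm, prefixes, ref_dists)
```

Because both terms come from a single softmax per prefix, the regularizer's cost scales with the number of distinct prefixes, matching the efficiency argument made in the paper.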
[ "abstain", "objective", "result", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "objective", "method", "method", "result", "abstain", "abstain", "other", "method", "method", "objective", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "objective", "abstain", "other", "objective", "other", "abstain", "objective", "objective", "abstain", "result", "abstain", "other" ]
[ "Generating sequential natural language descriptions from graph-structured data (e.g., knowledge graph) is challenging, partly because of the structural differences between the input graph and the output text.", "Hence, popular sequence-to-sequence models, which require serialized input, are not a natural fit for this task.", "Graph neural networks, on the other hand, can better encode the input graph but broaden the structural gap between the encoder and decoder, making faithful generation difficult.", "To narrow this gap, we propose DUALENC , a dual encoding model that can not only incorporate the graph structure, but can also cater to the linear structure of the output text.", "Empirical comparisons with strong single-encoder baselines demonstrate that dual encoding can significantly improve the quality of the generated text.", "Data-to-text generation aims to create natural language text to describe the input data (Reiter and Dale, 2000).", "Here we focus on structured text input in a particular form such as a tree or a graph.", "Figure 1 shows an example where the input data is a mini knowledge graph, and the output text is its corresponding natural language description.", "Generating text from such data is helpful for many NLP tasks, such as question answering and dialogue (He et al., 2017; Liu et al., 2018; Moon et al., 2019).", "During generation, the structure of the data as well as the content inside the structure jointly determine the generated text.", "For example, the direction of the edge capital in Figure 1 determines that London is the capital of U.K. is an accurate description, but not vice versa.", "Current generation methods are based on sequence-to-sequence (Seq2Seq) encoder-decoder architecture (Sutskever et al., 2014), which requires the input data to be Figure 1: Illustration of the WebNLG challenge: the source data is an RDF graph and the target output is a text description of the graph.", "Recent research has shown the utility of incorporating structural information during generation.", "By replacing the sequential encoder with a structure-aware graph encoder, such as a graph convolutional network (GCNs) (Kipf and Welling, 2017) or graph-state LSTMs (Song et al., 2018), the resulting graph-to-sequence (Graph2Seq) methods can encode the structural information of the input and thus outperform Seq2Seq models on certain tasks.", "However, these architectures broaden the structural gap between the encoder and decoder.", "That is, while the encoder receives the input data as a graph, the decoder has to create the output text as a linear chain structure.", "This structural gap increases the difficulty of establishing alignments between source and target, which is believed to play a key role in text generation.", "For example, in machine translation, pre-reordering the source words into a word order that is close to that of the target sentence can yield significant improvements in translation quality (Bisazza and Federico, 2016).", "This suggests a need for an intermediate planning stage (Reiter and Dale, 2000; Puduppully et al., 2019) to help with organizing the output.", "In this work, we present a dual encoding model that is not only aware of the input graph structure but also incorporates a content planning stage.", "To encode the structural information in the input graph, we use a GCN based graph encoder .", "To narrow the ensuing structural gap, we use another GCN-based neural planner to create a sequential content plan of this graph, which is represented as a 
re-ordered sequence of its nodes.", "The plan is then encoded by an LSTM-based sequential encoder.", "During generation, an LSTM-based decoder simultaneously conditions on the two encoders, which helps it capture both the graph structure of the input data and the linear structure of the plan.", "We expect such a dual encoding (DUALENC) structure to integrate the advantages of both graph and sequential encoders while narrowing the structural gap present in single-encoder methods.", "We evaluate the proposed planning and generation models on the WebNLG dataset (Colin et al., 2016; Gardent et al., 2017), a widely used benchmark for data-to-text generation.", "Experimental results show that our neural planner achieves a 15% absolute improvement in accuracy compared to the previous best planning method.", "Furthermore, DUALENC significantly outperforms the previous state-of-the-art on the generation task.", "The human evaluation confirms that the texts generated by our model are preferred over strong baselines.", "The contributions of this paper are three-fold: we propose a dual encoding method to narrow the structural gap between data encoder and text decoder for data-to-text generation; we propose a neural planner, which is more efficient and effective than previous methods; and experiments show that our method outperforms all baselines on a variety of measures.", "This work is inspired by two lines of research: Seq2Seq generation and Graph2Seq generation.", "Traditional data-to-text generation follows a planning and realization pipeline (Reiter and Dale, 2000; Stent et al., 2004).", "More recent methods use the Seq2Seq architecture (Sutskever et al., 2014) to combine planning and realization into an end-to-end network and have achieved the state-of-the-art on a variety of generation tasks (Lebret et al., 2016; Trisedya et al., 2018; Juraska et al., 2018; Reed et al., 2018).", "Despite fair fluency and grammatical correctness, the generated text suffers from several problems such as repetition, omission, and unfaithfulness, which are less likely to happen in traditional planning-and-realization frameworks.", "Recent work has shown that neural models can also benefit from an explicit planning step to alleviate the above-mentioned problems.", "The input of these planners ranges from unstructured keyphrases (Hua and Wang, 2019) to structured tables (Puduppully et al., 2019) and graphs (Ferreira et al., 2019; Moryossef et al., 2019a).", "Our work also focuses on planning from graph data.", "Compared with previous methods, we show that our neural planning method is more feasible and accurate.", "More importantly, rather than serializing the planning and realization stages in a pipeline, our dual encoding method simultaneously captures information from the original data and the corresponding plan.", "Graph neural networks (GNNs) (Scarselli et al., 2009) aim to learn a latent state representation for each node in a graph by aggregating local information from its neighbors and the connected edges.", "Previous work has explored different ways of aggregating this local information, such as in GCNs (Kipf and Welling, 2017), gated graph neural networks (GGNNs) (Li et al., 2016), and graph attention networks (GATs) (Velickovic et al., 2018). Several works have applied GNNs instead of Seq2Seq models for text generation (Beck et al., 2018; Marcheggiani and Perez-Beltrachini, 2018; Guo et al., 2019; Li et al., 2019), and some of them outperform Seq2Seq models.", "However, Damonte and Cohen (2019) use both
types of encoders and show that a GCN can help an LSTM capture reentrant structures and long-range dependencies, albeit on a different problem than ours.", "Our method also uses the two types of encoders, but instead of using one to assist the other, it combines them simultaneously to capture their complementary effects.", "In this work we focus on text generation from RDF data.", "The input for this task is a set of RDF triples, where each triple (s, p, o) contains a subject, a predicate, and an object (the RDF specification: https://www.w3.org/TR/rdf-concepts/).", "For example, (U.K., capital, London) is an RDF triple. (Figure 2 shows the architecture of the proposed DUALENC model: the input triples are converted into a graph and then fed to two GCN encoders for plan and text generation (Planner and Graph Encoder); the plan is then encoded by an LSTM network (Plan Encoder); finally, an LSTM decoder combines the hidden states from both encoders to generate the text (Text Decoder).)", "The output is a natural language text with one or more sentences to describe the facts represented by this graph.", "Figure 1 shows an example of this task.", "For a given input RDF graph, the aim of our method is not only to capture its structural information, but also to facilitate the information alignment between the input and output.", "The first goal can be achieved by employing a GCN encoder.", "To achieve the second goal, we first serialize and re-order the nodes of the graph as an intermediate plan using another GCN, and then feed the plan into an LSTM encoder.", "Finally, an LSTM decoder is used to generate the output by incorporating the context representations of both encoders.", "Notice that the graph and the plan are dual representations of the same input data.", "We encode them with two independent encoders, which can provide complementary information for decoding.", "The architecture of our dual encoding method is shown in Figure 2.", "We describe the two encoders and the decoder in the following three subsections.", "To make it easier for GCNs to encode information from both entities and predicates, we reconstruct the input graph by regarding both entities and predicates as nodes, which is different from Figure 1.", "Formally, for each RDF triple (s, p, o), we regard s, p, and o as three kinds of nodes.", "s and o are identified by their entity mentions, and p is identified by a unique ID.", "That is, two entities from different triples that have the same mentions will be regarded as the same node.", "However, since we want to use predicates to distinguish between different triples, two predicates with the same mentions will be regarded as separate nodes (for example, the capital predicates in (U.K., capital, London) and (U.S., capital, Washington D.C.) are different nodes).", "Figure 3 shows the graph obtained from an RDF triple.", "We use the same edge structure as Beck et al.
(2018).", "As Figure 3 shows, a triple contains four directed edges to connect its nodes: s p , p s , o p , and p o .", "These edges help in information exchange between arbitrary neighbor pairs.", "There is also a special self-loop edge n n for each node n to enable information flow between adjacent iterations during feature aggregation.", "After building the graph G = ( V , E ) from the RDF data, we use a relational GCN (R-GCN) (Schlichtkrull et al., 2018) to encode the graph and learn a state representation h v R d for each node v V using the following iterative method: h tv = (cid:88) r R (cid:88) u N rv 1 c v,r W r h ( t 1) u + b r (1) where h 0 v = x v is the input embedding of the node v , and h tv is its hidden state at time-step t .", "We use the average embedding of the node mentions as x v .", "R is the set of all possible edge types, and N r v is the set of in-neighbors of node v with the edge 2 For example, capital's in (U.K., capital, London) and (U.S., capital, Washington D.C.) are different nodes. Figure 4: The sequential decision-making process of the planning stage. type as r . W r and b r are parameters for each edge type, which allow transformations of message to become relational-specific. c v,r = 1 / |N rv | is a normalization term and () is an activation function. 4.2 Planning Creation and Encoding In the planning stage, we determine the content plan or order of triples (identified by their predicates) for text realization. For example, the content plan for the text in Figure 1 is: assembly capital successor manufacturer . 3 Learning a plan can be naturally regarded as a sequential decision-making process. That is, given a set of triples, we first determine which triple to mention/visit first, and then select the second triple from the remaining triples that have not been visited so far. This process continues until all the triples have been visited. During each decision step, the selection of the next triple can be regarded as a classification task, where the output space is all the remaining unvisited triples. Figure 4 shows how our model implements this process. We first utilize the GCN encoder described in Section 4.1 to get the state representation of each node. However, while obtaining a predicate's representation, we concatenate two extra bits to the input feature X t . One is to indicate whether or not the predicate has been visited, the other to indicate the last predicate that has been visited. After the encoding, we get the final hidden state h r i = h ( T ) r i for each predicate r i R as its representation, and calculate its probability of being selected as P ( r i ) = softmax ( h Tr i W h R ) (2) where h R is the average pooling of all the predicate embeddings. For obtaining a plan, we select the predicate with the highest probability, append it onto the plan sequence, and then repeat the above process until all the predicates have been visited. 3 Here we only consider the order of triples. Future plans could explore ordering of subjects and/or objects. After determining an order of input predicates, we complete the plan's triples by adding the corresponding subjects and objects. To better help the plan encoder (described below) capture the semantic roles of each entity and predicate, we add special tokens before S ubjects, P redicates, and O bjects as delimiters. 
For example, the plan of the example in Figure 1 will be: <S> Aston Martin V8 <P> assembly <O> United Kingdom <S> United Kingdom <P> capital <O> London <S> Aston Martin V8 <P> successor <O> Aston Martin Virage <S> Aston Martin Virage <P> manufacturer <O> Aston Martin. Finally, we use an LSTM to encode the plan obtained above; we choose an LSTM because it excels at capturing sequential information. 4.3 Decoding. During decoding, we adopt an LSTM-based decoder with an attention and copy mechanism. Since we have two representations of the input triple-set, the original graph and the serialized plan, we adopt two strategies for inputting context to the decoder. The first strategy is to only use hidden states of the plan encoder as context; we refer to this strategy as PLANENC. While the serialized plan may contain some structural information, it cannot preserve all the information of the original graph. We therefore propose a second strategy, DUALENC, to incorporate the information from both the graph and the plan. More concretely, when calculating the context state m_t of the LSTM decoder at time step t, we concatenate the previous hidden state z_{t-1} and the two context vectors c_t^1 and c_t^2, and then update the current hidden state z_t as: $m_t = \mathrm{MLP}([z_{t-1}; c_t^1; c_t^2])$ (3), $z_t = \mathrm{LSTM}(z_{t-1}, [y_{t-1}; m_t])$ (4), where c_t^1 and c_t^2 are the attention-based weighted sums of the context memories from the GCN and RNN encoders, respectively, and y_{t-1} is the embedding of the previously generated token. The initial hidden state z_0 is the summation of the final states from the two encoders. For the plan encoder, we use the final state h_T of the LSTM as the context representation. For the graph encoder, we average all the hidden states and apply a two-layer perceptron to produce the final state. 5 Experiments. We conduct experiments to evaluate our Planner (Section 5.2) and the overall generation system (Section 5.3). 5.1 Dataset. We conduct experiments on the WebNLG dataset (Gardent et al., 2017; Castro Ferreira et al., 2018) used in the WebNLG challenge. For each instance, the input is a set of up to 7 RDF triples from DBPedia, and the output is their text descriptions. Each triple-set is paired with a set of (up to three) human-generated reference texts. Each reference is also paired with the order of triples it realized. We use them to train and evaluate our Planner. Overall, the dataset contains 9,674 unique triple-sets and 25,298 text references, and is divided into training, development, and test sets. The test set contains two subsets, the SEEN part where the instances belong to one of the nine domains that are seen in the training and development set (such as Astronaut and Food), and the UNSEEN part where the instances are from the other five unseen domains.
The UNSEEN part is designed to evaluate models' generalizability to out-of-domain instances.", "As previous work suggests, planning plays a crucial role in text generation.", "We therefore first investigate the performance of our planner.", "During the graph encoding, we initialize the node embeddings with 100-dimensional random vectors.", "Our GCN model has two layers, with a hidden size of 100 in each layer.", "The activation function is ReLU (Nair and Hinton, 2010).", "We optimize the training objective using Adam (Kingma and Ba, 2015) with a learning rate of 0.001 and early stopping on the development set.", "The batch size is 100.", "We compare our results with the following six baseline planners: Random: returns a random permutation of the input triples as a plan; Structure-Random: returns a random traversal over the input graph.", "We report the highest score among three random strategies: random walk, random BFS, and random DFS. (Code is available at https://github.com/zhaochaocs/DualEnc; the WebNLG challenge page is http://webnlg.loria.fr/pages/index.)", "Step-By-Step (Moryossef et al., 2019a): a transition-based statistical ranking method; Step-By-Step II (Moryossef et al., 2019b): a DFS-based method with a neural controller; GRU & Transformer (Ferreira et al., 2019): two neural Seq2Seq methods with attention.", "We report the performance on three test sets: SEEN, UNSEEN, and ALL (SEEN & UNSEEN).", "We remove all one-triple instances for the planner's evaluation since the planning for these instances is trivial.", "Results are evaluated with accuracy and BLEU-n (Papineni et al., 2002).", "For accuracy, we regard a plan as correct only if it exactly matches one of the human-generated plans.", "BLEU-n is more forgiving than accuracy.", "It is also adopted in Yao et al.
(2019) for plan evaluation.", "Here we choose n = 2 .", "Table 1 shows results of the planning experiments.", "Our GCN method significantly outperforms all the baselines (approximate randomization (Noreen, 1989; Chinchor, 1992), p < 0 .", "05 ) by a large margin on all the test sets and both measures, indicating the effectiveness of our planner.", "The most competitive baseline on ALL and UNSEEN sets is Step-By-Step, but our method is more time-efficient.", "For example, Step-By-Step needs 250 seconds to solve one 7-triple instance, but our method solves all 4928 instances in less than 10 seconds.", "For the SEEN set, the most competitive models are GRU and Transformer.", "However, while their accuracies drop by 0.46 on UNSEEN test set, our method drops only slightly by 0.02, indicating our method's better generalization power.", "We believe that this superior generalization capacity comes from the modeling of the graph structure.", "While the surface forms of triples in UNSEEN set do not overlap with those in the training data, the graph-level structural features are still shared, making it a key factor for generalization.", "GRU and Transformer linearize the graph as a sequential input, making them miss the structural information and resulting in poorer generalization capacity.", "Step-By-Step II also considers graph structure, but our model achieves better performance because we use GCN to encode the node representation, which can aggregate richer information from both the graph structure and the surface information.", "We also investigated the effect of the graph size on the plan quality.", "In Figure 5, we separate the ALL test set into six subsets according to the size of input triple-sets, to reflect the model's capacity Accuracy BLEU-2 SEENUNSEENALLSEENUNSEENALL Random 0.28 0.34 0.31 54.1 62.1 57.9 Structure-random 0.32 0.38 0.34 56.6 62.9 59.5 Transformer (Ferreira et al., 2019) 0.56 0.09 0.34 74.3 20.9 49.3 GRU (Ferreira et al., 2019) 0.56 0.10 0.35 75.8 25.4 52.2 Step-By-Step II (Moryossef et al., 2019b) 0.45 0.44 0.44 67.7 67.3 67.5 Step-By-Step (Moryossef et al., 2019a) 0.49 0.44 0.47 73.2 68.0 70.8 GCN 0.63 0.61 0.62 80.8 79.3 80.1 Table 1: Planning results of three test sets evaluated by accuracy and BLEU-2.", "at a fine-grained level.", "Fewer input triples make the planning task easier, while the 7-triple case is the most difficult one.", "The accuracy of seven out of eight baselines drops to around 0 in this case, while our method achieves an accuracy of 0 .", "19 .", "Besides this, our method consistently outperforms all the baselines for all the triple-set sizes.", "We implement the generator based on the OpenNMT toolkit.", "6 For the graph encoder, we use a similar setting as above.", "Since the generation task is more complicated than planning, we increase the dimension of the input and the hidden states to 256 .", "The plan encoder is a 2-layer bidirectional LSTM with the same dimension setting of the GCN to ease the information fusion.", "During encoding, for UNSEEN test set, we adopt delexicalization (Gardent et al., 2017) to enhance the model's generalizability to unseen domains.", "6 https://github.com/OpenNMT/OpenNMT-py", "training until the perplexity of the development set does not decrease.", "We also apply dropout on the decoding output layer with a rate of 0 .", "3 .", "The quality of the generated text (as well as those of the baselines) is evaluated through a variety of automatic measures, such as BLEU, METEOR, and TER, which are strictly the same as 
those applied in the official challenge.", "Following Marcheggiani and Perez-Beltrachini (2018), we report averaged performances over ten runs of the models.", "We compare our method with the top systems of the WebNLG challenge and published state-of-the-art systems.", "The WebNLG systems are: ADAPT: a neural system with sub-word representations to deal with rare words and sparsity.", "TILB-SMT: a statistical machine translation method using Moses and delexicalization.", "MELBOURNE: a Seq2Seq model with enriched delexicalization from DBPedia.", "The published research models are: GTR-LSTM (Trisedya et al., 2018): a graph-based triple encoder; GCN-EC (Marcheggiani and Perez-Beltrachini, 2018): a GCN-based triple encoder with GloVe embeddings and copy; GRU & Transformer (Ferreira et al., 2019): two pipeline methods with 5 sequential steps and GRU or Transformer as the encoder; STEP-BY-STEP (Moryossef et al., 2019a): a pipeline method that generates the text from plans with OpenNMT and a copy mechanism.", "Table 2 shows the results of the automatic evaluation on the generation task.", "Our PLANENC achieves the best performance on BLEU and TER, while DUALENC performs best under METEOR.", "(Because we use the official challenge's evaluation measures, some of the numbers in our table are not exactly the same as those in the cited works.)", "Both PLANENC and DUALENC significantly outperform the previous state-of-the-art (bootstrapping (Koehn and Monz, 2006), p < 0.05).", "For the SEEN part, while no existing published work performed better than ADAPT, our PLANENC achieves a 3.83 performance gain on BLEU.", "It also outperforms the single GCN encoder by 8.52 BLEU, which confirms the advantage of the planning stage for bridging the structural gap between the encoder and decoder.", "For the UNSEEN part, PLANENC and DUALENC improve BLEU by 3.82 and 2.32 compared with the previous state-of-the-art.", "While it is difficult to distinguish the performance of DUALENC and PLANENC by automatic measures, our human experiments (see Section 5.3.4) show that dual encoding generates better text compared with PLANENC.", "When comparing with the pipeline methods, one difference from the data perspective is how to obtain the plans of each instance to train the planner.", "While Step-By-Step uses heuristic string matching to extract plans from the reference sentences, other methods (GRU and Transformer), as well as ours, use plans provided in the enriched WebNLG dataset (Castro Ferreira et al., 2018).", "However, Step-By-Step reported worse BLEU results on these plans.", "To further analyze what factors contribute to the performance gain, we conduct an ablation study by", "removing the following components: Copy mechanism: The text is generated without copying from the source; Triple planning: The input triples are shuffled before feeding into the RNN, but the (s, p, o)", "inside a triple are not shuffled.", "Entity mentions: We join the words in a node mention with underscores (e.g., Aston_Martin instead of Aston Martin).", "Plan delimiter: We concatenate the (s, p, o) without separating them with role delimiters.", "We conduct the ablation study on the SEEN test set using our PLANENC.", "Table 3 shows the average performance and standard deviations.", "Compared with PLANENC, replacing plans with a random sequence of triples hurts the BLEU score by 6.61 points, indicating that the accuracy of planning is essential for the quality of generation.", "Our planning also makes the model more stable to random seeds (by decreasing the standard
deviation from 0.82 to 0.17).", "Removing the copy mechanism also decreases the BLEU score by 2.78 points.", "It demonstrates the effectiveness of copying words from the source triples rather than generating them from the vocabulary set.", "Removing the mention information, decreases the BLEU score by 2.93.", "It reflects two benefits of word mentions: to alleviate data sparsity and to coordinate with the copy mechanism.", "However, removing delimiters does not affect the BLEU much.", "Intuitively, we expected the delimiters to Absolute(%) Pairwise(%) CVGEFAITHCVGEFAITHFLCYALLMELBOURNE 83.0 75.2 -35.0 -42.5 -38.8 -68.8 STEP 96.1 89.3 5.0 -3.7 -45.0 -55.0 E2E-TRANS 85.5 78.0 -21.2 -32.5 -21.2 -46.3 GCN 79.8 76.8 -48.7 -50.0 -26.3 -67.5 PLANENC 92.3 88.2 -7.5 -12.5 -7.5 -21.2 DUALENC 94.5 91.8 Table 4: Results of human evaluation.", "help the LSTM capture the boundaries and semantic roles of each node, but the ablation study does not support it.", "We provide an example in Table 5 to show that the LSTM indeed has trouble learning such semantic roles.", "Automatic measures are based on lexical similarities and are not good measures of text quality in general.", "We therefore further conduct a human evaluation on Amazon Mechanical Turk to better access the quality of the generated texts.", "We evaluate the results for MELBOURNE, Step-By-Step, Transformer, GCN, as well as our PLANENC and DUALENC .", "We randomly select 80 test instances (440 triples in total) with the size of tripleset between 4 to 7, since they are more challenging than those with fewer triples.", "Then we evaluate the generation quality of each system with the following three measures: Coverage: the percentage of triples that are covered by the generated text (all < s, p, o > values in the triples are realized); Faithfulness: the percentage of triples that are faithfully described by the text (the text correctly expresses the predicate and also the subject and object as its arguments. No substitutions or hallucinations); Fluency: a measure of the fluency or naturalness of the generated text.", "For coverage and faithfulness, workers are asked to check each triple of an instance, and judge whether the triple is covered and faithfully described by the generated text.", "For fluency, we ask another group of workers to compare between two outputs of the same instance and identify which one is more fluent.", "Table 5 shows examples where these qualities are compromised.", "In Table 4, we report the absolute scores of coverage and faithfulness, which range from 0 to 100%.", "We also provide pairwise scores of all three measures by comparing the outputs of DUALENC with each of the other five systems.", "We report the percentage of instances that were judged to be worse/better/same than those of DUALENC , yielding a score ranging from -100% (unanimously worse) to 100% (unanimously better).", "For example, MELBOURNE performs better/worse/same than DUALENC for 10%/45%/45% of the instances, yielding a pairwise score as 10%-45%=-0.35%.", "We also report an overall pairwise score combining all three measures.", "For each instance, the overall score of one output is higher than the other iff it outperforms the other on at least one of the three measures and has a better or equal vote on the other two.", "Our PLANENC and DUALENC outperform most of the baselines on all of the measures by a large margin (approximate randomization, p < 0 . 05 . 
), which is consistent with the automatic results.", "The only exception is Step-By-Step, which has high Coverage and Faithfulness (the difference is not significant).", "It first separates the input triples into smaller subsets and then realizes them separately.", "This greatly reduces the difficulty of long-term generation, but at the expense of Fluency (worst among all the baselines).", "GCN does not perform well on Coverage, which demonstrates that the structural gap between encoding and decoding indeed makes generation more difficult.", "However, it has the smallest difference between Coverage and Faithfulness among all the baselines, indicating that the fidelity of generation can benefit from the encoding of graph-level structural information.", "By combining GCN and PLANENC, our DUALENC incorporates the advantages of both encoders while ameliorating their weaknesses, and therefore achieves the best OVERALL performance on human evaluation.", "Table 5 shows examples of generated texts by various systems for an input of six triples.", "Colored fonts represent missing, unfaithful, and disfluent information.", "For example, PLANENC misses Buzz Aldrin and also wrongly expresses the subject of retirement as Frank Borman, indicating that the LSTM is less powerful at capturing the semantic roles of entities.", "This disadvantage can be well complemented by GCN, which is designed to capture the graph structure and the relations between entities.", "(Table 5's input triple-set is: (William Anders | birthPlace | British Hong Kong), (William Anders | was a crew member of | Apollo 8), (Apollo 8 | crewMembers | Frank Borman), (Apollo 8 | backup pilot | Buzz Aldrin), (Apollo 8 | operator | NASA), (William Anders | dateOfRetirement | 1969-09-01); MELBOURNE's output is: william anders (born in british hong kong) was a crew member of apollo 8's apollo 8 8 mission along with buzz aldrin as backup pilot and buzz aldrin on 1969-09-01.)", "Hence, by incorporating information from both GCN and LSTM, DUALENC correctly expresses the subject argument of retirement.", "This paper proposes DUALENC, a dual encoding method to bridge the structural gap between encoder and decoder for data-to-text generation.", "We use GCN encoders to capture the structural information of the data, which is essential for accurate planning and faithful generation.", "We also introduce an intermediate content planning stage to serialize the data and then encode it with an LSTM network.", "This serialized plan is more compatible with the output sequence, making the information alignment between the input and output easier.", "Experiments on the WebNLG dataset demonstrate the effectiveness of our planner and generator by outperforming the previous state-of-the-art by a large margin.", "Future work will validate the effectiveness of this method on more varied data-to-text generation tasks." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "objective", "abstain", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "other", "other", "other", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "other", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "method", "method", "method", "abstain", "method", "other", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "objective", "method", "method", "abstain", "objective", "abstain" ]
[ "Practical applications of abstractive summarization models are limited by frequent factual inconsistencies with respect to their input.", "Existing automatic evaluation metrics for summarization are largely insensitive to such errors.", "We propose QAGS, 1 an automatic evaluation protocol that is designed to identify factual inconsistencies in a generated summary.", "QAGS is based on the intuition that if we ask questions about a summary and its source, we will receive similar answers if the summary is factually consistent with the source.", "To evaluate QAGS, we collect human judgments of factual consistency on model-generated summaries for the CNN/DailyMail (Hermann et al., 2015) and XSUM (Narayan et al., 2018) summarization datasets.", "QAGS has substantially higher correlations with these judgments than other automatic evaluation metrics.", "Also, QAGS offers a natural form of interpretability: The answers and questions generated while computing QAGS indicate which tokens of a summary are inconsistent and why.", "We believe QAGS is a promising tool in automatically generating usable and factually consistent text.", "Code for QAGS will be available at https://github.", "com/W4ngatang/qags .", "Automatic summarization aims to produce summaries that are succinct, coherent, relevant, and crucially factually correct.", "Recent progress in conditional text generation has led to models that can generate fluent, topical summaries (Lewis et al., 2019).", "However, model-generated summaries frequently contain factual inconsistencies, limiting their applicability (Kryscinski et al., 2019a).", "The problem of factual inconsistency is due in part to the lack of automatic evaluation metrics that can detect such errors.", "Standard metrics for 1 Pronounced kags.", "evaluating generated text are predominantly based on counting n -grams, which weigh all n -grams equally and are insensitive to semantic errors.", "This inadequacy leaves human evaluation as the primary method for evaluating the factual consistencies, which has been noted to be challenging even for humans (Daume III and Marcu, 2005; Kryscinski et al., 2019b), in addition to being slow and costly.", "We argue that evaluation metrics that are able to capture subtle semantic errors are required to build better models.", "In this work, we introduce a general framework for evaluating conditional text generation that is designed to detect factual inconsistencies in generated text with respect to some input.", "Our framework consists of three steps: (1) Given a generated text, a question generation (QG) model generates a set of questions about the text.", "(2) We then use question answering (QA) models to answer these questions given both the input and the generated text.", "(3) A quality score is computed based on the similarity of corresponding answers.", "This approach leverages recent progress in QA and QG to ask and answer human readable, on-topic questions (Devlin et al., 2019; Song et al., 2019).", "It only assumes access to a question answering dataset to train the QG and QA models, and is applicable to any modality where a QA model is available, e.g. 
text, images, or knowledge graphs.", "We use this framework to develop QAGS (Question Answering and Generation for Summarization), a metric for evaluating the factual consistency of abstractive document summaries.", "Compared to commonly used automatic metrics such as ROUGE (Lin, 2004), QAGS shows dramatically higher correlations with human judgments of factuality, for example achieving a Pearson correlation coefficient of 54.52 on the CNN/DailyMail summarization task, compared to 17.72 for ROUGE-2.", "QAGS also achieves new state-of-the-art results on evaluating the factuality of summaries, outperforming recently proposed NLI models for this task (Kryscinski et al., 2019b).", "Finally, we analyze the robustness of QAGS through an ablation study.", "QAGS shows robustness to the quality of the underlying QG and QA models, the domain of the models, and the number of questions asked.", "Even under the worst ablation settings, QAGS still has stronger correlation with human judgments than other automatic metrics.", "Overall, we contribute the following: (1) We introduce QAGS, an automatic model-based evaluation metric for measuring the factual consistency of model-generated text.", "(2) We collect a new set of human judgments of factual consistency of model-generated summaries for two summarization datasets.", "We demonstrate that QAGS correlates with these judgments significantly better than other automatic metrics.", "(3) We show via ablations that QAGS is robust to a number of factors including underlying model quality and domain mismatch.", "(4) We analyze the questions and answers produced in computing QAGS to illustrate which parts of summaries are inconsistent.", "(5) We will release models and code to compute QAGS.", "Standard approaches to evaluating generated text are primarily based on counting n-gram overlap.", "These methods assume access to one or more reference texts, and score a generated summary based on the precision and recall of all reference n-grams in the generated summary.", "We briefly describe the most common metrics in this family, and refer readers to Liu et al.
(2016) for further discussion.", "ROUGE (Lin, 2004) was developed specifically for evaluating automatic summarization, and its variants are the de facto standard for this task.", "The most common variant is ROUGE-n (typically n ∈ {1, 2}), which computes the F1 score for all reference n-grams in the generated summary.", "ROUGE-L, another commonly used variant, is based on the length of the longest common subsequence (possibly nonconsecutive) between a summary and references.", "BLEU (Papineni et al., 2002) is closely related to ROUGE but was developed for machine translation.", "BLEU computes the precision of the reference n-grams in the generated summary.", "METEOR (Lavie and Agarwal, 2007) extends BLEU by using an alignment between the generated text and a reference, as well as using stemming and synonym replacement for more flexible n-gram matching.", "We identify two key deficiencies when using these n-gram based evaluation metrics to detect factual inconsistencies in generated text.", "First, these metrics require one or more reference texts to compare against.", "Obtaining references can be expensive and challenging, and as such many text generation datasets contain only a single reference.", "This problem is exacerbated with high-entropy generation tasks, such as summarization or dialogue, where there is a very large number of acceptable outputs.", "In these settings, comparing against a single reference is woefully inadequate.", "Second, given a reference to compare against, n-gram based approaches weigh all portions of the text equally, even when only a small fraction of the n-grams carry most of the semantic content.", "Factual inconsistencies caused by minor changes may be drowned out by otherwise high n-gram overlap, making these metrics insensitive to such errors.", "For example, the sentences 'I am writing my paper in Vancouver.' and 'I am not writing my paper in Vancouver.' share nearly all unigrams and bigrams despite having the opposite meaning.", "We introduce a framework for automatically detecting factual inconsistencies in generated text while also addressing the deficiencies of current approaches.", "Let X and Y be sequences of tokens coming from a vocabulary V, where X is a source text and Y is a summary of X.", "We define p(Q | Y) as a distribution over all possible questions Q given summary Y, and p(A | Q, X) and p(A | Q, Y) as distributions over all possible answers A to a particular question Q given either the source X or the summary Y.", "We constrain the questions Q and answers A to also be sequences of tokens from V.", "Then the factual consistency of the summary Y is $\mathbb{E}_{Q \sim p(Q \mid Y)}\big[D\big(p(A \mid Q, X),\, p(A \mid Q, Y)\big)\big]$ (1), where D is some function measuring the similarity of the two answer distributions.", "This expression is maximized when Y contains a subset of the information in X such that it produces the same answer for any question from p(Q | Y).", "This happens trivially when Y = X, i.e.
we take X as its own summary, but in many cases this solution is unacceptable.", "This framework addresses the two issues with n-gram based approaches.", "Instead of requiring a reference to compare against, our framework asks questions based on the generation itself, and compares answers with the provided source text.", "Also, the use of questions focuses the metric on the semantically relevant parts of the generated text, rather than weighting all parts of the text equally.", "In practice, exactly computing the expectation in Equation 1 is intractable due to the large space of possible questions.", "One potential workaround is to randomly sample questions from p(Q | Y), but this suffers from high variance and requires many samples to obtain a good estimate.", "Instead, we focus on producing highly probable questions, e.g. as produced by beam search, which may be biased in the limit, but will require fewer questions to estimate with because of the higher quality of the questions.", "Using this framework requires specifying the question distribution p(Q | Y), the answer distributions p(A | Q, ·), and the answer similarity function D.", "We apply this framework to summarization to develop QAGS and describe our instantiations of these components.", "Question Generation To instantiate p(Q | Y), we draw on recent work on automatic question generation (QG), which models this distribution using neural seq2seq models (Du et al., 2017; Krishna and Iyyer, 2019).", "We over-sample questions, and then filter out low-quality questions as follows.", "First, we train and generate from answer-conditional QG models.", "During training, the model receives both the answer and the source article, and is trained to maximize the likelihood of the paired question.", "At test time, given a summary Y, we determine candidate answers.", "We condition on these answers and the summary to generate questions.", "Next, we filter out low-quality questions using a number of heuristics, such as removing duplicates and questions fewer than three tokens long.", "We also found it especially useful to run the QA model (see next section) on all of the candidate questions, and filter out questions for which the QA model predicted no answer or a different answer than expected.", "Question Answering We instantiate the answer distributions p(A | Q, ·) as extractive QA models, for simplicity.", "In using extractive QA models, we assume the facts are represented as text spans in the article and summary.", "Future work should explore using abstractive QA models, which could match paraphrases of the same answer.", "Answer Similarity As the similarity function D, we use the F1 score between the extracted answers: $\mathrm{F1}\big(\arg\max_A p(A \mid Q, X),\ \arg\max_A p(A \mid Q, Y)\big)$.", "The QAGS Score Given these components, we obtain the QAGS score of a generation by (1) generating K questions conditioned on the summary, (2) answering the questions using both the source article and the summary to get two sets of answers, (3) comparing corresponding answers using the answer similarity metric, and (4) averaging the answer similarity metric over all questions.", "We depict this process in Figure 1.", "
5 Experiments. 5.1 Human Evaluation. We test whether QAGS accurately measures the factual consistency of a summary with respect to a source article by computing correlations with human judgments of factual consistency.", "Datasets We focus on abstractive summarization, which is particularly interesting because factual consistency with the original text is crucial to usability, and a lack of such consistency has plagued abstractive neural summarization models (Cao et al., 2018; Falke et al., 2019; Kryscinski et al., 2019b, i.a.).", "To compare with prior work on evaluating summarization, we use two common abstractive summarization datasets, CNN/Daily Mail (CNN/DM, Hermann et al., 2015; Nallapati et al., 2016) and XSUM (Narayan et al., 2018).", "CNN/DM is a standard dataset for summarization that consists of CNN and DailyMail articles.", "Each reference summary consists of the concatenation of three editor-written, bullet-point highlights.", "For summaries, we use 235 test outputs from Gehrmann et al. (2018).", "XSUM was created by taking the first sentence of a news article as the summary, and using the rest of the article as the source.", "Consequently, XSUM summaries are significantly more abstractive than those of CNN/DM. (Table 1, summary-level Pearson correlation coefficients between various automatic metrics and human judgments of correctness; columns CNN/DM / XSUM: ROUGE-1 28.74 / 13.22; ROUGE-2 17.72 / 8.95; ROUGE-L 24.09 / 8.86; METEOR 26.65 / 10.03; BLEU-1 29.68 / 11.76; BLEU-2 25.65 / 11.68; BLEU-3 23.96 / 8.41; BLEU-4 21.45 / 5.64; BERTScore 27.63 / 2.51; QAGS 54.53 / 17.49.)", "We found that while the XSUM summaries are more abstractive, frequently there are facts (e.g. first names) in the summary that are not available in the article.", "This quirk made it especially difficult for humans and QAGS to tell when factual errors were being made by the summarization model.", "To remedy this, for human evaluation and QAGS, we prepend the summary back to the article.", "We use a subset of 239 test outputs from BART fine-tuned on XSUM (Lewis et al., 2019).", "Annotation Protocol We collect human judgments on Amazon Mechanical Turk (https://www.mturk.com/) via ParlAI (Miller et al., 2017).", "We present summaries one sentence at a time, along with the entire article.", "For each summary sentence, the annotator makes a binary decision as to whether the sentence is factually consistent with the article.", "Workers are instructed to mark non-grammatical sentences as not consistent, and copies of article sentences as consistent.", "Workers are paid $1 per full summary annotated.", "See Appendix A for further details.", "We collect 3 annotations per summary.", "To obtain a single consistency score per summary, we first take the majority vote for each sentence, then average the binary scores across summary sentences to produce a final score.", "Inter-annotator agreement as measured by Krippendorff's alpha is 0.51 and 0.34 for CNN/DM and XSUM, respectively, indicating moderate and fair agreement (Ageeva et al., 2015).", "While not perfect, these agreement numbers are in line with similar figures from previous work on summarization evaluation (Daume III and Marcu, 2005).", "Question Generation We train answer-conditional QG models by fine-tuning a pretrained BART language model (Lewis et al., 2019) on NewsQA (Trischler et al., 2017), a dataset consisting of CNN articles and crowdsourced questions.", "During training, the model receives the concatenation of the source article and an answer, and is trained to predict the
question.", "The answer, source article, and question are concatenated with intervening special tokens to mark the boundaries.", "At test time, the model receives the concaten-tation of a summary and an expected answer, and outputs question candidates.", "For each summary, we extract 10 named entities and noun phrases as answer candidates using the en-web-sm spaCy model.", "3 For each summary-answer pair, we generate questions using beam search with width 10, for a total of 100 question candidates.", "We experimented with generating via topk (Holtzman et al., 2019) and topp (Fan et al., 2018) sampling, but the generated questions, while diverse, were noisy and frequently nongrammatical.", "After filtering, we use the K = 20 most probable questions.", "If a summary has too few filtered questions, we randomly sample questions to reach the required number.", "For additional filtering and training details, see Appendix B. We implement these models with fairseq (Ott et al., 2019).", "Question Answering We train extractive QA models by fine-tuning BERT (Devlin et al., 2019) on SQuAD2.0 (Rajpurkar et al., 2018).", "We use the large-uncased BERT variant via the transformers library (Wolf et al., 2019).", "We found that allowing the model to predict that a question is unanswerable, as is the case in SQuAD2.0, is particularly useful in filtering out bad questions, as questions based on hallucinated facts in the summary should be unanswerable using the source article.", "Baselines We compare against a number of automatic evaluation metrics: ROUGE (Lin, 2004), 3 https://spacy.io/api/entityrecognizer METEOR (Lavie and Agarwal, 2007), BLEU (Pa-pineni et al., 2002), and BERTScore (Zhang et al., 2019).", "The latter uses BERT representations to compute an alignment between generation and reference tokens, and which is then used to compute a soft version of unigram F1.", "We use the large-uncased BERT variant.", "We present Pearson correlations between human-judged consistency scores and various automatic metrics in Table", "1. 
For CNN/DM, all results are significant with p < 0.01; for XSUM, all results are significant with p < 0.05.", "QAGS strongly outperforms other automatic evaluation metrics in terms of correlation with the summary-level human judgments of factual consistency.", "BLEU and ROUGE perform comparably, and lower-order n-gram metrics work better.", "BERTScore matches the best n-gram metrics on CNN/DM, but is the worst overall on XSUM.", "On CNN/DM, QAGS obtains nearly twice the correlation of the next best automatic metric (BLEU-1).", "We speculate that this large increase is due to the sensitivity of the QA model to the sentence-fusing behavior exhibited in many summarization models trained on CNN/DM (Lebanoff et al., 2019).", "When two sentences are fused to produce an incorrect summary statement, the QA model produces different answers when using the source article than when using the summary.", "On XSUM, all metrics correlate worse with human judgments than on CNN/DM, which reflects the fact that XSUM is more abstractive.", "QAGS still outperforms the next best automatic metric.", "A potential issue with model-based evaluation is that the quality of the evaluation metric may depend heavily on specific hyperparameter settings.", "We explore the extent to which this is true with QAGS by performing ablations on several factors.", "Model Quality We first consider the degree to which the quality of the underlying models impacts their evaluation capabilities.", "For QA quality, we answer this question by training QA models of varying quality by fine-tuning different versions of BERT on SQuAD.", "We present results in Table 2.", "The QA models perform similarly despite substantially different performances on the SQuAD development set. (Table 2, Pearson correlations between human judgments of factual consistency and QAGS using QA models of different qualities, as measured by performance on the SQuAD2.0 development set; columns SQuAD F1 / CNN/DM Pearson / XSUM Pearson: bert-base 75.95 / 55.20 / 20.71; bert-large 81.57 / 54.53 / 17.49; bert-large-wwm 84.36 / 51.36 / 18.07.)", "Surprisingly, using the best QA model (bert-large-wwm) does not lead to the best correlations with human judgments.", "On CNN/DM, bert-large-wwm slightly underperforms bert-base and bert-large.", "On XSUM, bert-base slightly outperforms the other two BERT variants.", "These results indicate that QAGS is fairly robust to the quality of the underlying QA model, though we note that BERT is a strong QA baseline, and using weaker QA models might lead to larger performance dropoffs.", "To ablate QG quality, we use models with increasing perplexity on the NewsQA development set.", "Results in Table 3 show that QAGS is robust to the QG model quality, with some decrease in correlation with human judgments as perplexity increases on CNN/DM, and no clear trend on XSUM.", "Even the weakest QG model still significantly outperforms all other automatic metrics in Table 1.", "
Domain Effects Our approach relies on having a labeled dataset to train QG and QA models.", "However, for relatively niche domains, such a labeled QA/QG dataset may not exist.", "Instead, we may need to resort to using models trained on out-of-domain data, leading to domain shift effects that negatively impact the quality of the QAGS scores.", "We simulate this setting by fine-tuning the QG model on SQuAD. (Table 4, Pearson correlation coefficients between QAGS scores with varying numbers of questions and human judgments of correctness; columns CNN/DM / XSUM: 5 questions 41.61 / 15.63; 10 questions 41.17 / 15.49; 20 questions 54.53 / 17.49; 50 questions 57.94 / 17.74.)", "SQuAD is of similar size to NewsQA but is drawn from Wikipedia articles rather than CNN articles, the latter exactly matching the genre of the summarization datasets.", "Evaluating with this QG model, we get correlations of 51.53 and 15.28 with human judgments on CNN/DM and XSUM respectively, versus 54.53 and 17.49 when using the NewsQA-tuned QG model.", "The drop in performance indicates a negative domain shift effect.", "However, using the SQuAD-tuned QG model still substantially outperforms all other automatic metrics, again pointing to the robustness of QAGS.", "Number of Questions Next, we investigate the correlation with human judgments when varying the number of questions used.", "Results in Table 4 show that increasing the number of questions used improves correlations with human judgments.", "We observe a large increase when moving from 10 to 20 questions, and a smaller increase from 20 to 50 questions, indicating decreasing marginal benefit from moving beyond 50 questions.", "However, we observe frequent clusters of generated questions that only differ by a few tokens.", "Encouraging greater diversity when generating questions might lead to better correlations when more questions are used.", "Still, with just 5 questions used, QAGS substantially outperforms other automatic metrics, which indicates its robustness.", "Answer Similarity Metric Finally, we consider using exact match as an alternative answer similarity metric.", "Exact match is another common evaluation metric for extractive QA, and is more restrictive than F1.", "When using EM, we obtain Pearson correlations with human judgments of 45.97 and 18.10 on CNN/DM and XSUM, as opposed to 54.53 and 17.49 when using F1.", "Several works explore the use of natural language inference (NLI) models to detect factual consistency in generated text (Welleck et al., 2019; Falke et al., 2019).", "We compare against these methods by evaluating on the sentence-ranking experiment from Falke et al. (2019).", "The experiment uses 373 triplets of source sentences from CNN/DM and two summary sentences generated from the model of Chen and Bansal (2018).", "One summary sentence is factually consistent with the source sentence, and the other is inconsistent.", "A metric (or model) is evaluated based on how often it ranks the consistent sentence higher than the inconsistent sentence.", "We present the results in Table 5.", "Results using two NLI models fine-tuned on MultiNLI (Williams et al., 2018), BERT NLI and ESIM (Chen et al., 2017), are from Falke et al.
(2019).", "FactCC (Kryscinski et al., 2019b) is an NLI-based fact-checking model that is trained on a dataset tailor made for detecting factual inconsistencies in generated text.", "QAGS outperforms these methods, while requiring no special supervision for this task.", "Interpreting QAGS The questions and answers produced in computing QAGS are directly interpretable, and highlight errors in summaries.", "We present examples of articles, summaries, and the QAGS questions and answers in Table", "6. On the first example (Table 6, top), QAGS detects several factual inconsistencies in the generated summary: The summary mistakes the first name of the attacker, the location of the attack, and the weapons used.", "Because the QG model focuses on these details, QAGS is able to correctly penalize the summary for its hallucinations.", "Because the answer candidates used are mostly named entities and noun phrases, QAGS is particularly effective at detecting errors of this kind.", "Using more diverse answer candidates may broaden the set of inconsistencies that QAGS is able to detect.", "The second example (Table 6, bottom), illustrates failure modes of QAGS.", "For example, the QA model incorrectly marks question 2 as unanswerable.", "On question 4, both answers produced are correct, but because they have no common tokens, they are marked inconsistent by QAGS.", "Error Analysis The interpretability of QAGS allows for error analysis on the metric.", "We manually annotate 400 triplets of generated questions, article answers, and summary answers that are produced in computing QAGS on the XSUM summaries, and label them by the quality of the generated questions, predicted answers, and answer similarity scores.", "Among the generated questions, 8.75% are nonsensical, while 3.00% are well-formed but unanswerable using the generated summary they were conditioned upon.", "These figures indicate that the vast majority of questions are understandable and on-topic.", "We frequently observe multiple questions with slightly different wordings, which is likely due to the low number of answer candidates in XSUM summaries (which are one sentence long) and due to beam search.", "8.25% of questions are well-formed but unanswerable using the source, which is usually due to a hallucinated fact in the summary that the QG model turns into a question.", "Among predicted answers, 1.75% of questions are potentially answerable using the summary, but are incorrectly answered.", "This percentage increases to 32.50% for the article, which indicates that the transfer ability of the QA model is lacking.", "In a small number of cases, we found that while a question had a single answer in the summary, it could have multiple answers in the article.", "Finally, for 8.00% of the examples, the question is answered correctly using both the article and summary, but the answers have high lexical variation such that F1 score fails to detect their similarity.", "While this happens in a relatively small number of cases, exploring similarity metrics other than n -gram based approaches could be useful.", "Limitations We emphasize that QAGS and our overall framework are specifically designed to detect factual inconsistencies in generated summaries relative to the source article.", "QAGS does not measure other desirable properties of generated text, Article: On Friday, 28-year-old Usman Khan stabbed reportedly several people at Fishmongers' Hall in London with a large knife, then fled up London Bridge.", "including fluency, readability, or factual 
recall.", "We therefore recommend using QAGS in conjunction with complementary evaluation metrics.", "The choices of QG and QA models in QAGS are particular to abstractive summarization and may require adaptation to be used for other conditional text generation tasks.", "For example, we expect that extractive summarization models may obtain nearly perfect QAGS scores because facts and statements are directly copied from the source article.", "Automatic summarization and its evaluation are long-standing lines of work in NLP, dating at least", "as far back as the Document Understanding Conferences (Chali and Kolla, 2004).", "The primary evaluation metric then and now is ROUGE (Lin, 2004), though much work has demonstrated the limited ability of ROUGE and its relatives to evaluate summaries (Dorr et al., 2004; Liu and Liu, 2009; Kedzie et al., 2018, i.a.).", "Other metrics have focused on specific aspects of summarization quality, including content selection (Nenkova and Passon-neau, 2004), relevance prediction (Daume III and Marcu, 2005), and many more.", "The idea of evaluating summaries by their ability to answer a set of questions is also long-standing (Mani et al., 1999).", "Like our work, Eyal et al. (2019) and Scialom et al. (2019) extend this line of work by incorporating neural network modules.", "We diverge from these works in two important ways.", "First, both works use Cloze-style questions, which are generated by masking entities in either the source document or the reference summary.", "We instead generate the questions with a model, allowing a much greater range of questions.", "Second, we produce questions conditioned on the generated summary, rather than the reference summary or source article.", "Producing questions from the generated summary is more appropriate for verifying the accuracy of the text, whereas using the reference or source measures content selection.", "There has been a recent resurgence of work leveraging NLU models for evaluating the factuality of generated text.", "Goodrich et al. (2019) use information extraction models to measure factual overlap, but facts are restricted to pre-defined schemas.", "Falke et al. (2019) investigate the use of NLI models to evaluate the factual correctness of CNN/DM summaries, and conclude that current NLI models are too brittle to be reliably used in this manner.", "Kryscinski et al. 
(2019b) train an NLI-based fact-checking model by building a dataset of factual inconsistencies based on noise heuristics.", "Our QA approach allows a finer-grained analysis, because NLI operates on complete sentences, whereas QAGS can ask many different questions about the same sentence.", "We introduce a framework for automatically detecting factual inconsistencies in conditionally generated texts and use this framework to develop QAGS, a metric for measuring inconsistencies in abstractive summarization.", "QAGS correlates with human judgments of factuality significantly better than standard automatic evaluation metrics for summarization, and outperforms related NLI-based approaches to factual consistency checking.", "QAGS is naturally interpretable: The questions and answers produced in computing QAGS indicate which tokens in a generated summary are inconsistent and why.", "The framework we present is general, and extending it to other conditional text generation tasks such as image captioning or machine translation is a promising direction.", "Inspecting the generated questions and answers, we identify the transfer ability of QA models and the rigidity of F1 score as a measure of answer similarity as two key performance bottlenecks.", "We expect that improvements in either would straightforwardly improve the quality of QAGS evaluation.", "Additionally, incorporating a content selection mechanism to focus the generated questions on salient facts is a promising direction.", "Overall, we believe QAGS demonstrates the potential of this framework to quantify and incentivize factually consistent text generation.", "We thank Margaret Li and Jack Urbanek for help with Amazon Mechanical Turk.", "AW is supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE 1342536.", "Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.", "KC was partly supported by Samsung Advanced Institute of Technology (Next Generation Deep Learning: from pattern recognition to AI) and Samsung Research (Improving Deep Learning using Latent Structure)." ]
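To make the QAGS computation described above concrete, here is a minimal sketch of the metric: answer candidates are drawn from the summary, questions are generated conditioned on the summary, each question is answered from both the summary and the source article, and the answer-similarity scores are averaged. The wrapper functions (extract_candidates, qg, qa, similarity) are hypothetical stand-ins for the paper's QG/QA models, not the authors' released code.

```python
def qags_score(article, summary, extract_candidates, qg, qa, similarity):
    """Sketch of QAGS: average answer agreement between summary and article.

    extract_candidates, qg, qa and similarity are hypothetical model wrappers.
    """
    scores = []
    for answer in extract_candidates(summary):    # e.g., named entities / noun phrases
        question = qg(summary, answer)            # question conditioned on the summary
        ans_from_summary = qa(question, summary)  # answer using the summary
        ans_from_article = qa(question, article)  # answer using the source article
        scores.append(similarity(ans_from_summary, ans_from_article))
    return sum(scores) / len(scores) if scores else 0.0
```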
[ "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "result", "method", "method", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "method", "other", "method", "abstain", "other", "other", "method", "other", "method", "other", "other", "other", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "other", "other", "other", "other", "abstain", "abstain", "other", "method", "method", "other", "other", "other", "other", "other", "abstain", "objective", "abstain", "abstain", "objective", "method", "result", "abstain", "objective", "other", "other", "other", "other", "other" ]
[ "Modern Machine Translation (MT) systems perform remarkably well on clean, in-domain text.", "However most human generated text, particularly in the realm of social media, is full of typos, slang, dialect, idiolect and other noise which can have a disastrous impact on the accuracy of MT. In this paper we propose methods to enhance the robustness of MT systems by emulating naturally occurring noise in otherwise clean data.", "Synthesizing noise in this manner we are ultimately able to make a vanilla MT system more resilient to naturally occurring noise, partially mitigating loss in accuracy resulting therefrom 1 .", "Machine Translation (MT) systems have been shown to exhibit severely degraded performance when required to translate of out-of-domain or noisy data (Luong and Manning, 2015; Sakaguchi et al., 2016; Belinkov and Bisk, 2017).", "This is particularly pronounced when systems trained on clean, formalized parallel data such as Europarl (Koehn, 2005), are tasked with translation of unedited, human generated text such as is common in domains such as social media, where accurate translation is becoming of widespread relevance (Michel and Neubig, 2018).", "Improving the robustness of MT systems to naturally occurring noise presents an important and interesting task.", "Recent work on MT robustness (Belinkov and Bisk, 2017) has demonstrated the need to build or adapt systems that are resilient to such noise.", "We approach the problem of adapting to noisy data aiming to answer two primary research questions: These authors contributed equally 1 Code available at https://github.com/ MysteryVaibhav/robust_mtnt", "1. Can we artificially synthesize the types of noise common to social media text in otherwise clean data?", "2. Are we able to improve the performance of vanilla MT systems on noisy data by leveraging artificially generated noise?", "In this work we present two primary methods of synthesizing natural noise, in accordance with the types of noise identified in prior work as naturally occurring in internet and social media based text (Eisenstein, 2013; Michel and Neubig, 2018).", "Specifically, we introduce a synthetic noise induction model which heuristically introduces types of noise unique to social media text and labeled back translation (Sennrich et al., 2015a), a data-driven method to emulate target noise.", "We present a series of experiments based on the Machine Translation of Noisy Text (MTNT) data set (Michel and Neubig, 2018) through which we demonstrate improved resilience of a vanilla MT system by adaptation using artificially noised data.", "Szegedy et al. (2013) demonstrate the fragility of neural networks to noisy input.", "This fragility has been shown to extend to MT systems (Belinkov and Bisk, 2017; Khayrallah and Koehn, 2018) where both artificial and natural noise are shown to negatively affect performance.", "Human generated text on the internet and social media are a particularly rich source of natural noise (Eisenstein, 2013; Baldwin et al., 2015) which causes pronounced problems for MT (Michel and Neubig, 2018).", "Robustness to noise in MT can be treated as a domain adaptation problem (Koehn and Knowles, 2017) and several attempts have been made to handle noise from this perspective.", "Notable approaches (Li et al., 2010; Axelrod et al., 2011) include training on varying amounts of data from the target domain.", "Luong and Manning (2015) suggest the use of fine-tuning on varying amounts of target domain data, and Barone et al. 
(2017) note a logarithmic relationship between the amount of data used in fine-tuning and the relative success of MT models.", "Other approaches to domain adaptation include weighting of domains in the system objective function (Wang et al., 2017) and specifically curated datasets for adaptation (Blodgett et al., 2017).", "Kobus et al. (2016) introduce a method of domain tagging to assist neural models in differentiating domains.", "Whilst the above approaches have shown success in specifically adapting across domains, we contend that adaptation to noise is a nuanced task and treating the problem as a simple domain adaptation task may fail to fully account for the varied types of noise that can occur in internet and social media text.", "Experiments that specifically handle noise include text normalization approaches (Baldwin et al., 2015) and (most relevant to our work) the artificial induction of noise in otherwise clean data (Sperber et al., 2017; Belinkov and Bisk, 2017).", "To date, work in the adaptation of MT to natural noise has been restricted by a lack of available parallel data.", "Michel and Neubig (2018) recently introduced a new data set of noisy social media content and demonstrate the success of fine-tuning, which we leverage in the current work.", "The dataset consists of naturally noisy data from social media sources in both English-French and English-Japanese pairs.", "In our experimentation we utilize the subset of the data for English to French which contains data scraped from Reddit.", "The data set contains training, validation and test data.", "The training data is used in fine-tuning of our model as outlined below.", "All results are reported on the MTNT test set for French-English.", "We additionally use other datasets including Europarl (EP) (Koehn, 2005) and TED talks (TED) (Ye et al., 2018) for training our models as described in Section 5.", "Our baseline MT model architecture consists of a bidirectional Long Short-Term Memory (LSTM) network encoder-decoder model with two layers.", "The hidden and embedding sizes are set to 256 and 512, respectively.", "We also employ weight-tying (Press and Wolf, 2016) between the embedding layer and projection layer of the decoder.", "For expediency and convenience of experimentation we have chosen to deploy a smaller, faster variant of the model used in Michel and Neubig (2018), which allows us to provide comparative results across a variety of settings.", "Other model parameters reflect the implementation outlined in Michel and Neubig (2018).", "In all experimental settings we employ Byte-Pair Encoding (BPE) (Sennrich et al., 2015b) using SentencePiece (https://github.com/google/sentencepiece).", "We propose two primary approaches to increasing the resilience of our baseline model to the MTNT data, outlined as follows:", "For this method, we inject artificial noise into the clean data according to the distribution of types of noise in MTNT specified in Michel and Neubig (2018).", "For every token we choose to introduce the different types of noise with some probability on both French and English sides in 100k sentences of EP.", "Specifically, we fix the probabilities of error types as follows: spelling (0.04), profanity (0.007), grammar (0.015) and emoticons (0.002).", "To simulate spelling errors, we randomly add or drop a character in a given word.", "For grammar errors and profanity, we randomly select and insert a stop word or an expletive and its translation on either side.", "Similarly for emoticons, we randomly
select an emoticon and insert it on both sides.", "[Figure 1: Pipeline for injecting noise through back translation.]", "Algorithm 1 elaborates on this procedure.", "We further propose two experimental methods to inject noise into clean data using the back-translation technique (Sennrich et al., 2015a).", "We first train both our baseline model for fr-en and an en-fr model using TED and MTNT.", "We subsequently take 100k French sentences from EP and generate a noisy version thereof by passing them sequentially through the trained models as shown in Figure 1.", "The resulting translation will be inherently noisy as a result of imperfect translation of the intervening MT system.", "The intuition behind this method is to generate noise in clean data whilst leveraging the particular style of the intermediate corpus.", "Both models are trained using TED and MTNT as in the preceding setting, save that we additionally append a tag in front of every sentence while training to indicate the origin data set of each sentence (Kobus et al., 2016).", "For generating the noisy version of 100k French sentences from EP, we append the MTNT tag in front of the sentences before passing them through the pipeline shown in Figure 1.", "[Table 2: BLEU scores reported on the MTNT test set. Baselines: Baseline Europarl (EP) 14.42; + FT w/ MTNT-train-10k 22.49; + FT w/ MTNT-train-20k 23.74; Baseline FT w/ TED-100k 10.92; + FT w/ MTNT-train-20k 24.10. Synthetic Noise Induction: Baseline FT w/ EP-100k-SNI 13.53; + FT w/ MTNT-train-10k 22.67; + FT w/ MTNT-train-20k 25.05. Un-tagged Back Translation: Baseline FT w/ EP-100k-UBT 18.71; + FT w/ MTNT-train-10k 22.75; + FT w/ MTNT-train-20k 24.84. Tagged Back Translation: Baseline FT w/ EP-100k-TBT 20.49; + FT w/ MTNT-train-10k 23.89; + FT w/ MTNT-train-20k 25.75.]", "6 Results We present quantitative results of our experiments in Table 2.", "Of specific note is the apparent correlation between the amount of in-domain training data and the resulting BLEU score.", "The tagged back-translation technique produces the most pronounced increase in BLEU score of +6.07 points (14.42 to 20.49).",
49) .", "This represents a particularly significant result given that we do not fine-tune the baseline model on in-domain data, attributing this gain to the quality of the noise generated.", "The results for all our proposed experimental methods further imply that out-of-domain clean data can be leveraged to make the existing MT models robust on a noisy dataset.", "However, sim-Systems Output REFERENCE > And yes, I am an idiot with a telephone in usb-c...", "F*** that's annoying, I had to invest in new cables when I changed phones.", "Baseline (trained on EP) And yes, I am an eelot with a phone in the factory ...", "P***** to do so, I have invested in new words when I have changed telephone.", "FT w/ MTNT-train-20k > And yes, I am an idiot with a phone in Ub-c.", "Sh**, it's annoying that, I have to invest in new cable when I changed a phone.", "FT w/ EP-100k-TBT And yes, I'm an idiot with a phone in the factory...", "Puard is annoying that, I have to invest in new cables when I changed phone.", "FT w/ EP-100k-TBT > And yes, I am an idiot with a phone in USb-c...", "Sh** is annoying that, I have to invest in new cables when I changed a phone.", "+ MTNT-train-20k Table 3: Output comparison of decoded sentences across different models.", "ply using clean data is not that beneficial as can be seen from the experiment involving FT Baseline w/ TED-100k .", "We further present analysis of both methods introduced above.", "Figure 2 illustrates the relative effect of varying the level of SNI on the BLEU score as evaluated on the newsdiscuss2015 4 dev set, which is a clean dataset.", "From this we note that the relationship between the amount of noise and the effect on BLEU score appears to be linear.", "We also note that the most negative effect is obtained by including profanity.", "Our current approach involves inserting expletives, spelling and grammatical errors at random positions in a given sentence.", "However we note that our approach might under-represent the nuanced linguistic usage of expletives in natural text, which may result in its above-mentioned effect on accuracy.", "Table 3 shows the decoded output produced by different models.", "We find that the output produced by our best model is reasonably successful at imitating the language and style of the reference.", "The output of Baseline + FT w/ EP-100k-TBT is far superior than that of Baseline , which highlights the quality of obtained back translated noisy EP through our tagging method.", "4 http://www.statmt.org/wmt15/test.tgz", "amount of supervision which is added for fine-tuning the model.", "From Table 4 we note that the Baseline + FT w/ EP-100k-TBT model already produces a reasonable translation for the input sentence.", "However, if we further fine-tune the model using only 10k MTNT data, we note that the model still struggles with generation of *very*.", "This error dissipates if we use 20k MTNT data for fine-tuning.", "These represent small nuances which the model learns to capture with increasing supervision.", "To better understand the performance difference between UBT and TBT, we evaluate the noised EP data.", "Figure 1 shows an example where we can clearly see that the style of translation obtained from TBT is very informal as opposed to the output generated by UBT.", "Both the outputs are noisy and different from the input but since the TBT method enforces the style of MTNT, the resulting output is perceptibly closer in style to the MTNT equivalent.", "This difference results in a gain of 0.9 BLEU of TBT over UBT.", 
"This paper introduced two methods of improving the resilience of vanilla MT systems to noise occurring in internet and social media text: a method of emulating specific types of noise and the use of back-translation to create artificial noise.", "Both of these methods are shown to increase system accuracy when used in fine-tuning without the need for the training of a new system and for large amounts of naturally noisy parallel data.", "The authors would like to thank the AWS Educate program for donating computational GPU resources used in this work." ]
[ "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "method", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "other", "abstain", "abstain", "method", "method", "result", "abstain", "abstain", "method", "method", "abstain", "result", "abstain", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "result", "method", "result", "abstain", "result", "result", "other", "abstain", "method", "method", "method", "abstain", "method", "result", "abstain", "abstain", "result", "abstain", "other" ]
[ "Wasifur Rahman 1 , Md.", "Kamrul Hasan 1* , Sangwu Lee 1* , Amir Zadeh 2 , Chengfeng Mao 2 , Louis-Philippe Morency 2 , Ehsan Hoque 1 1 Department of Computer Science, University of Rochester, USA 2 Language Technologies Institute, SCS, CMU, USA [email protected], [email protected], [email protected],[email protected],[email protected],[email protected],[email protected] Abstract Recent Transformer-based contextual word representations, including BERT and XLNet, have shown state-of-the-art performance in multiple disciplines within NLP.", "Fine-tuning the trained contextual models on task-specific datasets has been the key to achieving superior performance downstream.", "While finetuning these pre-trained models is straightforward for lexical applications (applications with only language modality), it is not trivial for multimodal language (a growing area in NLP focused on modeling face-to-face com-munication).", "Pre-trained models don't have the necessary components to accept two extra modalities of vision and acoustic.", "In this paper, we proposed an attachment to BERT and XLNet called Multimodal Adaptation Gate (MAG).", "MAG allows BERT and XLNet to accept multimodal nonverbal data during fine-tuning.", "It does so by generating a shift to internal representation of BERT and XLNet; a shift that is conditioned on the visual and acoustic modalities.", "In our experiments, we study the commonly used CMU-MOSI and CMU-MOSEI datasets for multimodal sentiment analysis.", "Fine-tuning MAG-BERT and MAG-XLNet significantly boosts the sentiment analysis performance over previous baselines as well as language-only finetuning of BERT and XLNet.", "On the CMU-MOSI dataset, MAG-XLNet achieves human-level multimodal sentiment analysis performance for the first time in the NLP community.", "Human face-to-face communication flows as a seamless integration of language, acoustic, and vision modalities.", "In ordinary everyday interactions, we utilize all these modalities jointly to convey our * Equal contribution intentions and emotions.", "Understanding this face-to-face communication falls within an increasingly growing NLP research area called multimodal language analysis (Zadeh et al., 2018b).", "The biggest challenge in this area is to efficiently model the three pillars of communication together.", "This gives artificial intelligence systems the capability to comprehend the multi-sensory information without disregarding nonverbal factors.", "In many applications such as dialogue systems and virtual reality, this capability is crucial to maintain the high quality of user interaction.", "The recent success of contextual word representations in NLP is largely credited to new Transformer-based (Vaswani et al., 2017) models such as BERT (Devlin et al., 2018) and XLNet (Yang et al., 2019).", "These Transformer-based models have shown performance improvement across downstream tasks (Devlin et al., 2018).", "However, their true downstream potential comes from finetuning their pre-trained models for particular tasks (Devlin et al., 2018).", "This is often done easily for lexical datasets which exhibit language modality only.", "However, this fine-tuning for multimodal language is neither trivial nor yet studied; simply because both BERT and XLNet only expect linguistic input.", "Therefore, in applying BERT and XLNet to multimodal language, one must either", "(a) forfeit the nonverbal information and fine-tune for language, or", "(b) simply extract word representations and proceed to use a 
state-of-the-art model for multimodal studies.", "In this paper, we present a successful framework for fine-tuning BERT and XLNet for multimodal input.", "Our framework allows the BERT and XLNet core structures to remain intact, and only attaches a carefully designed Multimodal Adaptation Gate (MAG) to the models.", "Using an attention conditioned on the nonverbal behaviors, MAG essentially maps the informative visual and acoustic factors to a vector with a trajectory and magnitude.", "During fine-tuning, this adaptation vector modifies the internal state of the BERT and XLNet, allowing the models to seamlessly adapt to the multimodal input.", "In our experiments we use the CMU-MOSI (Zadeh et al., 2016) and CMU-MOSEI (Zadeh et al., 2018d) datasets of multimodal language, with a specific focus on the core NLP task of multimodal sentiment analysis.", "We compare the performance of MAG-BERT and MAG-XLNet to the above (a) and (b) scenarios in both classification and regression sentiment analysis.", "Our findings demonstrate that fine-tuning these advanced pre-trained Transformers using MAG yields consistent improvement, even though BERT and XLNet were never trained on multimodal data.", "We propose an efficient framework for fine-tuning BERT and XLNet for multimodal language data.", "This framework uses a component called Multimodal Adaptation Gate (MAG) that introduces minimal overhead to both the models.", "MAG-BERT and MAG-XLNet set a new state of the art on both the CMU-MOSI and CMU-MOSEI datasets, when compared to scenarios (a) and (b).", "For CMU-MOSI, MAG-XLNet achieves performance on par with reported human performance.", "The studies in this paper are related to the following research areas:", "Multimodal language analysis is a recent research trend in natural language processing (Zadeh et al., 2018b) that helps us understand language from the modalities of text, vision and acoustics.", "These analyses have particularly focused on the tasks of sentiment analysis (Poria et al., 2018), emotion recognition (Zadeh et al., 2018d), and personality traits recognition (Park et al., 2014).", "Works in this area often focus on novel multimodal neural architectures (Pham et al., 2019; Hazarika et al., 2018) and multimodal fusion approaches (Liang et al., 2018; Tsai et al., 2018).", "Notable examples include TFN, MARN, MFN, RMFN and MulT.", "Tensor Fusion Network (TFN) (Zadeh et al., 2017) creates a multi-dimensional tensor to explicitly capture all possible interactions between the three modalities: unimodal, bimodal and trimodal.", "Multi-attention Recurrent Network (MARN) (Zadeh et al., 2018c) uses three separate hybrid LSTM memories that have the ability to propagate the cross-modal interactions.", "Memory Fusion Network (Zadeh et al., 2018a) synchronizes the information from three separate LSTMs through a multi-view gated memory.", "Recurrent Memory Fusion Network (RMFN) (Liang et al., 2018) captures the nuanced interactions among the modalities in a multi-stage manner, giving each stage the ability to focus on a subset of signals.", "Multimodal Transformer for Unaligned Multimodal Language Sequences (MulT) (Tsai et al., 2019) deploys three Transformers, each for one modality, to capture the interactions with the other two modalities in a self-attentive manner.", "The information from the three Transformers is aggregated through late-fusion.", "Learning word representations from large corpora has been an active research area in the NLP community (Mikolov et al., 2013; Pennington et al., 2014).", "Glove (Pennington et al.,
2014) and Word2Vec (Mikolov et al., 2013) contributed to advancing the state-of-the-art of many NLP tasks.", "A major setback of these word representations is their non-contextual nature.", "Recently, contextual language representation models trained on large text corpora have achieved state-of-the-art results on several NLP tasks including question answering, sentiment classification, part-of-speech (POS) tagging and similarity modeling (Peters et al., 2018; Devlin et al., 2018).", "The first two notable contextual representation based models were ELMo (Peters et al., 2018) and GPT (Radford et al., 2018).", "However, they only captured unidirectional context and therefore missed more nuanced interactions among words of a sentence.", "BERT (Bidirectional Encoder Representations from Transformers) (Devlin et al., 2018) outperforms both ELMo and GPT since it can provide better representation through capturing bi-directional context using Transformers.", "XLNet (Yang et al., 2019) gives new contextual representations through building an auto-regressive model capable of capturing all possible factorizations of the input.", "Fine-tuning pretrained models for BERT and XLNet has been a key factor in achieving state-of-the-art performance for downstream tasks.", "Even though previous works have explored using BERT to model multimodal data (Sun et al., 2019), to the best of our knowledge, directly fine-tuning BERT or XLNet for multimodal data has not previously been explored.", "To better understand the proposed multimodal framework in this paper, we first present an overview of both the BERT and XLNet models.", "We start by quickly formalizing the operations within the Transformer and Transformer-XL models, followed by an overview of BERT and XLNet.", "The Transformer is a non-recurrent neural architecture designed for modeling sequential data (Vaswani et al., 2017).", "The superior performance of the Transformer model is largely credited to a Multi-head Self-Attention module.", "Using this module, each element of a sequence is attended by conditioning on all the other sequence elements.", "Figure 2 summarizes the internal operations of a Transformer layer (for M such layers).", "Commonly, a Transformer uses an encoder-decoder paradigm.", "A stack of encoders is followed by a stack of decoders to map an input sequence to an output sequence.", "An additional embedding step with Positional Input Embedding is applied before the input goes through the stack of encoders and decoders.", "Transformer-XL (Dai et al., 2019) is an extension of the Transformer which offers two improvements: a) it enhances the capability of the Transformer to capture long-range dependencies (specifically for the case of context fragmentation), and b) it improves the capability to better predict the first few symbols (which are often crucial for the rest of the sequence).", "It does so with a recurrence mechanism designed to pass context information from one segment to the next and a relative positional encoding mechanism to enable state reuse without causing temporal confusion.", "BERT is a successful language model that provides rich contextual word representation (Devlin et al., 2018).", "It follows an auto-encoding approach: masking out a portion of input tokens and predicting those tokens based on all other non-masked tokens, thus learning a vector representation for the masked-out tokens in the process.", "We use the variant of BERT used for Single Sentence Classification Tasks.", "First, input embeddings are generated
from a sequence of word-piece tokens by adding token embeddings, segment embeddings and position embeddings.", "Then multiple Encoder layers are applied on top of these input embeddings.", "Each Encoder has a Multi-Head Attention layer and a Feed Forward layer, each followed by a residual connection with layer normalization.", "A special [CLS] token is prepended to the input token sequence.", "So, for an N-length input sequence, we get N+1 vectors from the last Encoder layer; the first of those vectors is used to predict the label of the input after that vector undergoes an affine transformation.", "XLNet (Yang et al., 2019) sets out to improve two critical aspects of the BERT model: a) independence among the masked-out tokens, and b) the pretrain-finetune discrepancy between training and inference, since inference inputs do not have masked-out tokens.", "XLNet is an auto-regressive model and is therefore free from the need of masking out certain tokens.", "However, auto-regressive models usually capture only unidirectional context (either forward or backward).", "XLNet can learn bidirectional context by maximizing likelihood over all possible permutations of factorization order.", "In essence, it randomly samples multiple factorization orders and trains the model on each of those orders.", "Therefore, it can model input by taking all possible permutations into consideration (in expectation).", "XLNet utilizes two key ideas from Transformer-XL (Dai et al., 2019): relative positioning and the segment recurrence mechanism.", "Like BERT, it also has an Input Embedder followed by multiple Encoders.", "The Embedder converts the input tokens into vectors after adding token embedding, segment embedding and relative positional embedding information.", "Each encoder consists of a Multi-Head attention layer and a feed-forward layer, each followed by a residual addition and normalization layer.", "The embedder output is fed into the encoders to get a contextual representation of the input.", "In multimodal language, a lexical input is accompanied by visual and acoustic information, namely gestures and prosody co-occurring with language.", "Consider a semantic space that captures latent concepts (positions in the latent space) for individual words.", "In the absence of multimodal accompaniments, the semantic space is directly conditioned on the language manifold.", "Simply put, each word falls within some part of this semantic space, depending only on the meaning of the word in a linguistic structure (i.e.
sentence).", "Nonverbal behaviors can have an impact on the meaning of words, and therefore on the position of words in this semantic space.", "Together, language and nonverbal accompaniments decide on the new position of the word in the semantic space.", "In this paper, we regard to this new position as addition of the language-only position with a displacement vector; a vector with trajectory and magnitude that shifts the language-only position of the word to the new position in light of nonverbal behaviors.", "This is the core philosophy behind the Multimodal Adaptation Gate (MAG).", "A particularly appealing implementation of such displacement is studied in RAVEN (Wang et al., 2018), where displacements are calculated using cross-modal self-attention to highlight relevant nonverbal information.", "Figure 1 shows the studied MAG in this paper.", "Essentially, a MAG unit receives three inputs, one is purely lexical, one is visual, and the last one is acoustic.", "Let the triplet ( Z i , A i , V i ) denote these inputs for i th word in a sequence.", "We break this displacement into bimodal factors [ Z i ; A i ] and [ Z i ; V i ] by concatenating lexical vector with acoustic and visual information respectively and use them to produce two gating vectors g vi and g ai : g vi = R ( W gv [ Z i ; V i ] + b v ) (1) g ai = R ( W ga [ Z i ; A i ] + b a ) (2) where W gv , W ga are weight matrices for visual and acoustic modality and b v and b a are scalar biases.", "R ( x ) is a non-linear activation function.", "These gates highlight the relevant information in visual and acoustic modality conditioned on the lexical vector.", "We then create a non-verbal displacement vector H i by fusing together A i and V i multiplied by their respective gating vectors: H i = g ai ( W a A i ) + g vi ( W v V i ) + b H (3) where W a and W v are weight matrices for acoustic and visual information respectively and b H is the bias vector.", "Subsequently, we use a weighted summation between Z i and its nonverbal displacement H i to create a multimodal vector Z i : Z i = Z i + H i (4) = min ( Z i 2 H i 2 , 1 ) (5) where is a hyper-parameter selected through the cross-validation process.", "Z i 2 and H i 2 denote the L 2 norm of the Z i and H i vectors respectively.", "We use the scaling factor so that the effect of nonverbal shift H i remains within a desirable range.", "Finally, we apply a layer normalization and dropout layer to Z i .", "MAG-BERTMAG-BERT is a combination of MAG applied to a certain layer of BERT network (Figure 2 demonstrates the structure of MAG-BERT as well as MAG-XLNet).", "Essentially, at each layer, BERT contains lexical vectors for i th word in the sequence.", "For the same word, nonverbal accompaniments are also available in multimodal language setup.", "MAG essentially forms an attachment to the desired layer in BERT; an attachment that allows for multimodal information to leak into the BERT model and displace the lexical vectors.", "The operations within MAG allows for the lexical vectors within BERT to adapt to multimodal information by changing their positions within the semantic space.", "Aside from the attachment of MAG, no change is made to the BERT structure.", "Given an N length language sequence L = [ L 1 , L 2 , . . . LN ] carrying word-piece tokens, a [CLS] token is appended to L so that we can use it later for class label prediction.", "Then, we input L to the Input Embedder which outputs E = [ ECLS , E 1 , E 2 , . . . 
"Then, we input $E$ to the first Encoding layer and then apply $j$ Encoders on it successively.", "After that encoding process, we get the output $Z^j = [Z^j_{CLS}, Z^j_1, Z^j_2, \ldots, Z^j_N]$, which denotes the Lexical Embeddings after $j$ layers of Encoding.", "For injecting audio-visual information into these embeddings, we prepare a sequence of triplets $[(Z^j_i, A_i, V_i)\ \forall i \in \{CLS, [1, N]\}]$ by pairing $Z^j_i$ with the corresponding $(A_i, V_i)$.", "Each of these triplets is passed through the Multimodal Adaptation Gate, which transforms the $i$-th triplet into $\bar{Z}^j_i$, a unified multimodal representation of the corresponding Lexical Embedding.", "As there exist $M = 12$ Encoder layers in our BERT model, we input $\bar{Z}^j = [\bar{Z}^j_1, \bar{Z}^j_2, \ldots, \bar{Z}^j_N]$ to the next Encoder and apply the remaining $M - j$ Encoder layers on it successively.", "At the end, we get $Z^M$ from the $M$-th Encoder layer.", "As the first element $Z^M_{CLS}$ represents the [CLS] token, it has the information necessary to make a class label prediction.", "Therefore, $Z^M_{CLS}$ goes through an affine transformation to produce a single real value which can be used to predict a class label.", "Like MAG-BERT, MAG-XLNet also has the capability of injecting audio-visual information at any of its layers using MAG.", "At each position $j$ of any of its layers, it holds the lexical vector corresponding to that position.", "Utilizing the audio-visual information available for that position, it can invoke MAG to get an appropriately shifted lexical vector in multimodal space.", "Although it mostly follows the general paradigm presented in Figure 2 verbatim, it uses the XLNet-specific Embedder and Encoders.", "One other key difference is the position of the [CLS] token.", "Unlike BERT, the [CLS] token is appended at the right end of the input token sequence, and therefore in all the intermediate representations, the vector corresponding to [CLS] will be the rightmost one.", "Following the same logic, the output from the final Encoding layer will be $Z^M = [Z^M_1, Z^M_2, \ldots, Z^M_N, Z^M_{CLS}]$.",
ZMN , ZMCLS ] .", "The last item, ZMCLS can be used for class label prediction after it goes through an affine transformation.", "In this section we outline the experiments in this paper.", "We first start by describing the datasets, followed by description of extracted features, baselines, and experimental setup.", "CMU-MOSI (CMU Multimodal Opinion Sentiment Intensity) is a dataset of multimodal language specifically focused on multimodal sentiment analysis (Zadeh et al., 2016).", "CMU-MOSI contains 2199 video segments taken from 93 Youtube movie review videos.", "The dataset has real-valued high-agreement sentiment intensity annotations in the range [ 3 , + 3 ] .", "Youtube API followed by manual correction.", "Acoustic: COVAREP (Degottex et al., 2014) is used to extract the following relevant features: fundamental frequency, quasi open quotient, normalized amplitude quotient, glottal source parameters (H1H2, Rd, Rd conf), VUV, MDQ, the first 3 formants, PSP, HMPDM 0-24 and HM-PDD 0-12, spectral tilt/slope of wavelet responses (peak/slope), MCEP 0-24.", "Visual: For the visual modality, the Facet library (iMotions, 2017) is used to extract a set of visual features including facial action units, facial landmarks, head pose, gaze tracking and HOG features.", "For each word, we align all three modalities following the convention established in (Chen et al., 2017).", "Firstly, the word alignment between language and audio is obtained using forced alignment (Yuan and Liberman, 2008).", "Afterwards, the boundary of each word denotes the co-occurring visual and acoustic features (FACET and COVAREP).", "Subsequently, for each word, the co-occurring acoustic and visual features are averaged across each feature thus achieving A i and V i vectors corresponding to word i .", "We compare the performance of MAG-BERT and MAG-XLNet to a variety of state-of-the-art models for multimodal language analysis.", "These models are trained using extracted BERT and XLNet word embeddings as their language input: TFN (Tensor Fusion Network) explicitly models both intra-modality and inter-modality dynamics (Zadeh et al., 2017) by creating a multidimensional tensor that captures unimodal, bimodal and trimodal interactions across three modalities.", "MARN (Multi-attention Recurrent Network) models view-specific interactions using hybrid LSTM memories and cross-modal interactions using a Multi-Attention Block (MAB) (Zadeh et al., 2018c).", "MFN (Memory Fusion Network) has three separate LSTMs to model each modality separately and a multi-view gated memory to synchronize among them (Zadeh et al., 2018a).", "RMFN (Recurrent Memory Fusion Network) captures intra-modal and inter-modal information through recurrent multi-stage fashion (Liang et al., 2018).", "MulT (Multimodal Transformer for Unaligned Multimodal Language Sequence) uses three sets of Transformers and combines their output in a late fusion manner to model a multimodal sequence (Tsai et al., 2019).", "We use the aligned variant of the originally proposed model, which achieves superior performance over the unaligned variant.", "We also compare our model to fine-tuned BERT and XLNet using language modality only to measure the success of the MAG framework.", "All the models in this paper are trained using Adam (Kingma and Ba, 2014) optimizer with learning rates between { 0 .", "001 , 0 .", "0001 , 0 .", "00001 } .", "We use dropouts of { 0 .", "1 , 0 .", "2 , 0 .", "3 , 0 .", "4 , 0 .", "5 } for training each model.", "LSTMs in TFN, MARN, MFN, RMFN, LFN use 
"For MulT, we use {3, 5, 7} layers in the network and {1, 3, 5} attention heads.", "All models use the designated validation set of CMU-MOSI for finding the best hyper-parameters.", "We perform two different evaluation tasks on the CMU-MOSI dataset: i) binary classification, and ii) regression.", "We formulate it as a regression problem and report Mean Absolute Error (MAE) and the correlation of model predictions with true labels.", "Besides, we convert the regression outputs into categorical values to obtain binary classification accuracy (BA) and F1 score.", "Higher values mean better performance for all the metrics except MAE.", "We use two evaluation metrics for BA and F1, one used in (Zadeh et al., 2018d) and one used in (Tsai et al., 2019).", "Table 1 shows the results of the experiments in this paper.", "We summarize the observations from the results in this table as follows: 6.1 Performance of MAG-BERT In all the metrics across the CMU-MOSI dataset, we observe that the performance of MAG-BERT is superior to state-of-the-art multimodal models that use BERT word embeddings.", "Furthermore, MAG-BERT also performs better than fine-tuned BERT.", "This essentially shows that the MAG component is allowing the BERT model to adapt to multimodal information during fine-tuning, thus achieving superior performance.", "A similar performance trend to MAG-BERT is also observed for MAG-XLNet.", "Besides performing better than the baselines and fine-tuned XLNet, MAG-XLNet achieves near-human-level performance for the CMU-MOSI dataset.", "Furthermore, we train MulT using the fine-tuned XLNet embeddings and get the following performance: 83.6/85.3, 82.6/84.2, 0.810, 0.759, which is lower than both MAG-XLNet and XLNet.", "It is notable that the p-value for the Student's t-test between MAG-XLNet and XLNet in Table 1 is lower than $10^{-5}$ for all the metrics.", "The motivation behind the experiments reported in Table 1 is as follows: we extracted word embeddings from pre-trained BERT and XLNet models and trained the baseline models using those embeddings.", "Since BERT and XLNet are often perceived to provide better word embeddings than Glove, it is not fair to compare MAG-BERT/MAG-XLNet with previous models trained with Glove embeddings.", "[Table 1: Sentiment prediction results on the CMU-MOSI dataset; metrics are BA, F1 (each under two criteria), MAE and Corr. Original (Glove): TFN 73.9/-, 73.4/-, 0.970/-, 0.633/-; MARN 77.1/-, 77.0/-, 0.968/-, 0.625/-; MFN 77.4/-, 77.3/-, 0.965/-, 0.632/-; RMFN 78.4/-, 78.0/-, 0.922/-, 0.681/-; LFN 76.4/-, 75.7/-, 0.912/-, 0.668/-; MulT -/83.0, -/82.8, -/0.871, -/0.698. BERT: TFN 74.8/76.0, 74.1/75.2, 0.955, 0.649; MARN 77.7/78.9, 77.9/78.2, 0.938, 0.691; MFN 78.2/79.3, 78.1/78.4, 0.911, 0.699; RMFN 79.6/80.7, 78.9/79.1, 0.878, 0.712; LFN 79.1/80.2, 77.3/78.1, 0.899, 0.701; MulT 81.5/84.1, 80.6/83.9, 0.861, 0.711; BERT 83.5/85.2, 83.4/85.2, 0.739, 0.782; MAG-BERT 84.2/86.1, 84.1/86.0, 0.712, 0.796. XLNet: TFN 78.2/80.1, 78.2/78.8, 0.914, 0.713; MARN 78.3/79.5, 78.8/79.6, 0.921, 0.707; MFN 78.3/79.9, 78.4/79.1, 0.898, 0.713; RMFN 79.1/81.0, 78.6/80.0, 0.901, 0.703; LFN 80.2/82.9, 79.1/81.6, 0.862, 0.701; MulT 81.7/84.4, 80.4/83.1, 0.849, 0.738; XLNet 84.7/86.7, 84.6/86.7, 0.676, 0.812; MAG-XLNet 85.7/87.9, 85.6/87.9, 0.675, 0.821; Human 85.7/-, 87.5/-, 0.710, 0.820.]", "Therefore, we retrain previous works using BERT/XLNet embeddings to establish a more
fair comparison between the proposed approach in this paper and previous work.", "[Table 3: Examples from the CMU-MOSI dataset; spoken words plus acoustic and visual behaviors, with GroundTruth / MAG-XLNet / XLNet scores. 1: And it really just lacked what made the other movies more enjoyable. + Frustrated and disappointed tone: -1.4 / -1.41 / -0.9. 2: But umm I liked it. + Emphasis on tone + positive shock through sudden eyebrow raise: 1.8 / 1.9 / 1.2. 3: Except their eyes are kind of like this welcome to the polar express. + tense voice + frown expression: -0.6 / -0.6 / 0.8. 4: Straight away miley cyrus acting miley cyrus, or lack of, she had this same expression throughout the entire film + sarcastic voice + frustrated facial expression: -1.0 / -1.2 / 0.2.]", "Based on the information from Table 1, we observe that the MAG-BERT/MAG-XLNet models substantially outperform various baseline models using BERT/XLNet/Glove embeddings.", "We also study the effect of applying MAG at different encoder layers of the XLNet.", "Specifically, we first apply the MAG to the output of the embedding layer.", "Subsequently, we apply the MAG to layer $j \in \{1, 4, 6, 8, 12\}$ of the XLNet.", "Then, we apply MAG at all the XLNet layers.", "From Table 2, we observe that earlier layers are more suitable for the application of MAG.", "We believe that earlier layers allow for better integration of the multimodal information, as they allow the word shifting to happen from the beginning of the network.", "If the semantics of words should change based on the nonverbal accompaniments, then initial layers should reflect the semantic shift; otherwise, those layers are only working unimodally.", "Besides, the higher layers of BERT learn more abstract and higher-level information about the syntactic and semantic structure of linguistic features (Coenen et al., 2019).", "Since the acoustic and visual information present in our model corresponds to each word in the utterance, it will be more difficult for the MAG to shift a vector extracted from a later layer, since that vector's information will be very abstract in nature.", "From Table 2, we see that both input-level concatenation and addition of modalities perform poorly.", "For Concatenation, we simply concatenate all the modalities.", "For Addition, we add the audio and visual information to the language embedding after mapping both of them to the language dimension.", "These results demonstrate the rationale behind using an advanced fusion mechanism like MAG.", "6.5 Results on Comparable Datasets We also perform experiments on the CMU-MOSEI dataset (Zadeh et al., 2018d) to study the generalization of our approach to other multimodal language datasets.", "Unlike CMU-MOSI, which has sentiment annotations at utterance level, CMU-MOSEI has sentiment annotations at sentence level.", "The experimental methodology for CMU-MOSEI is similar to the original paper.", "For the sake of comparison, we restrict the comparison to the binary accuracy and F1 score for the top 3 models in Table 1, since Transformer-based models take a long time to train for CMU-MOSEI.", "In the BERT category, we compare the performance of MulT (with BERT embeddings), BERT and MAG-BERT, which are respectively as follows: [83.5, 82.9] for MulT, [83.9, 83.9] for BERT, and [84.7, 84.5] for MAG-BERT.", "Similarly for the XLNet category, the results for MulT (with XLNet embeddings), XLNet and MAG-XLNet are as follows: [84.1, 83.7] for MulT, [85.4, 85.2] for XLNet, and
7 ] for MAG-XLNet.", "Therefore, superior performance of 1 Since Transformer based models take a long time to train for CMU-MOSEIMAG-BERT and MAG-XLNet also generalizes to CMU-MOSEI dataset.", "We study whether or not the superior performance of the MAG-BERT and MAG-XLNet is related to successful finetuning of the models, or related to other factors e.g. any transformer with architecture like BERT or XLNet would achieve superior performance regardless of being pretrained.", "By randomly initializing the weights of BERT and XLNet within MAG-BERT and MAG-XLNet, we get the following performance on BA for the CMU-MOSI: 70.1 and 70.7 respectively.", "This indicates that the success of the MAG-BERT and MAG-XLNet is due to successful fine-tuning.", "Even on the larger CMU-MOSEI dataset we get BA of 76.8 and 78.4 for MAG-BERT and MAG-XLNet, which further substantiates the fact that fine-tuning is successful using MAG framework.", "In Table 3, we present some examples where MAG-XLNet adjusted sentiment intensity properly by taking into account nonverbal information.", "The examples demonstrate that MAG-XLNET can successfully integrate the non-verbal modalities with textual information.", "In both Example-1 and Example-2, XLNet correctly predicted the polarity of the displayed emotion.", "However, additional information was present in the acoustic and visual domain which XLNet could not utlize.", "Given those information, MAG-XLNet could better predict the magnitude of emotion displayed in both cases.", "Although the emotion in the text of Example-3 can be portrayed as a bit positive, the tense voice and frown expression helps MAG-XLnet reverse the polarity of predicted emotion.", "Similarly, the text in Example-4 is mostly neutral, but MAG-XLNet can predict the negative emotion through the sarcastic vocal and frustrated facial expression.", "In this paper, we introduced a method for efficiently finetuning large pre-trained Transformer models for multimodal language.", "Using a proposed Multimodal Adaptation Gate (MAG), BERT and XLNet were successfully fine-tuned in presence of vision and acoustic modalities.", "MAG essentially poses the nonverbal behavior as a vector with a trajectory and magnitude, which is subsequently used to shift lexical representations within the pre-trained Transformer model.", "A unique characteristic of MAG is that it makes no change to the original structure of BERT or XLNet, but rather comes as an attachment to both models.", "Our experiments demonstrated the superior performance of MAG-BERT and MAG-XLNet.", "The code for both MAG-BERT and MAG-XLNet are publicly available here 2 Acknowledgement This research was supported in part by grant W911NF-15-1-0542 and W911NF-19-1-0029 with the US Defense Advanced Research Projects Agency (DARPA) and the Army Research Office (ARO).", "Authors AZ and LM were supported by the National Science Foundation (Awards #1750439 #1722822) and National Institutes of Health.", "Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of US Defense Advanced Research Projects Agency, Army Research Office, National Science Foundation or National Institutes of Health, and no official endorsement should be inferred." ]
[ "abstain", "other", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "method", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain" ]
[ "Distant supervision tackles the data bottleneck in NER by automatically generating training instances via dictionary matching.", "Unfortunately, the learning of DS-NER is severely dictionary-biased, which suffers from spurious correlations and therefore undermines the effectiveness and the robustness of the learned models.", "In this paper, we fundamentally explain the dictionary bias via a Structural Causal Model (SCM), categorize the bias into intra-dictionary and inter-dictionary biases, and identify their causes.", "Based on the SCM, we learn de-biased DS-NER via causal interventions.", "For intra-dictionary bias, we conduct backdoor adjustment to remove the spurious correlations introduced by the dictionary confounder.", "For inter-dictionary bias, we propose a causal invariance regularizer which will make DS-NER models more robust to the perturbation of dictionaries.", "Experiments on four datasets and three DS-NER models show that our method can significantly improve the performance of DS-NER.", "Named entity recognition (NER) aims to identify text spans pertaining to specific semantic types, which is a fundamental task of information extraction, and enables various downstream applications such as Relation Extraction (Lin et al., 2016) and Question Answering (Bordes et al., 2015).", "The past several years have witnessed the remarkable success of supervised NER methods using neural networks (Lample et al., 2016; Ma and Hovy, 2016; Lin et al., 2020), which can automatically extract effective features from data and conduct NER in an end-to-end manner.", "Unfortunately, supervised methods rely on high-quality labeled data, which is very labor-intensive, and thus severely restricts Corresponding authors 0.3 0.4 0.5 0.6 D1 D2 D3 D4 All mentions", "the application of current NER models.", "To resolve the data bottleneck, a promising approach is distant supervision based NER (DS-NER).", "DS-NER automatically generates training data by matching entities in easily-obtained dictionaries with plain texts.", "Then this distantly-labeled data is used to train NER models, commonly be accompanied by a denoising step.", "DS-NER significantly reduces the annotation cost for building an effective NER model, and therefore has attracted great attention in recent years (Yang et al., 2018; Shang et al., 2018; Peng et al., 2019; Cao et al., 2019; Liang et al., 2020; Zhang et al., 2021).", "However, the learning of DS-NER is dictionary-biased , which severely harms the generalization and the robustness of the learned DS-NER models.", "Specifically, entity dictionaries are often incomplete (missing entities), noisy (containing wrong entities), and ambiguous (a name can be of different entity types, such as Washington ).", "And DS will generate positively-labeled instances from the in-dictionary names but ignore all other names.", "Such a biased dataset will inevitably mislead the learned models to overfit in-dictionary names and underfit out-of-dictionary names.", "We refer to this as intra-dictionary bias .", "To illustrate this bias, Figure 1", "(a) shows the predicting likelihood of a representa-.02", "tive DS-NER model (RoBERTa + Classifier (Liang et al., 2020)).", "We can see that there is a remarkable likelihood gap between in-dictionary mentions and out-of-dictionary mentions: the average likelihoods of out-of-dictionary mentions are < 0 .", "2 , which means that a great majority of them cannot be recalled.", "Furthermore, such a skewed distribution makes DS-NER models very sensitive to slight 
perturbations.", "We refer to this as inter-dictionary bias , i.e., different dictionaries can result in very different model behaviors.", "In the example shown in Figure 1", "(b), we train the same DS-NER model by respectively using 4 dictionaries sampled from the same original dictionary, where each of them covers 90% of entities in the original one.", "We can see that the predicting likelihood diverges significantly even these 4 dictionaries share the majority part.", "Consequently, the dictionary-biased learning will undermine both the effectiveness and robustness of DS-NER models.", "In this paper, we propose a causal framework to fundamentally explain and resolve the dictionary bias problem in DS-NER.", "We first formulate the procedure of DS-NER from the causal view with a Structural Causal Model (SCM) (Pearl et al., 2000), which is shown in the left part of Figure 2.", "From the SCM, we identified that the intra-dictionary bias stemming from the dictionary which serves as a confounder during the model learning.", "The dictionary confounder will introduce two backdoor paths, one from positively-labeled instances ( X p ) to entity labels ( Y ) and the other from negatively-labeled instances ( X n ) to entity labels.", "These backdoor paths introduce spurious correlations during learning, therefore result in the intra-dictionary bias.", "Furthermore, the current learning criteria of DS-NER models is to optimize over the correlations between the instances ( X ) and entity types ( Y ) given one specific dictionary ( D ), namely P ( Y | X, D ) .", "Such criteria, however, diverges from the primary goal of learning a dictionary-free NER model (i.e., P ( Y | X ) ), and results in the inter-dictionary bias.", "Based on the above analysis, unbiased DS-NER should remove the spurious correlations introduced by backdoor paths and capture the true dictionary-free causal relations.", "To this end, we conduct causal interventions to de-bias DS-NER from the biased dictionary.", "For intra-dictionary bias, we intervene on the positive instances and the negative instances to block the backdoor paths in SCM, then the spurious correlations introduced by dictionary confounder will be removed.", "Specifically, we conduct backdoor adjustment to learn de-biased DS-NER models, i.e., we optimize the DS-NER model based on the causal distribution, rather than from the spurious correlation distribution.", "For inter-dictionary bias, we propose to leverage causal invariance regularizer (Mitrovic et al., 2021), which will make the learned representation more robust to the perturbation of dictionaries.", "For each instance in the training data, causal invariance regularizer will preserve the underlying causal effects unchanged across different dictionaries.", "The proposed method is model-free, which can be used to resolve the dictionary bias in different DS-NER models by being applied as a plug-in during model training.", "We conducted experiments on four standard DS-NER datasets: CoNLL2003, Twitter2005, Webpage, and Wikigold.", "Experiments on three state-of-the-art DS-NER models show that the proposed de-biasing method can effectively solve both intra-dictionary and inter-dictionary biases, and therefore significantly improve the performance and the robustness of DS-NER in almost all settings.", "Generally, the main contributions of this paper are: We proposed a causal framework, which not only fundamentally formulates the DS-NER process, but also explains the causes of both intra-dictionary bias and 
inter-dictionary bias.", "Based on the causal framework, we conducted causal interventions to de-bias DS-NER.", "For intra-dictionary bias, we conduct causal interventions via backdoor adjustment to remove spurious correlations introduced by the dictionary confounder.", "For inter-dictionary bias, we propose a causal invariance regularizer which will make DS-NER models more robust to the perturbation of dictionaries.", "Experimental results on four standard DS-NER datasets and three DS-NER models demonstrate that our method can significantly improve the performance and the robustness of DS-NER.", "In this section, we formulate DS-NER with a structural causal model (SCM), then identify the causes of both intra-dictionary bias and inter-dictionary bias using the SCM.", "An SCM captures the causal effect between different variables and describes the generative process of a causal distribution, which can be visually presented using a directed acyclic graph (DAG).", "In SCM, each node represents a random variable, and a directed edge represents a direct causal relationship between two variables.", "Based on SCM, the confounders and backdoor paths (Pearl et al., 2000) can be identified.", "In the following, we will describe the causal view of DS-NER and then identify the dictionary bias.", "Figure 2 shows the structural causal model for DS-NER, which contains 7 key variables in the DS-NER procedure: 1) the applied dictionary D for distant annotation; 2) the unlabeled instances X , where each instance is a pair of (mention candidate, context), and in training stage X will be", "automatically labeled by D ; 3) the positive training instances X p , which are instances in X being labeled as positive instances (i.e., entity mentions) by dictionary D ; 4) the negative training instances X n , which are instances being labeled as negative instances by dictionary D ; 5) the learned DS-NER model M , which summarizes NER evidences from DS-labeled data during training, and predicts new instances during testing; 6) the representations of instances R , which is encoded dense representations of instances X using the learned model M ; 7) the predicted entity labels Y of instances in X based on the representation R .", "Defining these variables, the causal process of DS-NER can be formulated using SCM into two steps: distant supervision (DS) step and NER step respectively.", "For DS step, the procedure will generate DS-labeled data and learn DS-NER models by following causal relations: D X p X and D X n X represent the distant annotation process, which uses dictionary D to annotate the unlabeled instances X and splits them into two sets: X p and X n .", "X p M X n represents the learning process, where model M is the learned DS-NER model using X p and X n .", "We denote the X p and X n generated from dictionary D as X p ( D ) and X n ( D ) respectively.", "And the causal relation in NER step can be summarized as: M R X is the representation learning procedure, which uses the learned model M to encode instances X .", "R Y represents the entity recognition process, where the labels of instances depend on the learned representation R and instances X .", "We denote the entity labels corresponding to X p and X n as Y p and Y n respectively.", "Given distant annotation X p and X n , the learning process of DS-NER will maximize the probability P ( Y p =1 , Y n =0 | X p , X n , D ) .", "Unfortunately, because D is a confounder for X p and X n in SCM, this criteria will introduce spurious correlations and result in the 
intra-dictionary bias: (1) When maximizing P(Y=1 | X_p, D), we want NER models to rely on the actual causal path X_p → Y.", "However, in the SCM there exists a backdoor path X_p ← D → X_n → M which introduces a spurious correlation between Y and X_p.", "Intuitively, this backdoor path manifests as the false negative instances in X_n.", "Because these false negative instances have correct entity contexts but out-of-dictionary names, they will mislead the models to underfit the entity context for prediction.", "(2) When maximizing P(Y=0 | X_n, D), we want NER models to rely on the actual causal path X_n → Y.", "However, in the SCM there exists a backdoor path X_n ← D → X_p → M which introduces a spurious correlation between Y and X_n.", "Intuitively, this backdoor path manifests as the false positive instances in X_p.", "Because these false positive instances have in-dictionary entity names but spurious contexts, they will mislead the models to overfit the names in the dictionary.", "In general, the intra-dictionary bias is caused by the backdoor paths introduced by D, and this bias will mislead the NER models to overfit names in the dictionary and underfit the context of entities.", "As mentioned above, DS-NER models are learned by fitting P(Y_p=1, Y_n=0 | X_p, X_n, D).", "This criterion will mislead the model into learning the correlation between X and Y together with spurious information in D, because the learning criterion is conditioned on it.", "However, a robust NER model should fit the underlying distribution P(Y | X), rather than the dictionary-conditioned distribution P(Y | X, D).", "In the SCM, the dictionary D significantly influences the learned NER model M, which in turn results in different learned causal effects along the path X → R → Y and different entity predictions Y.", "As a result, DS-NER models will fit different underlying distributions given different dictionaries, which results in the inter-dictionary bias.", "In real-world applications, however, dictionaries are affected by various factors, such as source, coverage or time.", "Therefore, to enhance the robustness of the learning process, it is critical to alleviate the spurious influence of the dictionary D on the learned causal effects between X and Y.", "That is, we want DS-NER models to capture dictionary-invariant entity evidence, rather than fit dictionary-specific features.", "In this section, we describe how to de-bias DS-NER.", "Specifically, for intra-dictionary bias, we propose to use backdoor adjustment to block the backdoor paths.", "For inter-dictionary bias, we design a causal invariance regularizer to capture the dictionary-invariant evidence for NER.", "Based on the analysis in Section 2.2, the intra-dictionary bias is caused by the backdoor paths X_p ← D → X_n → M and X_n ← D → X_p → M.", "To remove these biases, we block both backdoor paths by intervening on both X_p and X_n.", "After the causal intervention, the learning of DS-NER models will fit the correct causal relations P(Y_p=1 | do(X_p(D)), X_n) and P(Y_n=0 | do(X_n(D)), X_p).", "Here do(X_p(D)) = do(X_p = X_p(D)) denotes the mathematical operation of intervening on X_p and fixing it to X_p(D) over the whole population.", "To compute the distribution P(Y_p=1 | do(X_p(D))) after the causal intervention, we conduct backdoor adjustment according to causal theory (Pearl, 2009):", "P_pos(D) := P(Y_p=1 | do(X_p(D))) = Σ_i P(Y_p=1 | X_p(D), X_n(D_i)) P(D_i)   (1)", "where X_n(D_i) denotes the negative instances
generated from the DS dictionary D_i.", "P(Y_p=1 | X_p(D), X_n(D_i)) is the probability of predicting X_p(D) as Y=1, which can be formulated using a neural network-based DS-NER model parametrized by θ, i.e., P(Y | X_p, X_n) = P(Y | X_p, X_n; θ).", "Detailed derivations are shown in Appendix A. Note that the distribution P(Y_p=1 | do(X_p(D))) in the causal framework is not the marginalized distribution P(Y_p=1 | X_p(D)) in the probability framework.", "Otherwise, the marginalization would have to take place over the conditional distribution P(D_i | X_p) rather than over P(D_i).", "Furthermore, as shown in Figure 3, X_p = X_p(D_i) and X_n = X_n(D_j) cannot happen together in the probabilistic view unless D_i = D_j.", "However, in the causal view, they can happen together via the causal intervention.", "That is, do(X_p = X_p(D_i)) together with X_n = X_n(D_j), which is shown in Figure 3 (c).", "For more details, please refer to (Neal, 2020) for a brief introduction.", "Similarly, to block the backdoor paths and calculate the causal distribution P(Y_n=0 | do(X_n(D))), we can conduct backdoor adjustment on X_n by: P_neg(D) := P(Y_n=0 | do(X_n(D))) = Σ_i P(Y_n=0 | X_n(D), X_p(D_i)) P(D_i)   (2) Estimating Dictionary Probabilities.", "Because we only have one global dictionary D, it is hard to estimate the probabilities of the other dictionaries D_i used in Equations (1) and (2).", "To tackle this problem, we sample K sub-dictionaries by sampling entities from the global dictionary D.", "The probability of each entity being sampled corresponds to its utterance frequency in a large-scale corpus.", "We then apply a uniform probability assumption to these sampled dictionaries, which means that the sub-dictionaries are used to conduct backdoor adjustment with equal dictionary probabilities, i.e., P(D_i) = 1/K.", "Learning DS-NER Models with Causal Relations.", "Given the above two causal distributions after backdoor adjustment, the DS-NER models can be effectively learned, and the intra-dictionary bias can be eliminated based on the causal relations between X_p, X_n and Y.", "Formally, we optimize DS-NER models by minimizing the following negative log-likelihood based on the causal relations (a minimal code sketch of this objective appears after this record): L_BA(θ) = -log P_pos(D) - log P_neg(D)   (3) Note that the proposed method is model-free, which means that it can be applied to the majority of previous DS-NER methods by adaptively changing the underlying parametrization of the probability distribution P(Y | X_p, X_n; θ).", "This section describes the causal invariance regularizer used to eliminate the inter-dictionary bias.", "Specifically, after backdoor adjustment for the intra-dictionary bias, the causal distributions we optimize (i.e., P_pos(D) and P_neg(D)) still depend on the dictionary D.", "As a result, given different dictionaries, DS-NER models will fit different underlying causal distributions, resulting in inter-dictionary bias.", "Ideally, a robust DS-NER learning algorithm should be dictionary-free, i.e., we should directly optimize towards the implicit distribution P(Y | X).", "However, it is impossible to achieve this directly because the golden answer Y of X is invisible in DS-NER.", "To enhance the robustness of the learning process, this section proposes a causal invariance regularizer, which ensures that DS-NER models learn useful entity evidence for NER rather than fitting dictionary-specific features.", "Specifically, the goal of causal
invariance (Pearl et al., 2000) is to ensure that the learned NER models keep similar causal effects under different dictionaries, which can be formulated as: θ_inv = argmin_θ ‖P_pos(D_i) - P_pos(D_j)‖ + ‖P_neg(D_i) - P_neg(D_j)‖   (4) Here ‖·‖ measures the distance between two distributions.", "However, as mentioned above, this distance cannot be directly optimized because the golden label Y of X is unknown.", "Fortunately, in the SCM, the impact of the dictionary D on the entity label Y goes entirely through the model M and the representation R, i.e., through the path D → M → R → Y.", "As a result, the bias from the dictionary D can be eliminated by preserving the causal effects between X and any node on this path.", "A simple and reasonable solution is to preserve the causal invariance of the representation R.", "That is, given different dictionaries, we keep the causal effects from X to R unchanged, and therefore the causal effects of X → Y remain unchanged.", "Specifically, when learning causal effects given a dictionary D, the causal invariance regularizer further enhances its causal consistency with other dictionaries by minimizing its representation distances to those dictionaries (see the sampling-and-regularizer sketch following this record's labels): L_CIR(θ; D) = Σ_{i=1..K} Σ_{x∈X} ‖R_D(x; θ) - R_{D_i}(x)‖²   (5) Here R_D(x; θ) is the representation of instance x, which is derived from the NER model M by fitting the causal effects of dictionary D.", "The reference dictionaries D_i in the formulation are generated in the same way as described in Section 3.1, and K is the number of sub-dictionaries.", "Therefore, this regularizer ensures that the representations learned using different dictionaries are consistent, and the inter-dictionary bias is eliminated.", "Finally, we combine (3) and (5) to remove both the intra-dictionary bias and the inter-dictionary bias, and obtain the final DS-NER models by optimizing: L = Σ_i L_BA^i + λ L_CIR   (6) where λ is a hyper-parameter which controls the relative importance of the two losses and is tuned on the development set.", "Datasets.", "We conduct experiments on four standard datasets: (1) CoNLL2003 (Tjong Kim Sang and De Meulder, 2003) is a well-known open-domain NER dataset.", "It consists of 20744 sentences collected from 1393 English news articles and is annotated with four types: PER, ORG, LOC and MISC.", "(2) Twitter (Godin et al., 2015) is from the WNUT 2016 NER shared task.", "It consists of 7236 sentences with 10 entity types.", "(3) Webpage (Ratinov and Roth, 2009) contains 20 webpages, including personal, academic and computer-science conference homepages.", "It consists of 619 sentences with the same four types as CoNLL2003.", "(4) Wikigold (Balasuriya et al., 2009) contains 149 articles from the May 22, 2008 dump of English Wikipedia.", "It consists of 1969 sentences with the same types as CoNLL2003.", "Distant Annotation Settings.", "We use two distant annotation settings: String-Matching and KB-Matching (Liang et al., 2020).", "String-Matching labels the dataset by directly matching names in the dictionary with sentences.", "KB-Matching is more complex, using a set of hand-crafted rules to match entities.", "We find that KB-Matching can generate better data than String-Matching, but String-Matching is a more general setting.", "In our experiments, we report performance in both the KB-Matching and String-Matching settings.", "Implementation Detail.", "We implement BiLSTM-CRF with AllenNLP (Gardner et al., 2017), an open-source NLP research library, and the input vectors are the
100-dimension GloVe embeddings (Pennington et al., 2014).", "For the other baselines, we use the officially released implementations from the authors.", "We openly release our source code at github.com/zwkatgithub/DSCAU.", "The proposed de-biased training strategy is both model-free and learning-algorithm-free.", "Therefore, we use the following base DS-NER baselines and compare their performance with and without our de-biased training strategy: DictMatch, which performs NER by directly matching text with names in a dictionary, so no learning is needed.", "Fully-supervised baselines, including:", "(i) BiLSTM-CRF (Lample et al., 2016), which uses GloVe (Pennington et al., 2014) for word embeddings;", "(ii) RoBERTa-base (Liu et al., 2019), which encodes text using RoBERTa-base and then predicts token labels via a multi-layer perceptron.", "Naive Distant Supervision (Naive), which directly uses the weakly labeled data to train a fully-supervised model.", "It can be considered the lower bound of DS-NER.", "Positive-Unlabeled Learning (PU-Learning) (Peng et al., 2019), which formulates DS-NER as a positive-unlabeled learning problem.", "It can obtain an unbiased loss estimation from unlabeled data.", "However, it assumes that there are no false positive instances, which may be incorrect in many datasets.", "BOND (Liang et al., 2020), which is a two-stage learning algorithm: in the first stage, it leverages a pre-trained language model to improve the recall and precision of the NER model; in the second stage, it adopts a self-training approach to further improve the model performance.", "Table 1 and Table 2 show the overall performance (F1 scores) of the different baselines and our methods.", "For our method, we use BA to denote backdoor adjustment, and CIR to denote the causal invariance regularizer.", "We apply our debiasing method to three base models: RoBERTa-base, PU-Learning and BOND; we therefore have 6 systems of our methods: RoBERTa+BA, RoBERTa+BA+CIR, PU-Learning+BA, PU-Learning+BA+CIR, BOND+BA, BOND+BA+CIR.", "We can see that: (1) DS-NER models are severely influenced by the dictionary bias.", "Without debiasing, the naive DS-NER baselines BiLSTM-CRF and RoBERTa-base only achieve performance comparable to the simple DictMatch baseline.", "By taking the dictionary bias into consideration, PU-Learning and BOND with our method can significantly improve the performance of DS-NER.", "Compared with DictMatch, they correspondingly achieve 4.99% and 21.98% F1 improvements on average.", "This verifies that the dictionary bias is critical for DS-NER models.", "(2) By debiasing DS-NER models via causal intervention, our method achieves significant improvements.", "Compared with their counterparts, our full methods RoBERTa+BA+CIR and BOND+BA+CIR correspondingly achieve 4.91% and 3.18% improvements averaged over the four datasets in KB-Matching (5.75% and 2.56% improvements in String-Matching), and PU-Learning+BA+CIR achieves a 9.34% improvement on the CoNLL2003 dataset in KB-Matching (a 5.80% improvement in String-Matching).", "This verifies the effectiveness of using causal intervention for debiasing DS-NER.", "(3) Our method can effectively resolve both intra-dictionary and inter-dictionary biases.", "Both backdoor adjustment and the causal invariance regularizer improve the NER performance.", "By conducting backdoor adjustment, our method achieves a 3.27% F1 improvement averaged over all base models and all datasets.", "Further applying the causal invariance regularizer improves the average F1 by another 4.63%.", "To verify
whether the causal invariance regularizer can significantly improve the robustness of DS-NER across different dictionaries, we further compare the predicting likelihood of golden mentions under different dictionaries.", "Specifically, we train the same RoBERTa-Classifier DS-NER model on 4 sampled dictionaries.", "Figure 4 shows the average predicting likelihood before/after using our de-biasing method.", "From Figure 4, we can see that the proposed causal invariance regularizer significantly reduces the likelihood gaps between different dictionaries.", "This verifies that removing the inter-dictionary bias can significantly benefit the robustness of DS-NER.", "Furthermore, we can see that the likelihoods of golden mentions are remarkably increased, which indicates better NER performance.", "(Figure 4: the likelihood variance between different dictionaries before/after using the causal invariance regularizer (RoBERTa-Classifier on CoNLL2003); the variance significantly decreases, which verifies that the causal invariance regularizer can significantly improve the robustness of DS-NER.)", "(Figure 5: results across sub-dictionary coverage proportions from 40% to 80% for BOND, PUL and RoBERTa.)", "5 Related Work. DS-NER.", "Supervised NER models have achieved promising performance (Lample et al., 2016; Lin et al., 2019a,b).", "However, the reliance on labeled data limits their applications in open situations.", "Distant supervision (Mintz et al., 2009) is a promising technique to alleviate the data bottleneck for NER, which generates large-scale training data by matching sentences with external dictionaries.", "Current DS-NER studies focus on denoising the distantly labeled training data for better model learning.", "Yang et al. (2018) adopted reinforcement learning for denoising.", "Shang et al. (2018) proposed a sequence labeling framework, TieOrBreak, which can avoid noise caused by a single word.", "Cao et al. (2019) promoted the quality of the data by exploiting labels in Wikipedia.", "Peng et al. (2019) employed Positive-Unlabeled Learning to obtain an unbiased estimation of the loss value.", "Liang et al.
(2020) used a self-training method which leverages a pretrained language model as a teacher model to guide the training of a student model.", "These results all demonstrate the effectiveness of the proposed causal invariance regularizer.", "To conduct the causal intervention, our method needs to sample sub-dictionaries from the original one.", "To analyze the influence of the coverage and the quantity of sub-dictionaries, we conducted experiments on sub-dictionaries with different coverages and different quantities.", "Dictionary Coverage.", "Figure 5 shows the results with different dictionary coverages.", "We can see that our method is not sensitive to the coverage of sub-dictionaries: it achieves robust performance from 40% to 80% coverage.", "All three models achieved their best performance at 70% coverage.", "This result demonstrates the robustness of our method with respect to dictionary coverage.", "Dictionary Quantity.", "Figure 6 shows the results with different sub-dictionary quantities.", "We can see that our method achieves performance improvements by sampling more sub-dictionaries.", "This is because more sub-dictionaries lead to a more accurate estimation of both the dictionary probability in the backdoor adjustment and the dictionary variance in the causal invariance regularizer.", "Furthermore, we can see that the performance using only one sub-dictionary (i.e., DS-NER without causal intervention) is significantly worse than in the other settings, which further verifies the effectiveness of our method.", "Causal Inference.", "Causal inference (Pearl, 2009; Pearl and Mackenzie, 2018) has been widely adopted in psychology, politics and epidemiology for years (MacKinnon et al., 2007; Richiardi et al., 2013; Keele, 2015).", "It can provide more reliable explanations by removing confounding bias in data, and can also provide debiased solutions by learning causal effects rather than correlation effects.", "Recently, many causal inference techniques have been used in computer vision (Tang et al., 2020; Qi et al., 2020) and natural language processing (Wu et al., 2020; Zeng et al., 2020).", "This paper proposes to identify and resolve the dictionary bias in DS-NER via causal intervention.", "Specifically, we first formulate DS-NER using a structural causal model, then identify the causes of both intra-dictionary and inter-dictionary biases, and finally de-bias DS-NER via backdoor adjustment and a causal invariance regularizer.", "Experiments on four datasets and three representative DS-NER models verify the effectiveness and the robustness of our method.", "This work is supported by the National Natural Science Foundation of China under Grant no. U1936207, the Beijing Academy of Artificial Intelligence (BAAI2019QN0502), scientific research projects of the State Language Commission (YW135-78), and in part by the Youth Innovation Promotion Association CAS (2018141).", "Moreover, we thank all reviewers for their valuable comments and suggestions." ]
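The backdoor-adjusted objective in Equations (1)-(3) of the record above comes down to averaging per-sub-dictionary model probabilities before taking negative logs. The following is a minimal sketch under stated assumptions: `p_pos_given` and `p_neg_given` are hypothetical arrays of probabilities assumed to come from some underlying DS-NER model, and the uniform prior P(D_i) = 1/K follows the paper's "Estimating Dictionary Probabilities" paragraph. It is an illustration, not the authors' released implementation.

```python
import numpy as np

def backdoor_adjusted_loss(p_pos_given, p_neg_given):
    """Eqs. (1)-(3) with a uniform prior P(D_i) = 1/K over K sub-dictionaries.

    p_pos_given[i] = P(Y_p=1 | X_p(D), X_n(D_i); theta): probability that the
        positives of the full dictionary D are labeled 1 when paired with the
        negatives induced by the i-th sampled sub-dictionary (Eq. 1).
    p_neg_given[i] = P(Y_n=0 | X_n(D), X_p(D_i); theta): the symmetric term
        for the negatives (Eq. 2).
    """
    p_pos = np.mean(p_pos_given)               # Eq. (1): backdoor adjustment over D_i
    p_neg = np.mean(p_neg_given)               # Eq. (2)
    return -np.log(p_pos) - np.log(p_neg)      # Eq. (3): L_BA(theta)

# Hypothetical usage with K = 4 sub-dictionaries:
loss = backdoor_adjusted_loss([0.71, 0.64, 0.69, 0.66],
                              [0.58, 0.61, 0.55, 0.60])
```

In an actual trainer, the two probability arrays would be recomputed from the model at every step, so the averaging happens inside the computation graph.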
[ "abstain", "abstain", "objective", "abstain", "method", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "method", "result", "abstain", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "objective", "method", "method", "objective", "objective", "method", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "result", "other", "other" ]
[ "Pre-trained language models have recently shown that training on large corpora using the language modeling objective enables few-shot and zero-shot capabilities on a variety of NLP tasks, including commonsense reasoning tasks.", "This is achieved using text interactions with the model, usually by posing the task as a natural language text completion problem.", "While using language model probabilities to obtain task specific scores has been generally useful, it often requires task-specific heuristics such as length normalization, or probability calibration.", "In this work, we consider the question answering format, where we need to choose from a set of (free-form) textual choices of unspecified lengths given a context.", "We present ALC (Answer-Level Calibration), where our main suggestion is to model context-independent biases in terms of the probability of a choice without the associated context and to subsequently remove it using an unsupervised estimate of similarity with the full context.", "We show that our unsupervised answer-level calibration consistently improves over or is competitive with baselines using standard evaluation metrics on a variety of tasks including commonsense reasoning tasks.", "Further, we show that popular datasets potentially favor models biased towards easy cues which are available independent of the context.", "We analyze such biases using an associated F1-score.", "Our analysis indicates that answer-level calibration is able to remove such biases and leads to a more robust measure of model capability.", "Language models (LM), trained on large corpora, have been shown to exhibit few-shot and zero-shot learning capability (Radford et al., 2019; Brown et al., 2020) using only text interactions, as opposed to finetuning the model parameters using task specific training examples.", "Relying purely on text interactions for few-shot ability shifts the focus to designing and utilizing suitable task-specific natural language templates.", "In this work, we focus on free-form multiple choice question answering (and commonsense reasoning tasks in particular), where given a context and a set of choices of unspecified lengths, a model is required to select the most suitable choice.", "To enable zero-shot learning, the typical approach is to form textual sequences by concatenating the context independently with each choice and then scoring the concatenated strings using a pre-trained LM.", "While LM probabilities have been shown to provide useful estimates of choice probabilities given a context, there is no incentive to treat the choices as equal in the absence of the associated context.", "For example, the LM probabilities in a neutral context are likely to be determined by frequency.", "In this work, we explore the role of biases that are likely to be associated with the choices naturally due to the language modeling objective.", "We propose ALC 1 (Answer-Level Calibration), where we use a neutral context to model such biases and remove them using a scaling factor determined by how similarly a model handles the question context as compared to a neutral context.", "Further, we show that popular datasets favor models which rely on easy cues which are context independent.", "We use a bias-specific F1 score to analyze such biases.", "Our results indicate the need for answer-level calibration for more accurate estimates of model capabilities, or equivalently the design of better datasets.", "We hope our work will be useful for further research in both those 
directions.", "Specifically, we analyze context-independent biases related to length, part-of-speech (POS) and neutral context probabilities of the choices.", "1. We present ALC, a model-agnostic approach to improve the unsupervised performance of pretrained LMs for free-form multiple choice question answering, including commonsense reasoning tasks.", "2. We show that popular datasets favor models relying on context-independent easy cues and demonstrate the need for answer-level calibration to better estimate model capabilities.", "Prompts Jiang et al. (2020) show that manually created templates can be sub-optimal in extracting knowledge from LMs, and propose mining and paraphrasing-based approaches using training examples.", "Schick and Schtze (2021) highlight the importance of selecting templates for enabling few-shot learning.", "Calibration Probabilities output by neural networks are known to suffer from lack of calibration (Guo et al., 2017), including LM output probabilities (Braverman et al., 2020).", "Zhao et al. (2021) use token-level calibration to improve on few-shot classification and generation tasks.", "In contrast, we show that answer-level calibration is more suitable for the multiple choice setting that we consider.", "While we focus on free-form multiple choice questions in this work, when the choices are single tokens, for example in a classification task where the choices are True and False , answer-level calibration would behave similar to token-level calibration.", "As a result, answer-level calibration can be seen to have a more general scope as also illustrated empirically through our experiments.", "Further, our analysis (Section 3.4) shows that answer-level calibration provides a more reliable measure of model performance on datasets with potential biases.", "Finally, Jiang et al. (2021) explore supervised methods, including finetuning as well as post-hoc methods, to improve calibration using training examples.", "In this work, we focus mainly on unsupervised calibration.", "Answer-level calibration Brown et al. (2020) generally perform length normalization over the token probabilities for a choice, while observing that for a select few tasks they obtain performance gains when using an answer-level calibration scheme (which corresponds to the unscaled version in Equation 3 of ALC).", "They use task specific development sets to choose between length normalization and answer-level calibration which is undesirable for few-shot learning (Kann et al., 2019), and specifi-cally for zero-shot learning.", "In this work, we show that unscaled calibration (as in Equation 3) is suboptimal, compared to our proposed scaled version.", "More recently, Holtzman et al. (2021) also arrive at a formulation equivalent to the unscaled version of ALC but are motivated differently.", "Specifically, they hypothesize that the possibility of different surface forms of the same concept causes a competition between surface forms when scored by the LM.", "In contrast, we are motivated by calibration concerns and the presence of context-independent biases.", "We justify this motivation through bias associated evaluation (Section 5.2) for both the unscaled and scaled versions of ALC.", "One way to make the probability estimates of the choices more accurate is to enhance the context using more task-specific cues.", "For example, Brown et al. 
(2020) show that with just a few in-context examples, significant gains in performance can be obtained.", "At the same time, it has been shown that the order of examples as well as token-level calibration in such prompts can be critical for getting good performance (Zhao et al., 2021; Kumar and Talukdar, 2021).", "While the gains from enhancing the context with additional examples may be complementary to answer-level calibration, we focus on the zero-shot setting in this work.", "In the zero-shot setting, Shwartz et al. (2020), working on the question answering format, propose generating textual clarifications using the pre-trained LM itself to enhance the context and improve the zero-shot performance of pre-trained LMs on commonsense reasoning tasks.", "While their method has a much higher computational cost, we use it as an unsupervised baseline and show improvements over it on most tasks we consider.", "We introduce the problem setting and notation in Section 3.1.", "We briefly describe our motivation in Section 3.2 and discuss the core idea of removing context-independent biases in Section 3.3.", "We provide the natural language formatting used in our experiments in Section A.3.", "We discuss bias-associated measures in Section 3.4.", "We consider a problem setting where an example consists of a textual context C and K textual choices (or options) O_k, k ∈ [K], and we need to predict which choice O_k fits best in context C.", "For example, in the case of question answering, this amounts to answering a question contained in the context C.", "Additionally, we define an instance-independent neutral context C̄, where we expect all choices to be equally likely.", "Denoting the gold answer by Y, the evaluation data is comprised of N instances defined by the tuples (C_i, [O_ik], Y_i), k ∈ [K], i ∈ [N].", "Our main motivation is to evaluate the suitability of pretrained LMs for free-form multiple choice question answering, where we contend that raw conditional phrase probabilities do not satisfy a natural requirement for such tasks (Equation 2).", "We suggest and evaluate modifications to meet this requirement.", "We aim to obtain a probabilistic model M which provides estimates P_M(O | C), the probability of a choice O given the context C.", "Predictions y for an example can subsequently be made using: y = argmax_k P_M(O_k | C)   (1) We wish to build such a model using a pretrained LM, e.g., GPT2.", "Such an LM, trained on the task of next word prediction, is expected to provide estimates of word probabilities given a textual context.", "For example, given the sequence of words w_1 w_2 ... w_i, we expect GPT2 to provide probability estimates P_L(w_{i+1} | w_1 w_2 ... w_i).", "Applying the chain rule, we can obtain probability estimates of phrases given a textual context.", "For example, we could obtain estimates of P_L(O | C).", "Can P_L(O | C) serve as a proxy for P_M(O | C)?", "It is tempting to expect the LM probabilities P_L(O | C) to serve as a proxy for P_M(O | C) when we can format the task in natural language.", "However, under the assumption that all choices O_k are equally likely given a neutral context C̄, this approximation can be sub-optimal.", "For it to be optimal, we would need P_L(O_1 | C̄) = P_L(O_2 | C̄) = ...
= P_L(O_K | C̄)   (2) However, given that these are task- and instance-specific choices, there is no incentive in the language modeling objective to ensure this condition.", "To address this, we define a new score S_L(O | C) that behaves as expected with a neutral context: S_L(O_k | C) = log P_L(O_k | C) - log P_L(O_k | C̄)   (3) Predictions can subsequently be made using: y′ = argmax_k S_L(O_k | C)   (4) Scaling the bias term: Equation 3, while desirable, makes a strong assumption about how the bias is present in the LM.", "While unquestionably valid for the neutral context, the bias in a trained (on task-specific data, or on a task-independent pretraining corpus) model is likely to depend on the context as well.", "For instance, a longer or more familiar context (in terms of similarity to training contexts) may mean the model is less reliant on context-independent cues.", "We therefore define a scaled version for removing biases, where the function g outputs the scaling term (ranging in [0, 1]): S′_L(O_k | C) = log P_L(O_k | C) - g(C, C̄) log P_L(O_k | C̄)   (5) We would want this formulation to preserve the requirement in Equation 2, which was satisfied by the unscaled version in Equation 3.", "Specifically, we want g(C̄, C̄) = 1, which would assign an equal score to each choice O_k given a neutral context.", "To get a model-agnostic estimate of g, we think of log P_L(O_k | C) and log P_L(O_k | C̄) as outputs from different models M and M̄ respectively, and g as a measure of similarity between the models.", "Note that while M uses the available context C, M̄ uses only the neutral context C̄.", "The intuition is that if M and M̄ are identical, there is no new information provided by M, and we want to set g(C, C̄) = 1, leading to S′_L(O_k | C) = 0.", "On the other hand, if M and M̄ are very dissimilar, we can rely on the contextual scores of M and set g(C, C̄) = 0, leading to S′_L(O_k | C) = log P_L(O_k | C).", "Specifically, to estimate g, we compute a similarity metric between the token probabilities (across the model's entire vocabulary) output by the two models: g(C, C̄) = sim(p^f_L(C), p^f_L(C̄))", "where p^f_L indicates the probability vector output by the model across the vocabulary for the first token given the corresponding context.", "In this work, we consider the Total Variation Distance (TVD) and the Bhattacharyya Coefficient (BC) (Bhattacharyya, 1943) (see the scoring sketch following this record).", "When using TVD, we subtract it from 1 to obtain a similarity estimate: g_TVD(C, C̄) = 1 - 0.5 ‖p^f_L(C) - p^f_L(C̄)‖₁,", "while we directly use BC: g_BC(C, C̄) = Σ_v √(p^f_L(C)_v · p^f_L(C̄)_v).", "Consider an instance- and choice-specific attribute A_i(O_ik) which can take values a_j, j ∈ [J].", "If we expect the attribute to be uncorrelated with task performance, we expect a model to perform similarly when evaluating subsets with different distributions of attribute values A_i(O_k) = a_j.", "If a model relies on specific values of the attribute and the evaluation data has sufficient representation of that value, standard evaluation metrics which ignore this attribute may provide an erroneous estimate of the model capability.", "As an extreme example, consider A(·
) to denote whether the selected choice corresponds to the shortest choice among all choices O_k, k ∈ [K], with the attribute values being true/false.", "Assume then that the evaluation data is dominated by instances where A_i(Y_i) = true, i.e., with high probability, the correct answer in the evaluation data is the shortest choice.", "Consider also a model which always chooses the shortest choice, irrespective of the content.", "The model would obtain close to perfect scores using standard evaluation metrics such as accuracy against gold labels.", "To analyze the impact of such attributes, we use a macro F1 score which takes into account the partitions created by an attribute.", "Recalling that an instance is represented by the tuple (C_i, [O_ik], Y_i), i ∈ [N], and letting Ŷ_i be the model prediction, we define precision (P), recall (R) and F1 scores for each attribute value a_j, and subsequently an attribute-specific macro F1 score (F1_A) (see the metrics sketch following this record's labels).", "P(A, a_j) = #{(A_i(Ŷ_i) = a_j) & (Ŷ_i = Y_i)} / #{A_i(Ŷ_i) = a_j}   (9) R(A, a_j) = #{(A_i(Y_i) = a_j) & (Ŷ_i = Y_i)} / #{A_i(Y_i) = a_j}   (10) F1(A, a_j) = 2 P(A, a_j) R(A, a_j) / (P(A, a_j) + R(A, a_j))   (11) F1_A = Average({F1(A, a_j)})   (12) where #{·} denotes the count of the corresponding set.", "If the model performs similarly irrespective of the attribute value, the macro F1 score F1_A is equal to the standard measure of accuracy: Accuracy = #{Ŷ_i = Y_i} / N   (13) 4 Experimental Setup. The datasets used and the corresponding prompts are described in Section 4.1.", "The LMs used are described in Section 4.2 and the baseline approaches in Section 4.3.", "Experimental results and analyses are presented in Section 5.", "We used a series of commonsense reasoning tasks and evaluated on the publicly available development sets.", "We used the same versions of the data as Shwartz et al. (2020) to allow for a direct comparison: COPA (Gordon et al., 2012), CommonsenseQA (Talmor et al., 2019), MCTACO (Zhou et al., 2019), SocialIQA (Sap et al., 2019), PIQA (Bisk et al., 2020), WinoGrande (Sakaguchi et al., 2020).", "We also report on the adversarially generated large-scale SWAG dataset (Zellers et al., 2018).", "Further, we report on the AI2 Reasoning Challenge (ARC) (Clark et al., 2018), which has Easy and Challenge versions.", "As a representative dialog understanding task, we report on the DREAM (Sun et al., 2019) dataset.", "Finally, we report on a recent benchmark introduced for measuring the multitask accuracy of pretrained models (referred to as Hendrycks in the following) (Hendrycks et al., 2020).", "Each question is associated with only one correct choice.", "For COPA, we also report on the test split due to the small size of the COPA dev set.", "The sizes of the datasets used are reported in Appendix Table 8.", "All datasets contain questions in the English language.", "We briefly describe these datasets in Section A.2.", "Examples for each dataset, along with the contextual (C) and neutral (C̄) prompts used in this work, are captured in Section A.3.", "We experiment with GPT2 (Radford et al., 2019) variants distilgpt2, gpt2-small, gpt2-medium, gpt2-large and gpt2-xl.", "The sizes of the models used are reported in Appendix Table 9.", "While the gpt-* models have been trained as described in Radford et al.
(2019), distilgpt2 has been pretrained with the supervision of GPT2 (Wolf et al., 2020).", "For most of our experiments, we utilize the gpt2-xl model.", "Please refer to Section A.1 for additional details about the experimental setup.", "Uncalibrated: Predictions are made using uncalibrated probabilities from an LM, log P_L(O | C), computed as the sum of the conditional log-probabilities output by the model for the tokens in O.", "Length normalized: Predictions are made using length-normalized probabilities from an LM, computed as the mean of the conditional log-probabilities output by the model for the tokens in O.", "Table 2 (standard zero-shot evaluation on the additional datasets):
Model | ARC Easy | ARC Challenge | DREAM | SWAG | Hendrycks Humanities | Hendrycks STEM | Hendrycks Social sciences | Hendrycks Other
Token calibration | 35.09 | 20.40 | 40.20 | 29.53 | 23.38 | 22.70 | 25.45 | 25.51
Uncalibrated | 58.25 | 27.76 | 48.14 | 49.30 | 26.99 | 24.16 | 31.52 | 31.55
Length normalized | 50.70 | 29.43 | 48.77 | 65.36 | 29.33 | 26.47 | 30.84 | 32.85
ALC Unscaled | 53.33 | 33.11 | 52.99 | 57.04 | 31.05 | 29.13 | 32.76 | 35.26
ALC TVD | 60.00 | 29.43 | 52.50 | 53.77 | 28.80 | 25.98 | 32.24 | 33.07
ALC BC | 56.49 | 33.78 | 53.14 | 59.16 | 30.31 | 27.60 | 32.60 | 34.58", "Self-talk: We use the official code repository of self-talk (Shwartz et al., 2020), using gpt2-xl as both the scoring model and the knowledge source.", "Token calibration: Following Zhao et al. (2021), we use the probability vector p^f_N output by the model at the first token given the neutral context to calibrate the model probabilities.", "Specifically, each token probability p is offset by p^f_N and renormalized: p′ = softmax(p - p^f_N).", "We also tried an alternative variant suggested by Zhao et al. (2021) where p′ = softmax(p / p^f_N), but this generally did worse and we skip the corresponding results.", "We aim to answer the following questions:", "Q1 How does ALC compare with the baselines using standard evaluation (accuracy) on free-form multiple choice question answering tasks?", "(Section 5.1)", "Q2 Does the aforementioned evaluation reflect true model capability?", "To answer this question, we perform a series of bias-associated evaluations (see Section 3.4) and also evaluate whether ALC helps overcome such biases.", "Specifically, we evaluate biases related to answer length, POS tag and context-ignorant LM probability.", "(Section 5.2)", "Q3 Does ALC improve expected calibration error (Guo et al., 2017)?", "(Section 5.3) 5.1 Standard Evaluation. The overall results for the commonsense reasoning tasks (considered by Shwartz et al.
(2020)) using standard evaluation, for ALC as well as the baselines with gpt2-xl, are presented in Table 1 (top).", "Table 4: POS bias analysis on CommonsenseQA. We consider subsets of data using the POS tag of the first token and report P, R and F1 scores (lowest values are underlined in the original). Subset sizes: noun 902, verb 149, adj 142.
Model | noun P/R/F1 | verb P/R/F1 | adj P/R/F1
Uncalibrated | 36.93/39.47/38.16 | 39.57/36.91/38.19 | 40.48/23.94/30.09
Length normalized | 35.07/31.37/33.12 | 33.00/44.97/38.07 | 33.33/33.80/33.57
ALC (Unscaled) | 48.68/45.01/46.77 | 43.75/56.38/49.27 | 52.74/54.23/53.47
ALC (BC) | 49.32/48.34/48.82 | 48.84/56.38/52.34 | 59.32/49.30/53.85", "We also report on an unscaled ablation of ALC.", "Note that ALC outperforms the uncalibrated baseline on all datasets except WinoGrande (where all models perform poorly, and we drop it from further discussions).", "Further, the significant gains compared to token calibration (which generally does worse than the uncalibrated baseline) show that answer-level calibration is better suited for unsupervised commonsense question answering when there is no constraint on the lengths of candidate choices.", "Finally, ALC outperforms or is competitive with self-talk while being significantly less computationally intensive (see Section A.4 for a note explaining the unusually high relative performance of the baselines on some tasks when compared to self-talk).", "ALC requires scoring two strings (context input and neutral input) for each choice, while self-talk requires generating hundreds of clarification texts using data-dependent templates and subsequently scoring them.", "We also report the average gain over the uncalibrated baseline across gpt2 models of varying sizes (Table 9) in Table 1 (middle) and observe similar trends as in the case of gpt2-xl.", "While our focus is zero-shot unsupervised evaluation, we also perform few-shot (1-shot and 4-shot) evaluation.", "In general, for k-shot evaluation, we sample 100 sets of size k from an unseen split of the dataset (we sample from the training split for all except the COPA and MCTACO datasets, where we sample from the dev set and report on the test set).", "A few-shot context is obtained by concatenating training examples with a newline token.", "We report the average performance on the evaluation set in Table 1 (bottom) and observe similar trends as before.", "We present the standard zero-shot evaluation on additional datasets in Table 2.", "The trends are
similar except for the SWAG (see Section 5.2 for an explanation) and the Hendrycks datasets (see Table 11).", "Finally, while our focus is causal language models, we also present results using RoBERTa-large (a masked language model) in Table 10.", "Again, we observe similar trends.", "In the subsequent sections, we show that evaluation using the accuracy metric may not reveal true model capabilities, as the datasets may favor models which utilize easy cues for predicting the answer.", "Next, to gain a better understanding of the model capabilities, we analyze the performance associated with undesirable biases related to length, POS tag and context-ignorant LM probability.", "Specifically, we define the following attributes (see Section 3.4): Shortest: attribute A_i(O_ik) is set to true if O_ik is the shortest choice (in number of tokens) among the choices O_ik′, k′ ∈ [K].", "Otherwise, the attribute is set to false.", "Longest: defined similarly to Shortest, but set to true if O_ik is the longest answer and false otherwise.", "POS: attribute A_i(O_ik) is set to the POS tag of the first token in the choice O_ik.", "We don't consider POS tags which occur fewer than a threshold (25) number of times in the evaluation data.", "LM-Best: attribute A_i(O_ik) is set to true if O_ik is the most likely choice under the context-ignorant (neutral input) LM probability.", "Otherwise, it is set to false.", "LM-Worst: defined similarly to LM-Best, but set to true when O_ik is the least likely choice and false otherwise.", "Finally, we consider length-normalized versions of LM-Best and LM-Worst, referred to as LM-Norm-Best and LM-Norm-Worst respectively.", "Table 5: Context-ignorant LM bias analysis. We consider subsets of data where the correct choice corresponds to the best/worst choice as per the context-ignorant (neutral context) LM probability and report P, R and F1 scores (lowest values are underlined in the original).
PIQA (subset sizes: LM-Best 1195, LM-Worst 643)
Model | LM-Best P/R/F1 | LM-Worst P/R/F1
Uncalibrated | 70.53/94.31/80.70 | 71.67/26.75/38.96
Length normalized | 73.96/86.28/79.64 | 63.06/43.55/51.52
ALC (Unscaled) | 77.42/54.23/63.78 | 45.35/70.61/55.23
ALC (BC) | 75.74/81.00/78.29 | 59.46/51.79/55.36
ARC Easy (subset sizes: 183, 109)
Baseline | 52.96/83.06/64.68 | 72.22/23.85/35.86
Length normalized | 60.56/59.56/60.06 | 39.47/41.28/40.36
ALC (Unscaled) | 81.71/36.61/50.57 | 36.20/73.39/48.48
ALC (BC) | 64.15/55.74/59.65 | 45.04/54.13/49.17
ARC Challenge (subset sizes: 64, 86)
Uncalibrated | 23.78/68.75/35.34 | 28.57/4.65/8.00
Length normalized | 27.55/42.19/33.33 | 29.03/20.93/24.32
ALC (Unscaled) | 38.71/18.75/25.26 | 33.87/48.84/40.00
ALC (BC) | 29.76/39.06/33.78 | 36.99/31.40/33.96", "Briefly, our experiments reveal that while the datasets considered don't share a similar bias pattern, each usually suffers from at least one of the biases considered in this work, i.e., there is a drop in performance when measured using the bias-associated score.", "We present the detailed results for the commonsense reasoning tasks in Appendix Table 12, using the gpt2-xl model, while highlighting the key takeaways here.", "Recall that in the absence of biases in the model, the F1 score should match the accuracy score.", "In the following sections, we provide a more directed analysis of the presence of
such biases, on the datasets where such biases are most prominent, and of whether ALC helps alleviate such biases.", "We create subsets of the CommonsenseQA and SocialIQA dev sets with specific properties to evaluate whether the LM baseline has the associated biases and whether they are addressed by ALC.", "First, we create subsets of examples where the shortest/longest answer is the correct answer.", "We expect longer sentences to have lower probabilities than shorter sentences with the uncalibrated baseline.", "Additionally, with the length-normalized variant, where the final score is obtained as the mean of the conditional log-probabilities instead of the sum (as in the uncalibrated baseline), longer sentences could potentially be favored.", "We report the uncalibrated baseline's and ALC's performance in Table 3.", "Note that both the uncalibrated baseline and the length-normalized variant favor one subset at the cost of the other, while ALC improves on both.", "In particular, the uncalibrated baseline has a much poorer recall when the longest answer is correct.", "On the other hand, the length-normalized variant has a much poorer recall when the shortest answer is correct.", "The results indicate that ALC provides a viable alternative to length normalization for handling length biases.", "We analyze potential part-of-speech (POS) tag biases in Table 4.", "Considering the CommonsenseQA dataset, we create subsets of the data where the correct answer is of the POS tag noun, verb or adjective.", "Note that ALC shows less variation in performance (F1) across these subsets when compared to the uncalibrated baseline, while improving on each subset.", "In particular, the maximum difference in F1 scores is 8.1 for the uncalibrated baseline while it is 5.03 for ALC (BC).", "ALC also improves over the length-normalized variant for each subset.", "To understand how much of the unsupervised performance comes from context-independent LM biases, we analyze subsets where the correct answer is most/least likely without the context.", "We report the performance on the PIQA and ARC datasets in Table 5 and show that such biases indeed exist.", "The key takeaway is that the standard evaluation metrics may not give an accurate estimate of performance and that ALC provides more reliable estimates.", "Finally, we report macro F1 scores for the LM-Norm-Best and LM-Norm-Worst evaluation in Table 6 on the PIQA and SWAG datasets.", "Table 6: Context-ignorant normalized scores. LM-Norm-Best and LM-Norm-Worst macro F1 evaluation on the PIQA and SWAG datasets (lowest values are underlined and highest values are in bold in the original).
Model | Norm-Best PIQA | Norm-Best SWAG | Norm-Worst PIQA | Norm-Worst SWAG
Baseline | 65.38 | 49.38 | 65.38 | 39.90
Length normalized | 60.16 | 60.97 | 60.16 | 46.99
ALC (Unscaled) | 58.89 | 58.12 | 58.89 | 49.48
ALC (BC) | 67.92 | 59.48 | 67.92 | 51.14", "The results indicate that these datasets favor length-normalization-aware scoring irrespective of the context.", "When we measure the bias-associated score, ALC generally performs better.", "Given a score S(O_k | C) for each choice O_k, we can compute a confidence estimate conf(O_k | C) by normalizing over the choices: conf(O_k | C) = exp(S(O_k | C)) / Σ_k′ exp(S(O_k′ | C))   (14)", "Guo et al.
(2017) compute the expected calibration error (ECE) by partitioning the N confidence predictions into R equal bins B_r, r ∈ [1, R], and computing the weighted average of the absolute difference between the confidence and the accuracy in each bin: ECE = Σ_{r=1..R} (|B_r| / N) |acc(B_r) - conf(B_r)|   (15) where acc(·) and conf(·) measure the accuracy and the mean confidence in a bin, respectively.", "We set the number of bins to 20.", "We report the average difference in accuracy and ECE compared to the uncalibrated baseline across the evaluation datasets (except WinoGrande) in Table 7.", "When compared to the uncalibrated baseline, ALC reduces the calibration error while also improving performance.", "Length normalization also improves ECE, presumably by correcting for the length bias.", "However, length normalization does not improve performance on average.", "The relative performance gains of ALC can be explained by its handling of additional biases beyond the length bias.", "We propose ALC (Answer-Level Calibration), an unsupervised method to improve the performance of pretrained language models.", "We show that, when compared to existing baselines, ALC is more suitable for free-form multiple choice question answering, including commonsense reasoning tasks.", "We also show that popular datasets favor models which rely on easy cues for predictions, and that ALC provides more reliable estimates of model capabilities by getting rid of some of these biases." ]
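The scaled ALC score of Equations (3)-(5) in the record above, together with the similarity estimates g_TVD and g_BC of the same record, reduces to a few lines of array code once the language-model quantities are available. In this sketch, `logp_context` and `logp_neutral` are assumed to hold each choice's summed log-probability under the contextual and neutral inputs, and `p_first_context`/`p_first_neutral` the first-token distributions over the vocabulary; the names are illustrative and do not come from the authors' released code.

```python
import numpy as np

def tvd_similarity(p, q):
    """g_TVD = 1 - TVD(p, q), where TVD(p, q) = 0.5 * sum_v |p_v - q_v|."""
    return 1.0 - 0.5 * np.abs(p - q).sum()

def bhattacharyya_similarity(p, q):
    """g_BC = sum_v sqrt(p_v * q_v), the Bhattacharyya coefficient."""
    return np.sqrt(p * q).sum()

def alc_predict(logp_context, logp_neutral, p_first_context, p_first_neutral,
                sim=bhattacharyya_similarity):
    """S'_L(O_k | C) = log P_L(O_k | C) - g(C, C-bar) * log P_L(O_k | C-bar).

    logp_context / logp_neutral: one summed choice log-probability per O_k,
    under the contextual input C and the neutral input C-bar respectively.
    p_first_*: first-token probability vectors over the vocabulary, used to
    estimate the scaling factor g in [0, 1].
    """
    g = sim(np.asarray(p_first_context), np.asarray(p_first_neutral))
    scores = np.asarray(logp_context) - g * np.asarray(logp_neutral)  # Eq. (5)
    return int(np.argmax(scores)), scores                             # Eq. (4)
```

Passing sim=tvd_similarity gives the TVD variant, while fixing g = 1 recovers the unscaled ablation of Equation (3).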
[ "abstain", "abstain", "abstain", "method", "method", "result", "result", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "objective", "result", "method", "result", "abstain", "method", "result", "objective", "other", "other", "other", "other", "objective", "method", "abstain", "abstain", "other", "method", "other", "other", "objective", "other", "other", "objective", "objective", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "result" ]
[ "The use of explicit object detectors as an intermediate step to image captioning which used to constitute an essential stage in early work is often bypassed in the currently dominant end-to-end approaches, where the language model is conditioned directly on a mid-level image embedding.", "We argue that explicit detections provide rich semantic information, and can thus be used as an interpretable representation to better understand why end-to-end image captioning systems work well.", "We provide an in-depth analysis of end-to-end image captioning by exploring a variety of cues that can be derived from such object detections.", "Our study reveals that end-to-end image captioning systems rely on matching image representations to generate captions, and that encoding the frequency, size and position of objects are complementary and all play a role in forming a good image representation.", "It also reveals that different object categories contribute in different ways towards image captioning.", "Image captioning (IC), or image description generation, is the task of automatically generating a sentential textual description for a given image.", "Early work on IC tackled the task by first running object detectors on the image and then using the resulting explicit detections as input to generate a novel textual description, e.g. (Kulkarni et al., 2011; Yang et al., 2011).", "With the advent of sequence-to-sequence approaches to IC, e.g. (Karpathy and Fei-Fei, 2015; Vinyals et al., 2015), coupled with the availability of large image description datasets, the performance of IC systems showed marked improvement, at least according to automatic evaluation metrics like Meteor (Denkowski and Lavie, 2014) and CIDEr (Vedantam et al., 2015).", "The currently dominant neural-based IC systems are often trained end-to-end, using parallel (image, caption) datasets.", "Such systems are essentially sequential language models conditioned directly on some mid-level image features, such as an image embedding extracted from a pre-trained Convolutional Neural Network (CNN).", "Thus, they bypass the explicit detection phase of previous methods and instead generate captions directly from image features.", "Despite significant progress, it remains unclear why such systems work.", "A major problem with these IC systems is that they are less interpretable than conventional pipelined methods which use explicit detections.", "We believe that it is timely to again start exploring the use of explicit object detections for image captioning.", "Explicit detections offer rich semantic information, which can be used to model the entities in the image as well as their interactions, and can be used to better understand image captioning.", "Recent work (Yin and Ordonez, 2017) showed that conditioning an end-to-end IC model on visual representations that implicitly encode object details yields reasonably good captions.", "Nevertheless, it is still unclear why this works, and what aspects of the representation allow for such a good performance.", "In this paper, we study end-to-end IC in the context of explicit detections (Figure 1) by exploring a variety of cues that can be derived from such detections to determine what information from such representations helps image captioning, and why .", "To our best knowledge, our work is the first experimental analysis of end-to-end IC frameworks that uses object-level information that is highly interpretable as a tool for understanding such systems.", "Our main contributions are as follows: 
1. We provide an in-depth analysis of the performance of end-to-end IC using a simple, yet effective 'bag of objects' representation that", "is interpretable, and generates good captions despite being low-dimensional and highly sparse (Section 3).", "2. We investigate whether other spatial cues can be used to provide information complementary to frequency counts (Section 3).", "3. We study the effect of incorporating different spatial information of individual object instances from explicit detections (Section 4).", "4. We analyze the contribution of the categories in representations for IC by ablating individual categories from them (Section 5).", "Our hypothesis is that there are important components derived from explicit detections that can be used to effectively inform IC.", "Our study confirms our hypothesis, and shows that features such as the frequency, size and position of objects all play a role in forming a good image representation to match their corresponding representations in the training set.", "Our findings also show that different categories contribute differently to IC, and this partly depends on how likely they are to be mentioned in the caption given that they are depicted in the image.", "The results of our investigation will help further work towards more interpretable image captioning.", "Early work on IC applies object detectors explicitly on an image as a first step to identify entities present in the image, and then uses these detected objects as input to an image caption generator.", "The caption generator typically first performs content selection (selecting a subset of objects to be described) and generates an intermediate representation (e.g. semantic tuples or abstract trees), and then performs surface realization using rules, templates, n-grams or a maximum entropy language model.", "The main body of work uses object detectors for the 20 pre-specified PASCAL VOC (Visual Object Classes) categories (Everingham et al., 2015) (Yang et al., 2011; Kulkarni et al., 2011; Li et al., 2011; Mitchell et al., 2012), builds a detector inferred from captions (Fang et al., 2015), or assumes gold standard annotations are available (Elliott and Keller, 2013; Yatskar et al., 2014).", "Currently, deep learning end-to-end approaches dominate IC work (Donahue et al., 2015; Karpathy and Fei-Fei, 2015; Vinyals et al., 2015).", "Such approaches do not use an explicit detection step, but instead use a 'global' image embedding as input (generally a CNN) and learn a language model (generally an LSTM) conditioned on this input.", "Thus, they are trained to learn image caption generation directly from a parallel image caption dataset.", "The advantage is that no firm decisions need to be made about object categories.", "However, such approaches are hard to interpret and are dataset dependent (Vinyals et al., 2017).", "Some recent work uses object-level semantics for end-to-end IC (Gan et al., 2017; Wu et al., 2016; You et al., 2016).", "Such systems represent images as predictions of semantic concepts occurring in the image.", "These predictions, however, are at a global, image level (does this image contain a chair?), rather than at object instance level (there is a big chair at position x).", "In addition, most previous work regards surface-level terms extracted directly from captions as 'objects', while we use off-the-shelf predefined object categories which have a looser connection between the image and the caption (e.g. 
objects can be described in captions using different terms, depicted objects might not be mentioned in captions, and captions might mention objects that are not depicted).", "Yin and Ordonez (2017) propose conditioning an end-to-end IC model on information derived from explicit detections.", "They implicitly encode the category label, position and size of object instances as an 'object-layout' LSTM and condition the language model on the final hidden state of this LSTM, and produce reasonably good image captions based only on those cues, without the direct use of images.", "Our work is different in that we feed information from explicit object detections directly to the language model in contrast to an object-layout LSTM which abstracts away such information, thereby retaining the interpretability of the input image representation.", "This gives us more control over the image representation, which is simply encoded as a bag of categorical variables.", "There is also recent work applying attention-based models (Xu et al., 2015) on explicit object proposals (Anderson et al., 2018; Li et al., 2017), which may capture object-level information from the attention mechanism.", "However, attention-based models require object information in the form of vectors, whereas our models use information of objects as categorical variables which allow for easy manipulation but are not compatible with the standard attention-based models.", "The model that we use, under similar conditions (i.e. under similar parametric settings), is comparable to the state-of-the-art models.", "We base our experiments on the MS COCO dataset (Lin et al., 2014).", "From our preliminary experiments, we found that a simple bag of object categories used as an image representation for end-to-end IC led to good scores according to automatic metrics, comparable to and perhaps even higher than those using CNN embeddings.", "This is surprising given that this bag of objects vector is low-dimensional (each element represents the frequency of one of 80 COCO categories) and sparse (mainly zeros, as only a few object categories tend to occur in a given image).", "In simple terms, it appears that the IC model can generate a reasonable caption by merely knowing what is in the image, e.g. that there are three persons, three benches and a bicycle in Figure 1. This observation raises the following questions.", "What is it in this simple bag of objects representation that contributes to the surprisingly high performance on IC?", "Does it lie in the frequency counts?", "Or the choice of categories themselves?", "It is also worth noting that the image captions in COCO were crowd-sourced independently of the COCO object annotations, i.e. image captions were written based only on the image, without object-level annotations.", "The words used in the captions thus do not correspond directly to the 80 COCO categories (e.g. a cup may not be mentioned in a description even though it is present in the image, and vice versa, i.e. 
objects described in the caption may not correspond to any of the categories).", "In order to shed some light on what makes bag of object categories representations work so well for IC, we first investigate whether the frequency counts are the main contributor.", "We then proceed to studying what else can be exploited from explicit object detections to improve on the bag of objects model, for example the size of object instances.", "We also perform an analysis on these representations to gain more insights into why the bag of objects model performs well.", "Our implementation is based on the end-to-end approach of Karpathy and Fei-Fei (2015).", "We use an LSTM (Hochreiter and Schmidhuber, 1997) language model as described in Zaremba et al. (2014).", "To condition the image information, we first perform a linear projection of the image representation followed by a non-linearity: x = \sigma(W I_m) (Eq. 1), where I_m \in R^d is the d-dimensional initial image representation, W \in R^{n \times d} is the linear transformation matrix, and \sigma is the non-linearity.", "We use Exponential Linear Units (Clevert et al., 2016) as the non-linear activation in all our experiments.", "We initialize the LSTM-based caption generator with the projected image representation, x.", "Training and inference.", "The caption generator is trained to generate sentences conditioned on x.", "We train the model by minimizing the cross-entropy, i.e. the sentence-level loss corresponds to the sum of the negative log likelihood of the correct word at each time step: Pr(S | x; \theta) = -\sum_t \log(Pr(w_t | w_{t-1} .. w_0; x)) (Eq. 2), where Pr(S | x; \theta) is the sentence-level loss conditioned on the image feature x and Pr(w_t) is the probability of the word at time step t.", "This is trained with standard teacher forcing as described in Sutskever et al. (2014), where the correct word information is fed to the next state in the LSTM.", "Inference is usually performed using approximate techniques like beam search and sampling methods.", "As we are mainly interested in studying different image representations, we focus on the language output that the models can most confidently produce.", "In order to isolate any other variables from the experiments, we generate captions using a greedy argmax approach.", "We use a 2-layer LSTM with 128-dimensional word embeddings and 256-dimensional hidden states.", "As training vocabulary we retain only words that appear at least twice.", "We provide details about hyperparameters and tuning in Appendix A. Visual representations: the first part of our experiments studies the role of frequency counts in the 80-dimensional bag of objects representation.", "We explore the effects of using the following variants of the bag of objects representation:", "(i) Frequency: The number of instances per category;", "(ii) Normalized: The frequency counts normalized such that the vector sums to 1. This represents the proportion of object occurrences in the image;", "(iii) Binarized: An object category's entry is set to 1 if at least one instance of the category occurs, and 0 otherwise.", "Berg et al. 
(2012) explore various factors that dictate what objects are mentioned in image descriptions, and found that object size and its position relative to the image centre are important.", "Inspired by these findings, we explore alternative representations based on these cues:", "(i) Object size: The area of the region provided by COCO, normalized by image size; we encode the largest object if multiple objects occur for the same category (max pooling).", "(ii) Object distance: The Euclidean distance from the object bounding box centre to the image centre, normalized by image size; we encode the object closest to the centre if multiple instances occur (min pooling).", "We also explore concatenating these features to study their complementarity.", "Finally, we study the effects of removing information from the bag of objects representation.", "[Table 1: CIDEr scores for image captioning using bag of objects variants as visual representations (GT / Detect). CNN (ResNet-152 POOL5): - / 0.749; Frequency: 0.807 / 0.752; Normalized: 0.762 / 0.703; Binarized: 0.751 / 0.703; Object min distance: 0.759 / 0.691; Object max size: 0.793 / 0.725; Obj max size + Obj min distance: 0.799 / 0.743; Frequency + Obj min distance: 0.830 / 0.769; Frequency + Obj max size: 0.836 / 0.769; All three features: 0.849 / 0.743.] More specifically, we compare the results of retaining only a certain number of object instances", "in the frequency-based bag of objects representation, rather than representing an image with all objects present.", "We experiment with retaining only the frequency counts for one object category and 25%, 50%, and 75% of object categories; the remaining entries in the vector are set to zero.", "The object categories to be retained are selected, per image:", "(i) randomly;", "(ii) by the N% most frequent categories of the image;", "(iii) by the N% largest categories of the image;", "(iv) by the N% categories closest to the centre of the image.", "We performed these evaluations based on", "(i) ground truth COCO annotations and", "(ii) the output of an off-the-shelf object detector (Redmon and Farhadi, 2017) trained on 80 COCO categories.", "With ground truth annotations we can isolate issues stemming from incorrect detections.", "We train our models on the full COCO training set, and use the standard, publicly available splits (http://cs.stanford.edu/people/karpathy/deepimagesent) of the validation set as in previous work (Karpathy and Fei-Fei, 2015) for validation and testing (5,000 images each).", "We use CIDEr (Vedantam et al., 2015), the official metric for COCO, as our evaluation metric for all experiments.", "For completeness, we present scores for other common IC metrics in Appendix B. Table 1 shows the CIDEr scores of IC systems using variants of the bag of objects representation, for both ground truth annotations and 
", "the output of an object detector.", "Compared to a pure CNN embedding (ResNet-152 POOL5), our object-based representations show higher (for ground truth annotations) or comparable CIDEr scores (for detectors).", "Our first observation is that frequency counts are essential to IC.", "Using normalized counts as a representation gives poorer results, which intuitively makes sense: An image with 20 cars and 10 people is significantly different from an image with two cars and one person.", "Using binarized counts (presence or absence) brings the score further down.", "This is to be expected: An image with one person is very different from one with 10 people.", "Using spatial information (size or distance) also proved useful.", "Encoding the object size in place of frequency gave better results than using object distance from the image centre.", "We can conclude that the size and centrality of objects are important factors for captioning, with object size being more informative than position.", "We also experimented with different methods for aggregating multiple instances of the same category, in addition to choosing the biggest instance and the instance closest to the image centre.", "For example, choosing the smallest instance (min pooling) or the instance furthest away from the image centre (max pooling), or just averaging them (mean pooling).", "Table 2 shows the results.", "For object size, the findings are as expected: Smaller object instances are less important for IC, although averaging them works comparably well.", "Surprisingly, in the case of distance, using the object furthest from the image centre actually gave slightly better results than the one closest.", "Further inspection revealed that aggregating instances is not effective in some cases.", "We found that the positional information (and interaction with other objects) captured by the object further away may sometimes represent the semantics of the image better than the object in the centre of the image.", "For example, in Figure 2, encoding only the position of the person in the middle will result in the", "[Figure 2 captions: Obj. min distance: 'a man in a kitchen preparing food in a kitchen.';", "Obj.", "max distance: 'a group of people standing around a kitchen counter.']", "representation being similar to other images with only one person in the centre of the image (and also on a kitchen counter).", "Representing the person as the one furthest from the image centre will result in some inference (from training data) that there could be more than one person in the image sitting around the kitchen counter rather than a single person standing at the kitchen counter.", "The combination of results (bottom row of Table 1) shows that the three features (frequency, object max size and min distance) are complementary, and that combining any pair gives better CIDEr scores than each alone.", "The combination of all three features produces the best results.", "These results are interesting, as adding spatial information of even just one object per category can produce a better score.", "This has, to our knowledge, not been previously demonstrated.", "The performance of using an explicit detector rather than ground truth annotations is poorer, as expected from noisy detections.", "However, the overall trend generally remains similar, except for the combination of all three features which gave poorer scores.", "Finally, Figure 3 shows the results of partially removing or masking the information captured by the bag of objects 
representation (frequency).", "As expected, IC performance degrades when less than 75% of information is retained.", "[Figure 3: Change in CIDEr scores for image captioning by reducing the number of (ground truth) object instances in the image representation, based on different heuristics.] The performance of the system where the representation is reduced using frequency information suffers the most (even worse than removing categories randomly), suggesting that frequency does not correspond to an object category's importance, i.e. just because there is only one person in the image does", "not mean that it is less important than the ten cars depicted.", "On the other hand, object size correlates with object importance in IC, i.e. larger objects are more important than smaller objects for IC: The performance does not degrade as much as when removing categories by their frequency in the image.", "We hypothesize that the bag of objects representation performs well because it serves as a good representation for the dataset and allows for better image matching.", "One observation is that the category distributions of the training and test sets are very similar (Figure 4), thus increasing the chance of the bag of objects representation producing a close match to one in the training set.", "From this observation, we posit that end-to-end IC models leverage COCO being repetitive to find similar matches for a test image to a combination of images in the training set.", "Further investigation on the category distribution (e.g. 
by splitting the dataset such that the test set contains unseen categories) is left for future work.", "k-Nearest neighbour analysis.", "We further investigate our claim that end-to-end IC systems essentially perform complex image matching against the training set with the following experiment.", "The idea is that if the IC model performs some form of image matching and text retrieval from the training set, then the nearest neighbour (from training) of a test image should have a caption similar to the one generated by the model.", "However, the model does not always perform text retrieval as the LSTM is known to sometimes generate novel captions, possibly by aggregating or 'averaging' the captions of similar images and performing some factorization.", "We first generate captions for every training image using the bag of objects", "model (with ground truth frequency counts). [Figure 4: Object category distributions for COCO train, validation and test splits: normalized document frequency of each category.]", "We then compute the k-nearest training images for each given test image using both the bag of objects representation and its projection (Eq. 
1).", "Finally, we compute the similarity score between the generated caption of the test image against all k nearest captions.", "The similarity score measures how well a generated caption matches its nearest neighbour's captions.", "We expect the score to be high if the IC system generates an image similar to something summarized' from the training set.", "As reported in Table 3, overall the captions seem to closely match the captions of 5 nearest training images.", "Further analysis showed that 2301 out of 5000 captions had nearest images at a zero distance, i.e., the same exact representation was seen at least 5 times in training (note that CIDEr gives a score of 10 only if the test caption and all references are the same).", "We found that among the non-exact image matches, the projected image representation better captures candidates in the training set than bag of objects.", "Figure 5 shows the five nearest neighbours of an example non-exact match and their generated captions in the 2185 t e s t person (5), cup (8), spoon (1), bowl (8), carrot (10), chair(6),diningtable(3) agroupofpeoplesittingaroundatablewithfood.", "Note that the nearest neighbours are an approximation since we do not know the exact distance metric derived from the LSTM.", "We observe that the captions for unseen representations seem to be interpolated from multiple neighbouring points in the projection space, but further work is needed to analyze the hidden representations of the LSTM to understand the language model and to give firmer conclusions.", "Here we further explore the effect of incorporating spatial information of object detections for IC.", "More specifically, we enrich the representations by encoding positional and size information for more object instances, rather than restricting the encoding to only one instance per category which makes the representation less informative.", "We explore encoding object instances and their spatial properties as a fixed-size vector.", "In contrast to Section 3, we propose handling multiple instances of the same category by encoding spatial properties of individual instances rather than aggregating them as a single value.", "Each instance is represented as a tuple ( x , y , w , h , a ), where x and y are the coordinates of the centre of the bounding box and are normalized to the image width Feature set Fixed Tuned Bag of objects 0.807 0.834 ( x, y, w, h, a ) 0.870 0.915 ( x, y, w, h ) 0.859 0.898 ( x, y, a ) 0.850 0.900 ( w, h ) 0.870 0.920 ( a ) 0.869 0.857 ( x, y ) 0.810 0.863 LSTM Yin and Ordonez (2017) 0.922 Table 4: CIDEr scores for image captioning using representations encoding spatial information of instances derived from ground truth annotations, with either fixed hyperparameters (Section 3.1) or with hyperparameter tuning.", "and height respectively, w and h are the width and height of the bounding box respectively, and a is the area covered by the object segment and normalized to the image size.", "Note that w h a (box encloses the segment).", "We assume that there are maximum 10 instances per vector, and instances of the same category are ordered by a (largest instance first).", "We encode each of the 80 categories as separate sets.", "Non-existent objects are represented with zeros.", "The dimension of the final vector is 4000 ( 80 10 5 ).", "We also perform a feature ablation experiment to isolate the contribution of different spatial components.", "All experiments in this subsection use ground truth annotations we expect the results of using an object 
detector to be slightly worse but in most cases to follow a similar trend, as shown in the previous section.", "Table 4 shows the CIDEr scores using the same setup as Section 3, but using representations with spatial information about individual object instances.", "Encoding spatial information led to substantially better performance than bag of objects alone.", "Consistent with our previous observation, w and h (bounding box width and height) seem to be the most informative feature combination: it performs well even without positional information.", "Area (a) is less informative than the combination of w and h, possibly because it compresses width-height ratio information despite discarding noise from background regions.", "Positional information (x, y) does not seem to be as informative, consistent with observations from previous work (Wang and Gaizauskas, 2016).", "The last column in Table 4 shows the CIDEr scores after hyperparameter tuning. [Figure 6 excerpt, image ID 378657; objects in the image: person, clock; Frequency representation caption: 'a large clock tower with a large clock on it.']", "We note that the results with our simpler image representation are comparable to the ones reported in Yin and Ordonez (2017), which uses more complex models to encode similar image information.", "Interestingly, we observe that positional information (x, y) works better after tuning in this case.", "Example outputs from the models in Sections 3 and 4 can be found in Figure 6.", "In the previous sections, we explore IC based on explicit detections for 80 object categories.", "However, not all categories are made equal.", "Some categories could impact IC more than others (Berg et al., 2012).", "In this section we investigate which categories are more important for IC on the COCO dataset.", "Our category ablation experiment involves removing one category from the 80-dimensional bag of objects (ground truth frequency) representation at a time, resulting in 80 sets of 79-dimensional vectors, each without one ablated category.", "We postulate that salient categories should lead to larger performance degradation than others.", "However, what makes a category 'salient' in general (dog vs. cup)?", "We hypothesize that it could be due to", "(i) how frequently it is depicted across images;", "(ii) how frequently it is mentioned in the captions when depicted in the image.", "To quantify these hypotheses, we compute the rank correlation between changes in CIDEr from removing the category and each of the statistics below: f(v_c) = \sum_{i}^{N} 1(c \in C_i): the frequency of the ablated category c being annotated among the N images in the training set, where C_i is the set of all categories annotated in image i, and 1(x) is the indicator function.", "p(t_c | v_c) \approx f(t_c, v_c) / f(v_c): the proportion of the ablated category being mentioned in any of the reference captions given that it is annotated in the image in the training set.", "For determining whether a depicted category is mentioned in the caption, the matching method described in Ramisa et al. 
(2015) is used to increase recall by matching category labels with", "(i) the terms themselves;", "(ii) the head noun for multiword expressions;", "(iii) WordNet synonyms and hyponyms.", "We treat these statistics as an approximation because of the potential noise from the matching process, although it is clean enough for our purposes.", "We have also tried computing the correlation with f(t_c) (the frequency of the category being mentioned regardless of whether or not it is depicted).", "However, we found the word matching process too noisy as it is not constrained or grounded on the image (e.g. hot dog is matched to the dog category).", "Thus, we do not report the results for this.", "Figure 7 shows the result of the category ablation experiment.", "Categories like train, sandwich, person and spoon led to the largest drop in CIDEr scores.", "On the other end, categories like surfboard, carrot and book can be removed without negatively affecting the overall score.", "By comparing the CIDEr score changes against the frequency counts of object annotations in the training set (top row), there does not seem to be a clear correlation between depiction frequency and CIDEr.", "Categories like bear are infrequent but led to a large drop in score; likewise, chair and dining table are frequent but do not affect the results", "as negatively.", "In contrast, the frequency of a category being mentioned given that it is depicted is a better predictor for the changes in CIDEr scores in general (middle row).", "Animate objects seem to be important to IC and are often mentioned in captions (Berg et al., 2012).", "Interestingly, removing spoon greatly affects the results even though it is not frequent in captions.", "Table 5 presents the rank correlation (Spearman's ρ and Kendall's τ, two-tailed test) between changes in CIDEr and the two heuristics.", "While both heuristics are positively correlated with the changes in CIDEr, we can conclude that the frequency of being mentioned (given that it is depicted) is better correlated with the score changes than the frequency of depiction.", "Of course, the categories are not mutually exclusive and object co-occurrence may also play a role.", "However, we leave this analysis for future work.", "Figure 6 shows an example when the category person is removed from the feature vector.", "Here, the model does not generate any text related to person, as the training set contains images of clocks without people in them.", "In this paper we investigated end-to-end image captioning by using highly interpretable representations derived from explicit object detections.", "We provided an in-depth analysis on the efficacy of a variety of cues derived from object detections for IC.", "We found that frequency counts, object size and position are informative and complementary.", "We also found that some categories have a bigger impact on IC than others.", "Our analysis showed that end-to-end IC systems are image matching systems that project image representations into a learned space and allow the LSTM to generate captions for images in that projected space.", "Future work includes", "(i) investigating how object category information can be better used or expanded to improve IC;", "(ii) analyzing end-to-end IC systems by using interpretable representations that rely on other explicit detectors (e.g. 
actions, scenes, attributes).", "The use of such explicit information about object instances could help improve our understanding of image captioning.", "This work is supported by the MultiMT project (H2020 ERC Starting Grant No. 678017).", "The authors also thank the anonymous reviewers for their valuable feedback on an earlier draft of the paper." ]
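The two interpretable representations at the heart of the preceding excerpt are simple enough to sketch directly. The following minimal Python version covers the bag-of-objects variants (frequency, normalized, binarized) and the fixed-size spatial encoding of up to 10 instances per category as (x, y, w, h, a) tuples, yielding the 4000-dimensional vector described in the excerpt; the annotation input format used here is an assumption for illustration.

```python
import numpy as np

NUM_CATEGORIES = 80   # COCO object categories
MAX_INSTANCES = 10    # instances kept per category
TUPLE_DIM = 5         # (x, y, w, h, a)

def bag_of_objects(category_ids, variant="frequency"):
    # 80-dimensional vector of per-category instance counts, optionally
    # normalized to sum to 1 or binarized to presence/absence.
    v = np.zeros(NUM_CATEGORIES)
    for c in category_ids:
        v[c] += 1.0
    if variant == "normalized":
        v = v / v.sum() if v.sum() > 0 else v
    elif variant == "binarized":
        v = (v > 0).astype(float)
    return v

def instance_tuples(annotations, img_w, img_h):
    # annotations: list of (category_id, centre_x, centre_y, box_w, box_h, seg_area)
    # in pixels (an assumed input format). Coordinates and sizes are normalized
    # by image dimensions, the area by image size; instances of a category are
    # ordered largest-first by a, and at most 10 are kept. Missing slots stay zero.
    v = np.zeros((NUM_CATEGORIES, MAX_INSTANCES, TUPLE_DIM))
    per_cat = {}
    for c, x, y, w, h, area in annotations:
        t = (x / img_w, y / img_h, w / img_w, h / img_h, area / (img_w * img_h))
        per_cat.setdefault(c, []).append(t)
    for c, ts in per_cat.items():
        ts.sort(key=lambda t: -t[4])               # largest instance first
        for i, t in enumerate(ts[:MAX_INSTANCES]):
            v[c, i] = t
    return v.reshape(-1)                           # 80 * 10 * 5 = 4000 dimensions
```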
[ "abstain", "method", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "objective", "method", "objective", "method", "method", "objective", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "objective", "method", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "result", "result", "result", "abstain", "abstain", "abstain", "result", "other", "other" ]
[ "We annotate 17,000 SNS posts with both the writer's subjective emotional intensity and the reader's objective one to construct a Japanese emotion analysis dataset.", "In this study, we explore the difference between the emotional intensity of the writer and that of the readers with this dataset.", "We found that the reader cannot fully detect the emotions of the writer, especially anger and trust .", "In addition, experimental results in estimating the emotional intensity show that it is more difficult to estimate the writer's subjective labels than the readers'.", "The large gap between the subjective and objective emotions implies the complexity of the mapping from a post to the subjective emotional intensities, which also leads to a lower performance with machine learning models.", "Emotion analysis is one of the major NLP tasks with a wide range of applications, such as a dialogue system (Tokuhisa et al., 2008) and social media mining (Stieglitz and Dang-Xuan, 2013).", "Since emotion analysis has been actively studied, not only the classification of the sentiment polarity (posi-tive or negative) of the text (Socher et al., 2013), but also more detailed emotion detection and emotional intensity estimation (Bostan and Klinger, 2018) have been attempted in recent years.", "Previous studies on emotion analysis use six emotions ( anger , disgust , fear , joy , sadness , and surprise ) by Ekman (1992), eight emotions ( anger , disgust , fear , joy , sadness , surprise , trust , and anticipation ) by Plutchik (1980), and VAD model ( Valence , Arousal , and Dominance ) by Russell (1980).", "These existing emotion analysis datasets include subjective emotional intensity labels by the writers (Scherer and Wallbott, 1994) and objective ones by the readers (Aman and Szpakowicz, 2007; Strapparava and Mihalcea, 2007; Buechel and Hahn, 2017; Mohammad and Bravo-Marquez, 2017a; Mohammad and Kiritchenko, 2018; Bostan et al., 2020), whereas the latter is mainly done by, e.g., expert or crowdsourcing annotators.", "It depends on the applications whether the writer's emotions or the reader's ones to be estimated in NLP-based emotion analysis.", "For example, in a dialogue system, it is important to estimate the reader's emotion because we want to know how the user feels in response to the system's utterance.", "On the other hand, in applications such as social media mining, we want to estimate the writer's emotion.", "In other applications such as story generation, it is worth considering the difference between the emotions the writer wants to express and the emotions the reader receives.", "As shown in Table 1, most existing datasets have collected only objective emotions.", "2 Therefore, previous studies on emotion analysis have focused on estimating objective emotional intensity.", "In this study, we introduce a new dataset, WRIME, 3 for emotional intensity estimation.", "We collect both the subjective emotional intensity of the writers themselves and the objective one annotated by the readers, and explore the differences between them.", "In our data collection, we hired 50 2 EmoBank (Buechel and Hahn, 2017) is a dataset that aims to collect the emotional intensity of both writers and readers.", "However, crowdsourcing annotators, who are different from the text writer, infer the writer's emotions, so they are not able to collect the writer's subjective emotions.", "3 Dataset of w riters' and r eaders' i ntensities of e m otion for their e stimation.", "participants via crowdsourcing service.", "They 
annotated their own past posts on a social networking service (SNS) with the subjective emotional intensity.", "We also hired 3 annotators, who annotated all posts with the objective emotional intensity.", "Consequently, our Japanese emotion analysis dataset consists of 17,000 posts with both subjective and objective emotional intensities for Plutchik's eight emotions (Plutchik, 1980), which are given on a four-point scale (no, weak, medium, and strong).", "Our comparative study over subjective and objective labels demonstrates that readers may not infer the emotions of the writers well, especially anger and trust.", "For example, even for posts written by the writer with a strong anger emotion, our readers (i.e., the annotators) did not assign the anger label at all to more than half of the posts with the subjective anger label.", "Overall, readers may tend to underestimate the writers' emotional intensities.", "In addition, experimental results on emotional intensity estimation with BERT (Devlin et al., 2019) show that predicting the subjective labels is a more difficult task than predicting the objective ones.", "This large gap between the subjective and objective annotations implies the challenge in predicting the subjective emotional intensity for a machine learning model, which can be viewed as a reader of the posts.", "To estimate the emotional intensity of text, datasets labeled with Ekman's six emotions (Ekman, 1992) and Plutchik's eight emotions (Plutchik, 1980) have been constructed for languages such as English, as shown in Table 1.", "EmoBank (Buechel and Hahn, 2017; https://github.com/JULIELab/EmoBank), which is most relevant to ours, labels the emotional intensity of both the writers and readers of the text.", "However, the annotators for EmoBank are not writers, and readers are required to guess the writer's emotion; therefore, to be strict, this dataset only contains the objective labels.", "Our dataset is the first to collect the subjective emotional intensity of the writers themselves.", "ISEAR (Scherer and Wallbott, 1994) is a dataset with subjective emotional labels.", "This is a dataset in which annotators describe their own past events for each emotion.", "They use a label set that adds shame and guilt to Ekman's six emotions.", "Although ISEAR is the only dataset with subjective emotional labels, their intensity is not considered.", "Early studies in collecting objective emotional labels were annotated by experts.", "Aman and Szpakowicz (2007) labeled each sentence of English blog posts with Ekman's six emotions and their intensity on a three-point scale.", "Strapparava and Mihalcea (2007) labeled Ekman's six emotional intensities on English news headlines and held the SemEval-2007 Task 14 competition.", "In recent years, there have been many studies on collecting objective emotional labels using crowdsourcing.", "Mohammad and Bravo-Marquez (2017a); Mohammad and Kiritchenko (2018) labeled tweets in English, Arabic, and Spanish with the intensity of four emotions (joy, sadness, anger, and fear).", "Using these datasets, they held a series of competitions to estimate the emotional intensity in the WASSA-2017 Shared Task on Emotion Intensity (Mohammad and Bravo-Marquez, 2017b) and SemEval-2018 Task 1 (Mohammad et al., 2018).", "Some datasets (Kaji and Kitsuregawa, 2006; Suzuki, 2019) are available in Japanese.", "However, these are sentences with sentiment polarity, and do not cover the various emotions dealt with in this study.", "Our study is the 
first to label Japanese texts with various emotional intensities.", "We hired 50 participants via the crowdsourcing service Lancers.", "Those participants include 22 men and 28 women, where 2 are teens, 26 are in their 20s, 18 are in their 30s, and 4 are above 40 years old.", "They copied and pasted their own past SNS posts and then labeled the posts with the subjective emotional intensity according to Plutchik's eight emotional intensities (Plutchik, 1980) on a four-point scale (0: no, 1: weak, 2: medium, and 3: strong).", "They did not provide us with all the posts, but chose only those posts that they could agree to publish.", "Here, for the purpose of emotion analysis from the text, posts with images or URLs were excluded.", "Each participant labeled 100 to 500 posts, resulting in 17,000 posts in total.", "We did not limit the posts to be annotated based on when they were posted.", "As a result, our dataset contains posts in the 9-year range from June 2011 to May 2020.", "We assumed that each post would require 50 seconds for annotation and paid 21.5 JPY per post.", "This roughly corresponds to 15 USD per hour, which is a good reward for crowdsourcing.", "To assess the quality of annotations, we randomly sampled 30 posts for each participant.", "One of our graduate students evaluated the posts and the corresponding eight emotional intensity labels on a four-point scale based on the following criteria.", "3: I fully agree with the label given.", "2: I can find the relevance between the post and label.", "1: I hardly find the relevance between the post and label.", "0: I do not think the annotator seriously engaged with this post.", "The average score for each participant was 2.1, with 1.8 at minimum and 2.5 at maximum.", "There were no posts rated as 0.", "We had five annotators whose average score was below 2, but reviewing their posts and labels does not necessarily show obvious clues of improper annotation.", "We hired three objective annotators via the same crowdsourcing service as in Section 3.1.", "The annotators include two women in their 30s and one woman in her 40s.", "They labeled all 17,000 posts with Plutchik's eight emotional intensities (Plutchik, 1980) in the same way as the subjective annotation.", "Note that while the subjective annotators labeled their own emotions as the writer of each post, the objective annotators labeled each post based on the emotions they received from the post.", "Objective annotators do not have to fill in the text, so their task is simply to label emotional intensity.", "We assumed that each post takes 10 seconds and paid 3.8 JPY per post, which results in a reward of roughly 13 USD per hour.", "To assess the quality of annotations, we calculated the quadratic weighted kappa (Cohen, 1968; https://scikit-learn.org/stable/modules/generated/sklearn.metrics.cohen_kappa_score.html) as a metric of the inter-annotator agreement.", "The upper part of Table 2 shows the agreement between the objective annotators.", "The best case, joy, shows", "a substantial agreement (κ > 0.6), but trust shows only a fair agreement (κ < 0.4). [Figure 1: Results of personality diagnosis.]", "Overall, we confirmed a moderate agreement (0.5 < κ < 
0.6) among the objective annotators.", "The lower part of Table 2 shows the agreement between the subjective and the objective annotators.", "These are discussed in Section 4.2.", "We also performed personality assessments of our writers (i.e., subjective annotators) in order to explore the relationship between personality and emotion.", "Through 60 questions (Saito et al., 2001) based on the Big Five personality traits (Goldberg, 1992), the following five factors were assessed: agreeableness, extraversion, neuroticism, openness, and conscientiousness.", "In this personality assessment, the writer's own applicability to each of 60 adjectives, such as cheerful and honest, is reported on a 7-point scale, and the five factors of personality indicators are derived.", "Figure 1 shows the results of the personality assessment over all 50 writers, where we can see various personalities.", "For example, well-balanced writers can be seen near the center of the figure, and writers with low neuroticism appear in the lower right.", "In Section 5, we shall show how the personality helps to improve emotional intensity estimation.", "Table 3 shows some examples of labeled posts in our dataset.", "The first post was written with strong emotions of both joy and anticipation.", "Readers can have similar emotions as the writer for this post.", "The second post was written with strong emotions of both sadness and anger.", "Readers can share the emotion of sadness, but they are more surprised than angry.", "Table 4 shows the distribution of emotional intensity labels.", "For all emotions, intensity 0 is most frequently assigned.", "This is not surprising, as it is rare for a single post to come with many emotions, which may be contradictory to each other, at the same time (see Footnote 11).", "However, for emotions of anger and trust, about 95% of labels by the objective annotators have an intensity 0, which is particularly high.", "In other words, with regard to emotions of anger and trust, readers may tend to underestimate the emotions of the writers.", "In addition, we can see some characteristics of each objective annotator, e.g., the number of times that reader 1 gives intensity 1 is small.", "The lower part of Table 2 shows the agreement between the subjective and the objective annotators.", "As with the agreement between the objective annotators in Section 3.2, we calculated the quadratic weighted kappa (Cohen, 1968).", "Agreement between subjective and objective annotators is lower than agreement between objective annotators (the upper part of Table 2).", "Especially for the emotion of anger, there is a large gap between the reader-reader agreements and writer-reader agreements.", "In addition, for the emotion of trust, the [Footnote 11: 90% of posts have fewer than 4 emotions at the same time.]", "writer-reader agreement is even lower, although the reader-reader agreements are also low.", "These results imply that there is a large difference between the subjective and objective emotion.", "Table 5 shows the confusion matrix between the subjective emotional intensity labels and the objective ones for respective emotions.", "For example, in posts where the writer labeled intensity 0 for joy, the percentages where the reader labeled intensities 0, 1, 2, and 3 were 91.7%, 3.1%, 4.0%, and 1.2%, respectively.", "This confusion matrix shows the fine-grained differences in emotional intensity between writers and readers, which reinforces our discussion in Section 3 that readers hardly detect the emotions associated with the post.", 
"Focusing on the emotion of anger in the confusion matrix, in 58.6% of the posts where the writer labeled intensity 3 (strong anger ), the reader labeled intensity 0 (no emotions of anger ).", "This is more prominent in the emotion of trust : for 81.5% of posts that the writer labeled intensity 3, the reader labeled intensity 0.", "This clearly demonstrates that the readers cannot infer the emotion trust of the writer.", "As for other emotions, readers are most likely to label an intensity 0 in posts labeled with an intensity 2 or less by the writer.", "Overall, the readers tend to underestimate the writer's emotions, and they rarely label intensity 1 or more when the writer label intensity 0.", "We conduct experiments on the four-class classification as an ordinal classification to estimate emotional intensity {0, 1, 2, 3} using the dataset constructed in Section 3.", "In this experiment, we divided the dataset 12 into training set of 15,000 posts from 30 writers, validation set of 1,000 posts from 10 writers, and evaluation set of 1,000 posts from 10 writers.", "That is, there is no duplication of writers between the splits.", "We used MeCab (IPADIC-2.7.0) 13 (Kudo et al., 2004) to tokenize Japanese text.", "The performance of the emotional intensity estimation models is evaluated by the mean absolute error (MAE) and the quadratic weighted kappa (QWK).", "We evaluated the model using both the emotional intensity labels given by the subjective annotators (subjective labels) and the average of the emotional intensity labels given by the three objective annotators (objective labels).", "12 Each writer provided 500 posts for the training set and 100 posts for the validation and test sets.", "Following the standard emotional intensity estimation models (Acheampong et al., 2020), we train the following three types of four-class classification models for each emotion.", "BoW+LogReg employs Bag-of-Words to extract features and Logistic Regression to the estimate emotional intensity.", "fastText+SVM vectorizes each word with fastText 14 (Bojanowski et al., 2017) and estimates the emotional intensity with a Support Vector Machine based on their average vector.", "BERT is a model that fine-tunes the pre-trained BERT 15 (Devlin et al., 2019) and estimates the emotional intensity as y = softmax( hW ) , where h is a feature vector obtained for the [CLS] token of BERT.", "We investigate the performance of both BERT trained with subjective labels (Subj. BERT) and BERT trained with objective labels (Obj. 14 https://dl.fbaipublicfiles.com/ fasttext/vectors-crawl/cc.ja.300.bin.gz 15 https://huggingface.co/cl-tohoku/ bert-base-japanese-whole-word-masking Subjective labels MAE QWK Joy Sadness Anticipation Surprise Anger Fear Disgust Trust Overall Overall Random 1.390 1.383 1.419 1.313 1.492 1.420 1.411 1.407 1.404 0.001 Modal Class 0.896 0.713 0.907 0.684 0.218 0.344 0.435 0.429 0.578 0.000 BoW+LogReg 0.863 0.817 0.919 0.752 0.313 0.479 0.545 0.555 0.655 0.156 fastText+SVM 0.896 0.754 0.910 0.723 0.250 0.397 0.489 0.510 0.616 0.120 Subj. BERT 0.734 0.666 0.899 0.684 0.218 0.344 0.443 0.432 0.553 0.135 Subj. BERT w/ Pc 0.784 0.698 0.870 0.659 0.218 0.343 0.457 0.429 0.557 0.153 Subj. BERT w/ Pa 0.740 0.665 0.850 0.665 0.218 0.351 0.441 0.429 0.545 0.183 Obj. 
BERT 0.674 0.623 0.789 0.634 0.218 0.356 0.432 0.427 0.519 0.242 Reader 1 0.545 0.544 0.713 0.686 0.211 0.523 0.522 0.428 0.522 0.417 Reader 2 0.521 0.520 0.720 0.571 0.201 0.347 0.375 0.426 0.460 0.442 Reader 3 0.526 0.533 0.738 0.694 0.200 0.610 0.520 0.432 0.532 0.439 Avg. Readers 0.491 0.466 0.658 0.584 0.198 0.458 0.420 0.425 0.463 0.486 Table 6: Evaluation of MAE and QWK in estimating subjective emotional intensity. BERT), in both evaluations on subjective and objective labels.", "We also evaluate the following two baselines.", "Random outputs one of the four emotional intensity labels {0, 1, 2, 3} randomly with the uniform distribution.", "Modal Class always outputs the most frequent intensity label for each emotion.", "As shown in Table 4, in this dataset, intensity 0 has the highest frequency for all emotions, so in practice, this baseline always gives label 0.", "We used scikit-learn 16 (Pedregosa et al., 2011) implementation for both BoW+LogReg and fast-Text+SVM models.", "For the hyper-parameter of C , the optimum value over the validation set was selected from {0.01, 0.1, 1, 10, 100}.", "As for BERT-based models, we used the implementation in Transformers 17 (Wolf et al., 2020).", "We used the whole-word-masking model with a batch size of 32, a dropout rate of 0.1, a learning rate of 2e-5, and Adam (Kingma and Ba, 2015) for optimization.", "The training stopped after 3 epochs without improvement in the validation loss.", "In the evaluation of subjective labels, the personality of the writers is considered in the Subj.", "BERT in the following two ways.", "w/ Pc : Feature extraction is performed with h c = [ u ; v ] W c in consideration of personality.", "Here, v is a 768-dimensional text representation obtained from the [CLS] token of 16 https://scikit-learn.org/ 17 https://github.com/huggingface/ transformers BERT, and u is a representation of the Big Five personality indicators given by linearly transforming the five indicator values into a 768-dimensional vector.", "When estimating the emotional intensity, h c is used instead of h .", "w/ Pa : Feature extraction is performed with h a = attention( uW Q , vW K , vW V ) in consideration of personality.", "That is, in the calculation of the attention mechanism, the personality representation u is used as the query, and the text representation v is used as both the key and the value.", "h a is used instead of h for emotional intensity estimation.", "The performance of each model on subjective and objective labels is shown in Tables 6 and 7, respectively.", "Regardless of the method, the evaluation of subjective label estimation gets a larger mean absolute error than the evaluation of objective labels.", "In our previous discussion, we have stated that it is difficult for readers to estimate the emotions of writers; this also applies to machine learning models.", "In the evaluation of subjective labels, the traditional models of BoW+LogReg and fastText+SVM achieved lower mean absolute errors than the Random baseline, but were inferior to the Modal Class baseline.", "The BERT methods achieved a mean absolute error lower than the Modal Class baseline.", "Surprisingly, Obj.", "BERT trained with objective labels, rather than Subj.", "BERT trained with subjective labels, achieved the highest performance.", "Since it is difficult to estimate subjective labels, which are the emotion of the writer, a simple model may not provide sufficient performance.", "Therefore, we examined Subj.", "BERT w/ Pc and Subj.", "BERT w/ Pa to assist 
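The classification head y = softmax(hW) and the two personality-fusion variants (w/ Pc and w/ Pa) can be sketched in PyTorch as follows. This is our own minimal rendering under stated assumptions, not the authors' code: in particular, we assume w/ Pa attends from the personality query over the token-level BERT outputs, and all module and argument names are hypothetical.

```python
import torch
import torch.nn as nn

class PersonalityHead(nn.Module):
    """Sketch of the heads: plain, w/ Pc (concat), w/ Pa (attention)."""
    def __init__(self, dim=768, n_classes=4, mode="plain"):
        super().__init__()
        self.mode = mode
        self.proj_u = nn.Linear(5, dim, bias=False)  # Big Five -> 768-d u
        self.W_c = nn.Linear(2 * dim, dim)           # w/ Pc: h_c = [u; v] W_c
        self.W_q = nn.Linear(dim, dim, bias=False)   # w/ Pa projections
        self.W_k = nn.Linear(dim, dim, bias=False)
        self.W_v = nn.Linear(dim, dim, bias=False)
        self.out = nn.Linear(dim, n_classes)         # y = softmax(hW)

    def forward(self, v_cls, tokens=None, big5=None):
        if self.mode == "plain":
            h = v_cls                                # h from the [CLS] token
        elif self.mode == "concat":                  # w/ Pc
            u = self.proj_u(big5)
            h = self.W_c(torch.cat([u, v_cls], dim=-1))
        else:                                        # w/ Pa: u queries the text
            u = self.proj_u(big5)
            q = self.W_q(u).unsqueeze(1)                    # (B, 1, d)
            k, val = self.W_k(tokens), self.W_v(tokens)     # (B, T, d)
            att = torch.softmax(q @ k.transpose(1, 2) / k.size(-1) ** 0.5, dim=-1)
            h = (att @ val).squeeze(1)                      # (B, d)
        return self.out(h)  # logits; the softmax is applied in the loss

# toy usage with random tensors standing in for BERT outputs
head = PersonalityHead(mode="attention")
logits = head(v_cls=torch.randn(2, 768), tokens=torch.randn(2, 16, 768),
              big5=torch.randn(2, 5))
print(logits.shape)  # torch.Size([2, 4])
```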
"As a result, Subj. BERT w/ Pc, which simply concatenates the personality representation and the text representation, was not effective, but Subj. BERT w/ Pa, which weights the text representation using the personality representation, achieved higher performance than the simple Subj. BERT.", "The evaluation by QWK also shows the usefulness of using the personality information of the writer.", "However, even with personality information, the performance is not comparable with that of Obj. BERT.", "Improving methods for the accurate estimation of subjective emotions is our future work.", "Below the dotted line in Table 6, the performance of the human readers is shown for comparison.", "Estimating the emotional intensity of writers is difficult for both human readers and machine learning models.", "In the evaluation on objective labels (Table 7), the traditional models of BoW+LogReg and fastText+SVM were comparable to the Modal Class baseline.", "Similar to the evaluation on the subjective labels, the BERT-based models achieved mean absolute errors lower than the Modal Class baseline, and Obj. BERT achieved the highest performance.", "Below the dotted line in Table 7, the performance of the human readers is shown for comparison.", "Note that the objective labels are the average of these readers' labels.", "Compared to each individual reader, Obj. BERT does not reach human performance.", "We introduce a new dataset, WRIME, for Japanese emotional intensity estimation.", "Our dataset is based on Plutchik's eight emotions (Plutchik, 1980), labeling both the writer's subjective emotional intensity and the reader's objective one in SNS posts.", "Overall, the readers tend to underestimate the writer's emotions.", "Even strong emotions of the writer often go undetected by the reader, especially for the emotions of anger and trust.", "Experimental results on emotional intensity estimation show that it is more difficult to estimate the writer's subjective labels than the readers' objective ones.", "The large gap between the subjective and objective emotions implies the complexity of the mapping from a text to the subjective emotional intensities, which also leads to lower performance with machine learning models.", "Estimating the writer's subjective emotions with higher accuracy is future work.", "We have shown the possibility of improving the performance of subjective emotional intensity estimation by considering the personality of the writer.", "It may be worth considering the writer's meta-information, including personality, as well as the writer's past posting history.", "We ensure that our work conforms to the ACM Code of Ethics.", "This work was supported by Innovation Platform for Society 5.0 from the Japan Ministry of Education, Culture, Sports, Science and Technology." ]
[ "objective", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other" ]
[ "We propose a method for program generation based on semantic scaffolds , lightweight structures representing the high-level semantic and syntactic composition of a program.", "By first searching over plausible scaffolds then using these as constraints for a beam search over programs, we achieve better coverage of the search space when compared with existing techniques.", "We apply our hierarchical search method to the SPoC dataset for pseudocode-to-code generation, in which we are given line-level natural language pseudocode annotations and aim to produce a program satisfying execution-based test cases.", "By using semantic scaffolds during inference, we achieve a 10% absolute improvement in top-100 accuracy over the previous state-of-the-art.", "Additionally, we require only 11 candidates to reach the top-3000 performance of the previous best approach when tested against unseen problems, demonstrating a substantial improvement in efficiency.", "Systems that can map from natural language descriptions of tasks or programs to executable code have the potential for great societal impact, helping to bridge the gap between non-expert users and basic automation or full-fledged software development.", "Accordingly, this area of research has garnered significant interest in recent years, with systems being devised for the translation of natural language specifications into database queries (Wang et al., 2018), if-then programs (Chen et al., 2016), game elements (Ling et al., 2016), and more.", "While much of the prior work in executable semantic parsing involves short descriptions being mapped into single-line programs, some tasks have recently been proposed that involve multiple natural language utterances on the input side and full programs on the output side, often reaching tens of Line Pseudocode Code 1 in function main int main() { 2 n is a long integer 0 long n = 0; 3 while n is less than o while (n < o') { 4 5 close while scope } Translate while (n < o) { while (n < o') Other wrong candidates error: use of undeclared identifier 'o' error: missing '{' Figure 1: Pseudocode is translated to code for each line and combined to form a valid program.", "lines in length and including non-trivial state manipulation.", "Examples include the Magic the Gathering and Hearthstone datasets (Ling et al., 2016) derived from trading cards and Java or Python classes implementing their behavior in a game engine, the CONCODE dataset (Iyer et al., 2018) consisting of Java documentation strings and method bodies, and the NAPS and SPoC datasets (Zavershynskyi et al., 2018; Kulal et al., 2019) consisting of pseudocode annotations and source code for programming competition problems.", "Past approaches to these large-scale language-to-code tasks have typically employed sequence-based models (Ling et al., 2016) that do not account for structure on the output side, or tree-based models (Allamanis et al., 2015; Rabinovich et al., 2017a; Yin and Neubig, 2017; Hayati et al., 2018; Iyer et al., 2019) that incorporate the syntax but not the semantics of the output domain.", "However, if we want to generate programs that can be executed successfully, the inclusion of both syntactic and semantic constraints is crucial.", "As shown in Figure 1, while multiple program fragments may be syntactically correct and represent plausible translations of the corresponding pseudocode, not all of them will lead to executable programs.", "summaries of higher-level program structure that include both syntactic information as well as 
semantic features such as variable declarations and scope constraints.", "See Section 3 for a more formal definition.", "While these do not encode the full spectrum of constraints used in some formal program synthesis tools (Solar-Lezama, 2009; Gulwani et al., 2017), they strike a balance between utility, speed, and ease of use, offering substantial improvements in system performance without a significant increase in complexity.", "In this work we focus on the Search-based Pseudocode to Code (SPoC) dataset (Kulal et al., 2019) due to its challenging multiline programs and availability of input-output test suites to evaluate denotation accuracy.", "The dataset contains line-level pseudocode annotations for 18,356 C++ programs provided by crowdsource workers from Amazon Mechanical Turk.", "As in the approach of Kulal et al. (2019), we first obtain candidate code fragments for each line using an off-the-shelf neural machine translation system.", "We then aim to find the highest-scoring combination of fragments that results in a valid program.", "Although finding the optimal program under this setting is NP-hard when variable usage constraints are introduced (see Section A.3), we can approximate it with a hierarchical beam search.", "Our algorithm first searches for semantic scaffolds for the program, then assembles fragments together conditioned on these scaffolds.", "This hierarchical approach speeds up search, produces higher quality variations, and leads to substantial improvements in our system's final accuracy.", "We achieve a new state-of-the-art by solving 55.1% of the test cases within 100 attempts.", "This represents a 10.4% absolute improvement over the previous best (Kulal et al., 2019), and reaches 81% of our model's oracle performance.", "When tested against unseen problems (or crowd-workers), our top 11 (or top 52, respectively) candidates have the same performance as their top 3000 candidates, demonstrating marked gains in efficiency.", "We complement our results with a discussion of specific cases in which our semantic scaffolds use global program context to resolve ambiguities in the pseudocode.", "We also conduct a manual error analysis of 200 failures to better characterize the limitations of our method and suggest possible extensions for future work.", "Our contributions are summarized as follows: We propose the use of semantic scaffolds to add semantic constraints to models for long-form language-to-code generation tasks.", "We introduce a hierarchical beam search algorithm that incorporates these constraints, resulting in heightened efficiency, better coverage of the search space, and stronger performance when compared with the standard approach.", "We achieve a new state-of-the-art accuracy of 55.1% on the SPoC pseudocode-to-code dataset.", "In this work, we focus on the SPoC dataset introduced by Kulal et al. 
(2019).", "This dataset consists of C++ solutions to problems from Codeforces, a competitive programming web-site, along with the input-output test cases used for each problem to evaluate correctness.", "It contains 18,356 programs in total with 14.7 lines per program on average.", "Each line is annotated with a natural language pseudocode description given by a crowd worker from Amazon Mechanical Turk.", "On average, there are 7.86 tokens per line of code and 9.08 tokens per pseudocode annotation.", "From the full dataset, 1,752 programs with annotations from unseen crowd workers and 1,820 programs for unseen problems are held out for evaluation.", "More details can be found in Kulal et al. (2019).", "Suppose the target program has L lines.", "For each line l [ L ] , we are given a natural language pseudocode annotation x l and an indentation level i l .", "Our goal is to find a candidate program y based on ( x 1 , i 1 ) , . . . , ( x L , i L ) that can solve the given problem (i.e. pass all the test cases) using as few submission attempts as possible.", "The search efficiency of an algorithm is calculated as the fraction of problems it can solve using a budget of B attempts per problem, where an attempt includes both compiling a candidate program and running the test cases.", "As in Kulal et al. (2019), for each pseudocode line x l , we use an off-the-shelf neural machine translation system to obtain a set of C candidate code pieces Y l = { y lc | c [ C ] } , where candidate code piece y lc has probability p lc .", "A full candidate program y is a concatenation of candidate code pieces, one per line, and has score p ( y ) : y = concat Ll =1 y lc l , p ( y ) = L (cid:89) l =1 p lc l .", "We aim to find valid high-scoring programs in our search procedure.", "Kulal et al. 
"Kulal et al. (2019) propose best-first search as a baseline, which enumerates all complete candidate programs in descending order by score.", "Using a priority queue, this algorithm can efficiently find the exact top B highest-scoring candidates in time O(L log(BL)) per candidate.", "However, this approach ignores any dependence between different lines.", "For example, any of the code piece candidates in Figure 1 could potentially be used in a valid program, but if we naively combine certain subsets of candidates together, the resulting program will be invalid due to the use of undeclared variables or mismatching braces.", "To solve this problem, we propose to enforce certain syntactic and semantic constraints when combining candidate code pieces.", "The candidate program should adhere to the grammatical specification of the target language.", "However, since incorporating the complete set of C++ grammatical constraints would require significant engineering effort, we instead restrict our attention to the set of primary expressions consisting of high-level control structures such as if, else, for loops, function declarations, etc.", "As shown in Figure 2, we parse the candidate code pieces for each line into a list of primary expression symbols.", "In order for code pieces from consecutive lines to be used together, there must exist a grammatical derivation that combines their respective symbols.", "The complete list of primary expressions can be found in the appendix; see Tables 6 and 7.", "Additionally, some production rules are associated with the start or end of a variable scope block.", "We require that the number of open scope blocks equals the indentation level i_l for each line l.", "Each scope block is associated with a symbol table (Aho et al., 1986) keeping track of the variables that have been declared within that scope or any containing scopes.", "We extract the variable names used or declared by each code piece (Figure 3) and ensure that (1) undeclared variables are not used, and (2) variables are not redeclared within the same scope.", "After checking these constraints, any variables declared by a given code piece will be added to the symbol table associated with the current scope.", "These symbol table constraints are based on the semantic information of code pieces and are fundamentally different from previous AST-based syntactic constraints for code generation (Rabinovich et al., 2017b; Yin and Neubig, 2017).", "Formally, any context-free grammar that specifies the same constraints requires at least exponential description complexity.", 
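A minimal sketch of the symbol-table check is given below, assuming the used/declared variable names per code piece are already extracted (the paper does this with its own primary-expression parser); the function names here are our own.

```python
def check_line(scopes, declared, used):
    """scopes: list of sets of variable names, one per open scope
    (innermost last). declared/used: names declared/used by this piece."""
    visible = set().union(*scopes) if scopes else set()
    for v in used:
        if v not in visible:
            return False              # (1) use of an undeclared variable
    for v in declared:
        if v in scopes[-1]:
            return False              # (2) redeclaration in the same scope
    scopes[-1].update(declared)       # commit declarations to current scope
    return True

scopes = [set()]                                       # global scope
assert check_line(scopes, declared={"n"}, used=set())  # e.g. "long n = 0;"
scopes.append(set())                                   # "{" opens a scope
assert check_line(scopes, declared=set(), used={"n"})  # "n" is visible here
assert not check_line(scopes, declared=set(), used={"o"})  # 'o' undeclared
```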
"We provide a proof adapted from Ellul et al. (2005) in Appendix A.2.", "We note two properties of the aforementioned constraints.", "First, we can efficiently compute whether a program prefix can possibly lead to a full program that satisfies the constraints by using an incremental parser (Ghezzi and Mandrioli, 1979) and checking the symbol tables.", "Second, not all information from a code piece is necessary to verify the constraints.", "Accordingly, when multiple code piece candidates have the same primary expression symbols and the same variable declarations and usage, swapping between them would not affect the satisfiability of the constraints.", "For example, changing from a += 1 to a -= 1 will not change a compilable program into a non-compilable one, or vice versa.", "These two properties help motivate the hierarchical beam search algorithm introduced in the next section.", "More formally, we take the configuration φ(y_lc) of a line y_lc to be the minimal set of features required to verify the above constraints.", "The prefix scaffold S_{y,l} = [φ(y_{1 c_1}), φ(y_{2 c_2}), ..., φ(y_{l c_l})] of a program y then contains all the information needed to verify the constraints for the first l lines.", "We can efficiently compute whether S_{y,l} is a valid prefix scaffold when l < L and whether S_{y,L} is a valid scaffold for a full program when l = L. (To keep notation uncluttered, we sometimes use φ to denote a configuration, ignore the subscript y of S when we refer to a general scaffold that is not necessarily associated with a specific program, and ignore the subscript l = L of S when we refer to the scaffold of a full program.)", "Our goal is to find the top B highest-scoring candidate programs that satisfy the aforementioned constraints.", "Unfortunately, finding whether even one solution exists is NP-hard (proof given in Section A.3).", "One way we can approximate the solution is to use a standard beam search.", "The beam maintains a list of hypothesis program prefixes along with their respective scores.", "We extend the beam by adding the candidate code pieces from the next line to each candidate program prefix if they form valid combinations under the constraints, then prune the hypotheses with scores outside the top W.", "The algorithm ends after L steps, returning all the valid hypotheses in the final beam.", "Although beam search can approximate the top B solutions, the time complexity of beam search grows quadratically with the beam width W.", "Finding the top B candidates requires that W ≥ B, and hence each candidate takes Ω(BL) (amortized) time to generate, which can become intractable if B is on the order of thousands.", "Even worse, beam search is often biased towards variations at the end of the program due to its greedy decisions, and can waste its budget on candidates that are unlikely to be the correct solution.", "This is in direct contrast to the computationally lighter baseline, which generates the exact (unbiased) top candidates independently for each line without constraints.", "Can we combine the advantages of both algorithms?", "A key observation is that the assumption of independent scoring across different lines allows fast and unbiased full program candidate generation, while an expensive beam search is inevitably needed to deal with the inherent dependence between lines.", "Therefore, we propose a hierarchical beam search method that first uses beam search with a smaller beam width W to find likely scaffolds, including only the minimum dependency information between lines needed to satisfy the constraints, then scores candidates independently for each line conditioned on the scaffold.", 
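The standard (non-hierarchical) beam search just described can be sketched as follows; `candidates` and `is_valid_prefix` are placeholders for the per-line NMT outputs and the incremental parser plus symbol-table check.

```python
import math

def beam_search(candidates, is_valid_prefix, W):
    """candidates[l]: list of (code, prob) for line l."""
    beam = [((), 0.0)]                       # (chosen indices, log-score)
    for line, pieces in enumerate(candidates):
        extended = []
        for prefix, score in beam:
            for c, (code, p) in enumerate(pieces):
                new_prefix = prefix + (c,)
                if is_valid_prefix(new_prefix):   # constraint check per line
                    extended.append((new_prefix, score + math.log(p)))
        # prune hypotheses with scores outside the top W
        beam = sorted(extended, key=lambda h: -h[1])[:W]
    return beam                              # valid full programs, best first
```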
"We assign probability p(φ_l) to configuration φ_l by marginalizing over all code piece candidates at line l with configuration φ_l, and assign probability p(S) to scaffold S by multiplying the configuration probabilities from each line: p(φ_l) = Σ_{φ(y_lc) = φ_l} p_lc, p(S) = ∏_{i=1}^{L} p(S[i]). (2)", "Using this scoring function, we run a scaffold beam search with beam size W, then select the top K highest-scoring scaffolds S_1, S_2, ..., S_K.", "Next, to generate program candidates from a given scaffold S, we filter out all code pieces in Y_l that do not have the configuration specified by S; in other words, the new set of candidate code pieces for each line l is Y_l^S = {y_lc ∈ Y_l | φ(y_lc) = S[l]}.", "As a result, conditioned on a fixed scaffold S, code pieces from each line can be chosen independently and the resulting full program is guaranteed to satisfy the aforementioned constraints.", "Given K candidate scaffolds, we enumerate the top full program candidate from each scaffold and choose the highest-scoring one.", "This takes time O(K + L log(BL)) per candidate.", "In practice, we pick a relatively small K, and the running time has only a logarithmic dependence on B.", "An alternative view on beam search is that it front-loads the computation to reject invalid programs that do not satisfy the constraints earlier in the search process.", "A brute force alternative is to generate the next highest-scoring candidates from the unconstrained baseline and reject the invalid ones.", "This method is guaranteed to produce top-scoring solutions, but it might need arbitrarily many candidates to find a valid one.", "We need to compare the computational efficiency between these two methods.", "The most computationally expensive operation in constraint verification is to verify whether the next line is valid given the program prefix.", "Therefore, we count how many times this verifier function is called as a proxy for computational efficiency.", "We allow the brute force method to use as large a verifier function call quota as our active beam search method: it can validate/reject a program candidate until the quota is used up.", "Section 6.4 compares our scaffold search method against this brute force approach.", "The latter needs thousands of times more computation to attain the same level of performance as the former.", "Empty Pseudocode: Around 26% of the lines in the dataset do not have pseudocode annotations.", "They usually correspond to lines of code that do not have semantically meaningful information, such as int main() {, {, }, etc.", "Kulal et al. (2019) replaced these empty pseudocode lines with the ground truth code, effectively giving this information away to the search algorithm.", "We did not use the gold code pieces for these lines, which makes our task more challenging.", "Model Training: We use OpenNMT (Klein et al., 2017) with its default settings to translate pseudocode into code piece candidates.", "Our model is a two-layer LSTM seq2seq model with hidden size 512, an attention mechanism (Bahdanau et al., 2014) and copy pointers (Vinyals et al., 2015).", 
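A minimal sketch of equation (2) follows: marginalize per-line candidate probabilities by configuration, then score a scaffold as the product of its per-line configuration probabilities. Here `config` stands in for the paper's φ(·), which extracts primary-expression symbols and variable declarations/usage; the function names are ours.

```python
from collections import defaultdict

def config_probs(pieces, config):
    """pieces: list of (code, prob) for one line -> {configuration: p(phi)}."""
    probs = defaultdict(float)
    for code, p in pieces:
        probs[config(code)] += p    # p(phi_l) = sum of p_lc sharing that phi
    return probs

def scaffold_prob(scaffold, candidates, config):
    """p(S) = prod over lines l of p(S[l])."""
    p = 1.0
    for line, phi in enumerate(scaffold):
        p *= config_probs(candidates[line], config).get(phi, 0.0)
    return p
```

Conditioned on a chosen scaffold S, the surviving candidates for line l are just those pieces whose configuration equals S[l], so the per-line top choices can then be taken independently.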
"We estimate the fraction of problems solvable given an infinite search budget and 100 candidates per line, as in Kulal et al. (2019), to obtain an oracle bound on performance.", "Due to slight differences in hyperparameters and tokenization, our model has a higher ceiling: on the unseen worker (problems) test set, the oracle performance is 74.4% (60.5%), compared to 71.4% (55.2%) in previous work (the oracle performance here is not a universal property of the data, but depends on the model used to generate the code pieces).", "Across all test examples, the oracle performance is 68%.", "Parsing Code Pieces: Since no off-the-shelf C++ parser extracts the information we need from code pieces, we implement our own primary expression parser to extract high-level control information.", "We rely on the following heuristic assumptions to parse the code pieces generated by the model: (1) a code piece belongs to only one variable scope; (2) the generation of every primary expression terminal symbol lies in one line.", "Our parser fails on less than 0.01% of the code pieces in the dataset.", "While selecting the candidates for each line, we immediately reject the ungrammatical pieces we cannot parse.", "Without deliberate implementation optimization, this parsing operation takes on average 2.6 seconds to process all the top 100 code pieces for a problem, approximately the same wall-clock time as one compilation attempt.", "Search Algorithm Hyperparameters: As in Kulal et al. (2019), we consider the top C = 100 code pieces for each line.", "Unless otherwise mentioned, our default beam width W is 50 for scaffold search, and we keep the top K = 20 scaffolds for the subsequent generation.", "We evaluate a search algorithm A by computing the fraction of problems it can solve on the test set given an evaluation budget B per problem, which we denote as f_A(B).", "We plot f_A against B and evaluate it at B = 1, 10, 100, 1000 for each algorithm A to compare performance.", "We note that the difference in f values between two algorithms becomes smaller and less informative as B increases.", "With infinite code piece candidates and budget, a brute force search can enumerate all possible programs, find the right solution, and f converges to 1.", "Direct comparison on f values hence becomes meaningless as B increases.", "To address this deficiency, we define a lead metric l_{A1,A2}(B) equal to the extra budget X needed by algorithm A2 to reach the same level of performance as A1 given budget B.", "Formally, l_{A1,A2}(B) = inf{X | f_{A2}(B + X) ≥ f_{A1}(B)}.", "A visualization can be seen in Figure 5(c).", "We report our algorithms' performance on the held-out test set with annotations from unseen crowd workers and with unseen problems separately.", "We compare four settings:", "No Constraints: the best-first search method that scores lines independently.", "Syntactic Constraints: the constraints on the primary expression and indentation level as described in Section 3.1.", "Symbol Table Constraints: both the syntactic constraints and the symbol table constraints described in Section 3.2.", "We abbreviate this as SymTable.", "Backoff: sometimes hierarchical beam search with the SymTable constraints fails to return enough valid candidates; in such cases we back off to the Syntactic constraints.", "Additionally, we compare with the Previous state-of-the-art reported by Kulal et al. (2019).", 
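Before turning to the results, the metrics defined above can be made concrete with a short sketch. Here `solved_at` is a hypothetical per-problem record of the first successful attempt index (None if the problem is never solved); these structures are ours, not the paper's.

```python
def f(solved_at, B):
    """f_A(B): fraction of problems solved within a budget of B attempts."""
    return sum(1 for b in solved_at if b is not None and b <= B) / len(solved_at)

def lead(solved_A1, solved_A2, B, max_extra=100000):
    """l_{A1,A2}(B): smallest X with f_A2(B + X) >= f_A1(B); None if never."""
    target = f(solved_A1, B)
    for X in range(max_extra + 1):
        if f(solved_A2, B + X) >= target:
            return X
    return None
```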
(2019).", "The results can be seen in Figure 5 and Table 1, where we use the constraint type as a shorthand for the search algorithm under this constraint.", "Without constraints, the baseline algorithm performs especially poorly because it needs syntactic context to select relevant code pieces for 26% of the lines with empty pseudocode.", "SymTable outperforms Syntactic.", "As shown in Test Against Unseen Workers Hierarchical Search ( H ), Beam Width W = 50 Constraint B =1 B =10 B =10 2 B =10 3 None 0.0% 8.1 % 29.2 % 44.3% Previous 30.7% 44.4% 53.7% 58.6% Syntactic 42.8 % 51.9% 59.3% 65.9% SymTable 45.8% 55.1% 62.6% 67.3% Backoff 46.0% 55.3 % 62.8% 67.6% Test Against Unseen Problems Constraint B =1 B =10 B =10 2 B =10 3 None 0.0% 3.0% 11.5% 21.8% Previous 17.8% 28.4% 34.2% 38.3% Syntactic 27.5 % 35.4% 42.1% 47.8% SymTable 31.0% 39.2 46.0% 49.3% Backoff 31.2% 39.4% 46.1% 49.6% Table 1: Comparison of the fraction of program passed when B = 10 0 , 1 , 2 , 3 under different constraints; constraint satisfied by hierarchical beam search with the default hyper-parameters mentioned in Section 5. Pre-vious refers to the previous state of the art model.", "Figure", "5(d), the lead of SymTable on Syntactic grows linearly: the more these two algorithms search, the more budget is needed by Syntactic to reach the same level as SymTable.", "Syntactic needs nearly 600 more budget to have comparable performance with SymTable that uses 400 budget.", "We notice that all of our constrained search methods outperform the previous state-of-the-art.", "Averaged across all test examples, Backoff can solve 55.1% of the problems within 100 budget, which is 10% higher than the previous work.", "On unseen workers (problems), the top 11 (top 52) candidates of Backoff solve the same fraction of problems as the top 3000 candidates of the best performing algorithm in Kulal et al. 
(2019).", "We use regular beam search with beam width W = 200 to generate B = 100 valid candidate full programs.", "We did not experiment with B = 1000 because beam search with W B 1000 is computationally intractable.", "For hierarchical beam search we experiment with W = 10 , 25 , 50 for scaffold search and keep the top K = min ( W, 20) scaffolds for subsequent searches.", "Table 2 compares the performance of hierarchical beam search against regular beam search with different beam sizes under Syntactic and SymTable constraints.", "We find that if hierarchical beam search is used, even dropping the beam width Test Against Unseen Workers, Syntactic Method, Width B =1 B =10 B =10 2 H, W =10 42.8% 51.7% 59.1% H, W =25 42.8% 51.8% 59.3% H, W = 50 42.8% 51.9% 59.3% R, W =200 42.4% 51.3% 58.2% Test Against Unseen Workers, SymTable Method, Width B =1 B =10 B =10 2 H, W =10 45.4% 54.3% 61.0% H, W =25 45.6% 54.7% 61.9% H, W = 50 45.8% 55.1% 62.6% R, W =200 45.6% 54.9% 61.9% Table 2: Comparison of different beam size with Syntactic and SymTable constraint when tested against unseen workers.", "from 50 to 10 leads to negligible change in performance.", "In contrast, even with a large beam width W = 200 , regular beam search method cannot efficiently search for the solution and leads to a noticeable drop in performance.", "We observe a similar trend for SymTable: regular beam search with beam width W = 200 under-performs hierarchical search with beam width W = 25 .", "However, if we further decrease the hierarchical beam search width from 25 to 10 in this setting, we observe a significant drop in performance, possibly because there are more variable usage variations than syntactic variations.", "We now compare scaffold search to the brute force algorithm as described in section 4.3.", "We make B = 50,000 attempts for the brute force method so that its performance can match at least the top 10 candidates of our constrained approach and make the lead metrics meaningful.", "To save computation and avoid compiling all 50,000 programs, we early reject every candidate that does not fulfill our constraints.", "The lead of our approaches against the brute force algorithm is shown in Figure 6. After being adjusted for the constraint checking quota used, the lead of our approach is tens of thousands ahead of the unconstrained approach.", "Scaffold search saves lot of computation by inducing a little overhead earlier in the search process.", "Beam search has the problem of producing fewer variations at the beginning of the search.", "Such a weakness might be tolerable if we only care about the top 1 candidate, but becomes disastrous in a search setting where we want the top B candidates, whose variation is typically spread across the entire program.", "We describe the following procedure to formally define this intuition.", "We first aggregate code piece choices for each line for all the top B programs.", "As shown in Figure", "8(a), we construct a matrix such that each column corresponds to a full program candidate; the number r in the i th row and j th column means that on line i , the j th full program candidate chooses the r th code piece candidate (i.e. 
"Then we can build a prefix tree (Figure 8(b)) by treating each column as a string, where each traversal from the root to a leaf is a complete candidate program y.", "We define the representative branch/program as a traversal from the root to a leaf that always chooses the child that contains the most leaves (with ties broken randomly).", "For each of the remaining B − 1 programs/traversals, we find the smallest line number where it starts to diverge from the representative branch.", "Among these B − 1 programs, we count the fraction of divergences that take place in the first/second half of the lines.", "For example, in Figure 8(b), 0% of the divergences occur in the first half.", "We compare hierarchical vs. regular beam search under the syntactic constraints with different beam widths W: hierarchical W = 10, 50 and regular W = 50, 200.", "We group the programs by length L, consider the top B = 25 attempted programs for each problem, and report the fraction of divergences that occur in the first half of the program length for each group.", "The results can be seen in Table 3.", "For regular beam search, a moderate beam width W = 50 consistently brings fewer variations in the first half of the program, and it needs a larger W = 200 to fix this problem.", "In contrast, a small W for hierarchical beam search produces the same amount of variation in the first half of the program.", "The same statistics under the SymTable constraints can be seen in the appendix (Table 5), and the conclusion holds similarly.", "In this section we give representative examples of the program candidates that are rejected by our syntactic and symbol table constraints.", "Syntactic Constraints: As mentioned in Section 5, about 26% of the lines do not have pseudocode.", "They may correspond to }, int main() {, {, return 0, }; or ;.", "These lines need contextual information to select valid code pieces, and naively combining the top-1 candidate from each line independently will always produce grammatically invalid programs.", "Syntactic constraints also rule out stylistic ambiguities.", "For example, when there is only one statement within an if statement, the programmer can optionally include a curly brace.", "However, the pseudocode does not contain such detailed information about style.", 
"Both if(...) { and if(...) might be valid, but only one of them can be correct given the context of a program.", "Our syntactic constraints, which include a curly brace constraint, can help us select the right code piece.", "Symbol Table (SymTable) Constraints: Pseudocode annotations are sometimes implicit about variable declarations.", "Given the instruction set N to 222222, both code pieces (1) int N = 222222; and (2) N = 222222; are potentially valid.", "[Figure 7: examples of model failures, with columns Reason (percentage), Pseudocode, Gold Solution, and Model Generation.]", "We might disambiguate this case with a SymTable constraint: if the variable is declared before in the same scope, then we know this code piece should not contain a repeated declaration and hence we should choose candidate (2); otherwise we should choose (1) to avoid using undeclared variables.", "SymTable constraints are also helpful when the pseudocode does not put quotation marks around string/character literals.", "Consider the instruction if lucky is A then do the following with the ground truth code piece if (lucky == 'A') {.", "The model might misunderstand A as a variable name and generate if (lucky == A) {.", "This error can be ruled out by the SymTable constraint if variable A is undeclared.", "However, SymTable constraints do not preclude all errors related to declarations.", "Consider the following generation, where the last line is wrong:
int now = 1, cnt = 0;
for (int i = 0; i < n; ++i) {
    ... // some lines omitted
    cnt = 1, now = v[i];     // gold
    int cnt = 1, now = v[i]; // pred
}", "A programmer will usually not declare new variables in the last line of a variable scope.", "However, technically this is not an invalid statement, and the SymTable constraint fails to reject this wrong candidate.", "Extra modelling is needed to take into account programming conventions and common sense.", "So far we have focused on combining independent candidates from each line together to search for the target program.", "This heavily depends on the underlying model to generate potentially correct code pieces.", "However, in 32% of the programs at least one hard line has no generated code piece that is functionally equivalent to the solution, thus indicating plenty of room for improvement.", "To help the readers understand the bottleneck for code piece generation and point out important future directions, we randomly sampled 200 hard lines and manually analyzed why the generation fails by looking at the top-1 candidate of the model.", "The error analysis is available on our GitHub.", "We group the failures into the following categories, giving a detailed breakdown and examples in Figure 7.", "(a) The model generation is wrong despite clear pseudocode; this typically happens when the gold code piece is long or highly compositional.", "(b, c) The pseudocode contains ambiguity; the model generation is reasonable but either needs (b) variable type clarification or (c) syntactic context.", "This requires incorporating contextual information of the program into the code piece generation process.", "(d, e) The pseudocode either (d) consists of variable name typos or (e) is completely wrong." ]
[ "objective", "objective", "objective", "result", "objective", "abstain", "abstain", "other", "abstain", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "result", "objective", "method", "objective", "abstain", "objective", "result", "method", "objective", "result", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain" ]
[ "Abstract Sentiment analysis is used as a proxy to measure human emotion, where the objective is to categorize text according to some prede-fined notion of sentiment.", "Sentiment analysis datasets are typically constructed with gold-standard sentiment labels, assigned based on the results of manual annotations.", "When working with such annotations, it is common for dataset constructors to discard noisy or controversial data where there is significant disagreement on the proper label.", "In datasets constructed for the purpose of Twitter sentiment analysis (TSA), these controversial examples can compose over 30% of the originally annotated data.", "We argue that the removal of such data is a problematic trend because, when performing real-time sentiment classification of short-text, an automated system cannot know a priori which samples would fall into this category of disputed sentiment.", "We therefore propose the notion of a complicated class of sentiment to categorize such text, and argue that its inclusion in the short-text sentiment analysis framework will improve the quality of automated sentiment analysis systems as they are implemented in real-world settings.", "We motivate this argument by building and analyzing a new publicly available TSA dataset of over 7,000 tweets annotated with 5x coverage, named MTSA.", "Our analysis of classifier performance over our dataset offers insights into sentiment analysis dataset and model design, how current techniques would perform in the real world, and how researchers should handle difficult data.", "The goal of sentiment analysis is to determine the attitude or emotional state held by the author of", "a piece of text.", "Automatic sentiment classification that can quickly garner user sentiment is useful for applications ranging from product marketing to measuring public opinion.", "The volume and availability of short-text user content makes automated sentiment analysis systems highly attractive for companies and organizations, despite potential complications arising from their short length and specialized use of language.", "The popularity of Twitter as a social media platform on which people can readily express their thoughts, feelings, and opinions, coupled with the openness of the platform, provides a large amount of publicly accessible data ripe for analysis, being a well established domain for sentiment analysis as reflecting real-world attitudes (Pak and Paroubek, 2010; Bollen et al., 2011).", "In this paper, we look into Twitter sentiment analysis (TSA) as a suitable, core instance of general short-text sentiment analysis (Thelwall et al., 2010, 2012; Kiritchenko et al., 2014; Dos Santos and Gatti, 2014), and encourage the methods and practices presented to be applied across other domains.", "Building a TSA model that can automatically 1886 determine the sentiment of a tweet has received significant attention over the past several years.", "However, since most state-of-the-art TSA models use machine learning to tune their parameters, their performance and relevance to a real-world implementation setting is highly dependent on the dataset on which they are trained.", "TSA dataset construction has, unfortunately, received less attention than TSA model design.", "Many commonly used TSA datasets make assumptions that do not hold in a real-world implementation setting.", "For example, it is a common practice for studies to discard tweets on which there is high annotator disagreement.", "While some argue that this is done to remove 
noise resulting from poor annotator quality, this argument does not hold when considering that these datasets present high rates of unanimous annotator agreement (annotator disagreement information has proven useful in other areas of sentiment analysis (Wilson et al., 2005)).", "This suggests that the problem is not poor annotators, but, rather, difficult data that does not fall into the established categories of sentiment.", "Consider the sample tweets in Table 1 drawn from our dataset, one with unanimous agreement on an OBJECTIVE label, one with 60% agreement, and one with complete disagreement.", "We observe that, as the amount of disagreement across annotations increases, so too does the difficulty of determining what the tweet's gold-standard label really should be.", "Though the issues we raise may seem obvious, the absence of their proper treatment in the existing literature suggests the need to systematically consider their implications in sentiment analysis.", "In this paper, we propose the inclusion of a COMPLICATED class of sentiment to indicate that the text does not fall into the established categories of sentiment.", "We offer insights into the differences between tweets that receive different levels of inter-annotator agreement, providing empirical evidence that tweets with differing levels of agreement are qualitatively different from each other.", "Our claims are supported by empirical analysis of a new TSA dataset, the McGill Twitter Sentiment Analysis dataset (MTSA), which we release publicly with this work.", "The dataset contains 7,026 tweets across five different topic-domains, annotated with 5x coverage.", "We release this dataset with the raw annotation results, and hope that researchers and organizations will be able to analyze our dataset and build models that can be applied in real-world sentiment analysis settings.", "The field of Twitter Sentiment Analysis (TSA) has seen considerable productive work over the past several years, and several large reviews and surveys have been written to highlight the trends and progress of the field, its datasets, and the methods used for building automatic TSA systems (Saif et al., 2013; Medhat et al., 2014; Martnez-Camara et al., 2014; Giachanou and Crestani, 2016).", "There are a variety of methods for constructing TSA datasets along a variety of domains, ranging from very specific (e.g., OMD (Shamma et al., 2009)) to general (e.g., SemEval 2013-2014 (Nakov et al., 2016)).", "While there is the popular Stanford Twitter corpus, constructed with noisy labellings (Go et al., 2009), the more common method of constructing TSA datasets relies on manual annotation (usually crowd-sourced) of tweet sentiment to establish gold-standard labellings according to a pre-defined set of possible label categories (often POSITIVE, NEGATIVE, and NEUTRAL) (Shamma et al., 2009; Speriosu et al., 2011; Thelwall et al., 2012; Saif et al., 2013; Nakov et al., 2016; Rosenthal et al., 2017).", "One of the earliest manually annotated TSA datasets, the Obama-McCain Debate (OMD) (Shamma et al., 2009), was released with the specific annotator votes for each tweet, rather than a final specific label assignment.", "Nonetheless, most work on this dataset filters out tweets with less than two-thirds agreement (Speriosu et al., 2011; Saif et al., 2013) (Table 2).", "Unfortunately, many later dataset releases have not followed the example of the OMD; the designers of such datasets have opted instead to release only the resultant labelling according to a motivated 
(but constraining) label-assignment schema, often removing tweets with high inter-annotator disagreement from the final dataset release (Saif et al., 2013; Nakov et al., 2016; Rosenthal et al., 2017).", "The assumptions and implications resulting from such design choices should be carefully considered by researchers before deciding on how to construct or analyze sentiment analysis datasets.", "Indeed, a current limitation in the field is the lack of attention paid to label-assignment schemes, which ultimately determine the gold-standard labellings of samples.", "We argue that researchers should consider whether or not the choices made during dataset construction adequately reflect a situation in which automatic sentiment analysis systems would be used in real-world settings.", "[Table 2: overview of TSA datasets; columns: Name, # Annotated, Discarded, Coverage, Labels, Ref.]", "In the SemEval 2017 Task 4 (Rosenthal et al., 2017), a thorough 5x coverage annotation scheme is used (each tweet is annotated by at least five people).", "Annotations were made on a five-point scale, with categories STRONGLY NEGATIVE, WEAKLY NEGATIVE, NEUTRAL, WEAKLY POSITIVE, and STRONGLY POSITIVE.", "If at least three out of five of the annotators gave the same labelling, that was accepted as the final annotation.", "Otherwise, the authors used an averaging scheme (mapping the labels to integers -2, -1, 0, 1, 2) to determine the final label, taking the average of the labellings and rounding according to a specific criterion.", "This is highly problematic.", "For example, if a controversial tweet receives two STRONGLY NEGATIVE, two STRONGLY POSITIVE, and one NEUTRAL labelling, it will have a resultant label of NEUTRAL.", "Yet, the tweet would certainly not be neutral; it would be qualitatively different from a tweet with unanimous agreement on a NEUTRAL labelling.", "In Section 5, we provide empirical results supporting this claim, discovering that high-disagreement data is qualitatively different from high-agreement data.", "Nakov et al. (2016) provide a thorough exploration into the specific design decisions and considerations made during the construction of the 2013-2014 SemEval shared task for short-text sentiment analysis.", "(Note that the entire OMD dataset was released with annotator votes, but most studies remove the proportion of tweets where there was not at least two-thirds agreement on the label.)", "In Subtask B, annotators determined the overall polarity of a piece of text, according to a ternary labelling scheme between POSITIVE, NEGATIVE, or NEUTRAL.", "The final label of the sentence was determined based on the majority of the labels according to 5x coverage.", "The designers thus discarded sentences where there was no majority annotator agreement, since such sentences are likely to be controversial cases (p. 40); they do not report how much data was discarded.", 
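The problem with the averaging scheme can be seen with a small numeric sketch. The exact SemEval rounding criterion is not reproduced here; plain round() is used as a stand-in, and the function names are ours.

```python
LABELS = {-2: "StronglyNeg", -1: "WeaklyNeg", 0: "Neutral",
          1: "WeaklyPos", 2: "StronglyPos"}

def semeval_style_label(votes):
    """votes: five annotator labels mapped to integers in {-2,...,2}."""
    majority = max(set(votes), key=votes.count)
    if votes.count(majority) >= 3:      # 3-of-5 agreement wins outright
        return LABELS[majority]
    # otherwise, average the labellings and round
    return LABELS[round(sum(votes) / len(votes))]

# Two strongly negative and two strongly positive votes cancel out:
print(semeval_style_label([-2, -2, 2, 2, 0]))  # -> Neutral, despite polarization
```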
"Saif et al. (2013) constructed a new dataset, the STS-Gold, by taking into account several limitations of the TSA datasets they reviewed.", "In their study, 3,000 tweets were labelled with 3x coverage.", "Any tweet without unanimous agreement on the label was discarded; this decision was justified by the argument that they did not want noisy data in their dataset.", "Thus, they discarded 794 tweets, or 26.5% of their originally annotated data.", "While we argue that this is a problematic design decision, we note that discarding data in this way successfully isolated unanimous-agreement from majority-agreement data, thus avoiding conflating tweets with different levels of agreement, unlike in the 2013-14 and 2017 SemEval tasks.", "The annotation scheme for the STS-Gold resolves one of the problems in the SemEval 2017 Task, as it provides an option for labelling a MIXED category, capturing tweets bearing multiple conflicting sentiments.", "It also provides the OTHER category for tweets where it is difficult to decide on a proper label.", "Interestingly, the contrast between the high frequency of high-disagreement tweets (794 total) and the low frequency of tweets unanimously labelled as OTHER (4 total) is consistent with our findings on the COMPLICATED label (Section 3.3).", "The challenges and possible approaches to manual sentiment annotation have been previously discussed by Mohammad (2016), who offers important insights into how questions and problem descriptions should be posed to annotators.", "Based on analysis of the design choices of the three datasets described above, and on the thorough overview of other datasets found in (Saif et al., 2013), we conclude that there are two primary limitations in the standard TSA datasets.", "First, the lack of distinction between data with majority- vs. unanimous-agreement on the annotated label (Nakov et al., 2016; Rosenthal et al., 2017).", "In the analysis (Sections 3.3 and 6) of our TSA dataset, we observe a clear qualitative difference between majority-agreement and unanimous-agreement data, suggesting that these sets of data should not necessarily be treated in the same way.", "Second, the systematic removal of controversial (or high-disagreement) data (Saif et al., 2013; Nakov et al., 2016).", "We argue that this tendency is problematic because any automatic sentiment analysis system to be implemented in a real-world setting cannot know a priori which tweets will be noisy or controversial.", "An automatic sentiment analysis system trained on such a dataset will inevitably mislabel such tweets as they appear in a real-world implementation setting.", "We therefore suggest that the following paradigm become the norm in the field: in releasing sentiment analysis datasets, researchers should provide the specific annotations obtained for each sample (as was done by Shamma et al. (2009)), in addition to the resultant labelling based on the label-assignment scheme they decide upon.", 
(2009)), in addition to the resultant labelling based on the label-assignment scheme they decide upon.", "Additionally, data with high levels of annotator disagreement should not be discarded , rather, it should be included in dataset releases.", "The absence of a TSA dataset containing raw annotations and sufficient coverage to identify sources of annotator disagreement necessitated the creation of a new annotated dataset.", "Here, we provide an overview of the development of a new McGill TSA (MTSA) dataset composed of 7,026 tweets annotated with 5x coverage.", "Tweets were collected from Twitter's streaming API, filtered for English tweets that contained at least one English token, that were posted by users in North American time-zones.", "Each tweet had to contain at least one keyword from a topic cloud relating to Food (example keywords: weight, breakfast, protein), Media (cin-ema, gameofthrones, reggae), Commercial Technology (microsoft, laptop, iphone), or Sports (spurs, hockey, habs).", "Using this topic cloud and a diverse set of keywords per topic (average of 38 hand-selected keywords per topic), we collected tweets with the intent to represent the general sentiment surrounding a specific topic, while reducing the bias that would result by relying on a single topic or keyword.", "A further subset of tweets (categorized as General ) was collected from the stream, without any keyword filters, in order to further broaden the representative scope of our dataset.", "We additionally filtered out tweets containing external links or images, arguing that analysis of these multimodal tweets is a separate problem, belonging to the domain of Multimodal Sentiment Analysis (Poria et al., 2016; Soleymani et al., 2017).", "After the entire filtering process 4 was complete, we obtained 7,026 tweets across the different topics, which would be annotated with 5x coverage.", "The distribution of these tweets is seen below in Table", "3. 
3.2 Data Annotation Data annotation was crowd-sourced using the CrowdFlower platform 5 .", "All qualified CrowdFlower contributors had the opportunity to complete the task, which was presented as: carefully read the tweet, determine whether or not it expresses sentiment (i.e., whether it is OBJECTIVE or not), and if it does, categorize the sentiment as being either POSITIVE, NEGATIVE, or COMPLICATED.", "(Footnote 4: See supplemental material for the full enumeration of the specific filters used and the keywords used for each topic.)", "In the instructions, COMPLICATED was presented as the preferable option when the sentiment expressed in the tweet was ambiguous, mixed, or could be interpreted as both positive and/or negative.", "After a one-line description of the meaning of each category, the contributor was presented with examples of tweets belonging in each category before starting the task.", "In order to be considered qualified to complete the task, the contributor had to correctly answer at least 8 of 10 test questions, which we manually selected and labelled.", "When a user failed a test question, they were presented with the correct answer and a corresponding justification to ensure that they understood the task.", "We experimented with the inclusion of test questions from the COMPLICATED category during screening, and found that this was a major source of protest among high-quality annotators.", "Indeed, it may be paradoxical to expect annotators to agree on tweets that cause significant disagreement.", "Furthermore, due to the heterogeneous nature of this class, such test questions would risk biasing the annotators' notion of the category.", "As such, we limited our test questions to OBJECTIVE, POSITIVE, and NEGATIVE tweets.", "Users who successfully passed the initial test questions annotated a maximum of 400 tweets.", "Of those tweets, 10% were additional hidden test questions to continuously assess the quality of the annotators; an accuracy of at least 80% on these test questions was the threshold for including their annotations in the dataset.", "In the end, a total of 35,926 tasks were completed by 181 trusted contributors, resulting in 7,026 annotated tweets.", "The annotated tweets are categorized by four agreement levels: Unanimous (5 out of 5 agreed on the label), Consensus (exactly 4 out of 5 agreed), Majority (exactly 3 out of 5 agreed), or Disputed (maximum 2 out of 5 agreed).", "The distribution of agreement rates was consistent across topics (see supplemental material); thus, the entire dataset is merged for the remainder of the analysis.", "Tweets with at least Consensus agreement compose 64%", "of the dataset (4505 tweets), and tweets with at least Majority agreement compose 92% of the dataset (6473 tweets; see Table 4).", "The decision to discard tweets with significant annotator disagreement, as previously done in TSA research, would result in the loss of 8% to 34% of the annotated tweets in our dataset, depending on whether we filter to a minimum Majority or Consensus agreement, respectively.", "Interestingly, these numbers are consistent with the proportion of discarded tweets in previous literature (Table 2).", "Sentiment and annotator agreement.", "Tweets that caused more disagreement among the human annotators were found to be more sentiment-laden (majority label of POSITIVE, NEGATIVE, or COMPLICATED; Figure 1).", "Objective tweets composed 78% (1892 tweets), 63% (1311), and 50% (983) of the Unanimous, Consensus, and Majority subsets of annotated tweets, respectively.",
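To make the label-assignment scheme above concrete, the following is a minimal sketch of how five raw annotations could be mapped to an agreement level and a resulting label; the function name and input format are illustrative assumptions, not part of any released MTSA tooling.

```python
from collections import Counter

def agreement_level(annotations):
    # annotations: the 5 raw labels one tweet received, e.g.
    # ["POSITIVE", "POSITIVE", "POSITIVE", "OBJECTIVE", "NEGATIVE"]
    label, count = Counter(annotations).most_common(1)[0]
    if count == 5:
        return "Unanimous", label
    if count == 4:
        return "Consensus", label
    if count == 3:
        return "Majority", label
    return "Disputed", None  # at most 2 of 5 agreed

# Exactly 3 of 5 annotators agreed -> ("Majority", "POSITIVE")
print(agreement_level(["POSITIVE"] * 3 + ["OBJECTIVE", "NEGATIVE"]))
```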
"COMPLICATED label usage.", "Use of the COMPLICATED label by annotators was infrequent and, among tweets with high inter-annotator agreement, almost exclusively limited to tweets that expressed clear, mixed sentiment.", "For example, the single tweet that received a unanimous COMPLICATED annotation had clear mixed sentiment: the iPhone 6s is so big and hard to use but I still like it.", "There were a total of 13 tweets with at least Consensus agreement for the COMPLICATED label (see supplemental material).", "These specific tweets largely corresponded to the MIXED label used in previous TSA datasets (Shamma et al., 2009; Saif et al., 2013).", "Table 4: Annotator agreement rates.
Agreement   Count   % of Total
Unanimous   2415    34.4
Consensus   2090    29.7
Majority    1968    28.0
Disputed    553     7.9
Total       7026    100", "Other types of ambiguous tweets that did not fall clearly within the OBJECTIVE, POSITIVE, and NEGATIVE categories were not consistently identified as COMPLICATED by annotators.", "Rather, those tweets were a source of significant disagreement.", "Here, we present the construction of a shallow classifier and the experiments performed to study the phenomenon of annotator disagreement.", "Our objective was not to build a state-of-the-art classifier with optimal accuracy rates; rather, we sought to understand how the inclusion or exclusion of tweet subsets based on annotator disagreement impacts classification accuracy.", "To use machine learning methods with textual data, it is necessary to represent the data in a vector space such that each sample has the same dimensionality, despite varying sequence lengths.", "We concatenated three different standard feature extraction methods to build vector representations of tweets: N-grams (unigrams and bigrams), mean word embedding (GloVe embeddings built from Twitter data (Pennington et al., 2014) 6), and SentiWordNet scores (Baccianella et al., 2010). 7", "4.2 Experimental Design As described in Section 2, most recent work in TSA has agglomerated tweets together based on the majority labelling.", "For example, a tweet annotated with a Majority agreement labelling (e.g., 3 OBJECTIVE and 2 NEGATIVE) would be given the label OBJECTIVE, just as one with Unanimous agreement on an OBJECTIVE labelling.", "(Footnote 6: https://nlp.stanford.edu/projects/glove/; Footnote 7: See supplemental material for full elaboration of the preprocessing decisions and features extracted.)", "In our experiments with our collected dataset (Section 3), we seek to determine whether or not there is a qualitative difference between high- versus low-agreement data.", "Experiment I.", "In the first experiment setting, we agglomerate tweets according to the traditional practice for assigning labels based on annotations (Section 2); i.e., we remove tweets with at least a Majority-voted label of COMPLICATED, and remove the Disputed tweets (that is, we remove 8.75% of our annotated data for these experiments), creating a 3-class classification problem.", "We experiment over four different sets of our data in this scenario: the full dataset (minus the COMPLICATED 8.75%); tweets with exactly Majority agreement; tweets with exactly Consensus agreement; and tweets with exactly Unanimous agreement on the label (see Figure 1 for the label distributions over each of these subsets).", "Additionally, when making predictions on a specific subset, we present results from training solely on that subset versus training on all of the data in this setting.",
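As a rough illustration of the feature construction described in Section 4.1 above (n-grams, mean word embeddings, and lexicon scores concatenated into one vector), here is a hedged sketch; `glove` and `senti_score` are assumed helpers standing in for the GloVe lookup and the SentiWordNet scoring used in the paper, and the exact preprocessing certainly differs.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

def mean_embedding(tokens, glove, dim=100):
    # Average the GloVe vectors of in-vocabulary tokens;
    # fall back to a zero vector if none are found.
    vecs = [glove[t] for t in tokens if t in glove]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def build_features(tweets, glove, senti_score):
    # (1) unigram + bigram counts
    ngrams = CountVectorizer(ngram_range=(1, 2), min_df=2)
    x_ngram = ngrams.fit_transform(tweets).toarray()
    # (2) mean GloVe embedding and (3) a SentiWordNet-style score
    x_emb = np.stack([mean_embedding(t.split(), glove) for t in tweets])
    x_lex = np.array([[senti_score(t)] for t in tweets])
    # Concatenate into one fixed-dimensional representation per tweet
    return np.hstack([x_ngram, x_emb, x_lex])
```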
"Experiment II.", "In the second experiment setting, we sought to determine the impact of including controversial samples, making a 4-class classification problem.", "Samples that were labelled with at least Majority agreement on a COMPLICATED label, and samples with Disputed agreement, were all assigned the label COMPLICATED.", "We thus used the entirety of our dataset for this experiment, where the COMPLICATED class accounted for 8.75% (615) of the samples, with the rest of the samples being given the majority-vote labelling.", "Methods.", "For both experiments, we use a logistic regression classifier with balanced training-set class weights, using the feature set described in Section 4.1.", "Preliminary experiments with feature ablation, with whether or not to balance the training-set classes, and with different models (SVM with linear or RBF kernel, Random Forests, Naive Bayes, and K-Nearest-Neighbors) showed that this model variant was the best.", "(Figure 2: classifier performance over the All, Majority, Consensus, and Unanimous subsets.)", "We compare to a stratified random guesser, which predicts according to the distribution of classes in the training set (e.g., if 50% of the training set has samples labelled as OBJECTIVE, it will guess OBJECTIVE 50% of the time).", "To account for possible variance in the results, we use 5-fold cross-validation over the full dataset, where the accuracy reported is the average over the specific scores obtained on each fold.", "We evaluate with weighted- and macro-F1-scores to assess classifier performance.", "F1-score is a common way to measure classifier performance in sentiment analysis as it computes the harmonic mean between precision and recall.", "In multi-class classification, we obtain a one-versus-all F-score, F_c, for each class c in our set of possible classes, C.", "The weighted F-score weights each F-score by its support in the test set; if there are n_c samples in the test set belonging to class c, then the weighted F-score is expressed by F_weighted in Equation 1:", "F_weighted = (1 / Σ_{c∈C} n_c) Σ_{c∈C} n_c F_c   (1)", "Naturally, the weighted F-score is influenced by the frequency of samples in a class; so, in our case, it is biased toward the OBJECTIVE class due to its large frequency compared to the other classes (Table 5; Figure 1).", "Thus, we also report the macro F-score, which averages the F-scores for each class without considering their support, expressed by F_macro in Equation 2:", "F_macro = (1 / |C|) Σ_{c∈C} F_c   (2)", "This score evaluates model performance in isolation from the class distribution, allowing us to determine whether a change in accuracy is simply the result of a change in the distribution of classes or of a change in the model's generalization ability.", "In Figure 2, we present the results for Experiment I (Section 4.2).", "We note that the presented accuracy is higher when evaluated with the weighted F-score than with the macro F-score.", "Since both the weighted- and macro-F1-scores increase as we move along to higher-agreement subsets, we conclude that the accuracy improvement is not solely due to a change in the distribution of classes.", "Rather, there must be a qualitative difference between high- vs. low-agreement tweets; otherwise, the accuracy would have been the same across agreement levels.",
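For concreteness, Equations 1 and 2 above amount to the following computation; the class names and numbers in the example are invented for illustration.

```python
import numpy as np

def weighted_and_macro_f1(f_scores, supports):
    """f_scores: class -> one-vs-all F1; supports: class -> n_c."""
    classes = list(f_scores)
    n = np.array([supports[c] for c in classes], dtype=float)
    f = np.array([f_scores[c] for c in classes])
    f_weighted = (n * f).sum() / n.sum()   # Equation 1
    f_macro = f.mean()                     # Equation 2
    return f_weighted, f_macro

# A frequent class (here OBJECTIVE) pulls the weighted score up,
# while the macro score treats every class equally.
print(weighted_and_macro_f1(
    {"OBJECTIVE": 0.8, "POSITIVE": 0.5, "NEGATIVE": 0.4, "COMPLICATED": 0.1},
    {"OBJECTIVE": 700, "POSITIVE": 150, "NEGATIVE": 120, "COMPLICATED": 30}))
```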
"In Figure 3, we present the normalized confusion matrix obtained from Experiment II.", "We observe that the model classifies COMPLICATED tweets poorly.", "Although the model uses balanced class weights for training, it predicts OBJECTIVE the majority of the time, and each other class is most frequently mistaken as OBJECTIVE.", "Table 6: Experiment I. Macro-F1 results for precision, recall, and F1-score, as shown visually in Figure 2.
Test subset | Trained on all (P / R / F1) | Trained on subset (P / R / F1)
All         | 0.681 / 0.660 / 0.669       | -
Majority    | 0.552 / 0.524 / 0.533       | 0.502 / 0.488 / 0.491
Consensus   | 0.689 / 0.674 / 0.680       | 0.680 / 0.633 / 0.652
Unanimous   | 0.789 / 0.821 / 0.803       | 0.843 / 0.761 / 0.793", "The final weighted- and macro-F1-scores were, respectively, 65.8% and 51.2% with logistic regression, and 41.1% and 24.7% with the stratified random guesser.", "This large difference between the weighted and macro scores is largely due to the poor classifier performance on the COMPLICATED class.", "The gold standard that classifiers are trained to recover is, ultimately, the sentiment perceived by other humans.", "Thus, it is crucial to better understand sentiment annotation itself to inform future classifier design.", "Annotator disagreement is not human error.", "Our results show that annotator disagreements cannot simply be attributed to human error.", "There is a clear decrease in classifier performance when testing on subsets of tweets with lower annotator agreement (Figure 2), suggesting that tweets across these subsets are qualitatively different from each other.", "From a probabilistic perspective, this means that samples that obtain high annotator agreement are generated by a different real-world function than those that obtain low annotator agreement.", "This perspective is further justified by the fact that classifier performance is roughly the same when training on the full dataset versus when training just on the specific agreement-level subsets.", "Future work should explore how to handle this data, and we recommend reporting results on the different subsets by agreement level.", "On defaulting to the majority label.", "When each tweet is assigned a gold-standard label according to the majority annotation, we demonstrated that there are qualitative differences between tweets with Majority, Consensus, and Unanimous agreement.", "As exemplified by the sample tweets in Table 1, the difference between the two tweets with a majority OBJECTIVE annotation is reflected in the inter-annotator disagreement.", "We have shown that the subtleties in sentiment expression are masked by simply taking the majority label, and future work would involve factoring in these varying levels of agreement on labels during the model design process.", "To advance the field of short-text sentiment analysis, it is necessary to change common practices in dataset design and development.", "First, future datasets should be released with the raw annotator label assignments, without discarding any annotated data.", "This would allow other researchers to experiment with different means of handling annotation disagreement during the model design process.", "Second, we argue that sufficient resolution of short-text sentiment annotations requires at least 5x coverage.", "Our dataset, MTSA, of 7,026 tweets was constructed with 5x annotation coverage, a resolution at which we can just begin to distinguish these subsets of tweets.", "Higher coverage may still be needed to identify
and understand these annotator disagreements.", "In contrast, the differences between these two subsets would be masked using the 3x coverage commonly found in other datasets.", "Identifying ambiguous data.", "Results from Experiment II, and analysis of our COMPLICATED tweets, reveal that detecting high-disagreement tweets is a difficult task for both classifiers and humans.", "The poor performance of human annotators on identifying ambiguous tweets in our study, and the fact that high disagreement affected up to one third of the samples across TSA datasets, suggests that complicatedness is a real phenomenon.", "The optimal way to handle and identify this data requires further research.", "It is, however, an essential problem to solve, as real-world implementations of automated sentiment analysis systems will inevitably be confronted with such data.", "Such a system may be able to leverage the raw annotations during training, which is why we release the MTSA dataset with the raw annotation results included, and suggest all others do this as well.", "In this paper, we highlight the need to better engage with how humans actually annotate data in short-text sentiment analysis dataset construction by constructing the new McGill Twitter Sentiment Analysis (MTSA) dataset.", "Future work involves leveraging raw human annotations to improve sentiment analysis classifiers, and finding ways to better detect and understand the complicated property in these samples that cause high annotator disagreement.", "Additionally, we encourage researchers to use MTSA in the development of other methods for short text sentiment analysis, including unsupervised, lexicon-based, and rule-based methods.", "This work was the product of a class project pursued collectively by students in the COMP 767 graduate seminar in Social Media Analytics at McGill University, taught by Derek Ruths.", "This work was funded by the Discovery Grant Accelerator Supplement 2017-05165." ]
[ "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "method", "objective", "method", "objective", "abstain", "result", "method", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "abstain", "method", "other", "other" ]
[ "By introducing a small set of additional parameters, a probe learns to solve specific linguistic tasks (e.g., dependency parsing) in a supervised manner using feature representations (e.g., contextualized embeddings).", "The effectiveness of such probing tasks is taken as evidence that the pre-trained model encodes linguistic knowledge.", "However, this approach of evaluating a language model is undermined by the uncertainty of the amount of knowledge that is learned by the probe itself.", "Complementary to those works, we propose a parameter-free probing technique for analyzing pre-trained language models (e.g., BERT).", "Our method does not require direct supervision from the probing tasks, nor do we introduce additional parameters to the probing process.", "Our experiments on BERT show that syntactic trees recovered from BERT using our method are significantly better than linguistically-uninformed baselines.", "We further feed the empirically induced dependency structures into a downstream sentiment classification task and find its improvement compatible with or even superior to a human-designed dependency schema.", "Recent prevalent pre-trained language models such as ELMo (Peters et al., 2018b), BERT (De-vlin et al., 2018), and XLNet (Yang et al., 2019) achieve state-of-the-art performance for a diverse array of downstream NLP tasks.", "An interesting area of research is to investigate the interpretability of these pre-trained models (i.e., the linguistic properties they capture).", "Most recent approaches are built upon the idea of probing classifiers (Shi et al., 2016; Adi et al., 2017; Conneau et al., 2018; Peters et al., 2018a; Hewitt and Manning, 2019; Clark et al., 2019; Tenney et al., 2019b; Jawahar et al., 2019).", "A probe is a simple neural network (with a small additional set of parameters) that uses the feature representations generated by a pre-trained model (e.g., hidden state activations, attention weights) and is trained to perform a supervised task (e.g., dependency labeling).", "The performance of a probe is used to measure the quality of the generated representations with the as-sumption that the measured quality is mostly attributable to the pre-trained language model.", "One downside of such approach, as pointed out in (Hewitt and Liang, 2019), is that a probe introduces a new set of additional parameters, which makes the results difficult to interpret.", "Is it the pre-trained model that captures the linguistic information, or is it the probe that learns the downstream task itself and thus encodes the information in its additional parameter space?", "In this paper we propose a parameter-free probing technique called Perturbed Masking to analyze and interpret pre-trained models.", "The main idea is to introduce the Perturbed Masking technique into the masked language modeling ( MLM ) objective to measure the impact a word x j has on predicting another word x i (Sec 2.2) and then induce the global linguistic properties (e.g., dependency trees) from this inter-word information.", "Our contributions are threefold: We introduce a new parameter-free probing technique, Perturbed Masking , to estimate inter-word correlations.", "Our technique enables global syntactic information extraction.", "We evaluate the effectiveness of our probe over a number of linguistic driven tasks (e.g., syntactic parsing, discourse dependency parsing).", "Our results reinforce the claims of recent probing works, and further complement them by quantitatively evaluating the validity of 
their claims.", "We further feed the empirically induced structures into a downstream task in place of a human-designed dependency schema and find that our structures perform on-par or even better (Sec 6) than the parser-created one.", "This offers an insight into the remarkable success of BERT on downstream tasks.", "We propose the perturbed masking technique to assess the impact one word has on the prediction of another in MLM.", "The inter-word information derived serves as the basis for our later analysis.", "BERT 1 (Devlin et al., 2018) is a large Transformer network that is pre-trained on 3.3 billion tokens of English text.", "It is trained with two tasks: (1) Masked Language Modeling (MLM): randomly select and mask 15% of all tokens in each given sequence, and then predict those masked tokens.", "In masking, a token is", "(a) replaced by the special token [MASK],", "(b) replaced by a random token, or", "(c) kept unchanged.", "These replacements are chosen 80%, 10%, and 10% of the time, respectively.", "(2) Next Sentence Prediction: given a pair of sentences, predict whether the second sentence follows the first in an original document or is taken from another random document.", "(Footnote 1: In our experiments, we use the base, uncased version from (Wolf et al., 2019).)", "Given a sentence as a list of tokens x = [x_1, ..., x_T], BERT maps each x_i into a contextualized representation H_θ(x)_i, where θ represents the network's parameters.", "Our goal is to derive a function f(x_i, x_j) that captures the impact a context word x_j has on the prediction of another word x_i.", "We propose a two-stage approach to achieve our goal.", "First, we replace x_i with the [MASK] token and feed the new sequence x\{x_i} into BERT.", "We use H_θ(x\{x_i})_i to denote the representation of x_i.", "To calculate the impact x_j ∈ x\{x_i} has on H_θ(x\{x_i})_i, we further mask out x_j to obtain the second corrupted sequence x\{x_i, x_j}.", "Similarly, H_θ(x\{x_i, x_j})_i denotes the new representation of token x_i.", "We define f(x_i, x_j) as: f(x_i, x_j) = d( H_θ(x\{x_i})_i , H_θ(x\{x_i, x_j})_i ),", "where d(x, y) is the distance metric that captures the difference between two vectors.", "We experimented with two options for d(x, y): Dist, the Euclidean distance between x and y; and Prob, d(x, y) = a(x)_{x_i} − a(y)_{x_i}, where a(·) maps a vector into a probability distribution over the words in the vocabulary.", "a(x)_{x_i} represents the probability of predicting token x_i based on x.", "By repeating the two-stage perturbation on each pair of tokens x_i, x_j ∈ x and calculating f(x_i, x_j), we obtain an impact matrix F ∈ R^{T×T}.", "Now, we can derive algorithms to extract syntactic trees from F and compare them with ground-truth trees that are obtained from benchmarks.", "Note that BERT uses byte-pair encoding (Sennrich et al., 2016) and may split a word into multiple tokens (subwords).", "To evaluate our approach on word-level tasks, we make the following changes to obtain inter-word impact matrices.", "In each perturbation, we mask all tokens of a split-up word.", "The impact on a split-up word is obtained by averaging 2 the impacts over the split-up word's tokens.", "To measure the impact exerted by a split-up word, we assume the impacts given by its tokens are the same; we use the impact given by the first token for convenience.", "Given the token-level perturbation above, it is straightforward to extend it to span-level perturbation.",
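The two-stage perturbation above can be sketched with the HuggingFace transformers library roughly as follows; this is a simplified illustration (no batching, no subword merging, and the last hidden layer is assumed as H_θ, under a recent transformers version), not the authors' released code.

```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased").eval()

def hidden_state_at(token_ids, position):
    # Representation H(x)_i, read from the final layer.
    with torch.no_grad():
        out = model(torch.tensor([token_ids]))
    return out.last_hidden_state[0, position]

def impact(token_ids, i, j):
    mask_id = tokenizer.mask_token_id
    # Stage 1: mask x_i only.
    masked_i = list(token_ids); masked_i[i] = mask_id
    h_i = hidden_state_at(masked_i, i)
    # Stage 2: additionally mask x_j.
    masked_ij = list(masked_i); masked_ij[j] = mask_id
    h_ij = hidden_state_at(masked_ij, i)
    # f(x_i, x_j) under the Dist (Euclidean) metric.
    return torch.dist(h_i, h_ij).item()

ids = tokenizer("the service was slow but friendly")["input_ids"]
F = [[impact(ids, i, j) if i != j else 0.0
      for j in range(len(ids))] for i in range(len(ids))]
```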
"We investigate how BERT models the relations between spans, which can be phrases, clauses, or paragraphs.", "As a preliminary study, we investigate how well BERT captures document structures.", "We model a document D as N non-overlapping text spans D = [e_1, e_2, ..., e_N], where each span e_i contains a sequence of tokens e_i = [x_{i1}, x_{i2}, ..., x_{iM}].", "For span-level perturbation, instead of masking one token at a time, we mask an array of tokens in a span simultaneously.", "We obtain the span representation by averaging the representations of all the tokens the span contains.", "Similarly, we calculate the impact e_j has on e_i by: f(e_i, e_j) = d( H_θ(D\{e_i})_i , H_θ(D\{e_i, e_j})_i ), where d is the Dist function.", "Before we discuss specific syntactic phenomena, let us first analyze some example impact matrices derived from sample sentences.", "We visualize the impact matrix of a sentence by displaying a heatmap.", "We use the term impact map to refer to a heatmap of an impact matrix.", "Setup.", "We extract impact matrices by feeding BERT with 1,000 sentences from the English Parallel Universal Dependencies (PUD) treebank of the CoNLL 2017 Shared Task (Zeman et al., 2017).", "We follow the setup and pre-processing steps employed in pre-training BERT.", "An example impact map is shown in Figure 1.", "Dependency.", "We notice that the impact map contains many stripes, which are short series of vertical/horizontal cells, typically located along the diagonal.", "Take the word different as an example (which is illustrated by the second-to-last column in the impact matrix).", "We observe a clear vertical stripe above the main diagonal.", "The interpretation is that this particular occurrence of the word different strongly affects the occurrences of the words before it.", "These strong influences are shown by the darker-colored pixels seen in the second-to-last column of the impact map.", "This observation agrees with the ground-truth dependency tree, which selects different as the head of all remaining words in the phrase this will be a little different.", "We also observe similar patterns on transitions and Hill.", "Such correlations lead us to explore the idea of extracting dependency trees from the matrices (see Section 4.1).", "Constituency.", "Figure 2 shows part of the constituency tree of our example sentence generated by Stanford CoreNLP (Manning et al., 2014).", "In this sentence, media and on are two words that are adjacent to transitions.", "From the tree, however, we see that media is closer to transitions than on is in terms of syntactic distance.", "If a model is syntactically uninformed, we would expect media and on to have comparable impacts on the prediction of transitions, and vice versa.", "However, we observe a far greater impact (darker color) between media and transitions than between on and transitions.", "We will further support this observation with empirical experiments in Section 4.2.", "Other Structures.", "Along the diagonal of the impact map, we see that words are grouped into four contiguous chunks that have specific intents (e.g., a noun phrase on Capitol Hill).", "We also observe that the two middle chunks have relatively strong inter-chunk word impacts and thus a bonding that groups them together, forming a larger verb phrase.", "This observation suggests that BERT may capture the compositionality of the language.", "In the following sections we quantitatively evaluate these observations.", "We start with two syntactic probes: the dependency probe and the constituency probe.",
"With the goal of exploring the extent to which dependency relations are captured in BERT, we set out to answer the following question: Can BERT outperform linguistically uninformed baselines in unsupervised dependency parsing?", "If so, to what extent?", "We begin by using the token-level perturbed masking technique to extract an impact matrix F for each sentence.", "We then utilize graph-based algorithms to induce a dependency tree from F, and compare it against ground truth whose annotations are linguistically motivated.", "Experiment Setup.", "We evaluate the induced trees on two benchmarks: (1) the PUD treebank described in Section 3.", "(2) the WSJ10 treebank, which contains 7,422 sentences (all less than 10 words after punctuation removal) from the Penn Treebank (PTB) (Marcus et al., 1993).", "Note that the original PTB does not contain dependency annotations.", "Thus, we convert them into Universal Dependencies using Stanford CoreNLP.", "We denote this set as WSJ10-U.", "Next, two parsing algorithms, namely the Eisner algorithm (1996) and the Chu-Liu/Edmonds (CLE) algorithm (1965; 1967), are utilized to extract projective and non-projective unlabeled dependency trees, respectively.", "Given that our impact matrices have no knowledge about the dependency root of the sentence, we use the gold root in our analysis.", "Introducing the gold root may artificially improve our results slightly.", "We thus apply this bias evenly across all baselines to ensure a fair comparison, as done in (Raganato and Tiedemann, 2018; Htut et al., 2019).", "We compared our approach against the following baselines: (1) the right- (left-) chain baseline, which always selects the next (previous) word as the dependency head.", "(2) A random BERT baseline, for which we randomly initialize the weights of the BERT model (Htut et al., 2019), then use our methods to induce dependency trees.", "We measure model performance using the Unlabeled Attachment Score (UAS).", "We note that UAS has been shown to be highly sensitive to annotation variations (Schwartz et al., 2011; Tsarfaty et al., 2011; Kubler et al., 2009).", "Therefore, it may not be a fair evaluation metric for analyzing and interpreting BERT.", "To reflect the real quality of the dependency structures that are retained in BERT, we also report the Undirected UAS (UUAS) (Klein and Manning, 2004) and Neutral Edge Direction (NED) scores (Schwartz et al., 2011).", "Results.", "Tables 1 and 2 show the results of our dependency probes.", "From Table 1, we see that although BERT is trained without any explicit supervision from syntactic dependencies, to some extent a syntax-aware representation already exists in it.", "The best UAS scores it achieves (Eisner+Dist) are substantially higher than those of the random BERT baseline on both WSJ10-U (+41.7) and PUD (+31.5).", "Moreover, the Dist method significantly outperforms the Prob method on both datasets we evaluated.", "Table 1: UAS results of BERT on unsupervised dependency parsing.
Model         WSJ10-U   PUD
Right-chain   49.5      35.0
Left-chain    20.6      10.7
Random BERT   16.9      10.2
Eisner+Dist   58.6      41.7
Eisner+Prob   52.7      34.1
CLE+Dist      51.5      33.2", "We thus use Dist as the default distance function in our later discussion.", "We also note that the Eisner algorithm shows a clear advantage over CLE since English sentences are mostly projective.", "However, our best-performing method does not go much beyond the strong right-chain baseline (with the gold-root modification), showing that the dependency relations learned are mostly the simple and local ones.",
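As a sketch of the non-projective (CLE) variant, one can treat the impact matrix as a dense directed graph and extract a maximum spanning arborescence with networkx; whether F[i][j] is read as "j is a good head for i" is a modeling assumption here, and the gold-root bias is imposed simply by deleting incoming edges to the root.

```python
import networkx as nx

def cle_tree(F, root):
    """F[i][j]: impact of word j on word i (assumed head score).
    Returns {dependent: head} for all words except the root."""
    n = len(F)
    G = nx.DiGraph()
    for head in range(n):
        for dep in range(n):
            # No self-loops; the root receives no incoming edges,
            # which forces it to be the root of the arborescence.
            if head != dep and dep != root:
                G.add_edge(head, dep, weight=F[dep][head])
    tree = nx.maximum_spanning_arborescence(G, attr="weight")
    return {dep: head for head, dep in tree.edges}
```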
"For reference, the famous unsupervised parser DMV (Klein and Manning, 2004) achieves a 43.2 UAS on WSJ10 with Collins (1999) conventions.", "Note that the DMV parser utilizes POS tags for training while ours starts with the gold root.", "The results are therefore not directly comparable.", "By putting them together, however, we see potential room for improvement for current neural unsupervised dependency parsing systems in the BERT era.", "From Table 2, we see that although BERT only outperforms the right-chain baseline modestly in terms of UAS, it shows significant improvements on UUAS (+12.2) and NED (+28.4).", "We also make a similar observation on WSJ10-U.", "This suggests that BERT does capture inter-word dependencies, even though it may not totally agree with one specific human-designed governor-dependent schema.", "We manually inspect those discrepancies and observe that they can also be syntactically valid.", "For instance, consider the sentence It closed on Sunday.", "For the phrase on Sunday, our method selects the functional word on as the head, while the gold-standard annotation uses a lexical head (Sunday) 3.", "The above findings show that BERT has learned its own syntax as a by-product of self-supervised training, not by directly copying any human design.", "However, given the superior performance of BERT on downstream tasks, it is natural to ask if BERT is learning an empirically useful structure of language.", "We investigate this question in Sec 6.", "We now examine the extent to which BERT learns about the constituent structure of sentences.", "We first present the algorithm for unsupervised constituent parsing, which executes in a top-down manner by recursively splitting larger constituents into smaller ones.", "Top-Down Parsing.", "Given a sentence as a sequence of tokens x = [x_1, ..., x_T] and the corresponding impact matrix F, we start by finding the best splitting position k that will separate the sentence into constituents ((x_{<k}), (x_k, (x_{>k}))), where x_{<k} = [x_1, ..., x_{k−1}].", "The best splitting position ensures that each constituent has a large average impact between words within it (thus those words are more likely to form a constituent), while the impact between words of different constituents is kept as small as possible (thus they are unlikely to be in the same constituent).", "Mathematically, we decide the best k for the constituent x = [x_i, x_{i+1}, ...
, x_j] by the following optimization:", "arg max_k [ F^{i,...,k}_{i,...,k} + F^{k+1,...,j}_{k+1,...,j} − F^{k+1,...,j}_{i,...,k} − F^{i,...,k}_{k+1,...,j} ]   (1)", "where F^{i,...,k}_{i,...,k} = ( Σ_{a=i}^{k} Σ_{b=i}^{k} f(x_a, x_b) ) / ( 2(k−i) ).", "We recursively split (x_{<k}) and (x_{>k}) until only single words remain.", "Note that this top-down strategy is similar to that of ON-LSTM (Shen et al., 2019) and PRPN (Shen et al., 2018), but differs from them in that ON-LSTM and PRPN decide the splitting position based on a syntactic distance vector, which is explicitly modeled by a special network component.", "To distinguish our approach from the others, we denote our parser as MART (MAtRix-based Top-down parser).", "(Footnote 3: This specific choice actually agrees with the YM (Yamada and Matsumoto, 2003) schema.)", "Experiment Setup.", "We follow the experiment setting in Shen et al. (2019; 2018) and evaluate our method on the 7,422 sentences in the WSJ10 dataset and on the PTB23 dataset (the traditional PTB test set for constituency parsing).", "Results.", "Table 3 shows the results of our constituency probes.", "From the table, we see that BERT outperforms most baselines on PTB23, except for the second layer of ON-LSTM.", "Note that all these baselines have specifically designed architectures for the unsupervised parsing task, while BERT's knowledge about the constituent formalism emerges purely from self-supervised training on unlabeled text.", "It is also worth noting that recent results (Dyer et al., 2019; Li et al., 2019a) have suggested that the parsing algorithm used by ON-LSTM (PRPN) is biased towards the right-branching trees of English, leading to inflated F1 compared to unbiased parsers.", "To ensure a fair comparison with them, we also introduced this right-branching bias.", "However, our results show that our method is also robust without this bias (e.g., only a 0.9 F1 drop on PTB23).", "To further understand the strengths and weaknesses of each system, we analyze their accuracies by constituent tags.", "In Table 3, we show the accuracies of the five most common tags in PTB23.", "We find that the success of PRPN and ON-LSTM mainly comes from the accurate identification of NP (noun phrase), which accounts for 38.5% of all constituents.", "For other phrase-level tags like VP (verb phrase) and PP (prepositional phrase), the accuracies of BERT are competitive.", "Moreover, for clause-level tags, BERT significantly outperforms ON-LSTM.", "Taking SBAR (a clause introduced by a subordinating conjunction) as an example, BERT achieves an accuracy of 51.9%, which is about 3.4 times higher than that of ON-LSTM.", "One possible interpretation is that BERT is pre-trained on long contiguous sequences extracted from a document-level corpus.", "Moreover, the masking strategy used (randomly masking 15% of tokens) may allow BERT to learn to model a sequence of words (which might form a clause).",
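A simplified sketch of the MART split criterion in Equation 1 follows; it performs a plain binary split at k rather than the exact ((x_{<k}), (x_k, (x_{>k}))) bracketing, and uses a uniform average in place of the 2(k−i) normalisation, so it should be read as an illustration of the objective, not a faithful re-implementation.

```python
import numpy as np

def avg_impact(F, rows, cols):
    # Average impact between two index ranges (F: numpy T x T matrix).
    block = F[np.ix_(rows, cols)]
    return block.sum() / max(block.size, 1)

def best_split(F, i, j):
    # Maximise within-constituent impact, minimise cross-constituent
    # impact, as in Equation 1.
    def score(k):
        left = list(range(i, k + 1))
        right = list(range(k + 1, j + 1))
        return (avg_impact(F, left, left) + avg_impact(F, right, right)
                - avg_impact(F, left, right) - avg_impact(F, right, left))
    return max(range(i, j), key=score)

def mart(F, i, j):
    # Recursively split [i, j] until single words remain;
    # returns a nested tuple encoding the induced bracketing.
    if i >= j:
        return i
    k = best_split(F, i, j)
    return (mart(F, i, k), mart(F, k + 1, j))
```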
"Having shown that clause-level structures are well captured in BERT using the constituency probe, we now explore a more challenging probe: probing BERT's knowledge about the structure of a document.", "A document contains a series of coherent text spans, which are named Elementary Discourse Units (EDUs) (Yang and Li, 2018; Polanyi, 1988).", "EDUs are connected to each other by discourse relations to form a document.", "We devise a discourse probe to investigate how well BERT captures structural correlations between EDUs.", "As the foundation of the probe, we extract an EDU-EDU impact matrix for each document using span-level perturbation.", "Setup.", "We evaluate our probe on the discourse dependency corpus SciDTB (Yang and Li, 2018).", "We do not use the popular discourse corpora RST-DT (Carlson et al., 2003) and PDTB (Prasad et al.) because PDTB focuses on local discourse relations but ignores the whole document structure, while RST-DT introduces intermediate nodes and does not cover non-projective structures.", "We follow the same baseline settings and evaluation procedure as in Sec 4.1, except that we remove the gold root from our evaluation, since we want to compare the accuracy by syntactic distances.", "Results.", "Table 4 shows the performance of our discourse probes.", "We find that both Eisner and CLE achieve significantly higher UAS (+28) than the random BERT baseline.", "This suggests that BERT is aware of the structure of the document it is given.", "In particular, we observe a decent accuracy in identifying discourse relations between adjacent EDUs, perhaps due to the next sentence prediction task in pre-training, as pointed out in (Shi and Demberg, 2019).", "However, our probes fall behind the left-chain baseline, which benefits from its strong structural prior 4 (the principal clause mostly comes in front of its subordinate clause).", "(Footnote 4: For reference, a supervised graph-based parser (Li et al., 2014) achieves a UAS of 57.6 on SciDTB.)", "Our finding sheds some light on BERT's success in downstream tasks that have paragraphs as input (e.g., Question Answering).", "Our probing results suggest that although BERT has captured a certain amount of syntax, there are still substantial disagreements between the syntax BERT learns and those designed by linguists.", "For instance, our constituency probe on PTB23 significantly outperforms most baselines, but it only roughly agrees with the PTB formalism (41.2% F1).", "However, BERT has already demonstrated its superiority in many downstream tasks.", "An interesting question is whether BERT is learning an empirically useful or even better structure of a language.", "To answer this question, we turn to neural networks that adopt dependency parsing trees as an explicit structural prior to improve downstream tasks.", "We replace the ground-truth dependency trees those networks used with ones induced from BERT and approximate the effectiveness of different trees by the improvements they introduce.", "We conduct experiments on the Aspect Based Sentiment Classification (ABSC) task (Pontiki et al., 2014).", "ABSC is a fine-grained sentiment classification task aiming at identifying the sentiment expressed towards each aspect of a given target entity.", "As an example, in the following comment on a restaurant, I hated their fajitas, but their salads were great, the sentiment polarity for the aspect fajitas is negative and that for salads is positive.", "It has been shown in Zhang et al.
(2019) that injecting syntactic knowledge into neural networks can improve ABSC accuracy.", "Intuitively, given an aspect, a syntactically closer context word should play a more important role in predicting that aspect's sentiment.", "They integrate the distances between context words and the aspect on a dependency tree into a convolution network and build a Proximity-Weighted Convolution Network (PWCN).", "As a naive baseline, they compare with network weighted by relative position between aspect and context words.", "Setup.", "We experimented on two datasets from SemEval 2014 (Pontiki et al., 2014), which consist of reviews and comments from two categories: LAPTOP and RESTAURANT .", "We adopt the standard evaluation metrics: Accuracy and Macro-Averaged F1.", "We follow the instructions of Zhang et al. (2019) to run the experiments 5 times with random initialization and report the averaged performance.", "We denote the original PWCN with relative position information as PWCN-Pos, and that utilizes dependency trees constructed by SpaCy 5 as PWCN-Dep.", "SpaCy has reported an UAS of 94.5 on English PTB and so it can serve as a good reference for human-designed dependency schema.", "We also compare our model against two trivial trees (left-chain and right-chain trees).", "For our model, we feed the corpus into BERT and extract dependency trees with the best performing setting: Eisner+Dist.", "For parsing, we introduce an inductive bias to favor short dependencies (Eisner and Smith, 2010).", "To ensure a fair comparison, we induce the root word from the impact matrix F instead of using the gold root.", "Specifically, we select the root word x k based on the simple heuristic arg max i (cid:80) Tj =1 f ( x i , x j ) .", "Results.", "Table 5 presents the performance of different models.", "We observe that the trees induced from BERT is either on-par (LAPTOP ) or marginally better (RESTAURANT ) in terms of downstream task's performance when comparing with trees produced by SpaCy.", "LAPTOP is considerably more difficult than RESTAURANT due to the fact that the sentences are generally longer, which makes inducing dependency trees more challenging.", "We also see that the Eisner trees generally perform better than the right-/leftchain baselines.", "It is also worth noting that the right-chain baseline also outperforms PWCN+Dep on RESTAURANT , which leads to an exciting future work that investigates how encoding structural knowledge can help ABSC.", "Our results suggest that although the tree structures BERT learns can disagree with parser-provided-linguistically-motivated ones to a large extent, they are also empirically useful to downstream tasks, at least to ABSC.", "As future work, we plan to extend our analysis to more downstream tasks and models, like those reported in Shi (2018).", "There has been substantial research investigating what pre-trained language models have learned about languages' structures.", "One rising line of research uses probing classifiers to investigate the different syntactic properties captured by the model.", "They are generally referred to as probing task (Conneau et al., 2018), diagnostic classifier (Giulianelli et al., 2018), and auxiliary prediction tasks (Adi et al., 2017).", "The syntactic properties investigated range from basic ones like sentence length (Shi et al., 2016; Jawahar et al., 2019), syntactic tree depth (Jawahar et al., 2019), and segmentation (Liu et al., 2019) to challenging ones like syntactic labeling (Ten-ney et al., 2019a,b), dependency 
parsing (Hewitt and Manning, 2019; Clark et al., 2019), and constituency parsing (Peters et al., 2018a).", "However, when a probe achieves high accuracy, it's difficult to differentiate if it is the representation that encodes targeted syntactic information, or it is the probe that just learns the task (Hewitt and Liang, 2019).", "In line with our work, recent studies seek to find correspondences between parts of the neural network and certain linguistic properties, without explicit supervision.", "Most of them focus on analyzing attention mechanism, by extracting syntactic tree for each attention head and layer individually (Raganato and Tiedemann, 2018; Clark et al., 2019).", "Their goal is to check if the attention heads of a given pre-trained model can track syntactic relations better than chance or baselines.", "In particular, Raganato and Tiedemann (2018) analyze a machine translation model's encoder by extracting dependency trees from its self-attention weights, using Chu-Liu/Edmonds algorithm.", "Clark et al. (2019) conduct a similar investigation on BERT, but the simple head selection strategy they used does not guarantee a valid dependency tree.", "Marecek and Rosa (2018) propose heuristic methods to convert attention weights to syntactic trees.", "However, they do not quantitatively evaluate their approach.", "In their later study (Marecek and Rosa, 2019), they propose a bottom-up algorithm to extract constituent trees from transformer-based NMT encoders and evaluate their results on three languages.", "Htut et al. (2019) reassess these works but find that there are no generalist heads that can do holistic parsing.", "Hence, analyzing attention weights directly may not reveal much of the syntactic knowledge that a model has learned.", "Recent dispute about attention as explanation (Jain and Wallace, 2019; Serrano and Smith, 2019; Wiegreffe and Pinter, 2019) also suggests that the atten-tion's behavior does not necessarily represent that of the original model.", "Another group of research examine the outputs of language models on carefully chosen input sentences (Goldberg, 2019; Bacon and Regier, 2019).", "They extend previous works (Linzen et al., 2016; Gulordava et al., 2018; Marvin and Linzen, 2018) on subject-verb agreement test (generating the correct number of a verb far away from its subject) to provide a measure of the model's syntactic ability.", "Their results show that the BERT model captures syntax-sensitive agreement patterns well in general.", "However, subject-verb agreement cannot provide more nuanced tests of other complex structures (e.g., dependency structure, constituency structure), which are the interest of our work.", "Two recent works also perturb the input sequence for model interpretability (Rosa and Marecek, 2019; Li et al., 2019b).", "However, these works only perturb the sequence once.", "Rosa and Marecek (2019) utilize the original MLM objective to estimate each word's reducibility and import simple heuristics into a right-chain baseline to construct dependency trees.", "Li et al. 
(2019b) focus on evaluating word alignment in NMT, but unlike our two-step masking strategy, they only replace the token of interest with a zero embedding or a randomly sampled word in the vocabulary.", "One concern shared by our reviewers is that performance of our probes are underwhelming: the induced trees are barely closer to linguist-defined trees than simple baselines (e.g., rightbranching) and are even worse in the case of discourse parsing.", "However, this does not mean that supervised probes are wrong or that BERT captures less syntax than we thought.", "In fact, there is actually no guarantee that our probe will find a strong correlation with human-designed syntax, since we do not introduce the human-designed syntax as supervision.", "What we found is the natural syntax inherent in BERT, which is acquired from self-supervised learning on plain text.", "We would rather say our probe complements the supervised probing findings in two ways.", "First, it provides a lower-bound (on the unsupervised syntactic parsing ability of BERT).", "By improving this lower-bound, we could uncover more accurate information to support supervised probes' findings.", "Second, we show that when combined with a down-stream application (sec 6), the syntax learned by BERT might be empirically helpful despite not totally identical to the human design.", "In summary, we propose a parameter-free probing technique to complement current line of work on interpreting BERT through probes.", "With carefully designed two-stage perturbation, we obtain impact matrices from BERT.", "This matrix mirrors the function of attention mechanism that captures inter-word correlations, except that it emerges through the output of BERT model, instead of from intermediate representations.", "We devise algorithms to extract syntactic trees from this matrix.", "Our results reinforce those of (Hewitt and Manning, 2019; Liu et al., 2019; Jawahar et al., 2019; Tenney et al., 2019b,a) who demonstrated that BERT encodes rich syntactic properties.", "We also extend our method to probe document structure, which sheds lights on BERT's effectiveness in modeling long sequences.", "Finally, we find that feeding the empirically induced dependency structures into a downstream system (Zhang et al., 2019) can further improve its accuracy.", "The improvement is compatible with or even superior to a human-designed dependency schema.", "This offers an insight into BERT's success in downstream tasks.", "We leave it for future work to use our technique to test other linguistic properties (e.g., coref-erence) and to extend our study to more downstream tasks and systems.", "We would like to thank Dr.Lingpeng Kong from DeepMind for his constructive feedback of the paper.", "This research is supported by Hong Kong Research Grant Council GRF grants 17254016." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "method", "result", "result", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "method", "method", "abstain", "result", "result", "abstain", "abstain", "objective", "result", "objective", "result", "abstain", "method", "abstain", "objective", "result", "abstain", "abstain", "objective", "other", "other" ]
[ "Training dialogue agents requires a large number of interactions with users: agents have no idea about which responses are bad among a lengthy dialogue.", "In this paper, we propose loop-clipping policy optimisation (LCPO) to eliminate useless responses.", "LCPO consists of two stages: loop clipping and advantage clipping.", "In loop clipping, we clip off useless responses (called loops) from dialogue history (called trajectories).", "The clipped trajectories are more succinct than the original ones, and the estimation of state-value is more accurate.", "Second, in advantage clipping, we estimate and clip the advantages of useless responses and normal ones separately.", "The clipped advantage distinguishes useless actions from others and reduces the probabilities of useless actions efficiently.", "In experiments on Cambridge Restaurant Dialogue System, LCPO uses only 260 training dialogues to achieve 80% success rate, while PPO baseline requires 2160 dialogues.", "Besides, LCPO receives 3 .", "7 / 5 scores in human evaluation where the agent interactively collects 100 real-user dialogues in the training phase.", "Based on dialogue policies, task-oriented dialogue systems decide when and how to give or request information from users.", "Learning dialogue policies is often formulated as a reinforcement learning (RL) problem since we usually receive feedback from users for the whole dialogue but not the correct answer for a single response (Young et al., 2013; Levin et al., 1997).", "With high-capacity of function approximation, deep reinforcement learning has been widely applied to dialogue policy optimisation (Su et al., 2016; Li et al., 2016; Casanueva et al., 2017).", "Typically, when applying deep reinforcement learning for dialogue policy management, more than thousands of dialogues are required to reach convergence (Casanueva et al., 2017).", "However, requiring thousands of human dialogues during training is quite impractical for most academic or real-life scenarios.", "Users might lose patience and exhibit different behaviour during training.", "Therefore, in most prior work, the agents are trained via simulated users instead of real ones (Liu et al., 2018; Gao et al., 2018).", "Model-based reinforcement learning (MBRL) is commonly applied to make dialogue policy optimisation sample-efficient.", "MBRL approaches for dialogue management build a user model to predict users' behaviour (Wu et al., 2020b,a; Peng et al., 2018; Su et al., 2018; Wu et al., 2019; Zhang et al., 2019).", "Using the user model, DDQ (Peng et al., 2018) generates pseudo-data.", "The accuracy of the user model strongly affects the quality of generated pseudo-data.", "If the behaviour of pseudo-data is far from real users' behaviour, dialogue policies learnt from these data might not be optimal (Su et al., 2018).", "Manipulating when to use how much data in experience buffers becomes critical in these approaches.", "Trainable-action-mask (TAM) (Wu et al., 2020b) blocks useless actions by learning action-masks from data to explore the action space more efficiently.", "Instead of predicting the users' behaviour directly, TAM predicts only the termination and similarity of future dialogue states to ease the training difficulties.", "However, the wrong predictions of the user model block the wrong actions, which makes the policy performance unstable.", "Moreover, the wrong output of policy does not learn from the predictions of the user model since it is blocked.", "Wrong values in policy networks make the 
performance unstable.", "In this work, we propose loop-clipping policy optimisation (LCPO), which clips off useless actions in trajectories, computes the advantages of actions inside and outside loops separately, and optimises the policy based on proximal policy optimisation (PPO) (Schulman et al., 2017).", "(Figure 1: Illustration of LCPO.)", "First, LCPO is a model-free and parameter-free algorithm.", "There is no additional effort spent tuning the hyperparameters of a user model.", "Also, it takes almost no extra running time during testing.", "Second, instead of brutally blocking actions like TAM does, LCPO directly reduces the probabilities of useless actions, which makes optimisation smoother and easier.", "In our experiment on the Cambridge Restaurant Dialogue System, LCPO uses only 260 dialogues in the training phase to reach an 80% success rate, while the PPO baseline requires 2160.", "In the human-in-the-loop experiment, LCPO trained with only 100 dialogues receives a score of 3.7/5 and high marks for conciseness and fluency.", "Overall, our main contributions are two-fold: We propose LCPO, a parameter-free, sample-efficient algorithm to optimise dialogue policies.", "This algorithm is easy to implement and has barely any overhead.", "We demonstrate that training dialogue systems with real users is feasible within 100 dialogues on the Cambridge Restaurant Dialogue System.", "This section goes through the notation used in this paper.", "We start by formulating dialogue management as an RL problem in section 2.1.", "In section 2.2, we explain how to optimise the policy through proximal policy optimisation (PPO).", "In section 2.4, we explain what episodic memory is.", "When applying reinforcement learning to dialogue management (Levin et al., 1997; Young et al., 2013; Williams, 2008), a state s, or belief state, is the belief distribution over users' requests.", "An action a is the summarised action taken by the system.", "A reward r and a termination t are given by simulated or real users.", "An episode E is a dialogue.", "The goal of reinforcement learning is to learn a policy π(a_i | s_i) that maximises the cumulative reward R = Σ_{i=0}^{L} γ^i r_i, where L is the length of the dialogue.", "Policy gradient is a fundamental optimisation algorithm with the loss function:", "L(θ) = −E_i [ log π_θ(a_i | s_i) A_i ].", "In order to ensure the new policy does not move far from the old one, trust region policy optimisation (TRPO) constrains the KL-divergence between the old and the current policies.", "In a similar but much simpler way, proximal policy optimisation (PPO) (Schulman et al., 2017) clips the probability ratios r_i to mitigate the excessive updates in TRPO:", "r_i = π(a_i | s_i) / π_old(a_i | s_i).", "The advantage A_i and state-value V_i are estimated by generalised advantage estimation (GAE) as follows:", "A_i = δ_i + γλ A_{i+1},", "where γ decays the future state-value, which represents our confidence in the state-value estimation,", "and λ decays the future TD-error, which represents a trade-off between bias and variance in advantage estimation.", "V_i is the predicted state-value of s_i, and δ_i is the TD-error: δ_i = r_i + γ V_{i+1} − V_i.",
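The GAE recursion and the PPO probability-ratio clipping above can be sketched as follows; termination handling and the value-function loss are omitted, and ε = 0.2 is the usual PPO clipping constant, assumed here rather than taken from the paper.

```python
import torch

def gae(rewards, values, gamma=0.99, lam=0.95):
    """values must contain one extra bootstrap entry V_{L+1}."""
    advantages, a_next = [], 0.0
    for i in reversed(range(len(rewards))):
        delta = rewards[i] + gamma * values[i + 1] - values[i]  # TD-error
        a_next = delta + gamma * lam * a_next                   # A_i
        advantages.append(a_next)
    return list(reversed(advantages))

def ppo_loss(logp, logp_old, advantages, eps=0.2):
    ratio = torch.exp(logp - logp_old)  # r_i = pi / pi_old
    clipped = torch.clamp(ratio, 1 - eps, 1 + eps)
    return -torch.min(ratio * advantages, clipped * advantages).mean()
```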
"Trainable-action-mask (TAM) (Wu et al., 2020b) is a model-based baseline that blocks useless actions directly.", "TAM learns a user model during dialogue interaction.", "The user model predicts the termination, the reward, and the similarity between the current and the next dialogue state, and the action mask is constructed based on these features.", "Though TAM is simple and effective, it is not stable enough.", "The first reason is a common pitfall of model-based approaches: the user model is hard to train and usually leads to inaccurate predictions that harm the dialogue policy.", "Second, the policy and state-value approximators (i.e. the policy network and value network in PPO) do not learn from the predictions of the user model.", "The wrong values estimated by these networks cannot be updated efficiently, since the corresponding actions are blocked.", "In most policy gradient algorithms, the history of interactions is recorded in a memory buffer $M$, which contains several episodes $E$.", "An episode $E$ consists of $N$ transitions $\{T_0, T_1, \ldots, T_N\}$.", "A transition is $T_i = (s_i, a_i, s_{i+1}, r_i, t_i)$, where $s_i$ is the current state.", "$a_i$ is the action taken at $s_i$, which leads to the next state $s_{i+1}$ with a reward $r_i$.", "If the episode terminates after taking action $a_i$, $t_i$ is True and otherwise False.", "In this paper, we propose loop-clipping policy optimisation (LCPO) to improve sample efficiency.", "As illustrated in Figure 1, LCPO consists of three components: loop clipping, advantage clipping, and policy optimisation.", "We adopt proximal policy optimisation (PPO) (Schulman et al., 2017) for the policy optimisation part in this work.", "First, we give definitions of loops in section 3.1 and illustrate how to obtain clean trajectories via loop clipping in section 3.2.", "In section 3.3, we demonstrate how to estimate and clip the advantages and state-values of loops for policy optimisation.", "Note that in the following subsections we utilise two pieces of domain knowledge about dialogue systems.", "Prior 1: Information gain is non-negative, since by asking more questions we know more about the user's needs.", "Prior 2: The last action of a failed dialogue and actions that loop over the same state are unwanted.", "In this paper, a loop means a sequence of transitions that consists of useless or unwanted actions.", "As illustrated in Figure 2, we define two kinds of loops, the $N$-hop loop and the termination loop, corresponding to Prior 2.", "Definition 1: An $N$-hop loop $L^N_i$ is a sequence of $N$ transitions $\{T_i, T_{i+1}, \ldots, T_{i+N-1}\}$ whose final state $s_{i+N}$ is identical to the starting state $s_i$.", "Since in such a loop the starting state $s_i$ is the same as the final state $s_{i+N}$, $\{a_i, a_{i+1}, \ldots, a_{i+N-1}\}$ is a useless action sequence at state $s_i$.", "In dialogue systems, an $N$-hop loop might result from repetitively asking the same questions or giving the same information.", "[Figure 3: Loop clipping.]", "Compared with the definition of useless actions in TAM (Wu et al., 2020b), which only considers the similarity of the next state (i.e. a 1-hop loop), the $N$-hop loop is a more general definition and is able to detect more useless actions.",
"Definition 2: A termination loop $L^T_i$ is a transition $T_i$ at state $s_i$ where $t_i = \text{True}$ and $r_i \leq 0$.", "In dialogue systems, $a_i$ is then a useless action at state $s_i$, since the dialogue is terminated and failed.", "For example, termination loops might result from saying goodbye before completing the task or making users run out of patience.", "Note that this definition of loops utilises domain knowledge and might not be suitable for other applications.", "As illustrated in Figure 3, the original trajectory might contain several identical states.", "We search for identical states pairwise and detect loops according to the definitions in section 3.1.", "The detected loops are clipped off from the original trajectory.", "After clipping, the trajectory becomes succinct, so that reward signals can be assigned to useful actions effectively (Figure 3b).", "In dialogue systems, the information at each state in a loop is the same, since there is no information gain after taking useless actions.", "Therefore, an $N$-hop loop can be viewed as multiple one-hop loops, as illustrated in Figure 3c.",
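The loop detection and clipping just described can be sketched as follows; this is an illustrative rendering under the paper's definitions, with assumed inputs, where `same` stands in for the state-identity test (exact equality, or a cosine-similarity threshold for belief states):

```python
def detect_loops(states, rewards, terminals, same):
    """Split transition indices into a clean trajectory and loops.
    `states` has T+1 entries for T transitions; `same(s, t)` tests identity."""
    T = len(states) - 1
    in_loop = [False] * T
    i = 0
    while i < T:
        # the farthest later occurrence of s_i closes an N-hop loop over [i, j)
        matches = [j for j in range(i + 1, T + 1) if same(states[i], states[j])]
        if matches:
            j = max(matches)
            for k in range(i, j):
                in_loop[k] = True
            i = j
        else:
            i += 1
    # termination loop: failed final transition (t = True, r <= 0)
    if T > 0 and terminals[-1] and rewards[-1] <= 0:
        in_loop[-1] = True
    clean = [k for k in range(T) if not in_loop[k]]
    loops = [k for k in range(T) if in_loop[k]]
    return clean, loops
```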
"After loop clipping, the original trajectory is split into a clean trajectory and several loops.", "Then we estimate the advantages of the clean trajectory and of the loops separately.", "For the clean trajectory, standard generalised advantage estimation (GAE) (Schulman et al., 2015) is applied, as shown in Eq. 5, 6.", "If we only updated the policy based on clean trajectories, the clipped useless actions would not be treated as training data.", "Therefore, these useless actions would not be penalised, resulting in unwanted lengthy dialogues.", "We first illustrate how to estimate the state-values and advantages of loops, denoted loop advantage estimation (LAE) to distinguish it from GAE.", "Second, we propose an advantage clipping trick, which makes the policy optimisation much more sample-efficient.", "By Prior 1, in dialogue systems, the information gain $V_{i+1} - V_i \geq 0$.", "In a loop $L^N_i$ of length $N$, we thus have $V_i \leq V_{i+1} \leq \ldots \leq V_{i+N}$ (8), and the same states share the same value, i.e. $V_i = V_{i+N}$ (9), so all the state-values in $L^N_i$ are the same: $V_i = V_{i+1} = \ldots = V_{i+N}$ (10).", "Advantage estimation: the loop advantage for action $a_i$ is $A^{LAE}_i = \delta_i + \gamma\lambda A^{GAE}$ (11), where $\delta_i = r_i + \gamma V_{i+1} - V_i$.", "Note that $A^{GAE}$ is the next advantage $A_{i+N}$ after the loop $L^N_i$.", "No matter how long the loop is, the loop advantage is computed from the transition after the loop.", "Within the loop, $V_{i+1} \simeq V_i$ by Eq. 10.", "We can see that $V_{i+1} \simeq V_i \Rightarrow \delta_i \simeq r_i + (\gamma - 1) V_i \triangleq R_i \Rightarrow A^{LAE}_i \simeq R_i + \gamma\lambda A^{GAE} \Rightarrow A^{LAE}_i \simeq A^{LAE}_{i+1}$ (12), where $R_i = r_i + (\gamma - 1) V_i$.", "It is straightforward that when the values converge, the advantage of a loop is the advantage of the best action $A^{GAE}_i$ with a one-turn penalty for all useless actions at state $s_i$ (since the agent wastes one more turn at the same state).", "When $A^{GAE}$ converges to zero, $A^{LAE}$ converges to $R_i$.", "Advantage clipping: however, we found that the advantage estimation is still not very accurate in the early stage of the training process.", "The advantages of looping actions are sometimes higher than the others, and these actions are then not penalised.", "To properly penalise the looping actions, we clip the advantages in both LAE and GAE.", "The clipping threshold is $R_i = r_i + (\gamma - 1) V_i$, since $A^{LAE}$ converges to this value.", "$A^{Clip\text{-}GAE}_i = \max(R_i, A^{GAE}_i)$ (13) and $A^{Clip\text{-}LAE}_i = \min(R_i, A^{LAE}_i)$ (14), where $R_i = r_i + (\gamma - 1) V_i$, so that $A^{Clip\text{-}GAE}_i \geq R_i \geq A^{Clip\text{-}LAE}_i$ (15).", "[Algorithm 1: LCPO. (1) Collect $N$ transitions into memory $M$. (2) Loop clipping: for each episode $E$ in $M$ and each transition $T_i$ in $E$, skip $T_i$ if $i < ptr$; otherwise, for each $T_j$ in $E$, if $s_i = s_j$ set $ptr = j$; if $i < ptr$, add $T_i$ to the loop set $L$. (3) Advantage estimation in reversed order: for each transition $T_i$ in reversed($E$), if $T_i \in L$, estimate $A, V$ via clipped LAE (Eq. 14, 10); otherwise estimate $A, V$ via clipped GAE (Eq. 13, 6). (4) Optimise the policy via PPO (Eq. 3) with $K$ epochs and mini-batch size $B$.]",
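A simplified sketch of the reversed advantage pass in Algorithm 1 follows, combining clipped GAE (Eq. 13) with clipped LAE (Eq. 14); the handling of state-values inside loops mirrors Eq. 10, but the bookkeeping here is an assumption rather than the authors' exact implementation:

```python
def clipped_advantages(rewards, values, in_loop, gamma=0.99, lam=0.95):
    """Reversed advantage pass of Algorithm 1: clipped LAE (Eq. 14) for loop
    transitions and clipped GAE (Eq. 13) for clean ones."""
    T = len(rewards)
    adv = [0.0] * T
    next_adv, next_value = 0.0, 0.0
    for i in reversed(range(T)):
        R_i = rewards[i] + (gamma - 1.0) * values[i]  # clipping threshold R_i
        if in_loop[i]:
            # inside a loop, V_{i+1} = V_i (Eq. 10), so delta_i = R_i
            adv[i] = min(R_i, R_i + gamma * lam * next_adv)  # Eq. 14
            # the state-value is unchanged across the loop, so next_value stays
        else:
            delta = rewards[i] + gamma * next_value - values[i]
            adv[i] = max(R_i, delta + gamma * lam * next_adv)  # Eq. 13 over Eq. 5
            next_value = values[i]
        next_adv = adv[i]
    return adv
```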
"Experiments are conducted on the Cambridge restaurant dialogue system using the PyDial toolkit (Ultes et al., 2017).", "We evaluate the agents with both a simulated user and real users.", "From section 4.1 to 4.5, we describe the experiments with a simulated user.", "For the human-in-the-loop experiment, see section 4.6.", "User simulator: We use a goal-driven simulated user at the semantic level (Schatzmann et al., 2007; Schatzmann and Young, 2009).", "The maximum dialogue length is set to 25 turns and $\gamma = 0.99$.", "The reward is defined as 20 for a successful dialogue minus the number of turns in the dialogue.", "A 15% semantic error rate (SER) is included in the user simulator to account for automatic speech recognition (ASR) errors.", "Policy optimisation: Proximal policy optimisation (PPO) is applied.", "The state and action dimensions of the policy and value networks are 268 and 16.", "The dimensions of the two hidden layers are 130 and 50.", "The agent collects $N = 100$ transitions to update the policy with $K = 10$ epochs and a mini-batch size of $B = 16$.", "After an update, the memory is flushed and becomes empty again.", "The optimiser is Adam (Kingma and Ba, 2014) with a learning rate of 0.001.", "The entropy coefficient is 0.01 and advantage standardisation is applied.", "During testing, actions are sampled from the output distributions of the policy network.", "Loop detection: In theory, the starting state and the ending state of a loop are identical.", "Yet, due to numerical uncertainty, we use cosine similarity with a threshold of 0.99 to judge whether two states are the same.", "Under this strict setting, two states are considered different if they have any difference.", "[Figure 4: Learning curves of different algorithms. Left: success rate. Right: average number of turns. The results are evaluated over 10 runs; the lines are averages and the shaded areas represent standard deviations.]", "Evaluation: In the experiment with the simulated user, we evaluate each agent with 500 dialogues after every 100 training dialogues.", "The mean and standard deviation of the performance are computed over 10 runs with different neural network initialisations.", "The standard deviation is depicted as the shaded area.", "The x-axes of the figures are in log-scale to emphasise both the early stage and the final performance of the training process.", "In Figure 4, we compare the performance of PPO (Schulman et al., 2017), TAM (Wu et al., 2020b), and LCPO.", "The left part of the figure shows the learning curves of the success rate.", "We can see that LCPO is considerably stable and sample-efficient.", "It is worth noting that LCPO has the best final performance.", "TAM learns more slowly, and PPO requires a large number of training dialogues.", "In the right part of the figure, we can see that LCPO takes more turns in the beginning but becomes more concise than the baselines later.", "In Table 1, we can see the details of the performance at 200 and 2000 training dialogues, respectively.", "In the low-resource scenario, where the dialogue policy is trained with 200 dialogues, LCPO outperforms the other baselines with small variance.", "Yet the average number of loops in each dialogue is higher.", "That is because LCPO takes more turns than the other agents.", "The other agents often give poor responses, so the users leave the dialogue out of patience after fewer turns.", "Regarding the final performance at 2000 dialogues, all of the agents perform similarly.", "We can note that LCPO takes the fewest turns, since its algorithm prevents it from taking useless actions.", "LCPO requires only 260 dialogues to reach an 80% success rate, while PPO takes 2160.", "In addition, LCPO is lightweight and does not consume a lot of additional training time like TAM.", "In the left part of Figure 5, the red and brown lines are LCPO with and without clipping the termination loop $L^T$, respectively.", "We can see that without clipping $L^T$, the learning curves become less stable and exhibit a cold-start problem at the beginning of training.", "In a failed dialogue, some actions are good and should not be penalised for the failure of the conversation.",
conversation.", "Therefore, we should clip off the last transition in failed dialogue, so that the rest transitions in the clean trajectories (not in loops) are not penalised for the failure.", "For example, if we clip off the last the action \"bye\" in a failed dialogue, only 'bye' is strongly penalised while other normal interactions are not.", "We propose 4 agents for comparisons: 1) clip both GAE and LAE, 2) clip LAE, 3) clip GAE, 4) no advantage clipping.", "The success rates after training with 100 , 200 , 2000 dialogues are reported in Table 2. In the low-resource scenario (less than 200 dia-logues), clipping both GAE and LAE outperforms other methods considerably.", "And LCPO with no advantage clipping is the worst.", "Without clipping, inaccurate advantage estimation in the early stage of the training process cannot reduce the probabilities of useless actions efficiently.", "Regarding the final performance after training agents with 2000 dialogues, all of the methods perform similarly.", "Yet, if we only clip the GAE, the final performance is slightly worse than others.", "That is because not all the actions in clean trajectories are useful.", "The 'clean' trajectories still contain several useless actions though not detected.", "In the right part of figure 5, policy update interval is set to 50 and 100 for PPO and LCPO.", "The red and brown lines are LCPO and the green and blue lines are PPO with different update intervals.", "We can see that the performance of PPO is strongly affected by the update interval.", "In contrast, LCPO still shows high stability and sample efficiency.", "Its robustness to hyperparameters makes tuning LCPO effortless.", "General Settings The dialogue system uses a rule-based belief tracker, and an NLG model (Wen et al., 2015).", "In each dialogue, one of the agents is randomly picked to talk with a user.", "The users have to interact with the agent according to a given instruction on the user goal sampled from the corpus.", "The users can decide to leave the dialogue session if they are out of patience.", "Training Settings We experiment on two training algorithms: PPO and LCPO.", "The hyperparameters of PPO and LCPO are the same as the simulated user experiment.", "A human user interacts with each agent for 100 dialogues.", "At the end of each dialogue, the user gives 20 scores to the agent for a successful dialogue and gives 0 scores for a failed one.", "A penalty of 1 is also applied in each turn.", "A successful dialogue means the restaurant given by the agent must fulfil all the constraints and the requested information like phone number or address must be provided.", "In other words, the agents only receive feedback on the aspect of task completion.", "interacts with each agent for 5 dialogue and gives his/her feedback on four aspects:", "Task completion : The agent finds a restaurant that meets the constrains.", "The requested information is also given.", "Conciseness : The agent is to the point and does not ask/provide the same information repetitively.", "Fluency : The agent does not interrupt the dialogue flow and answer the questions logically.", "Overall score : The overall score for chatting with this agent.", "Each agent is evaluated on 100 dialogues, the mean and variance of each score are reported in Table 3. 
"The scores range from 0 to 5.", "We also evaluate the agents with a simulated user over 500 dialogues for each agent.", "Results: In Table 3, we can see that LCPO significantly outperforms PPO in all aspects.", "The task completion is close to the success rate evaluated with the simulated user.", "Conciseness is the focus of this work, and the improvement there is also the most considerable.", "Regarding fluency, the difference between PPO and LCPO is smaller.", "Sometimes a fluent conversation takes more turns.", "Sometimes a non-logical response can complete the task as well (e.g. informing a restaurant name at the very beginning).", "However, LCPO is still better than PPO in terms of fluency, since a non-logical response usually comes with no information gain.", "We propose LCPO to improve the sample efficiency of dialogue policy optimisation.", "LCPO has two critical components: loop clipping and advantage clipping.", "Both of them are strongly effective in low-resource scenarios and easy to implement.", "LCPO also demonstrates strong robustness to hyperparameters.", "We train and evaluate dialogue agents with real users in the Cambridge Restaurant domain.", "We also demonstrate that human-in-the-loop training is feasible within 100 dialogues.", "The evaluation has four aspects to clarify what has been learnt by each agent.", "LCPO outperforms PPO in all aspects.", "In this paper, LCPO is integrated with PPO.", "In the future, we will generalise the loop-clipping method to other off-policy reinforcement learning approaches with episodic memory, since off-policy approaches are considered more sample-efficient." ]
[ "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "objective", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method" ]
[ "The availability of large-scale datasets has driven the development of neural models that create generic summaries from single or multiple documents.", "In this work we consider query focused summarization (QFS), a task for which training data in the form of queries, documents, and summaries is not readily available.", "We propose to decompose QFS into (1) query modeling (i.e., finding supportive evidence within a set of documents for a query) and (2) conditional language modeling (i.e., summary generation).", "We introduce MARGE , a Ma sked ROUGE Regression framework for evidence estimation and ranking which relies on a unified representation for summaries and queries, so that summaries in generic data can be converted into proxy queries for learning a query model.", "Experiments across QFS benchmarks and query types show that our model achieves state-of-the-art performance despite learning from weak supervision.", "1 1 Introduction The neural encoder-decoder framework has become increasingly popular in generic summarization (See et al. 2017; Gehrmann et al. 2018; Liu and Lapata 2019a; Fabbri et al. 2019, inter alia ) thanks to the availability of large-scale datasets containing hundreds of thousands of document-summary pairs.", "Training data of this magnitude is not readily available for query focused summarization (QFS; Dang 2005) which aims to create a short summary from a set of documents that answers a specific query.", "Existing corpora (Nema et al., 2017; Dang, 2005; Hoa, 2006; Baumel et al., 2016) are relatively small for modern data-hungry neural architectures and have been mostly used for evaluation purposes.", "A major bottleneck in leveraging generic summarization data for QFS is the absence of queries (Nema et al., 2017); the majority of existing datasets consist of document-summary pairs, while QFS summaries are expected to answer specific queries.", "Recent work (Xu and Lapata, 2020; Su et al., 2020; Laskar et al., 2020) sidesteps this problem by resorting to distant supervision from query-relevant NLP resources including question answering (Rajpurkar et al., 2016; Chakraborty et al., 2020) and paraphrase identification (Dolan and Brockett, 2005).", "Such approaches incorporate query modeling in the summarization process but are even more data hungry compared to generic summarization ones, since they additionally require access to QA datasets which can be extremely costly to create (Bajaj et al., 2016; Kwiatkowski et al., 2019).", "Moreover, there is often a mismatch between queries in QA datasets and those in QFS scenarios (Xu and Lapata, 2020); the two types of queries are not identically distributed and it is practically infeasible to find appropriate query-related resources for all domains and topics.", "In this work we do not assume access to any resources other than those available for generic summarization.", "We further decompose abstractive QFS into two subtasks: (1) query modeling (i.e., finding supportive evidence within a set of documents for a query) and (2) conditional language modeling (i.e., generating an abstractive summary based on found evidence).", "Under this formulation, we use generic summarization data not only for conditional language modeling, but also for learning an evidence ranking model.", "Inspired by the Cloze task and its applications in NLP (Taylor, 1953; Lewis et al., 2019; Lee et al., 2019), we propose MARGE , a Ma sked R OU GE regression framework for evidence estimation and ranking.", "MARGE intro-Masked Summary The Da Vinci Code 
"MARGE introduces a unified representation for summaries and queries, so that summaries in generic data can be converted into proxy queries for learning a query model.", "Based on the evidence selected by MARGE, we generate abstractive summaries whilst controlling their length and the extent to which the query influences their content.", "Our contributions in this work are threefold: we propose a weakly supervised system for abstractive QFS where no query-related resources are required; we discover a new type of connection between generic summaries and QFS queries, and provide a universal representation for them which allows generic summarization data to be exploited for QFS; and we provide experimental results on QFS benchmarks, showing that across query types and domains our system achieves state-of-the-art results on both evidence ranking and abstractive QFS.", "The majority of previous QFS approaches have been extractive, operating over queries and document clusters from which they select query-relevant sentences to compose a summary.", "They mostly differ in the way centrality and relevance are estimated and incorporated, e.g., via manifold ranking (Wan et al., 2007), a look-ahead strategy (Badrinath et al., 2011), uncertainty prediction (Wan and Zhang, 2014), or attention mechanisms (Li et al., 2017a,b).", "More recently, Xu and Lapata (2020) propose a coarse-to-fine framework that leverages distant supervision from question answering to extract summary-worthy content.", "Abstractive QFS has received significantly less attention.", "This is due to generation models being particularly data-hungry (Lebanoff et al., 2018; Liu and Lapata, 2019a) and to the scarcity of QFS training data.", "The increasing availability of pretrained models has prompted the development of pipeline-style frameworks for QFS which use resources from a wider range of NLP tasks.", "For example, Su et al. (2020) fine-tune BART (Lewis et al., 2020) on CNN/DailyMail (Hermann et al., 2015), a single-document summarization dataset, and generate abstracts for QFS by iteratively summarizing paragraphs to a budget.", "They learn a query model for paragraph selection based on a plethora of QA and machine reading datasets (Su et al., 2019; Rajpurkar et al., 2016).", "Similarly, Laskar et al. (2020) fine-tune BERTSUM on CNN/DailyMail, and propose a three-stage system which uses supervision from QFS data (typically reserved for evaluation) and related QA and paraphrase identification tasks.", "We also focus on abstractive QFS; however, we do not assume access to any additional training resources over and above generic summarization datasets, even for query modeling.", "Moreover, our system is able to generate long QFS abstracts all at once, instead of iteratively creating bullet-style summaries which often lack coherence.", "Let $\{(S, \mathcal{D})\}$ denote a generic summarization dataset, where $\mathcal{D} = \{d_1, d_2, \ldots, d_M\}$ is a collection of documents with corresponding summary $S$.",
"$|\mathcal{D}| = 1$ holds for single-document summarization (SDS) and $|\mathcal{D}| > 1$ for multi-document summarization (MDS).", "In QFS, a query $Q$ additionally specifies an information request, giving $\{(S, \mathcal{D}, Q)\}$.", "It is often assumed (e.g., in DUC benchmarks) that $Q$ consists of a short title (e.g., Amnesty International) and a query narrative which is longer and more detailed (e.g., What is the scope of operations of Amnesty International and what are the international reactions to its activities?).", "In this work, we propose to decompose QFS into two sub-tasks, namely query modeling and conditional language modeling.", "The query model $q(\mathcal{D} \mid Q; \phi)$ estimates whether textual units (e.g., sentences) within document cluster $\mathcal{D}$ are relevant to query $Q$, while $p(S \mid \mathcal{D}, Q; \theta)$ generates summary $S$ conditioned on the evidence provided by the query model and (optionally) the query itself (see Figure 1(b) for an illustration).", "When the query is dropped from the conditioning, we have a query-agnostic conditional language model $p(S \mid \mathcal{D}; \theta)$.", "Otherwise, the conditional language model is query-guided.", "Our query model is trained with distant supervision derived from generic summarization data, which is easier to obtain (e.g., from online sources) compared to QA datasets which must be annotated from scratch (e.g., for different types of questions and domains).", "Although queries are not verbalized in generic summarization, we hypothesize that the summaries themselves constitute a response to latent queries.", "So, how can we reverse-engineer the queries from the summaries?", "Inspired by the standard Cloze task (Taylor, 1953) and its recent variants (Lewis et al., 2019; Lee et al., 2019), we render queries and summaries in a Unified Masked Representation (UMR), which enables summaries to serve as proxy queries for model training, as shown in Figure 1(a).", "We further assume that the answers to these queries can be found in sentences which form part of the document collection $\mathcal{D}$.", "Although we do not know for certain what these sentences are, we can assume that if they have a high ROUGE score against the reference summary they are likely to contain an answer.", "We therefore use ROUGE as a distant supervision signal, and train a model that takes a query and a document sentence as input and estimates their relevance.", "At inference time, we also render actual queries in UMR and rank all sentences in the document collection with our trained model.", "The most relevant sentences serve as input to a conditional language model to generate query focused abstractive summaries.", "As explained earlier, we train a query model $q(\mathcal{D} \mid Q; \phi)$ on summary-sentence pairs via distant supervision.", "We use a summary-based proxy query UMR$_S$ during training and an actual query UMR$_Q$ during testing.", "In the following, we first describe how UMRs are obtained and then discuss how the query model is trained.", "Unified Masked Representation: The intuition behind UMR is that a summary will encapsulate most of the salient information a user needs, while a query typically covers only a small fraction of it.", "We thus add one or more placeholders to the query to represent the missing information the user actually seeks.", "We also identify such information in generic summaries for selective masking, to reduce the distributional shift during training.", "The UMR of a summary is the concatenation of its sentential UMRs.", "To convert a sentence from natural language to UMR, we parse it with Open Information Extraction (Open IE; Stanovsky et al. 2018) into a set of propositions consisting of verbs and their arguments.",
"The latter are considered candidate information slots $\mathcal{I}$.", "We initialize Algorithm 1 by replacing all such slots with a [MASK] token.", "We subsequently sample and reveal a set of slots subject to a budget constraint.", "We define the budget as $B = \rho|\mathcal{I}|$, where $\rho \in [0, 1]$ modulates the proportion of tokens to be revealed within the $\mathcal{I}$ slots (and is optimized on the development set).", "Finally, in order to keep the representations of UMR$_S$ and UMR$_Q$ consistent (see the next paragraph), we merge adjacent [MASK] tokens into one [MASK], resulting in a partially masked summary.", "We mask QFS queries by considering their structure and lexical makeup.", "Queries in DUC benchmarks often contain interrogative words (e.g., how is A and what is B) and request words (e.g., describe A and tell me B).", "Following this observation, we manually collect a small set of such query words and replace them with [MASK].", "For queries with a title and a narrative, we first mask the narrative and then prepend '[MASK] T .', where T is the sequence of title tokens.", "Figure 1(a) shows examples of a masked query and summary.", "Evidence Ranking: We represent sentences in a document collection and UMR queries with a pretrained BERT model (Devlin et al., 2019).", "Specifically, we concatenate a UMR query and a candidate sentence into the sequence [CLS] U [SEP] C [SEP], where U is the sequence of tokens in a UMR query and C the sequence of tokens in a document sentence (we pad each sequence in a minibatch to L tokens).", "The [CLS] vector serves as input to a single-layer neural network which estimates whether the sentence contains sufficient evidence to answer the query (see Figure 1(b), right).", "We use the mean-square error to compute the loss and update the encoding parameters in BERT via standard backpropagation: $\mathcal{L}(\theta) = \frac{1}{|\mathcal{D}|} \sum_{(S, C) \in \mathcal{D}} \big[ (y - \hat{y}(S, C; \theta))^2 \big]$, where $(S, C)$ is a summary-sentence pair sampled from collection $\mathcal{D}$ and $y$ is the training signal.", "Recall that the summary is rendered as UMR$_S$.", "Previous work (Liu and Lapata, 2019a) has used ROUGE-2 as a training signal for paragraph ranking.", "However, sentences are significantly shorter than paragraphs, and we observe a number of instances with a ROUGE-2 score of 0.", "We therefore perform label smoothing and define $y$ as the F1 interpolation of ROUGE-2 and ROUGE-1: $y = \mathrm{R2}(S, C) + \lambda\,\mathrm{R1}(S, C)$, where $\lambda$ is optimized on the development set.", "At inference time, we use the trained model to compute the affinity score between UMR$_Q$ and all candidate sentences in $\mathcal{D}$, and rank them accordingly.", "The highest ranked sentences are deemed query-relevant and passed on to our summary generation model.",
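A compact sketch of how training pairs for the evidence ranker could be assembled follows, covering both the masking of a summary into UMR_S and the distant ROUGE signal; `rouge_f1` and the Open IE slot spans are assumed helpers, masking here is simplified to the token level, and the placement of the interpolation weight mirrors the label-smoothing formula above:

```python
import random
import re

def to_umr(tokens, slots, rho=0.0, seed=13):
    """Summary -> UMR_S: mask all Open IE argument spans, reveal a budget of
    B = rho * |I| slot tokens, then merge adjacent [MASK] tokens."""
    rng = random.Random(seed)
    masked = list(tokens)
    positions = [p for start, end in slots for p in range(start, end)]
    for p in positions:
        masked[p] = "[MASK]"
    for p in rng.sample(positions, int(rho * len(positions))):
        masked[p] = tokens[p]  # reveal a sampled subset under the budget
    return re.sub(r"\[MASK\]( \[MASK\])+", "[MASK]", " ".join(masked))

def training_signal(summary_umr, sentence, rouge_f1, lam=0.15):
    """Label-smoothed distant signal: ROUGE-2 F1 interpolated with ROUGE-1 F1."""
    return rouge_f1(summary_umr, sentence, n=2) + lam * rouge_f1(summary_umr, sentence, n=1)
```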
"Query Narrative Expansion: In some cases queries may be relatively short and narratives absent.", "This can be problematic for our setup, since query proxies (in the form of summaries) are typically long and detailed.", "For datasets with short queries we automatically create query narratives in an unsupervised fashion.", "We employ LexRank (Erkan and Radev, 2004) to select a subset of representative sentences under a word budget and concatenate them to form narratives (which we append to the original queries).", "(Footnote: The Cloze task has also been employed in recent work on generic summarization (Huang et al., 2020); in comparison, we address a different research question (i.e., query modeling vs. summary evaluation) based on a different formulation (masked ROUGE regression vs. multiple-choice QA).)", "We also leverage generic summarization datasets to fine-tune a pretrained language model for abstractive QFS.", "In experiments we employ the publicly released UNILMv2 (Bao et al., 2020) to instantiate the controllable generator shown in Figure 1(b); however, any other language model could have been used instead.", "With the Transformer (Vaswani et al., 2017) as the backbone network, UNILMv2 is jointly pretrained for natural language understanding and generation.", "Specifically, a bidirectional model employs an autoencoding objective (AE; identical to Devlin et al. 2019), while a partially autoregressive (PAR) sequence-to-sequence model decomposes the probability of the masked tokens in input sequence $x$ as $p(x_M \mid x_{\setminus M}) = \prod_{i=1}^{|M|} \prod_{m \in M_i} p(x_m \mid x_{\setminus M_i})$ (2), where $M$ is the uniformly-produced factorization order.", "The masked position set $M_i$ at the $i$-th factorization step can be either a token or an $n$-gram block.", "$x_M$ is the set of all $x_{M_i}$, and similarly, $x_{\setminus M}$ is the set of all $x_{\setminus M_i}$.", "The pretraining loss is computed as $\mathcal{L}_{AE} + \mathcal{L}_{PAR}$.", "At inference, UNILMv2 operates over the sentences deemed relevant by the query model and decodes summaries autoregressively (see Figure 1(b), left).", "Synthetic MDS Data: The pre-trained language model can be fine-tuned on MDS datasets (e.g., Multi-News; Fabbri et al. 2019), which are perhaps better aligned with the QFS task since both MDS and QFS operate over document clusters.", "We additionally propose a way to create synthetic MDS datasets based on SDS data.", "This is advantageous for two reasons.", "Firstly, MDS resources are fairly limited compared to SDS data (Zhang et al., 2018; Lebanoff et al., 2018).", "And secondly, by construction, we can ensure various data characteristics which might be desirable (e.g., the number of topics represented in the document collection).", "A challenge with leveraging SDS for QFS is the summary length (Lebanoff et al., 2018).", "Summaries in SDS datasets such as CNN/DailyMail (Hermann et al., 2015) are on average 30 tokens long.", "In contrast, query focused summaries can be as long as 250 tokens.", "We sidestep this problem by adopting a retrieval-based solution.", "Specifically, we first build a database with all summaries in the original dataset.", "For each sample $(d_i, s_i)$, we query the database with summary $s_i$.", "We retrieve $N_i - 1$ other summaries $\bar{S}_i$ with the bigram hashing and TF-IDF matching method described in Chen et al. (2017).", "Then, we fetch their corresponding articles $\bar{\mathcal{D}}_i$ and form the $i$-th cluster as $\mathcal{D}_i = \{d_i\} \cup \bar{\mathcal{D}}_i$ (3) and $\bar{s}_i = \mathrm{concat}(s_i, s_{i,1}, \ldots, s_{i,N_i})$ with $s_{i,n} \in \bar{S}_i$ (4), where $\mathcal{D}_i$ are the source documents and $\bar{s}_i$ is a potentially redundant summary of them.", "We set $N_i$ to minimize the length difference between $\bar{s}_i$ and our summary length requirement (e.g., 250 tokens).", "To obtain the final summary $s_i$, we eliminate redundancy by selecting sentences from the start of $\bar{s}_i$, skipping sentences that have high cosine similarity with those which have already been selected.",
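The retrieval-based construction of synthetic MDS clusters (Eqs. 3-4) could look roughly like this; the TF-IDF retriever stands in for the bigram-hashing matcher of Chen et al. (2017), and all names are illustrative:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def build_cluster(i, summaries, articles, target_len=250, sim_threshold=0.6):
    """Form D_i = {d_i} U retrieved articles (Eq. 3) and a concatenated,
    then de-duplicated, summary (Eq. 4)."""
    vec = TfidfVectorizer(ngram_range=(1, 2)).fit(summaries)
    X = vec.transform(summaries)
    order = cosine_similarity(X[i], X).ravel().argsort()[::-1]
    docs, concat = [articles[i]], [summaries[i]]
    for j in order:
        if j == i:
            continue
        if len(" ".join(concat).split()) >= target_len:
            break
        docs.append(articles[j])     # grow the cluster with retrieved articles
        concat.append(summaries[j])  # concat(s_i, s_{i,1}, ...)
    kept = []  # redundancy removal: keep sentences from the start
    for sent in " ".join(concat).split(". "):
        sims = cosine_similarity(vec.transform([sent]), vec.transform(kept)) if kept else None
        if sims is None or sims.max() < sim_threshold:
            kept.append(sent)
        if len(" ".join(kept).split()) >= target_len:
            break
    return docs, ". ".join(kept)
```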
"Summarization Input: In generic MDS, the input to the summarization model is a long sequence, i.e., documents within a cluster are concatenated together and sentences in each document follow their original order (Fabbri et al., 2019).", "In QFS, information about absolute (document) position is lost after evidence ranking.", "As a result, there is a discrepancy between training and testing for our generation model.", "To mitigate this, we collect all sentences across documents for each training sample and rank them in descending order according to their ROUGE-2 score against the reference summary.", "The pretrained language model is fine-tuned against this evidence-ranked list of sentences.", "During inference, when actual queries are available, we instead use the top sentences ranked by our query model as input to summary generation.", "Query Guidance: Given that the summarization input essentially consists of sentences that are highly relevant to the query, an obvious question concerns the usefulness of explicitly modeling the query during generation.", "We thus instantiate two conditional language models.", "For a query-guided summarizer $p(S \mid \mathcal{D}, Q; \theta)$, we prepend UMR$_S$ to the selected evidence during training and UMR$_Q$ at inference.", "For a query-agnostic summarizer $p(S \mid \mathcal{D}; \theta)$, we only consider the selected evidence as input to our summarizer; this setting is identical to generic MDS.", "Length Control: QFS tasks usually require summaries of a fixed length budget (e.g., 250 words), whereas summary length is bound to be variable in the training data.", "[Table 1: Multi-document QFS dataset statistics (DUC 2005 / DUC 2006 / DUC 2007 / TD-QFS). Domain: Cross / Cross / Cross / Medical; Query Narrative: Long / Long / Long / Short; #Clusters: 50 / 50 / 45 / 4; #Queries/Cluster: 1 / 1 / 1 / 10; #Documents/Cluster: 32 / 25 / 25 / 185; #Summaries/Query: 4-9 / 4 / 4 / 3; #Words/Summary: 250 / 250 / 250 / 250.]", "Inspired by Fan et al. (2018), we quantize summary length into discrete bins.", "We augment each training instance with this information, i.e., we prepend a length token (e.g., [230]) to the document sentences.", "At inference, we inform the model of the summary budget by prepending the expected length token (e.g., [250]) to the sentences selected by the evidence ranker (see Figure 1(b)).",
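Length control as described reduces to a small preprocessing step; a minimal sketch with an assumed bin width:

```python
def length_token(n_words, bin_width=25, num_bins=10):
    """Quantize a summary length into one of `num_bins` discrete bins; the
    returned token (e.g. '[250]') is prepended to the model input."""
    b = min(n_words // bin_width, num_bins - 1)
    return f"[{(b + 1) * bin_width}]"

# training:  input = length_token(len(reference.split())) + " " + evidence
# inference: input = "[250] " + evidence  # the required summary budget
```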
"Datasets: We performed experiments on the DUC 2005-2007 QFS benchmarks and TD-QFS (Baumel et al., 2016).", "DUC benchmarks contain long query narratives, while TD-QFS focuses on medical texts with short keyword queries.", "Statistics for both datasets are given in Table 1.", "We used DUC 2005 as a development set to optimize hyperparameters and select abstractive models, and evaluated performance on the other three datasets.", "We used Multi-News (Fabbri et al., 2019) and CNN/DailyMail (Hermann et al., 2015) as our generic summarization datasets to train MARGE (for evidence ranking) and to fine-tune UNILMv2 (for summary generation).", "Data statistics are shown in Table 2.", "To create the training and development sets for optimizing MARGE, we sampled sentences from each dataset.", "Specifically, we took the first and last 20 sentences from each cluster in Multi-News and the first and last three sentences from each article in CNN/DailyMail.", "For fine-tuning UNILMv2, we used the original Multi-News and the synthetic multi-document version of CNN/DailyMail described in Section 5.", "[Table 2: Training data for query modeling and summary generation. Query modeling (Multi-News / CNN-DM): #Sentences/Doc 20 / 3; #Train 1,615,508 / 1,719,210; #Validation 200,824 / 80,052; #Words/Proxy Query 111.7 / 26.0; #Masks/Proxy Query 35.6 / 8.1. Summary generation (Multi-News / CNN-DM): #Clusters 44,972 / 287,227; #Documents/Cluster 2.8 / 4.1; #Words/Summary 257.2 / 261.3.]", "Implementation Details: We used the publicly released BERT model (https://github.com/huggingface/pytorch-transformers) and fine-tuned it for ROUGE regression with a learning rate of $3 \times 10^{-5}$ and a batch size of 128 for 3 epochs on 8 GPUs (GTX 2080 Ti).", "We trained two summarization models, on CNN/DailyMail and Multi-News respectively, with the same hardware.", "For both models, we set the maximum input length to 768 and fine-tuned the publicly released UNILMv2 model (https://github.com/microsoft/unilm) with a learning rate of $7 \times 10^{-5}$ and a batch size of 16 for 40,000 steps, with gradient accumulation every 4 steps.", "During decoding, we used beam search with beam size 5 and trigram blocking (Paulus et al., 2018) to reduce redundancy.", "The cosine similarity threshold for redundancy removal was set to 0.6, and summary length was discretized into 10 bins.", "The parameter $\lambda$ for label smoothing was set to 0.15.", "We set $\rho$, the parameter which modulates the proportion of information slots to reveal during masking, to 0 (see the Appendix for a detailed analysis of $\rho$ and its effect on model performance).", "Our experiments evaluate both components of the proposed approach, namely query modeling and summary generation.", "We assess the evidence ranker and the effectiveness of the unified masking.", "We also compare our summaries against competitive abstractive and extractive systems using automatic and human-based evaluation.", "Evaluation Metrics: We evaluate query modeling with retrieval and summarization metrics.", "For the former evaluation, we follow Liu and Lapata (2019a), concatenate the top $k$ ranked sentences, and calculate recall against the gold summaries.", "We additionally propose to evaluate the model output as if it were an extractive summary, to better assess coverage and informativeness.", "[Table 3: Retrieval performance of evidence rankers (R@10 / R@30). DUC 2006: ORACLE 6.7 / 16.2; TERMFREQ 7.2 / 15.1; BERTQA 8.5 / 16.3; BERTMRC 8.2 / 16.6; MARGE-MN 11.1 / 20.2; MARGE-CD 9.1 / 17.4. DUC 2007: ORACLE 8.4 / 19.1; TERMFREQ 8.5 / 18.5; BERTQA 10.2 / 20.2; BERTMRC 9.0 / 19.2; MARGE-MN 13.8 / 25.3; MARGE-CD 11.1 / 22.1. TD-QFS: ORACLE 17.2 / 35.6; TERMFREQ 14.2 / 25.9; BERTQA 9.8 / 21.9; BERTMRC 8.1 / 16.4; MARGE-MN 11.2 / 21.6 (+EXPAND 18.1 / 32.9); MARGE-CD 10.0 / 18.7 (+EXPAND 17.2 / 27.7).]", "We thus take the top sentences subject to a budget of 250 tokens, and remove redundancy by selecting sentences from the top and skipping sentences that have high cosine similarity (e.g., 0.6) with the selected ones.",
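This extractive-proxy protocol (top-ranked sentences up to a 250-token budget, skipping near-duplicates at cosine similarity 0.6) can be sketched as follows; `embed` is an assumed sentence-embedding helper:

```python
import numpy as np

def extractive_proxy(ranked_sentences, embed, budget=250, sim_threshold=0.6):
    """Turn ranked evidence into a pseudo-extractive summary: take sentences
    from the top, skip near-duplicates, stop at the token budget."""
    picked, vecs, n_tokens = [], [], 0
    for sent in ranked_sentences:
        v = embed(sent)
        v = v / (np.linalg.norm(v) + 1e-8)
        if any(float(v @ u) >= sim_threshold for u in vecs):
            continue  # too similar to an already selected sentence
        picked.append(sent)
        vecs.append(v)
        n_tokens += len(sent.split())
        if n_tokens >= budget:
            break
    return " ".join(picked)
```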
6 ) with selected ones.", "We use ROUGE F1 to evaluate the resulting summaries so that precision is also taken into account.", "Results We compare MARGE against Term Frequency, a simple but effective retrieval method that performs particularly well on DUC datasets (Katra-gadda and Varma, 2009).", "We also compare to two semantic matching models used for extractive QFS (Xu and Lapata, 2020): BERTQA which is trained on the joint set of WikiQA (Yang et al., 2015) and TrecQA (Yao et al., 2013), and BERTMRC which is fine-tuned on SQuAD 2.0 (Rajpurkar et al., 2018).", "ORACLE uses reference summaries as queries to retrieve summary sentences.", "For summarization evaluation, we report upper bound performance (GOLD ) which we estimated by comparing a (ran-domly selected) reference summary against the remaining three reference summaries.", "In addition, we compare to LEAD which returns all lead sentences of the most recent document (up to 250 words) and LEXRANK (Erkan and Radev, 2004), a widely-used unsupervised method based on Markov random walks on sentence-similarity graphs.", "5 We summarize ranking and summarization results in Tables 3 and 4.", "As we can see, despite learning from weak signals, i.e., proxy queries and proxy answers, MARGE outperforms the strongest base-5 To examine ranking performance, we exclude multi-stage frameworks like Xu and Lapata (2020) that rerank the evidence with additional modules (e.g., centrality).", "line, BERTQA , under both evaluation tasks.", "Without recourse to any question/answer annotations or dataset-specific retrieval methods, our model provides more informative input to the downstream generation task.", "As anticipated, query expansion ( + EXPAND ) gives a big boost on TD-QFS (which has short queries) leading to better coverage.", "Ablation Studies Table 5 shows the outcome of various ablation studies which assess the effectiveness of masking and how to best instantiate it.", "Specifically, Verb additionally treats verbs as information slots for sampling and masking; Mask removes masking entirely so that the whole summary is revealed; Query removes the proxy query (at training time) and the actual query (at inference time); this is to investigate whether our model simply learns to judge sentence salience based on its own features, instead of performing semantic matching with the given query; OpenIE removes the dependency on Open IE and chooses words to mask at random.", "Specifically, we randomly mask 15% words in summaries as in BERT (Devlin et al., 2019) and merge adjacent [MASK] tokens.", "Performance drops in all cases, especially when queries Models DUC 2006 DUC 2007 TD-QFS PQSUM-WSL 16.5 17.7 QUERYSUM 15.3 16.8 20.7 BART-CAQ 12.9 14.4 PQSUM 14.8 16.0 UNILM-MN 11.8 12.3 12.9 UNILM-CD 13.6 14.9 16.7 MARGESUM-MN 14.3 16.5 16.5 MARGESUM-CD 15.1 16.9 20.9 Table 6: Abstractive summarization models with R-SU4 (full set of results in Appendix); / : extrac-tive/supervised method.", "are removed, underscoring the effectiveness of the proposed representation and training framework.", "Automatic Evaluation Table 6 compares our model, which we call MARGESUM , against existing QFS systems.", "These include PQSUM-WSL (Laskar et al., 2020) a supervised abstractive system which represents the state of the art on DUC benchmarks.", "It first extracts relevant sentences for each document with a QA model, it then replaces some of these with reference summary sentences via a paraphrase model, and uses them to further fine-tune BERTSUM (Liu and Lapata, 2019b).", "In 
"In its supervised incarnation, two years' DUC datasets are used for training and one for testing.", "QUERYSUM (Xu and Lapata, 2020) is a state-of-the-art extractive system which adopts a coarse-to-fine process for salience estimation.", "The second block compares our model with two distantly supervised approaches.", "BART-CAQ (Su et al., 2020) uses an ensembled QA model to extract answer evidence, and a fine-tuned BART (Lewis et al., 2020) to iteratively generate summaries from paragraphs.", "PQSUM (Laskar et al., 2020) uses fine-tuned BERTSUM to generate summaries for each document in a cluster, and a QA model to rank the summary sentences against the query.", "Table 7 compares these models and our own in terms of their training requirements.", "The third block presents the performance of UNILM fine-tuned on Multi-News and CNN/DailyMail following the standard setting in Bao et al. (2020).", "It uses no query guidance or length control.", "Documents are concatenated as input for training.", "During testing, sentences are selected with MARGE but ordered according to their original document position.", "[Table 7: Training requirements of existing QFS models (QA: question answering, PI: paraphrase identification, GS: generic summarization, QFS: query focused summarization). BART-CAQ (Su et al., 2020): QA yes, PI no, GS yes, QFS no. PQSUM (Laskar et al., 2020): QA yes, PI no, GS yes, QFS no. PQSUM-WSL (Laskar et al., 2020): QA yes, PI yes, GS yes, QFS yes. UNILM (Bao et al., 2020): QA no, PI no, GS yes, QFS no. MARGESUM: QA no, PI no, GS yes, QFS no.]", "The last block shows two variants of MARGESUM, optimized on Multi-News and on a synthetic training set built from CNN/DailyMail.", "Both take as input sentences selected with MARGE-MN during inference.", "As we can see, without requiring expensive QA data (see Table 7), MARGESUM-CD outperforms the existing distantly supervised approaches.", "Its performance on DUC is on par with one of the strongest extractive systems, while on TD-QFS it is superior across metrics.", "Also note that MARGESUM trained on synthetic MDS data outperforms MARGESUM-MN.", "Compared to Multi-News, synthetic summaries cover more topics and are less redundant, which suits QFS, where there are usually multiple sub-queries to answer.", "Ablation Studies: Table 8 presents the results of several ablation studies on MARGESUM-CD.", "Replacing the input to the summarization component with sentences selected by BERTQA (Xu and Lapata, 2020) significantly decreases performance, demonstrating that the sentences selected by MARGE are useful for downstream abstractive summarization.", "Removing evidence ranking altogether (Rank) leads to a large performance drop; this is expected, since sentence position information from the original documents does not transfer well to QFS settings.", "Removing length control (Length) also hurts performance, as does the removal of query guidance (Query) at inference time.", "Human Evaluation: We also evaluated model summaries in a judgment elicitation study via Amazon Mechanical Turk.", "Native English speakers (self-reported) were asked to rate query-summary pairs on two dimensions: Succinctness (does the summary avoid unnecessary detail and redundant information?) and Coherence (does the summary make logical sense?).",
"The ratings were obtained using a five-point Likert scale.", "In addition, participants were asked to assess the Relevance of the summary to the query.", "Crowdworkers read a summary and for each sentence decided whether it is relevant (i.e., it provides an answer to the query), irrelevant (i.e., it does not answer the query), or partially relevant (i.e., it is not clear that it directly answers the query).", "Relevant sentences were awarded a score of 5, partially relevant ones a score of 2.5, and 0 otherwise.", "Sentence scores were averaged to obtain a relevance score for the whole summary.", "Participants assessed summaries created by PQSUM-WSL, the state-of-the-art abstractive system, QUERYSUM, a state-of-the-art extractive system, UNILM-CD, and MARGESUM-CD.", "We also randomly selected GOLD standard summaries to include as an upper bound.", "We sampled 20 query-cluster pairs from DUC (2006, 2007; 10 from each set) and 20 pairs from TD-QFS (5 from each cluster), and collected three responses per pair.", "(Footnote: We are grateful to Md Tahmid Rahman Laskar for providing us with the output of their PQSUM-WSL system. We include PQSUM-WSL only in the human evaluation on DUC, since it was not evaluated on TD-QFS (Laskar et al., 2020) and system output is not available.)", "Table 9 shows the human ratings for each system (we provide examples of summary output in Appendix C).", "Participants perceive MARGESUM-CD to be on par with PQSUM-WSL in terms of query relevance and summary succinctness, while significantly better than PQSUM-WSL and QUERYSUM in terms of coherence.", "In fact, participants find PQSUM-WSL summaries as incoherent as those created by the extractive QUERYSUM; this is probably due to the fact that PQSUM-WSL first generates an abstractive summary for each document and then re-ranks the generated sentences.", "Therefore, the final summary sentences are less related to each other.", "Summaries from our system are also considered significantly more relevant than those of UNILM-CD.", "Compared to PQSUM-WSL, although UNILM-CD is not good at producing relevant content, it maintains relatively higher coherence, demonstrating the effectiveness of training abstractive systems with synthetic data from SDS and generating long summaries at once.", "In this work we proposed an abstractive framework for query focused summarization.", "We provided a unified mask representation for summaries and queries, which enables summaries to serve as proxy queries for model training.", "As a result, a query model can be trained with generic summarization data without relying on additional question-answering resources.", "Experimental results across datasets show that the proposed system yields state-of-the-art performance despite the weakly supervised setting, and produces more relevant and coherent summaries compared to existing approaches.", "In the future, we would like to push this low-resource approach even further and attempt to generate abstractive summaries without access to any summarization datasets.", "The authors would like to thank the anonymous reviewers for their valuable feedback.", "We acknowledge the financial support of the European Research Council (Lapata; award number 681760).", "This research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via contract FA8650-17-C-9118.", "The views and conclusions contained herein are those of the
authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the ODNI, IARPA, or the U.S. Government.", "The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation therein." ]
[ "abstain", "method", "objective", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "method", "abstain", "abstain", "abstain", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "other", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "result", "abstain", "abstain", "abstain", "method", "abstain", "objective", "method", "abstain", "abstain", "method", "other", "other", "other", "other", "other" ]
[ "Understanding narrative text requires capturing characters' motivations, goals, and mental states.", "This paper proposes an Entity-based Narrative Graph (ENG) to model the internal-states of characters in a story.", "We explicitly model entities, their interactions and the context in which they appear, and learn rich representations for them.", "We experiment with different task-adaptive pre-training objectives, in-domain training, and symbolic inference to capture dependencies between different decisions in the output space.", "We evaluate our model on two narrative understanding tasks: predicting character mental states, and desire fulfillment, and conduct a qualitative analysis.", "Understanding narrative text requires modeling the motivations, goals and internal states of the characters described in it.", "These elements can help explain intentional behavior and capture causal connections between the characters' actions and their goals.", "While this is straightforward for humans, machine readers often struggle as a correct analysis relies on making long range common-sense inferences over the narrative text.", "Providing the appropriate narrative representation for making such inferences is therefore a key component.", "In this paper, we suggest a novel narrative representation model and evaluate it on two narrative understanding tasks, analyzing the characters' mental states and motivations (Abdul-Mageed and Ungar, 2017; Rashkin et al., 2018; Chen et al., 2020), and desire fulfillment (Chaturvedi et al., 2016; Rahimtoroghi et al., 2017).", "We follow the observation that narrative understanding requires an expressive representation capturing the context in which events appear and the interactions between characters' states.", "To clarify, consider the short story in Fig. 1. The desire expression appears early in the story and provides the context explaining the protagonist's actions.", "Evaluating the fulfilment status of this expression, Cindy really likes apples.", "which tends to appear towards the end of the story, requires models that can reason over the desire expression ( trying something new ), its target ( ap-ples ) and the outcome of the protagonist's actions ( it's now her favorite apple dish! ).", "Capturing the interaction between the motivation underlying the desire expression (in Fig. 1, CURIOSITY ) and the emotions (in Fig. 
1, ANTICIPATION ) likely to be invoked by the motivation can help ensure the consistency of this analysis and improve its quality.", "To meet this challenge, we suggest a graph-contextualized representation for entity states.", "Similar to contextualized word representations (Peters et al., 2018; Devlin et al., 2019), we suggest learning an entity-based representation which captures the narrative it is a part of.", "For example, in She decided to try to make baked apples for the first time the mental state of she would be represented differently given a different context, such as a different motivation for the action ( Her mother asked her to make an apple dish for a dinner party ).", "In this case, the contextualized representation would capture the different emotion associated with it (e.g., FEAR of disappointing her mother).", "Unlike contextualized word embeddings, entity-based con-textualization needs to consider, at least, two levels of context: local text context and distant event context, which require more complicated modeling techniques to capture event semantics.", "Moreover, the context of event relationships can spread over a long narrative, exceeding maximum sequence length limitation in modern contextualized word embedding models such as BERT (Devlin et al., 2019).", "In this paper, we propose an Entity-based Narrative Graph (ENG) representation of the text.", "Unlike other graph-based narrative representations (Lehn-ert, 1981; Goyal et al., 2010; Elson, 2012) which require intensive human annotation, we design our models around low-cost supervision sources and shift the focus from symbolic graph representations of nuanced information to their learned embedding.", "In ENG, each node is associated with an entity-event pair, representing an entity mention that is involved in an event.", "Edges represent observed relations between entities or events.", "We adapt the definition of event relationships introduced in Lee et al. 
"We adapt the definition of event relationships introduced in Lee et al. (2020) to our entity-event scenario.", "For entity relationships, the CNext relationship connects two coreferent entity nodes.", "For event relationships, the Next relationship captures the sequential order of events as they appear in the text, and six discourse relation types from the Penn Discourse Tree Bank (PDTB) (Prasad et al., 2007) are used.", "These include Before, After, Sync., Contrast, Reason and Result.", "Note that these are extracted in a weakly supervised manner, without expensive human annotations.", "To contextualize the entity embeddings over ENG, we apply a Relational Graph Convolution Network (R-GCN) (Schlichtkrull et al., 2018), a relational variant of the Graph Convolution Network architecture (GCN) (Kipf and Welling, 2016).", "R-GCNs create contextualized node representations by considering the graph structure through graph convolutions and learning a composition function.", "This architecture allows us to take into account the narrative structure and the different discourse relations connecting the entity-event nodes.", "To further enhance our model, we investigate three possible pre-training paradigms: whole-word-masking, node prediction, and link prediction.", "All of them are constructed by automatically extracting noisy supervision and pre-training on a large-scale corpus.", "We show that choosing the right pre-training strategy can lead to significant performance enhancements in downstream tasks.", "For example, automatically extracting sentiment for entities can impact downstream emotion predictions.", "Finally, we explore the use of a symbolic inference layer to model relationships in the output space, and show that we can obtain additional gains in the downstream tasks that have strong correlations in the output space.", "The evaluated downstream tasks include two challenging narrative analysis tasks: predicting characters' psychological states (Rashkin et al., 2018) and desire fulfillment (Rahimtoroghi et al., 2017).", "Results show that our model can outperform competitive transformer-based representations of the narrative text, suggesting that explicitly modeling the relational structure of entities and events is beneficial.", "Our code and trained models are publicly available at https://github.com/doug919/entity_based_narrative_graph.", "Tracking entities and modeling their properties has proven successful in a wide range of tasks, including language modeling (Ji et al., 2017), question answering (Henaff et al., 2017) and text generation (Bosselut et al., 2018).", "In an effort to model complex story dynamics in text, Rashkin et al. (2018) released a dataset for tracking the emotional reactions of characters in stories.", "In their dataset, each character mention is annotated with three types of mental state descriptors: Maslow's hierarchy of needs (Maslow, 1943); Reiss' basic motives (Reiss, 2004), which provide a more informative range of motivations; and Plutchik's wheel of emotions (Plutchik, 1980), comprised of eight basic emotional dimensions (e.g., joy, sadness, etc.).", "In their paper, they showed that neural models with explicit or latent entity representations achieve promising results on this task.", "Paul and Frank (2019) approached this task by extracting multi-hop relational paths from ConceptNet, while Gaonkar et al. (2020) leveraged the semantics of the emotional states by embedding their textual description and modeling the correlation between different entity states.",
"Rahimtoroghi et al. (2017) introduced a dataset for the task of desire fulfillment.", "They identified desire expressions in first-person narratives and annotated their fulfillment status.", "They showed that models that capture the flow of the narrative perform well on this task.", "Representing the narrative flow of stories using graph structures and multi-relational embeddings has been studied in the context of script learning (Li et al., 2018; Lee and Goldwasser, 2019; Lee et al., 2020).", "In these cases, the nodes represent predicate-centric events, and entity mentions are added as context to the events.", "In this paper, we use an entity-centric narrative graph, where nodes are defined by entity mentions and their textual context.", "We encode the textual information in the nodes using pre-trained language models (Devlin et al., 2019; Liu et al., 2019), and the graph structure with a relational graph neural network (Schlichtkrull et al., 2018).", "To learn the representation, we incorporate a task-adaptive pre-training phase.", "Gururangan et al. (2020) showed that further specializing large pre-trained language models to domains and tasks within those domains is effective.", "Many NLU applications require understanding entity states in order to make sophisticated inferences (Sap et al., 2018; Bosselut et al., 2019; Rashkin et al., 2018), and entity states are highly related to the events the entities are involved in.", "In this work, we propose a learning framework that aims at modeling entities' internal states and their interactions with other entities' internal states through events.", "We include task-adaptive pre-training (TAPT) and downstream task training to train an entity-based narrative graph (ENG), a graph neural model designed to capture implicit states and interactions between entities.", "We extend the narrative graph proposed by Lee et al. (2020), which models event relationships; instead of learning node representations for events, we focus on entity mentions that are involved in events.", "This change is motivated by the high demand from NLU applications that require understanding the states of entity mentions in order to make sophisticated inferences.", "Our framework consists of four main components: Node Encoder, Graph Encoder, Learning Objectives, and Symbolic Inference, outlined in Figure 2.",
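A minimal PyTorch skeleton of this four-component pipeline may help fix ideas. The class and method names below are illustrative assumptions rather than the authors' code, and the symbolic inference step, which runs at prediction time, is omitted.

```python
# A sketch of the ENG pipeline: local node encoding, graph contextualization,
# then a task head. Submodules are stand-ins for the components in Fig. 2.
import torch.nn as nn

class ENGModel(nn.Module):
    def __init__(self, node_encoder, graph_encoder, classifier):
        super().__init__()
        self.node_encoder = node_encoder    # e.g., RoBERTa over (s, ctx(c), L)
        self.graph_encoder = graph_encoder  # e.g., a 2-layer R-GCN
        self.classifier = classifier        # task-specific head (Sec. 3.4)

    def forward(self, node_inputs, edges):
        v = self.node_encoder(node_inputs)   # local node embeddings
        h = self.graph_encoder(v, edges)     # graph-contextualized embeddings
        return self.classifier(h)            # task logits
```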
"The node encoder is a function that extracts event information about the target entity mention, producing the local node representation.", "The graph encoder uses a graph neural network to contextualize node representations with the entity-events in the same document, generating entity-context-aware representations.", "The learning objectives use this representation for several learning tasks, such as node classification, link prediction, and document classification.", "Finally, we include a symbolic inference procedure to capture dependencies between output decisions.", "We introduce a training pipeline, containing pre-training and downstream training, following recent evidence suggesting that task-adaptive pre-training is potentially useful for many NLU tasks (Gururangan et al., 2020).", "We experiment with three pre-training setups, including the common whole-word-masking pre-training (Liu et al., 2019), and two newly proposed unsupervised pre-training objectives based on ENG.", "We then evaluate two downstream tasks: StoryCommonsense (Rashkin et al., 2018) and DesireDB (Rahimtoroghi et al., 2017).", "StoryCommonsense aims at predicting three sets of mental states based on psychological theories (Maslow, 1943; Reiss, 2004; Plutchik, 1980), while DesireDB's goal is to identify whether a target desire is satisfied or not.", "Solving these tasks requires understanding entities' mental states and their interactions.", "Each node in our graph captures the local context of a specific entity mention (or character mention); how the entity mentions are extracted is tied to how their edges are extracted, which is described in Sec. 3.3.", "Following Gaonkar et al. (2020), we format the input information to be fed into a pretrained language model.", "For a given character c and sentence s, the inputs to the node encoder consist of three components (s, ctx(c), L), where s is the sentence in which c appears, ctx(c) is the context of c (all the sentences in which the character appears), and L is a label sentence.", "The label sentence is an artificial sentence of the form [entity name] is [label 1], [label 2], ..., [label k].", "The k labels correspond to the target labels in the downstream task.", "For example, in StoryCommonsense, the Plutchik state prediction task has eight labels characterizing human emotions, such as joy, trust, and anger.", "Gaonkar et al. (2020) show that self-attention is an effective way to let the model take label semantics into account and improve performance (all candidate labels are appended to every example, without denoting which one is the right answer; our preliminary experiments confirm that taking label semantics into account improves performance).", "Our best model uses RoBERTa (Liu et al., 2019), a highly-optimized version of BERT (Devlin et al., 2019), to encode nodes.", "We convert the node input (s, ctx(c), L) to RoBERTa's two-sentence input format by treating s as the first sentence, and the concatenation of ctx(c) and L as the second sentence.", "After forward propagation, we take the pooled sentence representation (i.e., the <s> token for RoBERTa, [CLS] for BERT) as the node representation v.", "This is formulated as v = f_roberta(s, ctx(c), L).", "The ENG is defined as ENG = (V, E), where V is the set of encoded nodes in a document and E is the set of edges capturing relationships between nodes.", "Each edge e ∈ E is a triplet (v_1, r, v_2), where v_1, v_2 ∈ V and r is an edge type (r ∈ R).",
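To make the node-encoder input format concrete, here is a sketch using the HuggingFace transformers API. The function name, example strings, and label list are assumptions; the two-sentence packing, the label-sentence template, and the pooled <s> vector follow the description above.

```python
# Sketch of the node encoder: RoBERTa over (s, ctx(c), L) in two-sentence format.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
encoder = AutoModel.from_pretrained("roberta-base")

def encode_node(s: str, ctx: str, entity: str, labels: list) -> torch.Tensor:
    # Artificial label sentence: "[entity name] is [label 1], ..., [label k]."
    label_sentence = f"{entity} is {', '.join(labels)}."
    inputs = tokenizer(s, ctx + " " + label_sentence,
                       truncation=True, max_length=160, return_tensors="pt")
    with torch.no_grad():
        outputs = encoder(**inputs)
    return outputs.last_hidden_state[:, 0]  # pooled <s> embedding, v in the text

v = encode_node("She decided to try to make baked apples.",
                "Cindy really likes apples.",
                "Cindy", ["joy", "trust", "anger"])
```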
"Following Lee et al. (2020), we use eight relation types (|R| = 8) that have been shown to be useful for modeling narratives.", "NEXT denotes that two nodes appear in neighboring sentences.", "CNEXT expresses the next occurrence of a specific entity following its co-reference chain.", "Six discourse relation types, used by Lee et al. (2020) and defined in the Penn Discourse Tree Bank (PDTB) (Prasad et al., 2007), are also used in this work: BEFORE, AFTER, SYNC., CONTRAST, REASON, and RESULT.", "Their corresponding definitions in PDTB can be found in Table 1.", "Following Lee et al. (2020), we use the Stanford CoreNLP pipeline (v4.0 with default annotators; Manning et al., 2014) to obtain co-reference links and dependency trees.", "We use them as heuristics to extract the above relations and to identify entities for TAPT.", "Details of this procedure can be found in (Lee et al., 2020).", "Note that although we share the same relation definitions, our nodes are defined over entities, instead of events.", "For encoding the graph, we use a Relational Graph Convolution Network (R-GCN) (Schlichtkrull et al., 2018), which was designed for knowledge base completion.", "This architecture is capable of modeling typed edges and is resilient to noise.", "R-GCN is defined as: h_i^{l+1} = \mathrm{ReLU}\Big( \sum_{r \in R} \sum_{u \in U_r(v_i)} \frac{1}{z_{i,r}} W_r^l h_u^l \Big), (1) where h_i^l is the hidden representation of the i-th node at layer l and h_i^0 = v_i (the output of the node encoder); U_r(v_i) denotes v_i's neighboring nodes connected by the relation type r; z_{i,r} is a normalization constant; and W_r^l are trainable parameters.", "Our implementation of R-GCN propagates messages between entity nodes, emulating the interactions between their psychological states, and thus enriching node representations with context.", "Note that our framework is flexible, and alternative node and graph encoders could be used.",
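A direct, readability-first sketch of the R-GCN layer in Eq. 1 follows. The normalization choice z_{i,r} = |U_r(v_i)| is an assumption (the text only names z_{i,r} as a normalization term), and a practical implementation would use an optimized layer such as torch_geometric's RGCNConv.

```python
# One R-GCN layer (Eq. 1): per-relation weights and normalized message passing.
import torch
import torch.nn as nn

class RGCNLayer(nn.Module):
    def __init__(self, dim: int, num_relations: int):
        super().__init__()
        self.weights = nn.ModuleList(
            [nn.Linear(dim, dim, bias=False) for _ in range(num_relations)])

    def forward(self, h, edges):
        # h: (num_nodes, dim); edges: list of (src, rel, dst) triplets
        out = torch.zeros_like(h)
        for r, w_r in enumerate(self.weights):
            for i in range(h.size(0)):
                nbrs = [u for (u, rel, v) in edges if v == i and rel == r]
                if nbrs:
                    # mean over neighbors = (1/z_{i,r}) * sum of W_r^l h_u^l
                    out[i] += w_r(h[nbrs].mean(dim=0))
        return torch.relu(out)
```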
"Node Classification: for node classification, we use the contextualized node embeddings coming from the graph encoder, and plug a k-layer feed-forward neural network on top (k = 2 in our case).", "The learning objectives can be either multi-class or multi-label.", "For multi-class classification, we use the weighted cross-entropy (CE) loss.", "For multi-label classification, we use the binary cross-entropy (BCE) loss for each label: \mathrm{CE} = -\frac{1}{N} \sum_{i=1}^{N} \alpha_i\, y_i \log(S(g(f(x_i)))), (2) where S(\cdot) is the softmax function, f(\cdot) is the graph encoder, g(\cdot) is the node encoder, x_i is the input consisting of the target node i ((s, ctx(c), L)) and all other nodes in the same document (or ENG), y_i is the label, and \alpha_i is the example weight based on the label distribution of the training set.", "Link Prediction: this objective tries to recover missing links in a given ENG.", "We sample a small portion of edges (20% in our case) as positive examples, based on the relation type distribution given in Table 1, taken from the training set.", "To obtain negative examples, we corrupt the positive examples by replacing one component of the edge triplet with a sampled component, so that the resulting triplet does not exist in the original graph.", "For example, given a positive edge (e_1, r, e_2), we can create negative edges (e'_1, r, e_2), (e_1, r', e_2), or (e_1, r, e'_2).", "Following Schlichtkrull et al. (2018), we score each edge sample with DistMult (Chang et al., 2014): D(i, r, j) = h_i^T W_r h_j, (3) where W_r is a relation-specific trainable matrix (non-diagonal) and h_i and h_j are node embeddings coming from the graph encoder.", "A higher score indicates that the edge is more likely to be active.", "To learn this, we reward positive samples and penalize negative ones, using an adapted CE loss: L = -\frac{1}{|T|} \sum_{(i,r,j,y) \in T} \big[ y \log(\sigma(\epsilon_r D(i,r,j))) + (1-y) \log(1 - \sigma(\epsilon_r D(i,r,j))) \big], (4) where T is the set of sampled edges, y ∈ {0, 1}, \sigma(\cdot) is the sigmoid function, and \epsilon_r is the edge-type weight, based on the edge sampling rates in Table 1.", "Document Classification: for document classifications, such as DesireDB, we aggregate the node representations from the entire ENG to form a single representation.", "To leverage the relative importance of each node, we add a self-attention layer on top of the graph nodes.", "We calculate the attention weights by attending on the query embedding (in DesireDB, this is the sentence embedding of the desire expression): a_i = \mathrm{ReLU}(W_a [h_i; h_t] + b_a), z_i = \exp(a_i), \alpha_i = z_i / \sum_k z_k, h_d = \sum_i \alpha_i h_i, (5) where h_i is the i-th node representation, h_t is the query embedding, W_a and b_a are trainable parameters, and h_d is the final document representation.", "We then feed h_d to a two-hidden-layer classifier to make predictions.", "We use the loss function specified in Eq. 2.", "3.5 Task-Adaptive Pre-training.", "Recent studies demonstrate that downstream task performance can be improved by performing self-supervised pre-training on text from the target domain (Gururangan et al., 2020), called Task-Adaptive Pre-Training (TAPT).", "To investigate whether different TAPT objectives can provide different insights for downstream tasks, we apply three possible pre-training paradigms and compare them on StoryCommonsense.", "We focus on StoryCommonsense given that the dataset was created by annotating characters' mental states on a subset of RocStories (Mostafazadeh et al., 2016), a corpus with 90K short common-sense stories.", "This provides us with a large unlabeled resource for investigating different pre-training methods.", "We run TAPT on all the RocStories text (excluding the validation and testing sets of the Story Cloze Test).",
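The link-prediction objective (Eqs. 3-4 above) can be sketched compactly. The class name, parameter initialization, and explicit Python loop below are assumptions made for clarity rather than efficiency.

```python
# Bilinear edge scoring (Eq. 3) and the weighted BCE-style loss (Eq. 4).
import torch
import torch.nn as nn

class LinkScorer(nn.Module):
    def __init__(self, dim: int, num_relations: int):
        super().__init__()
        # Relation-specific (non-diagonal) matrices W_r.
        self.W = nn.Parameter(torch.randn(num_relations, dim, dim) * 0.01)

    def score(self, h_i, r, h_j):
        # D(i, r, j) = h_i^T W_r h_j
        return h_i @ self.W[r] @ h_j

    def loss(self, samples, h, eps):
        # samples: (i, r, j, y) with y=1 for observed edges, y=0 for corrupted
        # ones; eps[r] is the edge-type weight from the sampling rates (Table 1).
        total = 0.0
        for i, r, j, y in samples:
            p = torch.sigmoid(eps[r] * self.score(h[i], r, h[j]))
            total -= y * torch.log(p) + (1 - y) * torch.log(1 - p)
        return total / len(samples)
```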
"We use the learning parameters suggested by Gururangan et al. (2020) and explore the following strategies:", "Whole-Word Masking: randomly masks a subset of words and asks the model to recover them from their context (Radford et al., 2019; Liu et al., 2019).", "We perform this task over RoBERTa, initialized with roberta-base.", "ENG Link Prediction: weakly-supervised TAPT over the ENG.", "The setup follows Sec. 3.4 (Link Prediction) to learn a model that can recover missing edges in the ENG.", "ENG Node Sentiment Classification: performs weakly-supervised sentiment TAPT.", "We use the Vader sentiment analysis tool (Hutto and Gilbert, 2014) to annotate the sentiment polarity of each node in the ENG, based on its sentence.", "The setup follows Sec. 3.4 (Node Classification).", "In addition to modeling the narrative structure in the embedding space, we add a symbolic inference procedure to capture structural dependencies in the output space for the StoryCommonsense task.", "To model these dependencies, we use DRaiL (Pacheco and Goldwasser, 2021), a neural-symbolic framework that allows us to define probabilistic logical rules on top of neural network potentials.", "Decisions in DRaiL are modeled using rules, which can be weighted (i.e., soft constraints) or unweighted (i.e., hard constraints).", "Rules are formatted as horn clauses: A ⇒ B, where A is a conjunction of observations and predicted values, and B is the output to be predicted.", "Each weighted rule is associated with a neural architecture, which is used as a scoring function to obtain the rule weight.", "The collection of rules represents the global decision, and the solution is obtained by performing MAP inference.", "Given that rules are written as horn clauses, they can be expressed as linear inequalities corresponding to their disjunctive form, and thus MAP inference is defined as a linear program.", "In DRaiL, parameters are trained using the structured hinge loss.", "This way, all neural parameters are updated to optimize the global objective.", "Additional details can be found in (Pacheco and Goldwasser, 2021).", "To score weighted rules, we use feed-forward networks over the node embeddings obtained by the objectives outlined in Sec. 3.4 and 3.5, without back-propagating to the full graph.", "We model the following rules:", "Weighted rules: we score each state, as well as state transitions of the form State(e_i, s_i) ∧ HasNext(e_i, e_j) ⇒ State(e_j, s_j), which capture the progression of a character's mental state throughout the story; here e_i and e_j are two different mentions of the same character, and HasNext is a relation between consecutive sentences.", "State can be either Maslow, Reiss or Plutchik.", "Unweighted rules: there is a dependency between Maslow's hierarchy of needs and Reiss' basic motives (Rashkin et al., 2018).", "We introduce logical constraints to disallow mismatches between the Maslow and Reiss predictions for a given mention e_i.", "In addition to this, we model positive and negative sentiment correlations between Plutchik labels.",
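To illustrate how such hard constraints change decoding, the following is a simplified, framework-agnostic sketch, not DRaiL's actual API: it reduces each task to a single choice and enumerates assignments, whereas DRaiL solves the corresponding linear program over soft and hard rules.

```python
# Brute-force MAP decoding under a hard Maslow/Reiss alignment constraint.
from itertools import product

def constrained_map(maslow_scores, reiss_scores, align):
    # maslow_scores / reiss_scores: dicts mapping label -> local model score;
    # align: set of (maslow_label, reiss_label) pairs that are compatible.
    best, best_score = None, float("-inf")
    for m, r in product(maslow_scores, reiss_scores):
        if (m, r) not in align:
            continue  # hard constraint: disallow mismatched predictions
        score = maslow_scores[m] + reiss_scores[r]
        if score > best_score:
            best, best_score = (m, r), score
    return best
```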
"To do this, we group labels into positive (e.g., joy, trust) and negative (e.g., fear, sadness) labels.", "We refer to this set of rules as inter-label dependencies: Maslow(e_i, m_i) ∧ ¬Align(m_i, r_i) ⇒ ¬Reiss(e_i, r_i); Reiss(e_i, r_i) ∧ ¬Align(m_i, r_i) ⇒ ¬Maslow(e_i, m_i); Plut(e_i, p_i) ∧ Pos(p_i) ∧ ¬Pos(p_j) ⇒ ¬Plut(e_i, p_j).", "Given that the DesireDB task requires a single prediction for each narrative graph, we do not employ symbolic inference for this task.", "Our evaluation includes two downstream tasks and a qualitative analysis.", "We report the results for different TAPT schemes and symbolic inference on StoryCommonsense.", "For the qualitative analysis, we visualize and compare the contextualized graph embeddings and contextualized word embeddings.", "For TAPT, we use RocStories, as it has a decent number of documents (90K after excluding the validation and testing sets) that share the text style of StoryCommonsense.", "For all tasks, we use the train/dev/test splits used in previous work.", "All the RoBERTa models used in this paper are initialized with roberta-base, and the BERT models with bert-base-uncased.", "The maximum sequence length for the language models is 160.", "If the input sequence exceeds this number, we keep the label sentence untouched and truncate the main sentence.", "For large ENGs, such as the long narratives in DesireDB, we set the maximum number of nodes to 60; all hidden layers have 128 hidden units; and the number of R-GCN layers is 2.", "For the learning parameters in TAPT, we set the batch size to 256 through gradient accumulation; the optimizer is Adam (Kingma and Ba, 2014) with an initial learning rate of 1e-4, ε = 1e-6, β = (0.9, 0.98), weight decay 0.01, and warm-up proportion 0.06.", "We run TAPT for 100 epochs.", "For the downstream tasks, we conduct a grid search over Adam's initial learning rate from {2e-3, 2e-4, 2e-5, 2e-6}, with 5000 warm-up steps and a stopping patience of 10.", "Model selection is done on the validation set.", "We report results for the best model.", "For learning the potentials for symbolic inference with DRaiL (Pacheco and Goldwasser, 2021), we use local normalization with a learning rate of 1e-3, and represent neural potentials using 2-layer feed-forward networks over the ENG node embeddings.", "All hidden layers consist of 128 units.", "The parameters are learned using SGD with a patience of 5, tested against the validation set.", "For more details, refer to (Pacheco and Goldwasser, 2021).", "Note that while it would be possible to back-propagate to the whole graph, this is a computationally expensive procedure.", "We leave this exploration for future work.", "StoryCommonsense consists of three subtasks: Maslow, Reiss, and Plutchik, introduced in Sec. 2.",
"Each subtask is a multi-label classification task, where the input is a sentence-character pair in a given story, and the output is a set of mental state labels.", "Each story was annotated by three annotators, and the final labels were determined through a majority vote.", "For Maslow and Reiss, the vote is count-based, i.e., if two out of three annotators flag a label, then it is an active label.", "For Plutchik, the vote is rating-based, where each label has an annotated rating ranging from 0 to 5.", "If the averaged rating is greater than or equal to 2, then it is an active label.", "This is the set-up given in the original paper (Rashkin et al., 2018).", "Some papers (Gaonkar et al., 2020) report results using only the count-based majority vote, resulting in scores that are not comparable to ours.", "Therefore, we re-implement two recent strong models proposed for this task.", "The Label Correlation model (LC; Gaonkar et al., 2020) applies label semantics as input and models the output space using a learned correlation matrix.", "The Self-Attention model (SA; Paul and Frank, 2019) utilizes attention over multi-hop knowledge paths extracted from an external corpus.", "We evaluate them under the same set of hyper-parameters and model selection strategies as our models.", "We briefly explain all the baselines, as well as our model variants, shown in Table 2.",

Table 2: Results for the StoryCommonsense task, including three multi-label tasks (Maslow, Reiss, and Plutchik), for predicting humans' mental states of motivations or emotions. (* denotes our re-implementations; columns are Precision / Recall / F1.)

Group  Model       Maslow (P / R / F1)     Reiss (P / R / F1)      Plutchik (P / R / F1)
G1     RANDOM      7.45 / 49.99 / 12.96    1.76 / 50.02 / 3.40     10.35 / 50.00 / 17.15
G1     TF-IDF      29.79 / 34.56 / 32.00   20.55 / 24.81 / 22.48   22.71 / 25.24 / 23.91
G1     GLOVE       27.02 / 37.00 / 31.23   16.99 / 26.08 / 20.58   19.47 / 46.65 / 27.48
G1     LSTM        30.34 / 40.12 / 34.55   21.38 / 28.70 / 24.51   25.31 / 33.44 / 28.81
G1     CNN         29.30 / 44.18 / 35.23   17.87 / 37.52 / 24.21   24.47 / 38.87 / 30.04
G1     REN         26.85 / 44.78 / 33.57   16.73 / 26.55 / 20.53   25.30 / 37.30 / 30.15
G1     NPN         26.60 / 39.17 / 31.69   15.75 / 20.34 / 17.75   24.33 / 40.10 / 30.29
G2     SA-ELMo*    34.91 / 32.16 / 33.48   21.23 / 16.53 / 18.59   47.33 / 40.86 / 43.86
G2     SA-RBERT*   43.58 / 30.03 / 35.55   24.75 / 18.00 / 20.84   46.51 / 45.45 / 45.97
G2     LC-BERT*    43.05 / 41.31 / 42.16   29.46 / 28.67 / 29.06   49.36 / 52.09 / 50.69
G2     LC-RBERT*   43.25 / 47.17 / 45.13   39.62 / 29.75 / 33.98   47.87 / 53.41 / 50.49
G3     ENG         43.87 / 51.13 / 47.22   37.66 / 36.20 / 36.92   48.96 / 56.07 / 52.27
G3     ENG+Mask    44.27 / 53.54 / 48.47   39.29 / 33.93 / 36.41   49.64 / 56.93 / 53.03
G3     ENG+Link    43.47 / 52.80 / 47.68   37.17 / 37.18 / 37.18   50.62 / 54.48 / 52.48
G3     ENG+Sent    45.29 / 50.89 / 47.93   36.69 / 36.14 / 36.41   49.48 / 57.12 / 53.03
G4     ENG+IL      40.90 / 58.03 / 47.98   31.67 / 41.19 / 35.81   49.93 / 74.95 / 59.93
G4     ENG+IL+ST   40.47 / 58.43 / 47.82   31.80 / 40.58 / 35.66   51.19 / 72.60 / 60.04
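The two voting schemes described above reduce to a few lines. A small sketch follows, assuming (as stated) three annotators per label.

```python
# Majority-vote label activation for StoryCommonsense annotations.
def count_based_vote(flags):           # Maslow / Reiss
    # flags: list of 0/1 judgments from the three annotators
    return sum(flags) >= 2             # active if at least two annotators flag it

def rating_based_vote(ratings):        # Plutchik
    # ratings: per-annotator ratings in the range 0..5
    return sum(ratings) / len(ratings) >= 2  # active if the average rating >= 2
```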
"The first group (G1) contains the baselines proposed in the task paper.", "TF-IDF uses TF-IDF features, trained on RocStories, to represent the target sentence s and character context ctx(c), and uses a feed-forward network (FFN) classifier; GloVe encodes the sentences with pretrained GloVe embeddings and uses an FFN; CNN (Kim, 2014) replaces the FFN with a convolutional neural network; LSTM is a two-layer bi-directional LSTM; REN (Henaff et al., 2017) is a recurrent entity network that learns to encode information in memory cells; and NPN (Bosselut et al., 2018) is an REN variant that includes a neural process network.", "The second group (G2) of baselines is based on two recent publications, LC and SA, that showed strong performance on this task.", "We re-implement them and run the evaluation under the same setting as our proposed models.", "They originally use BERT and ELMo, respectively.", "To provide a fair comparison, we also train a RoBERTa variant for each (LC-RBERT and SA-RBERT).", "Note that the original paper of SA (Paul and Frank, 2019) reports an F1 of 59.81 on Maslow and 35.41 on Reiss, while LC (Gaonkar et al., 2020) reports 65.88 on Plutchik.", "However, these results are not directly comparable to ours.", "The discrepancy arises mainly from two points: (1) the rating-based voting, described in Sec. 4.2, is not properly applied, and (2) we do not optimize the hyper-parameter search space in our setting, given the relatively expensive pre-training.", "Our re-implemented versions give a better foundation for a fair comparison.", "The third (G3) and fourth (G4) groups are our model variants.", "ENG is the model without TAPT; ENG+Mask, ENG+Link, and ENG+Sent are the models with Whole-Word-Masking (WM), Link Prediction (LP), and Node Sentiment (NS) TAPT, respectively.", "In the last group, ENG(Best)+IL and ENG(Best)+IL+ST are based on our best ENG model with TAPT, adding inter-label dependencies (IL) and state transitions (ST) using symbolic inference, described in Sec. 3.6.", "Table 2 reports all the results.", "We can see that Group 2 generally performs better than Group 1 on all three subtasks, suggesting that our implementation is reasonable.", "Even without TAPT, ENG outperforms all baselines, yielding a 2-3% absolute F1-score improvement.", "With TAPT, the performance is further strengthened.", "Moreover, we find that different TAPT tasks offer different levels of improvement for each subtask.", "WM helps the most on Maslow and Plutchik, while LP and NS excel on Reiss and Plutchik, respectively.", "This means that different TAPTs embed different information needed for solving each subtask.", "For example, the ability to add potential edges can be key for motivation reasoning (Reiss), while identifying sentiment polarities (NS) can help in emotion analysis (Plutchik).", "This observation suggests a direction of connecting different related tasks in a joint pipeline.", "We leave this for future work.", "Lastly, we evaluate the impact of symbolic inference.", "We perform joint inference over the rules defined in Sec. 3.6.",
3.6.", "On Table 2, we can appreciate the advantage of modeling these dependencies for predicting Plutchik labels.", "However, the same is not true for the other two subtasks, where symbolic inference increases recall at the expense of precision, resulting in no F1 improvement.", "Note that labels for Maslow and Reiss are sparser, accounting for 55% and 42% of the nodes, respectively.", "In contrast, Plutchik labels are present in 68% of the nodes.", "DesireDB (Rahimtoroghi et al., 2017) is the task of predicting whether a desire expression is fulfilled or not, given its prior and posterior context.", "It requires aggregating information from multiple parts of the document.", "If a target desire is I want to be rich, and the character's mental changed from sad to happy along the text, we can infer that their desire is likely to be fulfilled.", "We use the baseline systems described in (Rahimtoroghi et al., 2017), based on SkipThought (ST) and Logistic Regression (LR), with manually engineered lexical and discourse features.", "We train a stronger baseline by encoding the prior and posterior context, as well as the desire expression, using BERT.", "Then, we add an attention layer (Eq. 5) for the two contexts over the desire expression.", "The resulting three representations (the weighted prior and posterior representations, and the desire representation) are then concatenated.", "For ENG, we add an attention layer over the nodes to form the ENG document representation.", "We compare BERT and BERT+ENG document representations by feeding each of them into a two-layer FFN for classification, as described in Sec. 3.4 (Doc. Classification).", "Table 3 shows the result.", "The BERT baseline outperforms other baselines with a large gap, 4 .", "27% absolute increase in the averaged F1-score.", "Furthermore, BERT+ENG forms a better document summary for the target desire, which further increase another absolute 3 .", "23% on the avg.", "F1-score.", "These results illustrate that ENG can be used in various settings for modeling entity information.", "We conduct a qualitative analysis by measuring and visualizing distances between event nodes corresponding to six verbs and their Maslow labels.", "We project the node embeddings, based on different encoders, to a 2-D space using t-SNE (Maaten and Hinton, 2008).", "We use shapes to represent verbs and colors to represent labels.", "In Fig. 3b and 3c, RoBERTa, pretrained on Whole-Word-Masking TAPT, was used.", "Nodes are word-contextualized, receiving the whole story ( W-CTX-STORY ) or the target sentence ( W-CTX-SENT ) as context.", "In these two cases, event nodes with the same verb (shape) tend to be closer.", "In Fig. 
"In Fig. 3a, we use ENG as the encoder to generate graph-contextualized embeddings (ENG-CTX).", "We observe that nodes with the same label (color) tend to be closer.", "In all cases, the embedding was trained using only the TAPT tasks, without task-specific data.", "The ENG embedding is better at capturing entities' mental states, rather than verb information, as the graph structure is entity-driven.", "Figure 4 makes this point quantitatively.", "We use 10-fold cross validation and report averaged results.", "The proximity between verbs and between labels is measured in two ways: cluster purity and KNN classification.", "For the cluster purity (Manning et al., 2008), we cluster the events using K-Means (K = 5), and calculate the averaged cluster purity, \mathrm{purity}(C, D) = \frac{1}{N} \sum_{c \in C} \max_{d \in D} |c \cap d|, where C is the set of clusters and D is either the set of labels or verbs.", "For graph contextualization, we can see that the labels have higher cluster purity than the verbs, while for word contextualization, the verbs have higher cluster purity.", "This result aligns with our visualization.", "The KNN classification uses the learned embedding as a distance function.", "The KNN classifier performs better when classifying labels using the graph-contextualized embeddings, while it performs better when classifying verbs using the word-contextualized embeddings.", "These results demonstrate that ENG can better capture the states of entities.", "We propose an ENG model that captures implicit information about the states of narrative entities using multi-relational graph contextualization.", "We study three types of weakly-supervised TAPT for ENG and their impact on the performance of downstream tasks, as well as symbolic inference capturing the interactions between predictions.", "Our empirical evaluation was done over two narrative analysis tasks.", "The results show that ENG can outperform other strong baselines, and that the contribution of different types of TAPT is task-dependent.", "In the future, we want to connect different TAPT schemes and downstream tasks, and explore constrained representations.", "We thank the reviewers for their efforts and insights.", "This work was partially funded by the NSF and DARPA ASED program under contracts CNS-1814105 and 13000686." ]
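A small sketch of the cluster-purity measurement used in the qualitative analysis above, assuming node embeddings and per-node class ids (verb ids or label ids) are available as NumPy arrays; the random arrays are stand-ins for real data.

```python
# Cluster purity: purity(C, D) = (1/N) * sum_c max_d |c ∩ d|.
import numpy as np
from sklearn.cluster import KMeans

def purity(cluster_ids, class_ids):
    total = 0
    for c in np.unique(cluster_ids):
        members = class_ids[cluster_ids == c]
        total += np.bincount(members).max()   # size of the dominant class in c
    return total / len(class_ids)

embeddings = np.random.randn(100, 768)         # stand-in for node embeddings
class_ids = np.random.randint(0, 5, size=100)  # stand-in for verb or label ids
cluster_ids = KMeans(n_clusters=5).fit_predict(embeddings)
print(purity(cluster_ids, class_ids))
```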
[ "abstain", "objective", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "objective", "abstain", "result", "abstain", "objective", "abstain", "result", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "method", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "other", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "objective", "other", "other" ]
[ "The existence of multiple datasets for sarcasm detection prompts us to apply transfer learning to exploit their commonality.", "The adversarial neural transfer (ANT) framework utilizes multiple loss terms that encourage the source-domain and the target-domain feature distributions to be similar while optimizing for domain-specific performance.", "However, these objectives may be in conflict, which can lead to optimization difficulties and sometimes diminished transfer.", "We propose a generalized latent optimization strategy that allows different losses to accommodate each other and improves training dynamics.", "The proposed method outperforms transfer learning and meta-learning baselines.", "In particular, we achieve 10.02% absolute performance gain over the previous state of the art on the iSarcasm dataset.", "Sarcastic language is commonly found in social media posts (Gonzlez-Ibez et al., 2011; Maynard and Greenwood, 2014), forum discussions (Khodak et al., 2018a), product reviews (Davidov et al., 2010; Filatova, 2012) and everyday conversations (Gibbs, 2000).", "Detecting sarcasm is an integral part of creative language understanding (Veale et al., 2019) and online opinion mining (Kannan-gara, 2018).", "Due to highly contextualized expressions, detecting sarcasm is a challenging task, even for humans (Fox Tree et al., 2020).", "A challenge specific to sarcasm detection is the difficulty in acquiring ground-truth annotations.", "Human-annotated datasets (Filatova, 2012; Riloff et al., 2013; Van Hee et al., 2018; Oprea and Magdy, 2020) usually contain only a few thousand texts, resulting in many small datasets.", "In comparison, automatic data collection using distant supervision signals like hashtags (Ptcek et al., 2014; Bamman and Smith, 2015; Joshi et al., 2015) yielded * Corresponding authors substantially larger datasets.", "Nevertheless, the automatic approach also led to label noise.", "For example, Oprea and Magdy (2020) found nearly half of the tweets with sarcasm hashtags in one dataset are not sarcastic.", "The existence of diverse datasets and data collection methods prompts us to exploit their commonality through transfer learning.", "Specifically, we transfer knowledge learned from large and noisy datasets to improve sarcasm detection on small human-annotated datasets that serve as effective performance benchmarks.", "Adversarial neural transfer (ANT) (Ganin and Lempitsky, 2015; Liu et al., 2017; Kim et al., 2017; Kamath et al., 2019) employs an adversarial setup where the network learns to make the shared feature distributions of the source domain and the target domain as similar as possible, while simultaneously optimizing for domain-specific performance.", "However, as the domain-specific losses promote the use of domain-specific features, these training objectives may compete with each other implicitly.", "This leads to optimization difficulties and potentially degenerate cases where the domain-specific classifiers ignore the shared features and no meaningful transfer occurs between domains.", "To cope with this issue, we propose Latent-Optimized Adversarial Neural Transfer (LOANT).", "The latent optimization strategy can be understood with analogies to to one-step look-ahead during gradient descent and Model-Agnostic Meta Learning (Finn et al., 2017).", "By forcing domain-specific losses to accommodate the negative domain discrimination loss, it improves training dynamics (Balduzzi et al., 2018).", "With LOANT, we achieve 10.02% absolute improvement over the 
"With LOANT, we achieve a 10.02% absolute improvement over the previous state of the art on the iSarcasm dataset (Oprea and Magdy, 2020) and a 3.08% improvement on the SemEval-18 dataset (Van Hee et al., 2018).", "Over four sets of transfer learning experiments, latent optimization on average brings a 3.42% improvement in F-score over traditional adversarial neural transfer and 4.83% over a similar training strategy from Model-Agnostic Meta-Learning (MAML) (Finn et al., 2017).", "In contrast, traditional ANT brings an average of only 0.9% F-score improvement over non-adversarial multi-task learning.", "The results demonstrate that LOANT can effectively perform knowledge transfer for the task of sarcasm detection and suggest that the proposed latent optimization strategy enables collaboration among the ANT losses during optimization.", "Our contributions can be summarized as follows:", "1. Inspired by the existence of multiple small sarcasm datasets, we propose to use transfer learning to bridge dataset differences.", "To the best of our knowledge, this is the first study of transfer learning between different sarcasm detection datasets.", "2. We propose LOANT, a novel latent-optimized adversarial neural transfer model for cross-domain sarcasm detection.", "By conducting stochastic gradient descent (SGD) with one-step look-ahead, LOANT outperforms traditional adversarial neural transfer, multi-task learning, and meta-learning baselines, and establishes a new state-of-the-art F-score of 46.41%.", "The code and datasets are available at https://github.com/guoxuxu/LOANT.", "Acquiring large and reliable datasets has been a persistent challenge for computational detection of sarcasm.", "Due to the cost of annotation, manually labeled datasets (Walker et al., 2012; Riloff et al., 2013; Wallace et al., 2014; Abercrombie and Hovy, 2016; Oraby et al., 2016; Van Hee et al., 2018; Oprea and Magdy, 2020) typically contain only a few thousand texts.", "Automatic crawling (Ptáček et al., 2014; Bamman and Smith, 2015; Joshi et al., 2015; Khodak et al., 2018b) using hashtags or markers yields substantially more texts, but the results are understandably more noisy.", "As a case study, after examining the dataset of Riloff et al. (2013), Oprea and Magdy (2020) found that nearly half of the tweets with sarcasm hashtags are not sarcastic.", "In this paper, we evaluate performance on the manually labeled datasets, which are relatively clean and can serve as good benchmarks, and transfer the knowledge learned from automatically collected datasets.", "Traditional sarcasm detection includes methods based on rules (Tepperman et al., 2006) and lexical (Kreuz and Caucci, 2007) and pragmatic patterns (González-Ibáñez et al., 2011).", "Context-aware methods (Rajadesingan et al., 2015; Bamman and Smith, 2015) make use of contexts, such as the author, the audience, and the environment, to enrich feature representations.", "Deep learning techniques for sarcasm detection employ convolutional networks (Ghosh and Veale, 2016), recurrent neural networks (Zhang et al., 2016; Felbo et al., 2017; Wu et al., 2018), attention (Tay et al., 2018), and pooling (Xiong et al., 2019) operations.", "Amir et al. (2016) incorporate historic information for each Twitter user.", "Cai et al. (2019) consider the images that accompany tweets, and Mishra et al. (2017) utilize readers' gaze patterns.",
"To the best of our knowledge, no prior work has explored transfer learning between different sarcasm datasets.", "As a transfer learning technique, multi-task learning (MTL) allows related tasks or similar domains to inform each other and has been a powerful technique for NLP (Collobert et al., 2011; Yang et al., 2017; Aharoni et al., 2019; Guo et al., 2019; Raffel et al., 2020).", "However, MTL does not always lead to performance improvements (Alonso and Plank, 2017; Bingel and Søgaard, 2017; Changpinyo et al., 2018; Clark et al., 2019).", "Theoretical analysis (Ben-David et al., 2010) indicates that a key factor for the success of transfer is to reduce the divergence between the feature spaces of the domains.", "Ganin and Lempitsky (2015) propose to minimize domain differences via a GAN-like setup, where a domain discriminator network learns to distinguish between features from two domains and a feature extraction network learns to produce indistinguishable features, which are conducive to transfer learning.", "Similar adversarial setups (Liu et al., 2017; Kim et al., 2017) have been adopted for many NLP tasks, such as sentiment analysis (Chen et al., 2018; Liu et al., 2018), satirical news detection (McHardy et al., 2019), detection of duplicate questions (Kamath et al., 2019), named entity recognition (Zhou et al., 2019), and QA (Yu et al., 2018).", "However, as shown in our experiments, adding the domain discriminator to MTL does not always result in improved performance.", "We attribute this to the implicit competition between the negative domain discrimination loss and the domain-specific losses, which causes difficulties in optimization.", "In this paper, we improve the training dynamics of adversarial transfer learning using latent optimization on BERT features.", "The idea of coordinating gradient updates of different and competing losses using gradient descent with look-ahead has been explored in Latent-Optimized Generative Adversarial Networks (LOGAN) (Wu et al., 2019b,a), Symplectic Gradient Adjustment (Balduzzi et al., 2018; Gemp and Mahadevan, 2019), Unrolled GAN (Metz et al., 2016), Model-Agnostic Meta-Learning (Finn et al., 2017) and extragradient (Azizian et al., 2020).", "The difference between LOGAN and the other techniques is that LOGAN computes the derivative of the randomly sampled latent input, whereas the other methods compute the second-order derivative in the model parameter space.", "In this paper, we generalize latent optimization from GANs to multi-task learning, where the adversarial loss is complemented by domain-specific task losses.", "In addition, we apply latent optimization on the output of the BERT module, which differs from the optimization of the random latent variable in LOGAN.", "As large pretrained masked language models (PMLMs) gain prominence in NLP, latent optimization avoids gradient computation on the parameters of enormous PMLMs, providing reductions in running time and memory usage.", "In supervised transfer learning, we assume labeled data for both the source domain and the target domain are available.", "The source-domain dataset D_s comprises data points in the format (x_s, y_s) and the target-domain dataset D_t comprises data points in the format (x_t, y_t).", "The labels y_s and y_t are one-hot vectors.", "The task of supervised cross-domain sarcasm detection can be formulated as learning a target-domain function f_t(x_t) that predicts correct labels for unseen x_t.",
"Fig. 1 shows the model architecture for adversarial neural transfer (ANT) (Liu et al., 2017; Kamath et al., 2019; Kim et al., 2017).", "We use a large pretrained neural network, BERT (Devlin et al., 2019), as the sentence encoder, though the architecture is not tied to BERT and can use other pretrained encoders.", "We denote the parameters of the BERT encoder as w_b, and its output for data in the source domain and the target domain as z_s ∈ R^D and z_t ∈ R^D, respectively.", "We denote this encoder operation as z_s = E(x_s, w_b), z_t = E(x_t, w_b). (1)", "On top of these outputs, we apply domain-specific dense layers to create domain-specific features v_s, v_t, and shared dense layers to create shared features u_s, u_t.", "We use w_s, w_t, and w_sh to denote the parameters of the source dense layers, the target dense layers, and the shared dense layers.", "The concatenation of features [v_s, u_s] is fed to the source-domain classifier, parameterized by θ_s; [v_t, u_t] is fed to the target-domain classifier, parameterized by θ_t.", "The two classifiers categorize the tweets into sarcastic and non-sarcastic and are trained using cross-entropy.", "For reasons that will become apparent later, we make explicit the reliance on z_s and z_t: L_s(z_s) = -\sum_i y_{s,i} \log p(\hat{y}_{s,i} | z_s), L_t(z_t) = -\sum_i y_{t,i} \log p(\hat{y}_{t,i} | z_t), (2) where \hat{y}_s and \hat{y}_t are the predicted labels and i is the index of the vector components.", "Simultaneously, the domain discriminator learns to distinguish the features u_s and u_t as coming from different domains.", "The domain discriminator is parameterized by θ_d.", "It is trained to minimize the domain classification loss, L_d(z_t, z_s) = -\log p(0 | u_s) - \log p(1 | u_t). (3)", "Through the use of the gradient reversal layer, the shared dense layers and the feature encoder maximize the domain classification loss, so that the shared features u_s and u_t become indistinguishable and conducive to transfer learning.", "In summary, the network weights w_b, w_s, w_t, w_sh, θ_s, θ_t are trained to minimize the following joint loss, L_ANT = L_s(z_s) + L_t(z_t) - L_d(z_t, z_s), (4) whereas θ_d is trained to minimize L_d(z_t, z_s).", "It is worth noting that the effects of the three loss terms in Eq. 4 on the shared parameters w_sh and w_b may be competing with each other.", "This is because optimizing sarcasm detection in one domain will encourage the network to extract domain-specific features, whereas the domain discrimination loss constrains the network to avoid such features.", "It is possible for the competition to result in degenerate scenarios.", "For example, the shared features u_s and u_t may become indistinguishable but also not correlate with the labels y_s and y_t.", "The domain classifiers may ignore the shared features u_s and u_t, and hence no transfer happens.", "To cope with this issue, we introduce a latent optimization strategy that forces domain-specific losses to accommodate the domain discrimination loss.",
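The gradient reversal layer mentioned above is a standard construction; a minimal PyTorch sketch is shown below, assuming the usual formulation (identity in the forward pass, negated and optionally scaled gradient in the backward pass) rather than anything specific to this paper.

```python
# Gradient reversal: lets the discriminator minimize L_d while the encoder
# and shared layers effectively maximize it.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Flip (and scale) the gradient flowing back into the encoder.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)
```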
"We now introduce the latent representation optimization strategy.", "First, we perform one step of stochastic gradient descent on -L_d on the encoded features z_s and z_t with learning rate α: z'_s = z_s + α ∂L_d(z_s, z_t)/∂z_s, (5) z'_t = z_t + α ∂L_d(z_s, z_t)/∂z_t. (6)", "We emphasize that this is a descent step because we are minimizing -L_d.", "After that, we use the updated z'_s and z'_t in the computation of the losses: L^{LO}_s(z_s, z'_s) = L_s(z_s) + L_s(z'_s), (7) L^{LO}_t(z_t, z'_t) = L_t(z_t) + L_t(z'_t). (8)", "The new joint objective hence becomes L^{LO} = L^{LO}_s(z_s, z'_s) + L^{LO}_t(z_t, z'_t) - L_d(z_t, z_s), which is optimized using regular stochastic gradient descent (SGD) on w_b, w_s, w_t, w_sh, θ_s, and θ_t.", "Here we show the general case of gradient computation.", "Consider any weight vector w in the neural network.", "Equations 5 and 6 introduce two intermediate variables z'_s and z'_t, which are functions of the model parameter w.", "Therefore, we perform SGD using the following total derivative: dL^{LO}/dw = ∂L^{LO}/∂w + (∂L^{LO}_s(z'_s)/∂z'_s)(∂z'_s/∂w) + (∂L^{LO}_t(z'_t)/∂z'_t)(∂z'_t/∂w).", "For every network parameter other than the encoder weight w_b, ∂z/∂w is zero.", "The second-order derivative ∂²L_d(z)/∂z∂w is difficult to compute due to the high dimensionality of w.", "Since α is usually very small, we adopt a first-order approximation and directly set the second-order derivative to zero.", "Letting φ_s = [w_s, θ_s] and φ_t = [w_t, θ_t], we now show the total derivatives for all network parameters: dL^{LO}/dw_b = ∂L_ANT/∂w_b + ∂L_s(z'_s)/∂w_b + ∂L_t(z'_t)/∂w_b + (∂L_s(z'_s)/∂z'_s)(∂z_s/∂w_b) + (∂L_t(z'_t)/∂z'_t)(∂z_t/∂w_b), (12) dL^{LO}/dw_sh = ∂L_ANT/∂w_sh + ∂L_s(z'_s)/∂w_sh + ∂L_t(z'_t)/∂w_sh, (13) dL^{LO}/dφ_s = ∂L_s(z_s)/∂φ_s + ∂L_s(z'_s)/∂φ_s, dL^{LO}/dφ_t = ∂L_t(z_t)/∂φ_t + ∂L_t(z'_t)/∂φ_t, dL^{LO}/dθ_d = ∂L_d(z_s, z_t)/∂θ_d. (14)", "More details can be found in Appendix A.", "Fig. 2 illustrates the latent optimization process.", "Algorithm 1 shows the LOANT algorithm.",

Algorithm 1: Training of LOANT
Input: source data (x_s, y_s), target data (x_t, y_t), learning rate α
Initialize model parameters w
repeat
    Sample N batches of data pairs
    for i = 1 to N do
        Compute the forward losses L_s, L_t, L_d
        Compute Δz_s = α ∂L_d(z_s)/∂z_s and Δz_t = α ∂L_d(z_t)/∂z_t
        Update the latent representations z'_s = z_s + Δz_s and z'_t = z_t + Δz_t
        Compute the new joint loss L^{LO} = L^{LO}_s + L^{LO}_t - L_d
        Update w using gradient descent
    end for
until converged
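Putting Algorithm 1 together, here is a condensed sketch of one LOANT update. The module methods (encode, task_loss_s, task_loss_t, domain_loss) are assumed names for the components of Fig. 1, and the explicit minus on L_d is schematic: in the actual model, the gradient reversal layer gives the discriminator the opposite sign.

```python
# One LOANT training step (Algorithm 1 / Eqs. 5-8), first-order approximation.
import torch

def loant_step(model, batch_s, batch_t, optimizer, alpha):
    z_s = model.encode(batch_s)          # BERT features, source
    z_t = model.encode(batch_t)          # BERT features, target
    L_d = model.domain_loss(z_s, z_t)    # domain discrimination loss

    # Inner step (Eqs. 5-6): move z along +grad of L_d, i.e., descend on -L_d.
    g_s, g_t = torch.autograd.grad(L_d, [z_s, z_t], retain_graph=True)
    z_s2 = z_s + alpha * g_s             # g_s, g_t are treated as constants,
    z_t2 = z_t + alpha * g_t             # matching the first-order approximation

    # Outer objective (Eqs. 7-8) evaluated at both z and the look-ahead z'.
    loss = (model.task_loss_s(z_s) + model.task_loss_s(z_s2)
            + model.task_loss_t(z_t) + model.task_loss_t(z_t2)
            - L_d)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```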
"In the contour diagrams of Fig. 3, we show the optimization of a 2-dimensional quadratic function.", "This simple example showcases how the ability to look one step ahead can improve optimization in pathological loss landscapes.", "[Figure 3(c) caption: Full-Hessian extragradient, which finds a direct path to the local minimum, enabling a large learning rate η = 0.1.]", "We motivate the nested optimization of LOANT by drawing an analogy between EG and LOANT.", "It is worth noting that LOANT differs from the EG update rule in important ways.", "Specifically, in EG the inner GD step and the outer GD step are performed on the same function f(·), whereas LOANT performs the inner step on L_d and the outer step on L_s or L_t.", "For a similar idea with multiple losses, we turn to MAML (Finn et al., 2017).", "In MAML, there are K tasks with losses L_1, ..., L_k, ..., L_K.", "On every task, we perform a one-step SGD update to the model parameter w ∈ R^L: w_{T_k} = w - η ∂L_k(w)/∂w. (16)", "Utilizing the idea of look-ahead, in MAML we update w so that subsequent optimization on any single task or combination of tasks would achieve good results.", "Adversarial neural transfer has three tasks: the source-domain and target-domain classifications and the negative discriminator loss.", "The updates performed by LOANT in Eqs. 5 and 6 are similar to MAML's look-ahead update in Eq. 16.", "Specifically, when we update model parameters using the gradient from the total loss L^{LO}, we prepare for the next descent step on -L_d.", "Therefore, LOANT can be understood as forcing domain-specific losses to accommodate the domain discrimination loss and mitigating their competition.", "LOANT differs from MAML since, in the inner update, LOANT updates the sentence-level features z_s and z_t instead of the model parameters w.", "As z_s and z_t are usually of much smaller dimensions than w, this leads to accelerated training and a reduced memory footprint.", "For example, in the BERT-base model (Devlin et al., 2019), L is 110 million and D is 768.", "Within the regular range of batch size B, BD ≪ L.", "In the experiments, we verify the benefits of LOANT in terms of accuracy and time and space complexity.",
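A minimal sketch of the first-order look-ahead update in Eq. 15 follows, using a generic ill-conditioned quadratic as a stand-in objective; the step size and iteration count are arbitrary choices for the demonstration, and no claim is made about which method wins on this particular function.

```python
# First-order extragradient: step with the gradient measured one GD step ahead.
import torch

def extragradient_step(w: torch.Tensor, f, eta: float) -> torch.Tensor:
    g = torch.autograd.grad(f(w), w)[0]                     # gradient at w
    w_look = (w - eta * g).detach().requires_grad_(True)    # look-ahead point
    g_look = torch.autograd.grad(f(w_look), w_look)[0]      # gradient there
    return (w - eta * g_look).detach().requires_grad_(True)

w = torch.tensor([10.0, 1.0], requires_grad=True)
f = lambda w: 0.5 * (w[0] ** 2 + 25.0 * w[1] ** 2)  # ill-conditioned quadratic
for _ in range(100):
    w = extragradient_step(w, f, eta=0.03)
print(w)  # approaches the minimum at the origin
```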
"We conduct four cross-domain sarcasm detection experiments by transferring from an automatically collected dataset to a manually annotated dataset.", "The two automatically collected datasets are Ptáček (Ptáček et al., 2014) and Ghosh (Ghosh and Veale, 2016; data at https://github.com/AniSkywalker/SarcasmDetection/tree/master/resource), which treat tweets having particular hashtags, such as #sarcastic, #sarcasm or #not, as sarcastic and others as not sarcastic.", "We crawled the Ptáček dataset using the NLTK API (http://www.nltk.org/howto/twitter) according to the tweet ids published online.", "The two manually annotated datasets are SemEval-18 (Van Hee et al., 2018) and iSarcasm (Oprea and Magdy, 2020).", "SemEval-18 consists of both sarcastic and ironic tweets supervised by third-party annotators and thus is used for perceived sarcasm detection.", "The iSarcasm dataset contains tweets written by participants of an online survey and thus is an example of intended sarcasm detection.", "Table 1 summarizes the statistics of the four datasets.", "The SemEval-18 dataset is balanced, while the iSarcasm dataset is imbalanced.", "The two source datasets are more than ten times the size of the target datasets.", "For all datasets, we use the predefined test set and a random 10% split of the training set as the development set.", "We preprocessed all datasets using the lexical normalization tool for tweets from Baziotis et al. (2017).", "We cleaned the four datasets by dropping all duplicate tweets within and across datasets, and trimmed the texts to a maximum length of 100.", "To deal with class imbalance, we performed upsampling on the target-domain datasets, so that both the sarcastic and non-sarcastic classes have the same size as in the source-domain datasets.", "MIARN (Tay et al., 2018): a state-of-the-art single-text sarcasm detection model ranked top-1 on the iSarcasm dataset.", "The model is a co-attention based LSTM model which uses word embeddings pretrained on Twitter data (GloVe, https://nlp.stanford.edu/projects/glove/).", "Dense-LSTM (Wu et al., 2018): a state-of-the-art single-task sarcasm detection model ranked top-1 on the SemEval-18 dataset.", "The model is a densely connected LSTM network consisting of four Bi-LSTM layers and word embeddings pretrained on two Twitter datasets.", "BERT: we finetune the BERT model (Devlin et al., 2019) with an additional simple classifier directly on the target dataset.", "S-BERT: a two-stage finetuning of the BERT model.", "We first finetune BERT on the source dataset, and the best model is selected for further fine-tuning on the target dataset.", "MTL: we implemented a multi-task learning (MTL) model, which has the same architecture as LOANT except that the domain discriminator is removed.", "We use BERT as the shared text encoding network.", "MTL+LO: in this baseline, we applied latent optimization to MTL.", "As MTL does not have the adversarial discriminator, we use the domain-specific losses to optimize the latent representations: z'_s = z_s - α ∂L_s(z_s)/∂z_s, (18) z'_t = z_t - α ∂L_t(z_t)/∂z_t. (19)", "We use the above to replace Equations 5 and 6 and keep the remaining training steps unchanged.", "This model is compared against MTL to study the effects of LO in non-adversarial training for cross-domain sarcasm detection.", "ANT: this is the conventional adversarial neural transfer model with the same architecture as LOANT.", "The only difference is that we do not apply latent optimization.", "For fair comparison, we use BERT as the text encoder.", "ANT+MAML: in Section 3.3, we discussed the similarity between LO and MAML.", "Therefore, we create a baseline that uses a MAML-like strategy for encouraging the collaboration of different loss terms.", "Instead of optimizing the latent representations z_s and z_t, we first take an SGD step in the parameter space of w_b: w'_b = w_b + α ∂L_d(z_s, z_t)/∂w_b.", "After that, we use w'_b to compute the gradients used in the actual updates to all model parameters, including w_b.", "Model Settings.", "For all models using the BERT text encoder, we use the uncased version of the BERT-base model and take the 768-dimensional output from the last layer corresponding to the [CLS] token to represent a sentence.", "The BERT parameters are always shared between domains.", "For other network components, we randomly initialize the dense layers and classifiers.", "To minimize the effect of different random initializations, we generate the same set of initial parameters for each network component and use them across all baselines wherever possible.", "The source dense layer, the shared dense layer, and the target dense layer are single linear layers with input size 768 and output size 768, followed by the tanh activation.", "The classifier in all models consists of two linear layers.", "The first linear layer has an input size of 768 × 2 (taking both shared and
"Model Settings.", "For all models using the BERT text encoder, we use the uncased version of the BERT-base model and take the 768-dimensional output from the last layer corresponding to the [CLS] token to represent a sentence.", "The BERT parameters are always shared between domains.", "For other network components, we randomly initialize the dense layers and classifiers.", "To minimize the effect of different random initializations, we generate the same set of initial parameters for each network component and use them across all baselines wherever possible.", "The source dense layer, the shared dense layer, and the target dense layer are single linear layers with input size of 768 and output size of 768, followed by the tanh activation.", "The classifier in all models consists of two linear layers.", "The first linear layer has input size of 768 × 2 (taking both shared and domain-specific features) and output size of 768, followed by the ReLU activation.", "The second linear layer has input size 768 and output size 2 for binary classification.", "After that we apply the softmax operation.", "More details can be found in Appendix B.", "Training Setting.", "We optimize all models using Adam (Kingma and Ba, 2014) with a batch size of 128.", "We tune the learning rate (LR) on the development set from 1e-5 to 1e-4 in increments of 2e-5.", "To objectively assess the effects of latent optimization (LO), we first find the best LR for the base models such as ANT and MTL.", "After that, with the best LR unchanged, we apply LO to ANT and MTL.", "We use the cosine learning rate schedule for all models.", "All models are trained for 5 epochs on Nvidia V100 GPUs with 32GB of memory in mixed precision.", "Due to the large model size and pretrained weights of BERT, 5 epochs are sufficient for convergence.", "Evaluation Metrics.", "Following (Wu et al., 2018; Van Hee et al., 2018; Oprea and Magdy, 2020), we select and compare models using the F-score on the sarcastic class in each dataset.", "We additionally report the corresponding Recall and Precision.", "Table 2: Test performance of all models (F-score, Recall, Precision).
Target: SemEval-18
  Single-task:    Random 0.3730, 0.3730, 0.3730 | Unigram SVM 0.5890, 0.6590, 0.5320 | LSTM 0.5260, 0.4440, 0.6450 | DenseLSTM 0.6510, 0.7106, 0.6005 | BERT 0.6626, 0.7055, 0.6246
  Source: Ptáček: S-BERT 0.6676, 0.7055, 0.6337 | MTL 0.6404, 0.7896, 0.5386 | ANT 0.6348, 0.8187, 0.5184 | MTL+LO 0.6598, 0.7346, 0.5989 | ANT+MAML 0.6454, 0.7540, 0.5641 | LOANT (ours) 0.6702, 0.8025, 0.5754
  Source: Ghosh:  S-BERT 0.6512, 0.7766, 0.5607 | MTL 0.6525, 0.7475, 0.5789 | ANT 0.6626, 0.8899, 0.5278 | MTL+LO 0.6622, 0.8058, 0.5620 | ANT+MAML 0.6338, 0.7281, 0.5610 | LOANT (ours) 0.6818, 0.7734, 0.6096
Target: iSarcasm
  Single-task:    SIARN 0.3420, 0.7820, 0.2190 | MIARN 0.3640, 0.7930, 0.2360 | LSTM 0.3360, 0.7470, 0.2170 | DenseLSTM 0.3180, 0.2760, 0.3750 | BERT 0.3492, 0.4904, 0.2711
  Source: Ptáček: S-BERT 0.3710, 0.5541, 0.2788 | MTL 0.3767, 0.3503, 0.4074 | ANT 0.3857, 0.5159, 0.3079 | MTL+LO 0.4379, 0.4267, 0.4496 | ANT+MAML 0.3951, 0.5605, 0.2923 | LOANT (ours) 0.4642, 0.4968, 0.4357
  Source: Ghosh:  S-BERT 0.3383, 0.5732, 0.2400 | MTL 0.3838, 0.5159, 0.3056 | ANT 0.4063, 0.4904, 0.3468 | MTL+LO 0.3987, 0.4012, 0.3962 | ANT+MAML 0.3589, 0.4904, 0.2830 | LOANT (ours) 0.4101, 0.4649, 0.3668
Results reported in (Van Hee et al., 2018), in (Wu et al., 2018), and in (Oprea and Magdy, 2020).", "In all our experiments, we use the development set for model selection and report performance on the test set.", "To evaluate the efficiency of LOANT versus MAML-based training, we also compare their required GPU memory and average training time in each epoch.", "We compare models on the target domain datasets.", "Additional multi-domain performance can be found in Appendix C.",
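The sarcastic-class F-score used for model selection can be computed directly; a minimal sketch (the label strings are placeholders, not the datasets' tag names):

```python
# Precision, recall, and F-score on the positive (sarcastic) class, as used
# for model selection above.
def sarcastic_f_score(gold, pred, positive="sarcastic"):
    tp = sum(g == positive and p == positive for g, p in zip(gold, pred))
    fp = sum(g != positive and p == positive for g, p in zip(gold, pred))
    fn = sum(g == positive and p != positive for g, p in zip(gold, pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

gold = ["sarcastic", "not", "sarcastic", "not"]
pred = ["sarcastic", "sarcastic", "not", "not"]
print(sarcastic_f_score(gold, pred))  # 0.5
```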
"4.4 Comparison with the State of the Art We compare LOANT with state-of-the-art methods on the SemEval-18 dataset (Van Hee et al., 2018) and the iSarcasm dataset (Oprea and Magdy, 2020).", "Table 2 presents the test performance of LOANT and all baseline models.", "Our LOANT model consistently outperforms all single-task baselines by large margins.", "In particular, LOANT outperforms MIARN by 10.02% on iSarcasm (Oprea and Magdy, 2020), whereas the fine-tuned BERT scores 1.48% lower than MIARN.", "On SemEval-18, the fine-tuned BERT achieves better test performance than the other four single-task baselines.", "The results indicate that fine-tuning BERT, a popular baseline, does not always outperform the traditional LSTM networks specifically designed for the task.", "We hypothesize that the large BERT model can easily overfit the small datasets used, which highlights the challenge of sarcasm detection.", "The middle and bottom sections of Table 2 present the test performance of six transfer learning models (S-BERT, MTL, ANT, MTL+LO, ANT+MAML, and LOANT) under four groups of transfer learning experiments.", "These models generally outperform the single-task models, demonstrating the importance of transfer learning.", "Among these, we have the following observations.", "Effects of the Domain Discriminator.", "The performance differences between MTL and ANT can be explained by the addition of the domain discriminator, which encourages the shared features under the source domain and the target domain to have the same distributions.", "In the four pairs of experiments, ANT marginally outperforms MTL by an average of 0.9% F-score.", "In the Ptáček → SemEval-18 experiment, the domain discriminator causes F-score to decrease by 0.56%.", "Overall, the benefits of the adversarial discriminator to transfer learning appear to be limited.", "As discussed earlier, the competition between the domain-specific losses and the negative domain discrimination loss may have contributed to the ineffectiveness of ANT.", "Effects of Latent Optimization.", "We can observe the effects of LO by comparing ANT with LOANT and comparing MTL with MTL+LO.", "Note that in these experiments we adopted the best learning rates for the baseline models ANT and MTL rather than the latent-optimized models.", "On average, LOANT outperforms ANT by 3.42% in F-score and MTL+LO outperforms MTL by 2.63%, which clearly demonstrates the benefits provided by latent optimization.",
"Latent Space vs. Model Parameter Space.", "In the ANT+MAML baseline, we adopt a MAML-like optimization strategy, which performs the look-ahead in the BERT parameter space instead of the latent representation space.", "Interestingly, this strategy does not provide much improvement and on average performs 1.40% worse than ANT.", "LOANT clearly outperforms ANT+MAML.", "In addition, optimization in the latent space also provides savings in computational time and space requirements.", "Table 3 shows the time and memory consumption for different transfer learning methods.", "Adding LO to ANT has minimal effects on the memory usage, but adding MAML nearly doubles the memory consumption.", "On average, ANT+MAML requires 3.1 times the running time of LOANT.", "The Influence of Domain Divergence.", "In transfer learning, the test performance depends on the similarity between the domains.", "We thus investigate the dissimilarity between datasets using the Kullback-Leibler (KL) divergence between the unigram probability distributions: $d_{KL} = \sum_{g \in V} P_t(g) \log \frac{P_t(g)}{P_s(g)}$, where $P_s(g)$ and $P_t(g)$ are the probabilities of unigram $g$ for the source domain and target domain, respectively, and $V$ is the vocabulary.", "Table 4 shows the results.", "Ptáček is more similar to the two target datasets than Ghosh.", "Among the two target datasets, iSarcasm is more similar to Ptáček than SemEval-18.", "The largest improvement of LOANT over ANT occurs in the Ptáček → iSarcasm transfer, where domain divergence is the smallest.", "The Ptáček → SemEval-18 transfer comes in second with 3.54%.", "Transferring from Ghosh yields smaller improvements.", "Further, we observe the same trend in the comparison between MTL+LO and MTL.", "The largest improvement brought by LO is 6.12% in the Ptáček → iSarcasm transfer.", "As one may expect, applying LO leads to greater performance gains when the two domains are more similar.", "Transfer learning holds promise for the effective utilization of multiple datasets for sarcasm detection.", "In this paper, we propose a latent optimization (LO) strategy for adversarial transfer learning for sarcasm detection.", "By providing look-ahead in the gradient updates, the LO technique allows multiple losses to accommodate each other.", "This proves to be particularly effective in adversarial transfer learning, where the domain-specific losses and the adversarial loss potentially conflict with one another.", "With the proposed LOANT method, we set a new state of the art for the iSarcasm dataset.", "We hope the joint utilization of multiple datasets will contribute to the creation of contextualized semantic understanding that is necessary for successful sarcasm detection.", "This research is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG2-RP-2020-019), NRF Investigatorship (NRF-NRFI05-2019-0002), and NRF Fellowship (NRF-NRFF13-2021-0006); the Joint NTU-WeBank Research Centre on Fintech (NWJ-2020-008); the Nanyang Assistant/Associate Professorships (NAP); the RIE 2020 Advanced Manufacturing and Engineering Programmatic Fund (A20G8b0102), Singapore; NTU-SDU-CFAIR (NSC-2019-011).", "Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not reflect the views of the funding agencies." ]
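A direct implementation of the unigram KL divergence defined above; the epsilon smoothing is our addition to keep the sum finite for target unigrams unseen in the source domain:

```python
import math
from collections import Counter

def unigram_kl(source_tokens, target_tokens, eps=1e-9):
    """d_KL = sum over g in V of P_t(g) * log(P_t(g) / P_s(g)),
    with eps-smoothing for unigrams missing from the source counts (our addition)."""
    ps, pt = Counter(source_tokens), Counter(target_tokens)
    ns, nt = sum(ps.values()), sum(pt.values())
    return sum(
        (c / nt) * math.log((c / nt) / max(ps[g] / ns, eps))
        for g, c in pt.items()
    )

# Example: divergence of a toy target corpus from a toy source corpus.
print(unigram_kl("the cat sat".split(), "the dog sat".split()))
```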
[ "method", "abstain", "abstain", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "objective", "objective", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "other", "other", "abstain", "objective", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "objective", "other", "other" ]
[ "The increased focus on misinformation has spurred development of data and systems for detecting the veracity of a claim as well as retrieving authoritative evidence.", "The Fact Extraction and VERification (FEVER) dataset provides such a resource for evaluating end-to-end fact-checking, requiring retrieval of evidence from Wikipedia to validate a veracity prediction.", "We show that current systems for FEVER are vulnerable to three categories of realistic challenges for fact-checking multiple propositions, temporal reasoning, and ambiguity and lexical variation and introduce a resource with these types of claims.", "Then we present a system designed to be resilient to these attacks using multiple pointer networks for document selection and jointly modeling a sequence of evidence sentences and veracity relation predictions.", "We find that in handling these attacks we obtain state-of-the-art results on FEVER, largely due to improved evidence retrieval.", "The growing presence of biased, one-sided, and often altered discourse, is posing a challenge to our media platforms from newswire to social media (Vosoughi et al., 2018).", "To overcome this challenge, fact-checking has emerged as a necessary part of journalism, where experts examine check-worthy claims (Hassan et al., 2017) published by others for their shades of truth (e.g., FactCheck.org or Poli-tiFact).", "However, this process is time-consuming, and thus building computational models for automatic fact-checking has become an active area of research (Graves, 2018).", "Advances were made possible by new open source datasets and shared tasks: the Fact Extraction and Verification Shared Task (FEVER) 1.0 and 2.0 (Thorne et al., 2018; Thorne Work completed in part at Amazon Claim: Murda Beatz (cid:48) s real name is Marshall Mathers.", "Evidence: [Murda Beatz] Shane Lee Lindstrom (born February 11, 1994), known professionally as Murda Beatz, is a Canadian hip hop record producer and songwriter from Fort Erie, Ontario.", "Label: REFUTES Figure 1: Example from FEVER 1.0 Dataset and Vlachos, 2019), SemEval 2019 Shared Task 8: Fact-Checking in Community Forums (Mihaylova et al., 2019), and LIAR(+) datasets with claims from PolitiFact (Wang, 2017; Alhindi et al., 2018).", "The FEVER 1.0 shared task dataset (Thorne et al., 2018) has enabled the development of end-to-end fact-checking systems, requiring document retrieval and evidence sentence extraction to corroborate a veracity relation prediction (supports, refutes, not enough info).", "An example is given in Figure 1. 
"Since the claims in FEVER 1.0 were manually written using information from Wikipedia, the dataset may lack linguistic challenges that occur in verifying naturally occurring check-worthy claims, such as temporal reasoning or lexical generalization/specification.", "Thorne and Vlachos (2019) designed a second shared task (FEVER 2.0) for participants to create adversarial claims (attacks) to break state-of-the-art systems and then develop systems to resolve those attacks.", "We present a novel dataset of adversarial examples for fact extraction and verification in three challenging categories: 1) multiple propositions (claims that require multi-hop document or sentence retrieval); 2) temporal reasoning (date comparisons, ordering of events); and 3) named entity ambiguity and lexical variation (Section 4).", "We show that state-of-the-art systems are vulnerable to adversarial attacks from this dataset (Section 6).", "In addition, we take steps toward addressing these vulnerabilities, presenting a system for end-to-end fact-checking that brings two novel contributions using pointer networks: 1) a document ranking model; and 2) a joint model for evidence sentence selection and veracity relation prediction framed as a sequence labeling task (Section 5).", "Our new system achieves state-of-the-art results for FEVER and we present an evaluation of our models including ablation studies (Section 6).", "Data and code will be released to the community (https://github.com/chridey/fever2-columbia).", "2 Related Work Approaches for predicting the veracity of naturally-occurring claims have focused on statements fact-checked by journalists or organizations such as PolitiFact.org (Vlachos and Riedel, 2014; Alhindi et al., 2018), news articles (Pomerleau and Rao, 2017), or answers in community forums (Mihaylova et al., 2018, 2019).", "However, those datasets are not suited for end-to-end fact-checking as they provide sources and evidence, while FEVER (Thorne et al., 2018) requires retrieval.", "Initial work on FEVER focused on a pipeline approach of retrieving documents, selecting sentences, and then using an entailment module (Malon, 2018; Hanselowski et al., 2018; Tokala et al., 2019); the winning entry for the FEVER 1.0 shared task (Nie et al., 2019a) used three homogeneous neural models.", "Other work has jointly learned either evidence extraction and question answering (Nishida et al., 2019) or sentence selection and relation prediction (Yin and Roth, 2018; Hidey and Diab, 2018); unlike these approaches, we use the same sequential evidence prediction architecture for both document and sentence selection, jointly predicting a sequence of labels in the latter step.",
"More recently, Zhou et al. (2019) proposed a graph-based framework for multi-hop retrieval, whereas we model evidence sequentially.", "Language-based adversarial attacks have often involved transformations of the input such as phrase insertion to distract question answering systems (Jia and Liang, 2017) or to force a model to always make the same prediction (Wallace et al., 2019).", "Other research has resulted in adversarial methods for paraphrasing with universal replacement rules (Ribeiro et al., 2018) or lexical substitution (Alzantot et al., 2018; Ren et al., 2019).", "While our strategies include insertion and replacement, we focus specifically on challenges in fact-checking.", "The task of natural language inference (Bowman et al., 2015; Williams et al., 2018) provides similar challenges: examples for numerical reasoning and lexical inference have been shown to be difficult (Glockner et al., 2018; Nie et al., 2019b) and improved models on these types are likely to be useful for fact-checking.", "Finally, (Thorne and Vlachos, 2019) provided a baseline for the FEVER 2.0 shared task with entailment-based perturbations.", "Other participants generated adversarial claims using implicative phrases such as 'not clear' (Kim and Allan, 2019) or GPT-2 (Niewinski et al., 2019).", "In comparison, we present a diverse set of attacks motivated by realistic, challenging categories and further develop models to address those attacks.", "We address the end-to-end fact-checking problem in the context of FEVER (Thorne et al., 2018), a task where a system is required to verify a claim by providing evidence from Wikipedia.", "To be successful, a system needs to predict both the correct veracity relation, supported (S), refuted (R), or not enough information (NEI), and the correct set of evidence sentences (not applicable for NEI).", "The FEVER 1.0 dataset (Thorne et al., 2018) was created by extracting sentences from popular Wikipedia pages and mutating them with paraphrases or other edit operations to create a claim.", "Then, each claim was labeled and paired with evidence or the empty set for NEI.", "Overall, there are 185,445 claims, of which 90,367 are S, 40,107 are R, and 45,971 are NEI.", "Thorne and Vlachos (2019) introduced an adversarial setup for the FEVER 2.0 shared task: participants submitted claims to break existing systems and a system designed to withstand such attacks.", "The organizers provided a baseline of 1000 adversarial examples with negation and entailment-preserving/-altering transformations, and this set was combined with examples from participants to form the FEVER 2.0 dataset.", "Table 1 shows the partition of FEVER 1.0 and 2.0 data (hereafter FV1/FV2-train/dev/test).", "While the FEVER dataset is a valuable resource, our goal is to evaluate complex adversarial claims which resemble check-worthy claims found in news articles, speeches, debates, and online discussions.", "We thus propose three types of attacks based on analysis of FV1 or prior literature: those using multiple propositions, requiring temporal and numerical reasoning, and involving lexical variation.", "For the multi-propositional type, Graves (2018) notes that professional fact-checking organizations need to synthesise evidence from multiple sources; automated systems struggle with claims such as 'Lesotho is the smallest country in Africa.'", "In FV1-dev, 83.18% of S and R claims require only a single piece of evidence and 89% require only a single Wikipedia page.",
"Furthermore, our previous work on FEVER 1.0 found that our model can fully retrieve 86% of evidence sentences from Wikipedia when only a single sentence is required, but the number drops to 17% when 2 sentences are required and 3% when 3 or more sentences are required (Hidey and Diab, 2018).", "For the second type, check-worthy claims are often numerical (Francis, 2016) and temporal reasoning is especially challenging (Mirza and Tonelli, 2016).", "Rashkin et al. (2017) and Jiang and Wilson (2018) showed that numbers and comparatives are indicative of truthful statements in news, but the presence of a date alone does not indicate its veracity.", "In FV1-dev, only 17.81% of the claims contain dates and 0.22% contain time information (as determined by NER using spaCy: https://spacy.io).", "To understand how current systems perform on these types of claims, we evaluated three state-of-the-art systems from FEVER 1.0 (Hanselowski et al., 2018; Yoneda et al., 2018; Nie et al., 2019a), and examined the predictions where the systems disagreed.", "We found that in characterizing these predictions according to the named entities present in the claims, the most frequent types were numerical and temporal (such as percent, money, quantity, and date).", "Finally, adversarial attacks for lexical variation, where words may be inserted or replaced or changed with some other edit operation, have been shown to be effective for similar tasks such as natural language inference (Nie et al., 2019b) and question answering (Jia and Liang, 2017), so we include these types of attacks as well.", "For the fact-checking task, models must match words and entities across claim and evidence to make a veracity prediction.", "As claims often contain ambiguous entities (Thorne and Vlachos, 2018) or lexical features indicative of credibility (Nakashole and Mitchell, 2014), we desire models resilient to minor changes in entities (Hanselowski et al., 2018) and words (Alzantot et al., 2018).", "We thus create an adversarial dataset of 1000 examples, with 417 multi-propositional, 313 temporal and 270 lexically variational.", "Representative examples are provided in Appendix A.",
"Multiple Propositions Check-worthy claims often consist of multiple propositions (Graves, 2018).", "In the FEVER task, checking these claims may require retrieving evidence sequentially after resolving entities and events, understanding discourse connectives, and evaluating each proposition.", "Consider the claim 'Janet Leigh was from New York and was an author.'", "The Wikipedia page [Janet Leigh] contains evidence that she was an author, but makes no mention of New York.", "We generate new claims of the CONJUNCTION type automatically by mining claims from FV1-dev and extracting entities from the subject position.", "We then combine two claims by replacing the subject in one sentence with a discourse connective such as 'and'.", "The new label is S if both original claims are S, R if at least one claim is R, and NEI otherwise.", "While CONJUNCTION claims provide a way to evaluate multiple propositions about a single entity, these claims only require evidence from a single page; hence we create new examples requiring reasoning over multiple pages.", "To create MULTI-HOP examples, we select claims from FV1-dev whose evidence obtained from a single page P contains at least one other entity having a valid page Q.", "We then modify the claim by appending information about the entity which can be verified from Q.", "For example, given the claim 'The Nice Guys is a 2016 action comedy film.', we make a multi-hop claim by obtaining the page [Shane Black] (the director) and appending the phrase 'directed by a Danish screenwriter known for the film Lethal Weapon'.", "While multi-hop retrieval provides a way to evaluate the S and R cases, composition of multiple propositions may also be necessary for NEI, as the relation of the claim and evidence may be changed by more general/specific phrases.", "We thus add ADDITIONAL UNVERIFIABLE PROPOSITIONS that change the gold label to NEI.", "We selected claims from FV1-dev and added propositions which have no evidence in Wikipedia (e.g., for the claim 'Duff McKagan is an American citizen', we can add the reduced relative clause 'born in Seattle').", "Temporal Reasoning Many check-worthy claims contain dates or time periods, and verifying them requires models that handle temporal reasoning (Thorne and Vlachos, 2017).", "In order to evaluate the ability of current systems to handle temporal reasoning, we modify claims from FV1-dev.", "More specifically, using claims with the phrase 'in <date>', we automatically generate seven modified claims using simple DATE MANIPULATION heuristics: arithmetic (e.g., 'in 2001' → '4 years before 2005'), range ('in 2001' → 'before 2008'), and verbalization ('in 2001' → 'in the first decade of the 21st century').",
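A minimal sketch of the DATE MANIPULATION heuristics just described, showing one arithmetic, one range, and one verbalization rewrite for claims containing 'in <year>'; the paper generates seven variants, so these three templates (and the simplified century suffix, correct for 20xx years) are illustrative only:

```python
import re

ORDINALS = ["first", "second", "third", "fourth", "fifth",
            "sixth", "seventh", "eighth", "ninth", "tenth"]

def manipulate_dates(claim: str):
    """Rewrite 'in <year>' via arithmetic, range, and verbalization heuristics."""
    match = re.search(r"\bin (\d{4})\b", claim)
    if not match:
        return []
    year, span = int(match.group(1)), match.group(0)
    rewrites = [
        f"4 years before {year + 4}",                                    # arithmetic
        f"before {year + 7}",                                            # range
        f"in the {ORDINALS[(year % 100) // 10]} decade of the "
        f"{year // 100 + 1}st century",                                  # verbalization
    ]
    return [claim.replace(span, r) for r in rewrites]

for c in manipulate_dates("The Nice Guys premiered in 2016."):
    print(c)
```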
"We also create examples requiring MULTI-HOP TEMPORAL REASONING, where the system must evaluate an event in relation to another.", "Consider the S claim 'The first governor of the Indiana Territory lived long enough to see it become a state.'", "A system must resolve entity references (Indiana Territory and its first governor, William Henry Harrison) and compare dates of events (the admittance of Indiana in 1816 and death of Harrison in 1841).", "While multi-hop retrieval may resolve references, the model must understand the meaning of 'lived long enough to see' and evaluate the comparative statement.", "To create claims of this type, we mine Wikipedia by selecting a page X and extracting sentences with the pattern 'is/was/named the A of Y' (e.g., 'A is first governor') where Y links to another page.", "Then we manually create temporal claims by examining dates on X and Y and describing the relation between the entities and events.", "Named Entity Ambiguity and Lexical Variation As fact-checking systems are sensitive to lexical choice (Nakashole and Mitchell, 2014; Rashkin et al., 2017), we consider how variations in entities and words may affect veracity relation prediction.", "ENTITY DISAMBIGUATION has been shown to be important for retrieving the correct page for an entity among multiple candidates (Hanselowski et al., 2018).", "To create examples that contain ambiguous entities, we selected claims from FV1-dev where at least one Wikipedia disambiguation page was returned by the Wikipedia python API (https://pypi.org/project/wikipedia/).", "We then created a new claim using one of the documents returned from the disambiguation list.", "For example, the claim 'Patrick Stewart is someone who does acting for a living.' returns a disambiguation page, which in turn gives a list of pages such as [Patrick Stewart] and [Patrick Maxwell Stewart].", "Finally, as previous work has shown that neural models are vulnerable to LEXICAL SUBSTITUTION (Alzantot et al., 2018), we apply their genetic algorithm approach to replace words via counter-fitted embeddings.", "We make a claim adversarial to a model fine-tuned on claims and gold evidence by replacing synonyms, hypernyms, or hyponyms, e.g., created → established, leader → chief.", "We manually remove ungrammatical claims or incorrect relations.", "Verifying check-worthy claims such as those in Section 4 requires a system to 1) make sequential decisions to handle multiple propositions, 2) support temporal reasoning, and 3) handle ambiguity and complex lexical relations.", "To address the first requirement we make use of a pointer network (Vinyals et al., 2015) in two novel ways: i) to rerank candidate documents and ii) to jointly predict a sequence of evidence sentences and veracity relations in order to compose evidence (Figure 3).", "To address the second we add a post-processing step for simple temporal reasoning.", "To address the third we use rich, contextualized representations.", "Specifically, we fine-tune BERT (Devlin et al., 2019) as this model has shown excellent performance on related tasks and was pre-trained on Wikipedia.",
"Our full pipeline is presented in Figure 2.", "We first identify an initial candidate set of documents (1a) by combining the top M pages from a TF-IDF search using DrQA (Chen et al., 2017) with pages from the approach of Chakrabarty et al. (2018), which provides results from Google search and predicted named entities and noun phrases.", "Then, we perform document ranking by selecting the top D < M pages with a pointer network (1b).", "Next, an N-long sequence of evidence sentences (2) and veracity relation labels (3) are predicted jointly by another pointer network.", "Prior to training, we fine-tune BERT for document and sentence ranking on claim/title and claim/sentence pairs, respectively.", "Each claim and evidence pair in the FEVER 1.0 dataset has both the title of the Wikipedia article and at least one sentence associated with the evidence, so we can train on each of these pairs directly.", "For the claim 'Michelle Obama's husband was born in Kenya', shown in Figure 3, we obtain representations by pairing this claim with evidence sentences such as 'Obama was born in Hawaii' and article titles such as [Barack Obama].", "(Figure 3: Pointer network architecture.)", "The core component of our approach is the pointer network, as seen in Figure 3.", "Unlike our previous work (Hidey and Diab, 2018), we use the pointer network to re-rank candidate documents and jointly predict a sequence of evidence sentences and relations.", "Given a candidate set of evidence (as either document titles or sentences) and a respective fine-tuned BERT model, we extract features for every claim $c$ and evidence $e_p$ pair by summing the [CLS] embedding for the top 4 layers (as recommended by Devlin et al. (2019)): $m_p = \mathrm{BERT}(c, e_p)$ (1).", "Next, to select the top $k$ evidence, we use a pointer network over the evidence for claim $c$ to extract evidence recurrently by computing the extraction probability $P(p_t \mid p_0 \ldots p_{t-1})$ for evidence $e_p$ at time $t < k$.", "At time $t$, we update the hidden state $z_t$ of the pointer network decoder.", "Then we compute the weighted average $h^q_t$ of the entire evidence set using $q$ hops over the evidence (Vinyals et al., 2016; Sukhbaatar et al., 2015): $\alpha^o_t = \mathrm{softmax}(v_h^\top \tanh(W_g m_p + W_a h^{o-1}_t))$ and $h^o_t = \sum_j \alpha^o_{t,j} W_g m_j$ (2), where $h^0_t$ is initially set to $z_t$, and $v_h$, $W_g$, and $W_a$ are learned.", "We concatenate $m_p$ and $h^q_t$ and use a multi-layer perceptron (MLP) to predict $p_t$.", "The loss is then: $L(ptr) = -\frac{1}{k} \sum_{t=0}^{k-1} \log P_{ptr}(p_t \mid p_{0:t-1})$ (3).", "We train on gold evidence and perform inference with beam search for both document ranking (Section 5.1) and joint sentence selection and relation prediction (Section 5.2).", "In order to obtain representations as input to the pointer network for document ranking, we leverage the fact that Wikipedia articles all have a title (e.g., [Barack Obama]), and fine-tune BERT on title and claim pairs, in lieu of examining the entire document text (which due to its length is not suitable for BERT).", "Because the title often overlaps lexically with the claim (e.g., [Michelle Obama]), we can train the model to locate the title in the claim.", "Furthermore, the words in the title co-occur with words in the article (e.g., Barack and Michelle), which the pre-trained BERT language model may be attuned to.", "We thus fine-tune a classifier on a dataset created from title and claim pairs (where positive examples are titles of gold evidence pages and negative examples are randomly sampled from our candidate set), obtaining 90.0% accuracy.", "Given the fine-tuned model, we extract features using Equation 1 where $e_p$ is a title, and use Equation 3 to learn to predict a sequence of titles as in Figure 3.",
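A minimal sketch of the multi-hop attention in Eq. 2 with toy dimensions; the real model feeds BERT [CLS] features and decodes with beam search, so everything here is a stand-in:

```python
import torch

# One decoder timestep of the pointer network's q-hop attention (Eq. 2 above).
# m: (n, d) candidate-evidence features; h starts from the decoder state z_t.
d, n, q = 8, 5, 2
W_g, W_a = torch.randn(d, d), torch.randn(d, d)
v_h = torch.randn(d)
m = torch.randn(n, d)
h = torch.randn(d)          # h^0_t, initialized from the decoder state z_t

for _ in range(q):          # q hops over the evidence set
    scores = torch.tanh(m @ W_g.T + h @ W_a.T) @ v_h   # (n,)
    alpha = torch.softmax(scores, dim=0)
    h = alpha @ (m @ W_g.T)        # h^o_t = sum_j alpha_j * (W_g m_j)

# The extraction probability P(p_t | ...) then comes from an MLP over [m_p ; h^q_t].
```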
"The sentence selection and relation prediction tasks are closely linked, as predicting the correct evidence is necessary for predicting S or R, and the representation should reflect the interaction between a claim and an evidence set.", "Conversely, if a claim and an evidence set are unrelated, the model should predict NEI.", "We thus jointly model this interaction by sharing the parameters of the pointer network: the hidden state of the decoder is used for both tasks and the models differ only by a final MLP.", "Sentence Selection Similar to our document selection fine-tuning approach, we fine-tune a classifier on claim and evidence sentence pairs to obtain BERT embeddings.", "However, instead of training a binary classifier for the presence of valid evidence, we train directly on veracity relation prediction, which is better suited for the end task.", "We create a dataset by pairing each claim with its set of gold evidence sentences.", "As gold evidence is not available for NEI relations, we sample sentences from our candidate documents to maintain a balanced dataset.", "We then fine-tune a BERT classifier on relation prediction, obtaining 93% accuracy.", "Given the fine-tuned model, we extract features using Equation 1 where $e_p$ is a sentence, and use Equation 3 to learn to predict a sequence of sentences.", "Relation Prediction In order to closely link relation prediction with evidence prediction, we reframe the task as a sequence labeling task.", "In other words, rather than make a single prediction given all evidence sentences, we make one prediction at every timestep during decoding to model the relation between the claim and all evidence retrieved to that point.", "This approach provides three benefits: it allows the model to better handle noise (when an incorrect evidence sentence is predicted), to handle multi-hop inference (to model the occurrence of switching from NEI to S/R), and to effectively provide more training data (for k = 5 timesteps we have 5 times as many relation labels).", "For the claim in Figure 3, the initial label sequence is NEI and R because the first evidence sentence by itself (the fact that Barack Obama was born in Hawaii) would not refute the claim.", "Furthermore, for k = 5, the remaining sequence would be R, R, R, as additional evidence (guaranteed to be non-contradictory in FEVER) would not change the prediction.", "On the other hand, given a claim that requires only a single piece of evidence, such as that in Figure 1, the sequence would be R, R, R, R, R if the correct evidence sentence was selected at the first timestep, NEI, R, R, R, R if the correct evidence sentence was selected at the second timestep, and so forth.", "We augment the evidence sentence selection described previously to use the hidden state of the pointer network after $q$ hops (Equation 2) and an MLP to also predict a label at that time step, closely linking evidence and label prediction: $P(l_t) = \mathrm{softmax}(W_{l_2} \tanh(W_{l_1} h^o_t))$ (4).", "As with evidence prediction (Equation 3), when the gold label sequence is available, the loss term is: $L(rel\_seq) = -\frac{1}{k} \sum_{t=0}^{k-1} \log P_{rel\_seq}(l_t)$ (5).", "When training, at the current timestep we use both the gold evidence, i.e., teacher forcing (Williams and Zipser, 1989), and the model prediction from the previous step, so that we have training data for NEI.",
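The per-timestep label sequences described above follow a simple rule; a small sketch:

```python
# The relation label at step t reflects all evidence retrieved up to t, so the
# sequence switches from NEI to the final relation once the decisive evidence
# sentence has been selected (per the Figure 1 and Figure 3 examples above).
def label_sequence(final_label: str, evidence_found_at: int, k: int = 5):
    return ["NEI"] * evidence_found_at + [final_label] * (k - evidence_found_at)

print(label_sequence("R", evidence_found_at=1))  # ['NEI', 'R', 'R', 'R', 'R']
print(label_sequence("R", evidence_found_at=0))  # ['R', 'R', 'R', 'R', 'R']
```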
"Combining Equations 3 and 5, our loss is: $L(\theta) = L(ptr) + L(rel\_seq)$ (6).", "Finally, to predict a relation at inference, we ensemble the sequence of predicted labels by averaging the probabilities over every time step.", "(Footnote: The subset of timesteps was determined empirically: while at the final timestep the model is likely to have seen the correct evidence, it also contains more noise; in future work we will experiment with alternatives.)", "Post-processing for Simple Temporal Reasoning As neural models are unreliable for handling numerical statements, we introduce a rule-based step to extract and reason about dates.", "We use the Open Information Extraction system of Stanovsky et al. (2018) to extract tuples.", "For example, given the claim 'The Latvian Soviet Socialist Republic was a republic of the Soviet Union 3 years after 2009', the system would identify ARG0 as preceding the verb 'was' and ARG1 following.", "After identifying tuples in claims and predicted sentences, we discard those lacking dates (e.g., ARG0).", "Given more than one candidate sentence, we select the one ranked higher by the pointer network.", "Once we have both the claim and evidence date-tuple, we apply one of three rules to resolve the relation prediction based on the corresponding temporal phrase.", "We either evaluate whether the evidence date is between two dates in the claim (e.g., between/during/in), add/subtract x years from the date in the claim and compare to the evidence date (e.g., x years/days before/after), or compare the claim date directly to the evidence date (e.g., before/after/in).", "For the date expression '3 years after 2009', we compare the year 2012 to the date in the retrieved evidence (1991, the year the USSR dissolved) and label the claim as R.", "We evaluate our dataset and system as part of the FEVER 2.0 shared task in order to validate the vulnerabilities introduced by our adversarial claims (Section 4) and the solutions proposed by our system (Section 5).", "We train our system on FV1-train and evaluate on FV1/FV2-dev/test (Section 3).", "We report accuracy (percentage of correct labels) and recall (whether the gold evidence is contained in selected evidence at k = 5).", "We also report the FEVER score, the percentage of correct evidence sentences (for S and R) that also have correct labels, and potency, the inverse FEVER score (subtracted from one) for evaluating adversarial claims.", "Our Baseline-RL: For baseline experiments, to compare different loss functions, we use the approach of Chakrabarty et al. (2018) for document selection and ranking, the reinforcement learning (RL) method of Chen and Bansal (2018) for sentence selection, and BERT (Devlin et al., 2019) for relation prediction.", "The RL approach using a pointer network is detailed by Chen and Bansal (2018) for extractive summarization, with the only difference that we use our fine-tuned BERT on claim/gold sentence pairs to represent each evidence sentence in the pointer network (as with our full system) and use the FEVER score as a reward.", "The reward is obtained by selecting sentences with the pointer network and then predicting the relation using an MLP (updated during training) and the concatenation of all claim/predicted sentence representations with their maximum/minimum pooling.", "Hyper-parameters and settings for all experiments are detailed in Appendix B.",
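A sketch of the arithmetic date rule described earlier in this section; only one of the three rules is shown, and the label strings are shorthand:

```python
import re

# Resolve a claim phrase like "3 years after 2009" to a year and compare it
# with the date found in the retrieved evidence (arithmetic rule only).
def check_arithmetic(claim_phrase: str, evidence_year: int) -> str:
    m = re.match(r"(\d+) years (before|after) (\d{4})", claim_phrase)
    if not m:
        return "NEI"
    offset, direction, anchor = int(m.group(1)), m.group(2), int(m.group(3))
    claimed = anchor + offset if direction == "after" else anchor - offset
    return "S" if claimed == evidence_year else "R"

# "3 years after 2009" -> 2012, but the evidence year is 1991, so: R
print(check_arithmetic("3 years after 2009", evidence_year=1991))
```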
"6.1 Adversarial Dataset Evaluation We present the performance of our adversarial claims, obtained by submitting to the shared task server.", "We compare our claims to those of other participants in the FEVER 2.0 shared task (Table 2) and broken down by attack type (Table 3).", "Potency was macro-averaged across different fact-checking systems (Thorne and Vlachos, 2019), correctness of labels was verified by shared task annotators, and adjusted potency was calculated by the organizers as the potency of correct examples.", "Compared to other participants (Table 2), we presented a larger set of claims (501 in dev and 499 in test).", "We rank second in adjusted potency, but we provided a more diverse set than those created by the organizers or other participants.", "The organizers (Thorne and Vlachos, 2019) created adversarial claims using simple pattern-matching and replacement, e.g., quantifiers and negation.", "Niewinski et al. (2019) trained a GPT-2-based model on the FEVER data and manually filtered disfluent claims.", "Kim and Allan (2019) considered a variety of approaches, the majority of which required understanding area comparisons between different regions or understanding implications (e.g., that 'not clear' implies NEI).", "While GPT-2 is effective, our approach is controllable and targeted at real-world challenges.", "Finally, Table 3 shows that when we select our top 200 most effective examples (multi-hop reasoning and multi-hop temporal reasoning) and compare to the approaches of Niewinski et al. (2019) and Kim and Allan (2019) (who both provided fewer than 204 examples total), our potency is much higher.", "In particular, multi-hop reasoning has a potency of 88% for SUPPORT relations and 93% for REFUTES relations, and multi-hop temporal reasoning obtains 98% for SUPPORT and REFUTES relations.", "Table 2: The evaluation of our claims relative to other participants.
Team                     #    Pot.   Corr.  Adj.
Organizer Baseline       498  60.34  82.33  49.68
Kim and Allan (2019)     102  79.66  64.71  51.54
Ours                     501  68.51  81.44  55.79
Niewinski et al. (2019)  79   79.97  84.81  66.83",
"Our dual pointer network approach obtains state-of-the-art results on the FEVER 1.0 blind test set (Table 4) on all measures, even over systems designed specifically for evidence retrieval (Nishida et al., 2019; Zhou et al., 2019).", "We also find improvements in accuracy over the remaining pipeline systems, suggesting that joint learning helps.", "Compared to Our Baseline-RL, Our System has a 1.8-point improvement in FEVER score on FV1-test and a 4-point improvement on FV2-test.", "Notably, our system finishes second (with a score of 36.61) on the FEVER 2.0 shared task test set, even though our claims were designed to be challenging for our model.", "The model of Malon (2018) performs especially well; they use a transformer-based architecture without pre-training but focus only on single-hop claims.", "We present ablation studies in Tables 6 and 7 on FV1 and FV2 dev, respectively.", "Table 6 presents the effect of using different objective functions for sentence selection and relation prediction, compared to joint sentence selection and relation prediction in our full model.", "We compare Our System to Our Baseline-RL system as well as another baseline (Ptr).", "The Ptr system is the same as Our Baseline-RL, except the pointer network and MLP are not jointly trained with RL but independently, using gold evidence and predicted evidence and relations, respectively.", "Finally, the Oracle upper bound presents the maximum possible recall after our document ranking stage, compared to 94.4% for Chakrabarty et al. (2018), and relation accuracy (given the MLP trained on 5 sentences guaranteed to contain gold evidence).", "We find that by incorporating the relation sequence loss, we improve the evidence recall significantly relative to the oracle upper bound, reducing the relative error by 50% while also obtaining improvements on relation prediction, even over a strong RL baseline.", "Overall, the best model is able to retrieve 95.9% of the possible gold sentences after the document selection stage, suggesting that further improvements are more likely to come from document selection.", "Table 7 evaluates the impact of the document pointer network and rule-based date handling on FV2-dev, as the impact of multi-hop reasoning and temporal relations is less visible on FV1-dev.", "We again compare Our Baseline-RL system to Our System and find an even larger 7.16-point improvement in FEVER score.", "We find that ablating the date post-processing (-dateProc) and both the date post-processing and document ranking components (-dateProc, -docRank) reduces the FEVER score by 1.45 and 3.5 points, respectively, with the latter largely resulting from a 5-point decrease in recall.", "While Table 3 presents the macro-average of all systems by attack type, we compare the performance of Our Baseline-RL and Our System in Table 8.", "(Footnote: Our system is significantly better on all metrics, p < 0.001 by the approximate randomization test.)", "Our System improves on evidence recall for multi-hop claims (indicating that a multi-hop document retrieval step may help) and those with ambiguous entities or words (using a model to re-rank may remove false matches with high lexical similarity).",
"For example, the claim 'Honeymoon is a major-label record by Elizabeth Woolridge Grant.' requires multi-hop reasoning over entities.", "Our System correctly retrieves the pages [Lana Del Rey] and [Honeymoon (Lana Del Rey album)], but Our Baseline-RL is misled by the incorrect page [Honeymoon].", "However, while recall increases on multi-hop claims compared to the baseline, accuracy decreases, suggesting the model may be learning a bias of the claim or label distribution instead of relations between claims and evidence.", "We also obtain large improvements on date manipulation examples (here a rule-based approach is better than our neural one); in contrast, multi-hop temporal reasoning leaves room for improvement.", "For instance, for the claim 'The MVP of the 1976 Canada Cup tournament was born before the tournament was first held', our full system correctly retrieves [Bobby Orr] and [1976 Canada Cup] (unlike the RL baseline).", "However, a further inference step is needed beyond our current capabilities: reasoning that Orr's birth year (1948) is before the first year of the tournament (1976).", "Finally, we enhance performance on multiple propositions such as conjunctions or additional unverifiable information (indicating that relation sequence prediction helps).", "Claims (non-verifiable phrase in brackets) such as 'Taran Killam is a [stage] actor.' and 'Home for the Holidays stars an actress [born in Georgia].' are incorrectly predicted by the baseline even though correct evidence is retrieved.", "Table 8: Attack results for our FV2-dev claims. B: Our Baseline-RL, S: Our System. *: p < 0.05, **: p < 0.01, ***: p < 0.001 by approximate randomization test.
Attack Type      Model  Acc.   Rec.   FEVER
Conjunction      B      16.95  92.0   16.95
                 S      40.68  92.0   40.68
Multi-hop        B      55.81  29.07  19.77
                 S      33.72  45.35  17.44
Add. Unver.      B      48.0   -      48.0
                 S      80.0   -      80.0
Date Manip.      B      30.99  79.59  27.46
                 S      53.52  79.59  42.25
Multi-hop Temp.  B      3.33   10.34  0.0
                 S      3.33   13.79  0.0
Entity Disamb.   B      70.83  62.5   58.33
                 S      79.17  79.17  70.83
Lexical Sub.     B      33.33  65.71  25.0
                 S      29.76  75.71  26.19", "We showed weaknesses in approaches to fact-checking via novel adversarial claims.", "We took steps towards realistic fact-checking with targeted improvements to multi-hop reasoning (by a document pointer network and a pointer network for sequential joint sentence selection and relation prediction), simple temporal reasoning (by rule-based date handling), and ambiguity and variation (by fine-tuned contextualized representations).", "There are many unaddressed vulnerabilities that are relevant for fact-checking.", "The Facebook bAbI tasks (Weston et al., 2016) include other types of reasoning (e.g., positional or size-based).", "The DROP dataset (Dua et al., 2019) requires mathematical operations for question answering, such as addition or counting.", "Propositions with causal relations (Hidey and McKeown, 2016), which are event-based rather than attribute-based as in FEVER, are also challenging.", "Finally, many verifiable claims are non-experiential (Park and Cardie, 2014), e.g. 
personal testimonies, which would require predicting whether a reported event was actually possible.", "Finally, our system could be improved in many ways.", "Future work in multi-hop reasoning could represent the relation between consecutive pieces of evidence and future work in temporal reasoning could incorporate numerical operations with BERT (Andor et al., 2019).", "One limitation of our system is the pipeline nature, which may require addressing each type of attack individually as adversaries adjust their techniques.", "An end-to-end approach or a query reformulation step (re-writing claims to be similar to FEVER) might make the model more resilient as new attacks are introduced.", "The authors thank Kathy McKeown, Chris Kedzie, Fei-Tzin Lee, and Emily Allaway for their helpful comments on the initial draft of this paper and the anonymous reviewers for insightful feedback." ]
[ "abstain", "abstain", "result", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "other", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other" ]
[ "We propose a distance supervised relation extraction approach for long-tailed, imbalanced data which is prevalent in real-world settings.", "Here, the challenge is to learn accurate few-shot models for classes existing at the tail of the class distribution, for which little data is available.", "Inspired by the rich semantic correlations between classes at the long tail and those at the head, we take advantage of the knowledge from data-rich classes at the head of the distribution to boost the performance of the data-poor classes at the tail.", "First, we propose to leverage implicit relational knowledge among class labels from knowledge graph embeddings and learn explicit relational knowledge using graph convolution networks.", "Second, we integrate that relational knowledge into relation extraction model by coarse-to-fine knowledge-aware attention mechanism.", "We demonstrate our results for a large-scale benchmark dataset which show that our approach significantly outperforms other baselines, especially for long-tail relations.", "Relation extraction (RE) is an important task in information extraction, aiming to extract the relation between two given entities based on their related context.", "Due to the capability of extracting textual information and benefiting many NLP applications (e.g., information retrieval, dialog generation, and question answering), RE appeals to many researchers.", "Conventional supervised models have been widely explored in this task (Zelenko et al., 2003; Zeng et al., 2014); however, their performance heavily depends on the scale and quality of training data.", "To construct large-scale data, (Mintz et al., 2009) proposed a novel distant supervision (DS) mechanism to automatically label training instances by aligning existing knowledge graphs (KGs) with text.", "DS enables RE models to work on large-scale training corpora and has thus become a primary approach for RE recently (Wu et al., 2017; Feng et al., 2018).", "Although these DS models achieve promising results on common relations, their performance still degrades dramatically when there are only a few training instances for some relations.", "Empirically, DS can automatically annotate adequate amounts of training data; however, this data usually only covers a limited part of the relations.", "Many relations are long-tail and still suffer from data deficiency.", "Current DS models ignore the problem of long-tail relations, which makes it challenging to extract comprehensive information from plain text.", "Long-tail relations are important and cannot be ignored.", "Nearly 70% of the relations are longtail in the widely used New York Times (NYT) dataset 1 (Riedel et al., 2010; Lei et al., 2018) as shown in Figure 1. Therefore, it is crucial for mod-1 http://iesl.cs.umass.edu/riedel/ecml/ els to be able to extract relations with limited num-bers of training instances.", "Dealing with long tails is very difficult as few training examples are available.", "Therefore, it is natural to transfer knowledge from data-rich and semantically similar head classes to data-poor tail classes (Wang et al., 2017).", "For example, the long-tail relation /peo-ple/deceased person/place of burial and head relation /people/deceased person/place of death are in the same branch /people/deceased person/* as shown in Figure 2. 
They are semantically similar, and it is beneficial to leverage head relational knowledge and transfer it to the long-tail relation, thus enhancing general performance.", "In other words, long-tail relations of one entity tuple can have class ties with head relations, which can be leveraged to enhance RE by narrowing potential search spaces and reducing uncertainties between relations when predicting unknown relations (Ye et al., 2017).", "If one pair of entities contains /people/deceased person/place of death, there is a high probability that it will contain /people/deceased person/place of burial.", "If we can incorporate the relational knowledge between two relations, extracting head relations will provide evidence for the prediction of long-tail relations.", "However, there exist two problems: (1) Learning relational knowledge: Semantically similar classes may contain more relational information that will boost transfer, whereas irrelevant classes (e.g., /location/location/contains and /people/family/country) usually contain less relational information, which may result in negative transfer.", "(2) Leveraging relational knowledge: Integrating relational knowledge into existing RE models is challenging.", "To address the problem of learning relational knowledge, as shown in (Lin et al., 2016; Ye et al., 2017), we use class embeddings to represent relation classes and utilize KG embeddings and graph convolution networks (GCNs) to extract implicit and explicit relational knowledge.", "Specifically, previous studies (Yang et al., 2015) have shown that the embeddings of semantically similar relations are located near each other in the latent space.", "For instance, the relations /people/person/place lived and /people/person/nationality are more relevant, whereas the relation /people/person/profession has less correlation with the former two relations.", "(Figure 2 example instances: '[ismail_merchant], whose filmmaking collaboration with james_ivory created a genre of films with visually sumptuous settings that told literate tales of individuals trying to adapt to shifting societal values, died yesterday in a [London] hospital.' and '[darren_mcgavin], an actor with hundreds of television, movie and theatrical credits to his name, died on saturday in [los_angeles].')", "Thus, it is natural to leverage this knowledge from KGs.", "However, because there are many one-to-multiple relations in KGs, the relevant information for each class may be scattered.", "In other words, there may not be enough relational signal between classes.", "Therefore, we utilize GCNs to learn explicit relational knowledge.", "To address the problem of leveraging relational knowledge, we first use convolutional neural networks (Zeng et al., 2014, 2015) to encode sentences; we then introduce a coarse-to-fine knowledge-aware attention mechanism for combining relational knowledge with encoded sentences into bag representation vectors.", "The relational knowledge not only provides more information for relation prediction but also provides a better reference message for the attention module to raise the performance of long-tail classes.", "Our experimental results on the NYT dataset show that: (1) our model is effective compared to baselines, especially for long-tail relations; (2) leveraging relational knowledge enhances RE, and our model is efficient in learning relational knowledge via GCNs.",
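To ground the GCN component just mentioned, here is a minimal sketch of one graph-convolution layer over a toy label hierarchy; the adjacency matrix, dimensions, and initialization are stand-ins, not the paper's configuration:

```python
import torch

# One GCN layer, H' = ReLU(A_hat @ H @ W), over a toy hierarchy of 4 relation
# labels (e.g., a parent node such as /people/deceased_person/* with children).
# A_hat is the symmetrically normalized adjacency with self-loops
# (Kipf and Welling, 2016).
A = torch.tensor([[0., 1., 1., 0.],
                  [1., 0., 0., 0.],
                  [1., 0., 0., 1.],
                  [0., 0., 1., 0.]])
A_self = A + torch.eye(4)
d_inv_sqrt = A_self.sum(1).rsqrt().diag()
A_hat = d_inv_sqrt @ A_self @ d_inv_sqrt

H = torch.randn(4, 16)               # input label embeddings (e.g., from TransE)
W = torch.randn(16, 16)
H_next = torch.relu(A_hat @ H @ W)   # explicit relational knowledge per label
```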
"Relation Extraction.", "Supervised RE models (Zelenko et al., 2003; GuoDong et al., 2005; Mooney and Bunescu, 2006) require adequate amounts of annotated data for training, which is time-consuming.", "Hence, (Mintz et al., 2009) proposed DS to automatically label data.", "DS inevitably suffers from the wrong labeling problem.", "To alleviate the noise issue, (Riedel et al., 2010; Hoffmann et al., 2011) proposed multi-instance learning (MIL) mechanisms.", "Recently, neural models have been widely used for RE; those models can accurately capture textual relations without explicit linguistic analysis (Zeng et al., 2015; Lin et al., 2016; Zhang et al., 2018a).", "To further improve the performance, some studies incorporate external information (Zeng et al., 2017; Ji et al., 2017; Han et al., 2018a) and advanced training strategies (Ye et al., 2017; Liu et al., 2017; Huang and Wang, 2017; Feng et al., 2018; Zeng et al., 2018; Wu et al., 2017; Qin et al., 2018).", "These works mainly adopt DS to make large-scale datasets and reduce the noise caused by DS, regardless of the effect of long-tail relations.", "There are only a few studies on long-tail RE (Gui et al., 2016; Lei et al., 2018; Han et al., 2018b).", "Of these, (Gui et al., 2016) proposed an explanation-based approach, whereas (Lei et al., 2018) utilized external knowledge (logic rules).", "These studies treat each relation in isolation, regardless of the rich semantic correlations between the relations.", "(Han et al., 2018b) proposed a hierarchical attention scheme for RE, especially for long-tail relations.", "Different from those approaches, we leverage implicit and explicit relational knowledge from KGs and GCNs rather than data-driven learned parameter spaces, where similar relations may have distinct parameters, hindering the generalization of long-tail classes.", "Knowledge Graph Embedding.", "Recently, several KG embedding models have been proposed.", "These methods learn low-dimensional vector representations for entities and relations (Bordes et al., 2013; Wang et al., 2014; Lin et al., 2015).", "TransE (Bordes et al., 2013) is one of the most widely used models, which views relations as translations from a head entity to a tail entity on the same low-dimensional hyperplane.", "Inspired by the rich knowledge in KGs, recent works (Han et al., 2018a; Wang et al., 2018; Lei et al., 2018) extend DS models under the guidance of KGs.", "However, these works neglect the rich correlations between relations.", "Relation structure (relational knowledge) has been studied and is quite effective for KG completion (Zhang et al., 2018b).", "To the best of our knowledge, this is the first effort to consider the relational knowledge of classes (relations) using KGs for RE.", "Graph Convolutional Networks.", "GCNs generalize CNNs beyond two-dimensional and one-dimensional spaces.", "(Defferrard et al., 2016) developed spectral methods to perform efficient graph convolutions.", "(Kipf and Welling, 2016) assumed the graph structure is known over input instances and applied GCNs for semi-supervised learning.", "GCNs were applied to relational data (e.g., link prediction) by (Schlichtkrull et al., 2018).", "GCNs have also had success in other NLP tasks such as semantic role labeling (Marcheggiani and Titov, 2017), dependency parsing (Strubell and McCallum, 2017), and machine translation (Bastings et al., 2017).", "Two GCN studies share similarities with our work.", "(1) (Chen et al., 2017) used GCNs on structured label spaces.", "However, their experiments do not handle long-tail labels and do not incorporate attention but use an average of word vectors to represent each document.", "(2) (Rios and Kavuluru, 2018) proposed a few-shot and zero-shot text classification method by exploiting structured label spaces with GCNs.",
", "(2) (Rios and Kavuluru, 2018) proposed a few-shot and zero-shot text classification method by exploiting structured label spaces with GCNs.", "However, they used GCNs on the label graph, whereas we utilize GCNs on the hierarchy graph of labels.", "In this section, we introduce the overall framework of our approach for RE, starting with the notation.", "We denote a KG as G = {E, R, F}, where E, R and F indicate the sets of entities, relations and facts, respectively.", "(h, r, t) \in F indicates that there is a relation r \in R between h \in E and t \in E.", "We follow the MIL setting and split all instances into multiple entity-pair bags {S_{h_1,t_1}, S_{h_2,t_2}, ...}.", "Each bag S_{h_i,t_i} contains multiple instances {s_1, s_2, ...} mentioning both entities h_i and t_i.", "Each instance s in these bags is denoted as a word sequence s = {w_1, w_2, ...}.", "Instance Encoder.", "Given an instance and its mentioned entity pair, we employ neural networks to encode the instance semantics into a vector.", "In this study, we implement the instance encoder with convolutional neural networks (CNNs), considering both model performance and time efficiency.", "Relational Knowledge Learning.", "Given pretrained KG embeddings (e.g., TransE (Bordes et al., 2013)) as implicit relational knowledge, we employ GCNs to learn explicit relational knowledge.", "By assimilating generic message-passing inference algorithms with their neural-network counterparts, we can learn better embeddings for knowledge relations.", "We concatenate the outputs of the GCNs and the pretrained KG embeddings to form the final class embeddings.", "Knowledge-aware Attention.", "Under the guidance of the final class embeddings, knowledge-aware attention aims to select the most informative instances that exactly match the relevant relation.", "Given an instance s = {w_1, ..., w_n} mentioning two entities, we encode the raw instance into a continuous low-dimensional vector x via an encoder that consists of an embedding layer and an encoding layer.", "Embedding Layer.", "The embedding layer is used to map the discrete words in the instance into continuous input embeddings.", "Given an instance s, we map each word w_i in the instance to a real-valued pretrained Skip-Gram (Mikolov et al., 2013) embedding w_i \in R^{d_w}.", "We adopt position embeddings following (Zeng et al., 2014).", "For each word w_i, we embed its relative distances to the two entities into two d_p-dimensional vectors.",
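A minimal sketch of the instance encoder described above, assuming PyTorch and illustrative hyper-parameters (the names and dimensions are ours, not the authors'); a plain CNN with max pooling is shown, while PCNN would replace the final pooling with piecewise pooling over the three segments split by the two entities:

```python
import torch
import torch.nn as nn

class InstanceEncoder(nn.Module):
    """Word + position embeddings followed by a 1-D convolution (CNN encoder)."""
    def __init__(self, vocab_size, d_w=50, d_p=5, max_dist=120, n_filters=230, window=3):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, d_w)        # pretrained Skip-Gram vectors
        self.pos1_emb = nn.Embedding(2 * max_dist + 1, d_p)  # relative distance to head entity
        self.pos2_emb = nn.Embedding(2 * max_dist + 1, d_p)  # relative distance to tail entity
        self.conv = nn.Conv1d(d_w + 2 * d_p, n_filters, window, padding=window // 2)

    def forward(self, words, pos1, pos2):
        # words, pos1, pos2: (batch, seq_len) integer ids
        x = torch.cat([self.word_emb(words),
                       self.pos1_emb(pos1),
                       self.pos2_emb(pos2)], dim=-1)         # (batch, seq_len, d_w + 2*d_p)
        h = torch.tanh(self.conv(x.transpose(1, 2)))         # (batch, n_filters, seq_len)
        return h.max(dim=2).values                           # max pooling -> instance embedding s
```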
"We then concatenate the word embeddings and position embeddings to achieve the final input embeddings for each word and gather all the input embeddings in the instance.", "We thus obtain an embedding sequence ready for the encoding layer.", "Encoding Layer.", "The encoding layer aims to compose the input embeddings of a given instance into its corresponding instance embedding.", "In this study, we choose two convolutional neural architectures, CNN (Zeng et al., 2014) and PCNN (Zeng et al., 2015) to encode input embeddings into instance embeddings.", "Other neural architectures such as recurrent neural networks (Zhang and Wang, 2015) can also be used as sentence encoders.", "Because previous works show that both convolutional and recurrent architectures can achieve comparable state-of-the-art performance, we select convolutional architectures in this study.", "Note that, our model is independent of the encoder choices, and can, therefore, be easily adapted to fit other encoder architectures.", "Given pretrained KG embeddings and predefined class (relation) hierarchies 2 , we first leverage the implicit relational knowledge from KGs and initialize the hierarchy label graph; then we apply two layer GCNs to learn explicit fine-grained relational knowledge from the label space.", "Hierarchy Label Graph Construction.", "Given a relation set R of a KGG (e.g., Freebase), which consists of base-level relations (e.g., /peo-ple/person/ethnicity), we can generate the corresponding higher-level relation set RH .", "The relations in a high-level set (e.g., people) are more general and common; they usually contain several sub-relations in the base-level set.", "The relation hierarchies are tree-structured, and the generation process can be done recursively.", "We use a virtual father node to construct the highest level associa-2 For datasets without predefined relation hierarchies, hierarchy clustering (Johnson, 1967) or K-means can construct relation hierarchies (Zhang et al., 2018b); details can be found in supplementary materials.", "tions between relations as shown in Figure 3. 
"In practice, we start from R_0 = R, which is the set of all relations we focus on for RE, and the generation process is performed L - 1 times to get the hierarchical relation sets {R_0, R_1, ..., R_L}, where R_L is the virtual father node.", "Each node has a specific type t \in {0, 1, ..., L} to identify its layer in the hierarchy.", "For example, as shown in Figure 3, the node /people/person/ethnicity has type 0 to indicate that it is in the bottom layer of the graph.", "The vectors of the nodes in the bottom layer are initialized with pretrained TransE (Bordes et al., 2013) KG embeddings.", "Other KG embeddings such as TransR (Lin et al., 2015) can also be adopted.", "Their parent nodes are initialized by averaging the vectors of all their children.", "For example, the node vector of /people/person/ is initialized by averaging all the nodes under the branch /people/person/* (all child nodes).", "GCN Output Layer.", "Due to one-to-multiple relations and incompleteness in KGs, the implicit relevant information obtained by KG embeddings for each label is not enough.", "Therefore, we apply GCNs to learn explicit relational knowledge among labels.", "We take advantage of the structured knowledge over our label space using a two-layer GCN.", "Starting with the pretrained relation embedding v_i^{implicit} \in R^d from the KG, we combine the label vectors of the children and parents of the i-th label to form v_i^1 = f(W^1 v_i + \sum_{j \in N_p} W_p^1 v_j / |N_p| + \sum_{j \in N_c} W_c^1 v_j / |N_c| + b_g^1), (1) where W^1 \in R^{q \times d}, W_p^1 \in R^{q \times d}, W_c^1 \in R^{q \times d}, b_g^1 \in R^q, f is the rectified linear unit (Nair and Hinton, 2010), and N_c (N_p) is the index set of the i-th label's children (parents).", "We use different parameters to distinguish each edge type, where parent edges represent all edges from higher-level labels and child edges represent all edges from lower-level labels.", "The second layer follows the same formulation as the first layer and outputs v_i^{explicit}.", "Finally, we concatenate the pretrained v_i^{implicit} with the GCN node vector v_i^{explicit} to form the hierarchy class embeddings, q_r = v_i^{implicit} || v_i^{explicit}, (2) where q_r \in R^{d+q}.", "Traditionally, the output layer of a PCNN/CNN would learn label-specific parameters optimized by a cross-entropy loss.", "However, label-specific parameter spaces are unique to each relation, so the matrices associated with long-tail relations are exposed to very few facts during training, resulting in poor generalization.", "Instead, our method attempts to match sentence vectors to their corresponding class embeddings rather than learning label-specific attention parameters.", "In essence, this becomes a retrieval problem.", "The relevant information in the class embeddings contains useful relational knowledge for long-tail labels.", "Practically, given the entity pair (h, t) and its bag of instances S_{h,t} = {s_1, s_2, ..., s_m}, we obtain the instance embeddings {s_1, s_2, ..., s_m} using the sentence encoder.", "We group the class embeddings according to their type (i.e., according to their layer in the hierarchy label graph), e.g., q_{r^i}, i \in {0, 1, ..., L}.", "We adopt q_{r^i}, i \neq L (layer L is the virtual father node), as the layer-wise attention query vectors.", "Then, we apply coarse-to-fine knowledge-aware attention to them to obtain the textual relation representation r_{h,t}.", "For a relation r, we construct its hierarchical chain of parent relations (r^0, ..., r^{L-1}) using the hierarchy label graph, where r^{i-1} is the sub-relation of r^i.",
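The two-layer GCN of Eqs. (1)-(2) can be sketched as follows, assuming precomputed row-normalized parent and child aggregation matrices P and C (row i of P averages the parents of label i, and C does the same for the children); this is an illustration of the formulation above, not the authors' implementation:

```python
import torch
import torch.nn as nn

class HierGCNLayer(nn.Module):
    """One layer of Eq. (1): separate parameters for self, parent, and child edges."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.w_self = nn.Linear(d_in, d_out, bias=False)
        self.w_par = nn.Linear(d_in, d_out, bias=False)
        self.w_chi = nn.Linear(d_in, d_out, bias=False)
        self.bias = nn.Parameter(torch.zeros(d_out))

    def forward(self, V, P, C):
        # V: (N, d_in) label vectors; P, C: (N, N) mean-pooled adjacency matrices
        return torch.relu(self.w_self(V) + self.w_par(P @ V) + self.w_chi(C @ V) + self.bias)

def class_embeddings(v_implicit, P, C, layer1, layer2):
    """Eq. (2): concatenate pretrained TransE vectors with the two-layer GCN output."""
    v_explicit = layer2(layer1(v_implicit, P, C), P, C)
    return torch.cat([v_implicit, v_explicit], dim=-1)   # q_r = v_implicit || v_explicit
```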
"We propose the following formulas to compute the attention weight (similarity or relatedness) between each instance's feature vector s_k and the query q_{r^i}: e_k = W_s (tanh[s_k; q_{r^i}]) + b_s, \alpha_{ik} = exp(e_k) / \sum_{j=1}^{m} exp(e_j), (3) where [x_1; x_2] denotes the vertical concatenation of x_1 and x_2, W_s is a weight matrix, and b_s is the bias.", "We compute the attention operation on each layer of the hierarchy label graph to obtain the corresponding textual relation representations, r_{h,t}^i = ATT(q_{r^i}, {s_1, s_2, ..., s_m}). (4)", "Then we need to combine the relation representations from the different layers.", "Direct concatenation of all the representations is a straightforward choice.", "However, different layers make different contributions for different tuples.", "For example, the relation /location/br_state/ has only one sub-relation, /location/br_state/capital, which indicates that it is more important.", "In other words, if a sentence has high attention weight on the relation /location/br_state/, it has a very high probability of expressing the relation /location/br_state/capital.", "Hence, we use an attention mechanism to weight the layers: g_i = W_g tanh(r_{h,t}^i), \beta_i = exp(g_i) / \sum_{j=0}^{L-1} exp(g_j), (5) where W_g is a weight matrix and g_i can be viewed as a query-based score of how well the textual relation representation of layer i and the predicted relation r match.", "The textual relation representation of each layer is then reweighted as \tilde{r}_{h,t}^i = \beta_i r_{h,t}^i. (6)", "We simply concatenate the textual relation representations of the different layers as the final representation, r_{h,t} = Concat(\tilde{r}_{h,t}^0, ..., \tilde{r}_{h,t}^{L-1}). (7)", "The representation r_{h,t} is finally fed in to compute the conditional probability P(r | h, t, S_{h,t}), P(r | h, t, S_{h,t}) = exp(o_r) / \sum_{r' \in R} exp(o_{r'}), (8) where o is the score over all relations, defined as o = M r_{h,t}, (9) where M is the representation matrix used to calculate the relation scores.", "Note that the attention query q_{r^i} is obtained from the outputs of the GCNs and the pretrained KG embeddings, which provide more informative parameters than purely data-driven learned parameters, especially for long-tail relations.", "We evaluate our models on the NYT dataset developed by (Riedel et al., 2010), which has been widely used in recent studies (Lin et al., 2016; Liu et al., 2017; Wu et al., 2017; Feng et al., 2018).", "The dataset has 53 relations including the NA relation, which indicates that the relation of an instance is not available.", "The training set has 522,611 sentences, 281,270 entity pairs, and 18,252 relational facts.", "In the test set, there are 172,448 sentences, 96,678 entity pairs, and 1,950 relational facts.", "In both the training and test sets, we truncate sentences with more than 120 words to 120 words.", "We evaluate all models with the held-out evaluation.", "It evaluates models by comparing the relational facts discovered from the test articles with those in Freebase and provides an approximate measure of precision without human evaluation.", "For evaluation, we draw precision-recall curves for all models.", "To further verify the effect of our model on long-tail relations, we follow previous studies (Han et al., 2018b) and report Precision@N results.", "The dataset and baseline code can be found on GitHub (https://github.com/thunlp/OpenNRE).", "To fairly compare the results of our models with the baselines, we set most of the experimental parameters following (Lin et al., 2016).", "We apply dropout on the output layers of our models to prevent overfitting.",
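A sketch of the coarse-to-fine knowledge-aware attention of Eqs. (3)-(7), again under illustrative naming assumptions; S holds the instance embeddings of one bag, and Q holds the per-layer query vectors q_{r^i} taken from the class embeddings:

```python
import torch
import torch.nn as nn

class KnowledgeAwareAttention(nn.Module):
    def __init__(self, d_s, d_q):
        super().__init__()
        self.w_s = nn.Linear(d_s + d_q, 1)         # Eq. (3): e_k = W_s tanh([s_k; q]) + b_s
        self.w_g = nn.Linear(d_s, 1, bias=False)   # Eq. (5): g_i = W_g tanh(r^i)

    def forward(self, S, Q):
        # S: (m, d_s) instance embeddings; Q: list of layer-wise queries, each (d_q,)
        reps = []
        for q in Q:
            q_rep = q.unsqueeze(0).expand(S.size(0), -1)
            e = self.w_s(torch.tanh(torch.cat([S, q_rep], dim=-1))).squeeze(-1)
            alpha = torch.softmax(e, dim=0)                    # Eq. (3)
            reps.append(alpha @ S)                             # Eq. (4): r^i = sum_k alpha_k s_k
        R = torch.stack(reps)                                  # (L, d_s)
        beta = torch.softmax(self.w_g(torch.tanh(R)).squeeze(-1), dim=0)  # Eq. (5)
        return (beta.unsqueeze(-1) * R).reshape(-1)            # Eqs. (6)-(7): weight, then concat
```

The final score o = M r_{h,t} of Eqs. (8)-(9) is then a single linear layer over this concatenated representation followed by a softmax across relations.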
"We also pretrain the sentence encoder of PCNN before training our model.", "Details of the hyper-parameter settings and the evaluation of different instances can be found in the supplementary materials.", "Table 1: Accuracy (%) of Hits@K on relations with fewer than 100/200 training instances (macro average):
                  < 100 training instances     < 200 training instances
                  Hits@10  Hits@15  Hits@20    Hits@10  Hits@15  Hits@20
CNN    +ATT         <5.0     <5.0     18.5       <5.0     16.2     33.3
       +HATT         5.6     31.5     57.4       22.7     43.9     65.1
       +KATT         9.1     41.3     58.5       23.3     44.1     65.4
PCNN   +ATT         <5.0      7.4     40.7       17.2     24.2     51.5
       +HATT        29.6     51.9     61.1       41.4     60.6     68.2
       +KATT        35.3     62.4     65.1       43.2     61.3     69.2", "To evaluate the performance of our proposed model, we compare the precision-recall curves of our model with those of various previous RE models.", "The evaluation results are shown in Figure 4 and Figure 5.", "We report the results of neural architectures including CNN and PCNN with various attention-based methods: +KATT denotes our approach; +HATT is the hierarchical attention method (Han et al., 2018b); +ATT is the plain selective attention method over instances (Lin et al., 2016); +ATT+ADV is the denoising attention method that adds a small adversarial perturbation to instance embeddings (Wu et al., 2017); and +ATT+SL is the attention-based model that uses a soft-labeling method to mitigate the side effect of the wrong-labeling problem at the entity-pair level (Liu et al., 2017).", "We also compare our method with feature-based models, including Mintz (Mintz et al., 2009), MultiR (Hoffmann et al., 2011) and MIML (Surdeanu et al., 2012).", "As shown in both figures, our approach achieves the best results among all attention-based models.", "Even when compared with PCNN+HATT, PCNN+ATT+ADV, and PCNN+ATT+SL, which adopt sophisticated denoising schemes and extra information, our model is still more advantageous.", "This indicates that our method can take advantage of the rich correlations between relations through KGs and GCNs, which improves performance.", "We believe the performance of our model can be further improved by adopting additional mechanisms such as adversarial training and reinforcement learning, which will be part of our future work.", "To further demonstrate the improvements on long-tail relations, following the study by (Han et al., 2018b), we extract a subset of the test dataset in which all relations have fewer than 100/200 training instances.", "We employ the Hits@K metric for evaluation.", "For each entity pair, the evaluation requires the corresponding gold relation to be among the first K candidate relations recommended by the model.", "Because it is difficult for existing models to extract long-tail relations, we select K from {10, 15, 20}.", "We report the macro-averaged Hits@K accuracies for these subsets because the micro-averaged score generally overlooks the influence of long-tail relations.", "From the results shown in Table 1, we observe that for both the CNN and PCNN models, our model outperforms the plain attention model and the HATT model.", "Although our KATT method achieves better results for long-tail relations compared to both the plain ATT method and the HATT method, the results of all these methods are still far from satisfactory.", "This indicates that distantly supervised RE models still suffer from the long-tail relation problem, which may require additional schemes and extra information to solve in the future.", "To analyze the contributions and effects of the different components of our approach, we perform ablation tests.",
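The macro-averaged Hits@K used in Table 1 can be computed in a few lines; here is a sketch (the input format is our assumption): for each relation we collect the 1-based rank of the gold relation in the model's candidate list for every test entity pair, compute the per-relation hit rate, and then average across relations:

```python
def macro_hits_at_k(ranks_by_relation, k):
    """ranks_by_relation: dict mapping a relation to the gold-relation ranks of its test pairs."""
    per_relation = [sum(rank <= k for rank in ranks) / len(ranks)
                    for ranks in ranks_by_relation.values() if ranks]
    return sum(per_relation) / len(per_relation)

# Hypothetical example with two long-tail relations:
ranks = {"/people/deceased_person/place_of_burial": [3, 12, 8],
         "/location/br_state/capital": [1, 25]}
print(macro_hits_at_k(ranks, k=10))  # (2/3 + 1/2) / 2 = 0.583...
```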
"+KATT is our method; w/o hier is the method without coarse-to-fine attention (it only utilizes the bottom node embeddings of the hierarchy label graph), which implies no knowledge transfer from higher-level classes; w/o GCN is the method without GCNs, which implies no explicit relational knowledge; Word2vec is the method in which the nodes are initialized with pretrained Skip-Gram (Mikolov et al., 2013) embeddings; and w/o KG is the method in which the nodes are initialized with random embeddings, which implies no prior relational knowledge from KGs.", "From the evaluation results in Table 2, we observe that the performance degrades slightly without coarse-to-fine attention, which shows that knowledge transfer from higher-level nodes is useful.", "We also notice that the performance degrades slightly without the KG or when using word embeddings, and the performance degrades significantly when we remove the GCNs.", "This is reasonable because GCNs can learn more explicit correlations between relation labels, which boosts the performance on long-tail relations.", "We give some examples to show how our method affects the selection of sentences.", "In Table 3, we display the sentences' attention scores at the lowest level (both the HATT and KATT methods can successfully select the correct sentence at the higher levels; details can be found in the supplementary materials).", "Both the relation /people/deceased_person/place_of_burial (24 instances) and the relation /location/br_state/capital (4 instances) are long-tail relations.", "On the one hand, the relation /people/deceased_person/place_of_burial has a semantically similar data-rich relation, /people/deceased_person/place_of_death.", "We observe that HATT erroneously assigns high attention to the incorrect sentence, whereas KATT successfully assigns the right attention weights, which demonstrates the efficacy of knowledge transfer from semantically similar relations (both the HATT and KATT methods can take advantage of knowledge transfer from high-level relations).", "On the other hand, the relation /location/br_state/capital does not have semantically similar relations.", "However, we notice that KATT still successfully assigns the right attention weights, which demonstrates the efficacy of knowledge transfer from high-level relations using coarse-to-fine knowledge-aware attention.", "We visualize the class embeddings via t-SNE (Maaten and Hinton, 2008) to further show how GCNs and KG embeddings can help RE for long-tail relations.", "We observe that: (1) Figures 6(a) and 6(d) show that semantically similar class embeddings are closer with GCNs and pretrained KG embeddings, which helps select long-tail instances.", "(2) Figures 6(b) and 6(c) show that KG embeddings and GCNs make different contributions for different relations in learning relational knowledge between classes.", "For example, /location/location/contains has a sparse hierarchy structure, which leads to inefficient learning for GCNs; therefore, the relative distances change only slightly, which reveals the necessity of implicit relational knowledge from KGs.", "(3) Figure 6(d) shows that there are still some semantically similar class embeddings located far apart, which may degrade the performance on long-tail relations.", "This may be caused either by sparsity in the hierarchy label graph or by the equal treatment of nodes with the same parent in the GCNs, which is not a reasonable assumption.", "We will address this by integrating more information such as relation descriptions or by combining logic reasoning as part of future work.",
"In this paper, we take advantage of the knowledge from data-rich classes at the head of the distribution to boost the performance on the data-poor classes at the tail.", "Compared to previous methods, our approach provides fine-grained relational knowledge among classes using KG embeddings and GCNs, and it is quite effective and encoder-agnostic.", "In the future, we plan to explore the following directions: (1) We may combine our method with recent denoising methods to further improve performance.", "(2) We may combine rule mining and reasoning technologies to learn better class embeddings to boost performance.", "(3) It will be promising to apply our method to zero-shot RE and to further adapt it to other NLP scenarios.", "We want to express our gratitude to the anonymous reviewers for their hard work and kind comments, which will further improve our work in the future.", "This work is funded by NSFC 91846204/61473260, the national key research program YS2018YFB140004, the Alibaba CangJingGe (Knowledge Engine) Research Plan, and the Natural Science Foundation of Zhejiang Province of China (LQ19F030001)." ]
[ "objective", "abstain", "method", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "other", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "objective", "method", "method", "other", "other" ]
[ "Chinese Spelling Check (CSC) is a task to detect and correct spelling errors in Chinese natural language.", "Existing methods have made attempts to incorporate the similarity knowledge between Chinese characters.", "However, they take the similarity knowledge as either an external input resource or just heuristic rules.", "This paper proposes to incorporate phonological and visual similarity knowledge into language models for CSC via a specialized graph convolutional network (SpellGCN).", "The model builds a graph over the characters, and SpellGCN is learned to map this graph into a set of inter-dependent character classifiers.", "These classifiers are applied to the representations extracted by another network, such as BERT, enabling the whole network to be end-to-end trainable.", "Experiments 1 are conducted on three human-annotated datasets.", "Our method achieves superior performance against previous models by a large margin.", "Spelling errors are common in our daily life, caused typically by human writing, automatic speech recognition, and optical character recognition sys-tems.", "Among these errors, misspelling a character frequently occurs due to the similarity between characters.", "In Chinese, many characters are phonologically and visually similar, but semantically very different.", "According to Liu et al. (2010), about 83% of errors are related to phonological similarity and 48% are related to visual similarity.", "The Chinese Spelling Check (CSC) task aims to detect and correct such misuse of the Chinese language.", "Despite recent development, CSC remains a challenging task.", "Notably, the spelling checking on Chinese is very different from English, due to its language Equal contribution.", "nature.", "Chinese is a language consisting of many pictographic characters without word delimiters.", "And the meaning of each character changes dramatically when the context changes.", "Therefore, a CSC system needs to recognize the semantics and aggregate the surrounding information for necessary modifications.", "Previous methods followed the line of generative models.", "They used either language models (Liu et al., 2013, 2010; Yu and Li, 2014) or sequence-to-sequence models (Wang et al., 2019).", "To fuse the external knowledge of the similarity between characters, some of them leveraged a confusion set, which contains a set of similar character pairs.", "For instance, Yu and Li (2014) proposed to produce several candidates by retrieving the confusion set and then filter them via language models.", "Wang et al. 
"Wang et al. (2019) used a pointer network to copy a similar character from the confusion set.", "These methods attempted to utilize the similarity information to confine the candidates, rather than modeling the relationship between characters explicitly.", "In contrast, our proposed SpellGCN captures the pronunciation/shape similarity and explores the prior dependencies between characters.", "Specifically, two similarity graphs are constructed for the pronunciation and shape relationships correspondingly.", "SpellGCN takes the graphs as input and generates for each character a vector representation after the interaction between similar characters.", "These representations are then used to construct a character classifier applied to the semantic representations extracted by another backbone module.", "We use BERT (Devlin et al., 2019) due to its powerful semantic capacity.", "Combining the graph representations with BERT, SpellGCN can leverage the similarity knowledge and generate the right corrections accordingly.", "For the example in Table 1, SpellGCN is able to modify the sentence correctly within the pronunciation constraint.", "Experiments were conducted on three open benchmarks.", "The results demonstrate that SpellGCN improves BERT evidently, outperforming all competitor models by a large margin.", "In summary, our contributions are as follows: We propose a novel end-to-end trainable SpellGCN to integrate the pronunciation and shape similarities into the semantic space.", "Its essential components, such as the specialized graph convolution and attentive combination operations, are carefully investigated.", "We investigate the performance of SpellGCN both quantitatively and qualitatively.", "Experimental results indicate that our method achieves the best results on three benchmark datasets.", "The CSC task is a long-standing problem and has attracted much attention from the community.", "The research has emerged in recent years (Jia et al., 2013; Xin et al., 2014; Yu and Li, 2014; Tseng et al., 2015; Fung et al., 2017; Wang et al., 2019; Hong et al., 2019), together with other topics, e.g., grammar error correction (GEC) (Rao et al., 2018; Ji et al., 2017; Chollampatt et al., 2016; Ge et al., 2018).", "CSC focuses on detecting and correcting character errors, while GEC also includes errors that need deletion and insertion.", "Previous work handles CSC using unsupervised language models (Liu et al., 2013; Yu and Li, 2014).", "The errors are detected/corrected by evaluating the perplexity of sentences/phrases.", "However, these models were unable to condition the correction on the input sentence.", "To circumvent this problem, several discriminative sequence tagging methods were adopted for CSC (Wang et al., 2018).", "For more flexibility and better performance, several sequence-to-sequence models were also employed (Wang et al., 2019; Ji et al., 2017; Chollampatt et al., 2016; Ge et al., 2018), as well as BERT (Hong et al., 2019).", "Recent attention has been paid to utilizing the external knowledge of character similarity.", "The similarity knowledge can be gathered into a dictionary, i.e., a confusion set, where similar pairs are stored.", "Yu and Li (2014) first used the dictionary to retrieve similar candidates for potential errors.",
"Wang et al. (2019) incorporated a copy mechanism into a recurrent neural model.", "When given similar characters as input, their model uses the copy mechanism to directly copy the character to the target sentence.", "In a sense, these models face difficulty in modeling the relationship between similar characters, as the similarity information is solely used for candidate selection.", "To capture the pronunciation/shape similarity and explore the prior dependencies between characters, we propose to use a graph convolution network (GCN) (Kipf and Welling, 2017) to model character inter-dependence, which is combined with the pre-training of BERT (Devlin et al., 2019; Cheng et al., 2019) for the CSC task.", "GCNs have been applied to model relationships in several tasks.", "Yan et al. (2019) applied them to the relation extraction task, where relations construct a hierarchical tree.", "Li et al. (2018); Cheng et al. (2018) use them to model spatial-temporal dependencies to predict traffic flow.", "GCNs were also used to model the relationship between labels in a multi-label task (Chen et al., 2019).", "In this paper, it is the first time that a GCN is applied successfully to the CSC task.", "The relationships in CSC are much different from those in tasks where the objects in the graph are semantically related.", "By contrast, similar characters are semantically distinct in CSC.", "Therefore, we deeply investigate the effect of our SpellGCN and propose several essential techniques.", "In this section, we elaborate on our method for CSC.", "Firstly, the problem formulation is presented.", "Then, we introduce the motivations for SpellGCN, followed by its detailed description.", "At last, we present its application to the CSC task.", "The Chinese Spelling Check task aims to detect and correct errors in the Chinese language.", "When given a text sequence X = {x_1, x_2, ..., x_n} consisting of n characters, the model takes X as input and outputs a target character sequence Y = {y_1, y_2, ..., y_n}.", "We formulate the task as a conditional generation problem by modeling and maximizing the conditional probability p(Y | X).", "The framework of the proposed method is depicted in Figure 1.",
"It consists of two components, i.e., a character representation extractor and a SpellGCN.", "The extractor derives a representation vector for each character.", "Above the extractor, SpellGCN is used to model the inter-dependence between characters.", "It outputs target vectors containing the information of similar characters after their interactions.", "As illustrated in Table 1, a vanilla language model is able to provide corrections that are feasible in semantic meaning but faces difficulty in meeting the pronunciation constraint.", "Although the correction is semantically plausible, its phonics differ much from those of the original characters.", "This indicates that the similarity information between characters is necessary so that the model can learn to generate related answers.", "Previous methods have taken the similarity into consideration.", "However, they typically regarded similar characters as potential candidates, neglecting their inter-relationship in terms of pronunciation and shape.", "This work makes a preliminary attempt to handle this issue, trying to fuse both the symbolic space (phonological and visual similarity knowledge) and the semantic space (language semantic knowledge) into one model.", "To achieve this, we leverage the power of graph neural networks (GNNs) to infuse the similarity knowledge directly.", "The essential idea is to update the representations by aggregating the information between similar characters.", "Intuitively, a model is likely to have a sense of similar symbols when equipped with our method.", "Among various GNN models, we use a GCN in our implementation.", "Since there are up to 5K Chinese characters in the graph, the light-weight GCN is more suitable for our problem.", "The proposed SpellGCN is described as follows in detail.", "SpellGCN requires two similarity graphs, A^p and A^s, for the pronunciation and shape similarities correspondingly, which are derived from an open-sourced confusion set (Wu et al., 2013).", "For simplicity, the superscript will be omitted when unnecessary, and A denotes one of these two similarity graphs.", "Each similarity graph is a binary adjacency matrix of size R^{N \times N}, constructed from the N characters in the confusion set.", "The edge A_{i,j} \in {0, 1} between the i-th character and the j-th character denotes whether the (i, j) pair exists in the confusion set.", "The goal of SpellGCN is to learn a map function that maps the input node embedding H^l \in R^{N \times D} of the l-th layer (where D is the dimensionality of the character embedding) to a new representation H^{l+1} via a convolutional operation defined by A.", "This map function has two main sub-components: a graph convolution operation and an attentive graph combination operation.", "Graph Convolution Operation.", "The graph convolution operation absorbs the information from neighboring characters in the graph.", "In each layer, the light-weight convolution layer of GCN (Kipf and Welling, 2017) is adopted: f(A, H^l) = \hat{A} H^l W_g^l, (1) where W_g^l \in R^{D \times D} is a trainable matrix and \hat{A} \in R^{N \times N} is the normalized version of the adjacency matrix A.", "For the definition of \hat{A}, we direct you to the original paper (Kipf and Welling, 2017).", "Note that we use the character embeddings of BERT as the initial node features H^0, and we omit the non-linearity function after convolution.", "Since we adopted BERT as our extractor, which has its own learned semantic space, we remove the activation function from the equation to keep the derived representation in a space identical to the original one, rather than a completely different space.",
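A minimal sketch of Eq. (1), assuming PyTorch (the names are illustrative); note the deliberate absence of a non-linearity, as discussed above:

```python
import torch
import torch.nn as nn

def normalize_adj(A):
    """The normalization of Kipf and Welling (2017): D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + torch.eye(A.size(0))
    d = A_hat.sum(dim=1).pow(-0.5)
    return d.unsqueeze(1) * A_hat * d.unsqueeze(0)

class GraphConv(nn.Module):
    """Eq. (1): f(A, H^l) = A_hat H^l W_g^l, with no activation afterwards."""
    def __init__(self, dim):
        super().__init__()
        self.w_g = nn.Parameter(torch.empty(dim, dim))
        nn.init.xavier_uniform_(self.w_g)

    def forward(self, A_hat, H):
        return A_hat @ H @ self.w_g   # stay in the extractor's semantic space
```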
"During our experiments, using a non-linear activation such as ReLU was ineffective, resulting in a performance drop.", "Attentive Graph Combination Operation.", "The graph convolution operation handles the similarity of a single graph.", "To combine the pronunciation and shape similarity graphs, the attention mechanism (Bahdanau et al., 2015) is adopted.", "[Figure 1: The framework of the proposed SpellGCN.]", "For each character, we represent the combination operation as C_i^l = \sum_k \alpha_{i,k} f_k(A^k, H^l)_i, (2) where C^l \in R^{N \times D}, f_k(A^k, H^l)_i is the i-th row of the convolved representation of graph k, and \alpha_{i,k} is a scalar for the i-th character denoting the weight of graph k.", "The weight \alpha_{i,k} is computed by \alpha_{i,k} = exp(w_a \cdot f_k(A^k, H^l)_i / \tau) / \sum_{k'} exp(w_a \cdot f_{k'}(A^{k'}, H^l)_i / \tau), (3) where w_a \in R^D is a learnable vector shared across the layers and \tau is a hyper-parameter which controls the smoothness of the attention weights.", "We found \tau essential for the attention mechanism.", "Accumulated Output.", "After the graph convolution and attentive combination operations, we obtain the representation C^l for the l-th layer.", "To maintain the original semantics of the extractor, the outputs of all previous layers are accumulated as the output: H^{l+1} = C^l + \sum_{i=0}^{l} H^i. (4)", "In this way, SpellGCN is able to focus on capturing the knowledge of character similarity, leaving the responsibility of semantic reasoning to the extractor.", "Hopefully, each layer can learn to aggregate the information for a specific hop.", "During the experiments, the model failed when H^0 was excluded.", "Similarity Graphs from the Confusion Set.", "The similarity graphs used in SpellGCN are constructed from the confusion set provided in (Wu et al., 2013).", "It is a pre-defined set consisting of similar characters for most (about 95%) of the Chinese characters, and these characters are categorized into five categories, i.e., (1) similar shape, (2) same pronunciation and same tone, (3) same pronunciation and different tone, (4) similar pronunciation and same tone, (5) similar pronunciation and different tone.", "Since the pronunciation similarity categories are more fine-grained compared with the shape similarity category, we combine the pronunciation similarities into one graph.", "Consequently, we construct two graphs corresponding to the pronunciation and shape similarities.", "Character Representation by Extractor.", "The representation of characters used for the final classification is given by an extractor.", "We can use any model that is able to output representation vectors V = {v_1, v_2, ..., v_n} (where v_i \in R^D) for the n characters X = {x_1, x_2, ..., x_n}.", "In our experiment, we adopt BERT as the backbone model.", "It takes X as input and uses the output of its last layer as V.", "We conduct the experiments using the base version, which has 12 layers and 12 self-attention heads with a hidden size of 768.", "SpellGCN as Character Classifier.", "When given the representation vector v_i of a character x_i, the model needs to predict a target character through a fully-connected layer whose weight W \in R^{M \times D} is configured by the output of SpellGCN (M is the size of the extractor vocabulary): p(y_i | X) = softmax(W v_i). (5)", "Concretely, the output vectors of SpellGCN play the role of the classifier in our task.",
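Continuing the sketch, one SpellGCN layer combining Eqs. (2)-(4) could look as follows (reusing the GraphConv module from the sketch above; again an illustration under our naming assumptions, not the released code):

```python
import torch
import torch.nn as nn

class SpellGCNLayer(nn.Module):
    def __init__(self, dim, tau=3.0):
        super().__init__()
        self.convs = nn.ModuleList([GraphConv(dim), GraphConv(dim)])  # pronunciation, shape
        self.w_a = nn.Parameter(torch.randn(dim))                     # shared attention vector
        self.tau = tau                                                # smoothing hyper-parameter

    def forward(self, A_hats, H_list):
        # A_hats: one normalized adjacency per graph; H_list: previous outputs [H^0, ..., H^l]
        F = torch.stack([conv(A, H_list[-1])
                         for conv, A in zip(self.convs, A_hats)])     # (K, N, D)
        alpha = torch.softmax(F @ self.w_a / self.tau, dim=0)         # Eq. (3), per character
        C = (alpha.unsqueeze(-1) * F).sum(dim=0)                      # Eq. (2)
        return C + torch.stack(H_list).sum(dim=0)                     # Eq. (4): H^{l+1}
```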
"We use the output of the last layer of SpellGCN, H^L (where L is the number of layers), to classify the characters in the confusion set.", "Since the confusion set only covers a subset of the vocabulary, we use the word embeddings of the extractor as the classifier for the characters excluded from the confusion set.", "In this way, denoting by u_i \in {1, ..., N} the index in the confusion set of the i-th character of the extractor vocabulary, W is given by: W_i = H_{u_i}^L if the i-th character is in the confusion set, and W_i = E_i otherwise, (6) where E \in R^{M \times D} is the embedding matrix of the extractor.", "In brief, we use the embedding from SpellGCN if the character is in the confusion set.", "Otherwise, the embedding vectors are used as in BERT.", "Instead of modeling a large compact graph containing all characters in the extractor vocabulary, we chose this implementation for computational efficiency, since there are around 5K characters in the confusion set and more than 20K characters in the extractor vocabulary.", "Overall, the objective is to maximize the log-likelihood of the target characters: L = \sum_{X,Y} \sum_i log p(y_i = \hat{y}_i | X). (7)", "Prediction Inference.", "The CSC task consists of two sub-tasks in evaluation, i.e., detection and correction.", "Some previous work (Yu and Li, 2014; Liu et al., 2013) used two separate models for these sub-tasks.", "In this work, we simply use the character with the highest probability, arg max_{y_i} p(y_i | X), as the prediction for the correction task.", "Detection is achieved by checking whether the prediction matches the target character y_i.", "In this section, we first describe the datasets, as well as the evaluation metrics.", "Then we introduce our main results for SpellGCN.", "After that, ablation studies are made to analyze the effect of the proposed components, followed by a case study.", "Finally, qualitative results are provided.", "Training Data.", "The training data is composed of three training datasets (Wu et al., 2013; Yu et al., 2014; Tseng et al., 2015), which have 10K data samples in total.", "Following (Wang et al., 2019), we also include an additional 271K samples as training data, which are generated by an automatic method (Wang et al., 2018).", "Test Data.", "To evaluate the performance of the proposed method, we used three test datasets from the SIGHAN 2013, SIGHAN 2014 and SIGHAN 2015 benchmarks (Wu et al., 2013; Yu et al., 2014; Tseng et al., 2015), as in (Wang et al., 2019).", "We also follow the same data pre-processing procedure, i.e., the characters in these datasets are converted to simplified Chinese using OpenCC.", "The statistics of the data are listed in Table 2.",
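Putting the classifier described above together, the following sketch assembles the weight matrix W of Eqs. (5)-(6) from the last SpellGCN layer and the extractor's embedding matrix (the index convention using -1 for characters outside the confusion set is our assumption):

```python
import torch

def build_classifier(H_L, E, confusion_index):
    # H_L: (N, D) last-layer SpellGCN outputs; E: (M, D) extractor embeddings;
    # confusion_index: (M,) long tensor, u_i in [0, N) or -1 if not in the confusion set
    W = E.clone()
    in_set = confusion_index >= 0
    W[in_set] = H_L[confusion_index[in_set]]   # Eq. (6)
    return W

def predict(W, v):
    """Eq. (5): p(y_i | X) = softmax(W v_i) for one character representation v_i."""
    return torch.softmax(W @ v, dim=0)
```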
"Baseline Models.", "We compare our method with five typical baselines.", "LMC (Xie et al., 2015): This method utilizes the confusion set to replace characters and then evaluates the modified sentence via an N-gram Language Model.", "SL (Wang et al., 2018): A sequence labeling model is adopted for detection.", "The incorrect characters are marked as 1 (0 otherwise).", "PN (Wang et al., 2019): This method incorporates a Pointer Network to consider extra candidates from the confusion set.", "FASpell (Hong et al., 2019): This model utilizes a specialized candidate selection method based on a similarity metric.", "This metric is measured using empirical methods, e.g., edit distance, rather than a pre-defined confusion set.", "BERT (Devlin et al., 2019): The word embedding matrix is used as the softmax layer on top of BERT for the CSC task.", "We trained this model using the same setting, i.e., it is the comparable model without SpellGCN.", "Evaluation Metrics.", "The precision, recall and F1 scores are reported as the evaluation metrics; they are commonly used in CSC tasks.", "These metrics are provided for the detection and correction sub-tasks.", "Besides the evaluation on the character level, we also report sentence-level metrics for the detection and correction sub-tasks, which are more appealing for real-world applications.", "On the sentence level, we consider a sentence to be correctly annotated only if all errors in the sentence are corrected, as in (Hong et al., 2019).", "On the character level, we calculate the metrics using the evaluation script from (Wang et al., 2019).", "We also evaluated BERT and SpellGCN with the official evaluation tools, which give the False Positive Rate (FPR), Accuracy, and Precision/Recall/F1.", "Our code is based on the repository of BERT.", "We fine-tune the models using the AdamW (Loshchilov and Hutter, 2018) optimizer for 6 epochs with a batch size of 32 and a learning rate of 5e-5.", "The number of layers in SpellGCN is 2, and the attentive combination operation with a factor \tau of 3 is used.", "All experiments were conducted for 4 runs, and the averaged metrics are reported.", "The code and trained models will be released publicly after review (currently, the code is attached in the supplementary files).", "Table 3 shows the performance of the proposed method on the three CSC datasets, compared with the five typical CSC systems.", "When using SpellGCN, the model achieves better results on all test sets than vanilla BERT, which verifies its effectiveness.", "The improvement is considerable even with such a large amount of training data (cf. the comparison in Figure 2).",
"This indicates that the similarity knowledge is essential for CSC and that it can hardly be learned by simply increasing the amount of data.", "In terms of the sentence-level F1-score metric on the correction sub-task, i.e., the C-F score in the last column, the improvements over the previous best results (FASpell) are 9.2%, 9.7% and 13.3%, respectively.", "Nevertheless, it should be noted that FASpell was trained on different training data, while this paper follows the setting mentioned in the PN paper (Wang et al., 2019).", "Ideally, our method is compatible with FASpell, and better results can be achieved when FASpell is employed.", "FASpell used their own metrics, which are different from the sentence-level false positive and false negative counting strategy of the official evaluation toolkit.", "We used the scripts by PGNet and FASpell to compute their metrics for a fair comparison.", "We further add the official evaluation results of BERT and SpellGCN in Table 4.", "SpellGCN consistently improves the performance when evaluated by both the PGNet/FASpell scripts and the official evaluation toolkit.", "We will add the FPR results in our revision.", "The FPR scores are 14.1% (SpellGCN) vs. 15.3% (BERT) on SIGHAN 14, and 13.2% (SpellGCN) vs. 13.6% (BERT) on SIGHAN 15.", "FPR on SIGHAN 13 is statistically meaningless since almost all the tested sentences have spelling errors.", "In this subsection, we analyze the effect of several components, including the number of layers and the attention mechanism.", "The ablation experiments were performed using the 10K training data.", "Effect of the Number of Layers.", "Generally, the performance of a GCN varies with the number of layers.", "We investigate how the number of SpellGCN layers influences the performance on CSC.", "In this comparison, the number of layers changes from 1 to 4, and the results are illustrated in Figure 3.", "For clarity, we report the character-level C-F on the three test datasets.", "The results indicate that SpellGCN is able to make use of multiple layers.", "With multiple layers, SpellGCN can aggregate the information over more hops and, therefore, achieve better performance.", "However, the F1-score drops when the number of layers is larger than 3.", "This is reasonable due to the over-smoothing problem noted in (Yan et al., 2019).", "When the number of GCN layers increases, the representations of neighboring characters in the similarity graph become more and more similar, since they are all calculated via those of their neighbors in the similarity graph.", "Effect of the Attention Mechanism.", "We investigate how to better combine the graphs in the SpellGCN layer.", "Here, we compare the attention mechanism against sum-pooling and mean-pooling, with different values of the hyper-parameter \tau mentioned in Section 3.3.", "The experiments are conducted with the 2-layer SpellGCN on the SIGHAN 2013 test set.", "The results presented in Table 5 show that sum-pooling fails on the CSC task.", "We suggest that sum-pooling is inconsistent with the normalization of the GCN and fails to combine the information from the different channels (i.e., graphs).", "Mean-pooling is feasible but is surpassed by the attention mechanism.", "This indicates that an adaptive combination for each character node is beneficial.", "We incorporate the hyper-parameter \tau into the attention operation since the dot products may grow large in magnitude, pushing the softmax function into regions where it has extremely small gradients.",
"With these results, we chose the attention mechanism with a \tau of 3 in SpellGCN.", "We show several correction results to demonstrate the properties of SpellGCN.", "In addition to the sample illustrated in Table 1, several prediction results are given in Table 6.", "From these cases, we can tell that our SpellGCN is capable of revising the incorrect characters into correct ones under the pronunciation and shape constraints.", "For instance, in the first case, the character (fang) is detected as an error and modified into (fan).", "Without the pronunciation similarity constraint, (mù) would become the most probable answer.", "And surprisingly, in the second case, our SpellGCN successfully modifies the character into one that is reasonable in the context.", "The meaning of the input sentence is watch the audio recorder, and our method corrects it into one which means watch the video recorder.", "We suggest that SpellGCN injects a prior similarity between the two characters in the representation space so that the model derives a higher posterior probability for the correct one.", "In the last case, we show a correction result under the shape constraint.", "In the confusion set, the incorrect character is similar in shape to the correct one, and therefore SpellGCN is able to retrieve the correct result.", "The previous experiments explored the performance of SpellGCN quantitatively in detail.", "To qualitatively study whether SpellGCN learns meaningful representations, we dive into the target embedding space W derived from SpellGCN.", "In Figure 4, the embeddings of characters with the phonics chang and si are presented using t-SNE (Maaten and Hinton, 2008).", "The embeddings learned by BERT capture the semantic similarity but fail to model the similarity in terms of pronunciation for the CSC task.", "This is reasonable, as this similarity knowledge is absent in the modeling.", "In contrast, our SpellGCN successfully infuses this prior knowledge into the embeddings, and the resulting embeddings exhibit cluster patterns.", "The embeddings of the characters with these two different pronunciations form two clusters, corresponding to chang and si respectively.", "Due to this property, the model tends to recognize similar characters and hence is able to retrieve the answers under the pronunciation constraint.", "Figure 5 shows the same situation for the shape similarity, where two sets of characters with similar shapes are plotted.", "This verifies the ability of SpellGCN to model shape similarity.", "We proposed SpellGCN for CSC to incorporate both phonological and visual similarities into language models.", "The empirical comparison and the results of the analytical experiments verify its effectiveness.", "Beyond CSC, SpellGCN can be generalized to other situations where specific prior knowledge is available, and to other languages by leveraging specific similarity graphs analogously.", "Our method can also be adapted to grammar error correction, which needs insertion and deletion, by utilizing more flexible extractors such as the Levenshtein Transformer (Gu et al., 2019).", "We leave this direction to future work." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "objective", "other", "other", "objective", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "abstain" ]
[ "Automatic evaluation of open-domain dialogue response generation is very challenging because there are many appropriate responses for a given context.", "Existing evaluation models merely compare the generated response with the ground truth response and rate many of the appropriate responses as inappropriate if they deviate from the ground truth.", "One approach to resolve this problem is to consider the similarity of the generated response with the conversational context.", "In this paper, we propose an automatic evaluation model based on that idea and learn the model parameters from an unlabeled conversation corpus.", "Our approach considers the speakers in defining the different levels of similar context.", "We use a Twitter conversation corpus that contains many speakers and conversations to test our evaluation model.", "Experiments show that our model outperforms the other existing evaluation metrics in terms of high correlation with human annotation scores.", "We also show that our model trained on Twitter can be applied to movie dialogues without any additional training.", "We provide our code and the learned parameters so that they can be used for automatic evaluation of dialogue response generation models.", "Evaluating the system generated responses for open-domain dialogue is a difficult task.", "There are many possible appropriate responses given a dialogue context, and automatic metrics such as BLEU (Papineni et al., 2002) or ROUGE (Lin, 2004) rate the responses that deviate from the ground truth as inappropriate.", "Still, it is important to develop and use an automatic metric because human annotation is very costly.", "In addition to BLEU and ROUGE, there is a widely-used evaluation metric based on the distributed word representation (Liu et al., 2016), but this metric shows low correlations with human judgments.", "One reason for the difficulty in developing an automatic metric that correlates well with human judgements is that the range of appropriate responses for a given context is very wide.", "Table 1 shows an example of a conversation between Speaker A and B .", "While there is a ground truth response Yeah let's go to the theater, A could have also said That sounds good! Have you seen Thor? or Good. 
"Note that based on word overlap with the ground truth, these two responses would receive low scores.", "Responses labeled N#, such as The weather is no good for walking, are not appropriate.", "As the Table shows, the existing metrics from BLEU to RUBER are not able to tell apart these appropriate A# responses from the inappropriate N# responses.", "Some recent metrics such as ADEM (Lowe et al., 2017) and RUBER (Tao et al., 2018) compute the similarity between a context and a generated response.", "However, ADEM requires human-annotated scores to train and thus cannot be applied to new datasets and domains.", "RUBER overcomes this limitation by using the idea that a random response should be used as a negative sample, but it is not able to distinguish the responses in the example in Table 1, because it uses only one random sample, which does not provide sufficient information about appropriate and inappropriate responses.", "In this paper, we propose the Speaker Sensitive Responses Evaluation Model (SSREM), which analyzes the appropriateness of responses.", "We use speaker sensitive responses that are generated by one speaker to train the model.", "We test SSREM in comparison with other evaluation metrics.", "First, we collect human-annotated scores for responses in Twitter conversation data.", "The evaluation scores of SSREM show a higher correlation with human scores than the other evaluation metrics.", "And SSREM outperforms the other metrics in terms of identifying the ground truth response given a context.", "We show the additional advantage of SSREM: it can be applied to evaluate a new corpus in a different domain.", "We train SSREM on the Twitter corpus and test it on a corpus of movie dialogues, and we show that SSREM outperforms the other metrics in terms of the correlation with human scores and the task of identifying the ground truth response.", "We present SSREM, a new response evaluation model trained with speaker sensitive negative samples (Sec 3).", "We conduct experiments on a Twitter conversation corpus and show that SSREM outperforms the others (Sec 5 and 6).", "We further show the applicability of SSREM with a movie dialogue corpus that is not used in the training (Sec 7).", "We provide our code and the learned parameters of SSREM, which can be used for evaluation of generated responses (https://github.com/NoSyu/SSREM).", "In this section, we describe existing automatic evaluation metrics for dialogue response generation and discuss their limitations.", "For task-oriented dialogue models such as an airline travel information system (Tur et al., 2010), completing the given task is most important, and the evaluation metrics reflect that (Hastie, 2012; Bordes et al., 2017).", "But open-domain conversation models do not have specific assigned tasks; the main goal of an open-domain conversation model is generating appropriate responses given a conversation about any topic.", "Existing automatic evaluation metrics compare a generated response and the ground truth response.", "The most widely-used metrics are BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004), based on the overlap of words between the two responses.", "A limitation of these word overlap-based metrics is that they cannot identify synonyms, and to overcome this limitation, the embedding-based metrics use distributed word vector representations (Liu et al., 2016).", "However, these metrics have poor correlation with human judgments (Liu et al., 2016; Novikova et al., 2017; Gupta et al., 2019) because they still only look at the similarity between the generated response and the ground truth.",
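For reference, a sketch (not from the paper) of the embedding-based metric mentioned above: average the word vectors of each response and compare them by cosine similarity; the GloVe lookup table is an assumed input:

```python
import numpy as np

def avg_embedding(tokens, glove, dim=200):
    """Mean of the word vectors; unknown words are skipped."""
    vecs = [glove[t] for t in tokens if t in glove]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def embedding_average_score(generated, reference, glove):
    g, r = avg_embedding(generated, glove), avg_embedding(reference, glove)
    denom = np.linalg.norm(g) * np.linalg.norm(r)
    return float(g @ r / denom) if denom else 0.0
```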
Gupta et al., 2019) because they still only look at the similarity between the generated response and the ground truth.", "SSREM is a model built on the awareness that a response can be different from the ground truth response but still appropriate for the conversation context.", "The appropriate responses for a casual conversation can vary widely.", "For example, there are four appropriate responses including the ground truth response for a given context in Table 1. Some previous approaches suggest considering the context together with the response, such as ADEM (Lowe et al., 2017) and RUBER (Tao et al., 2018).", "ADEM uses pre-trained VHRED (Serban et al., 2017) to encode the texts and computes the score by mixing similarities among the context, the generated response, and the ground truth.", "One limitation of ADEM is that it requires human-annotated scores to learn the model.", "Human labeling is cost-intensive, so it is impractical to apply to a new dataset or domain.", "RUBER uses negative sampling to overcome this issue, but it uses only one random negative sample against one positive sample, which is not ideal (Gutmann and Hyvärinen, 2010).", "SSREM does not require human scores to learn the model and uses many speaker sensitive negative samples.", "This section describes our Speaker Sensitive Response Evaluation Model (SSREM), which is trained with speaker sensitive utterance samples.", "SSREM looks at a given context and its ground truth response together to evaluate a generated response.", "We describe the motivation of SSREM with empirical observations in section 3.1.", "We present the structure of SSREM in section 3.2.", "Based on this motivation, we present a training method for SSREM with speaker sensitive utterance samples in section 3.3.", "We are motivated by the assumption that there is a varying degree of similarity among utterances in a corpus of conversations containing many speakers and conversations.", "1. If we pick a set of random utterances from the corpus, they will not be very similar.", "2. If we pick a set of utterances from a single speaker conversing with multiple partners, those utterances will be more similar than the random utterances in 1.", "3. If we pick a set of utterances from conversations between a single dyad, even if the conversations are far apart in time, those utterances will be more similar than those in 2.", "4. If we pick a set of utterances in a single conversation session, they are the most similar, even more so than those in 3.", "To test these assumptions, we first categorize one speaker A's utterances into four types of sets corresponding to the assumptions above.", "Random (Rand_A): random utterances from speakers who are not A. Same Speaker (SS_A): speaker A's utterances. Same Partner (SP_A): A's utterances in conversations with the same partner B. Same Conversation (SC_A): A's utterances in a single conversation.", "Figure 1 shows one example of the sets.", "We make three SC_A sets because A participates in three conversations.", "We make two SP_A sets because A has conversations with B and C.", "SS_A is all utterances from A, so we create one set of utterances for A.", "Finally, Rand_A is random utterances from non-A's utterances.", "We create five sets for each speaker.", "From these sets, we compute the similarity among utterances in a set.", "First, we convert an utterance into a vector by averaging the words in the utterance with GloVe Twitter 200d (Pennington et al., 2014).", "Then we compute the similarity of the vectors by the Frobenius norm.", "Finally, we calculate the mean similarity of each set with a 95% confidence interval.", "Table 2 shows the results.", "Rand has the lowest mean similarity value, which supports the first assumption.", "SS has a higher mean similarity value than Rand.", "This supports the second assumption.", "The mean similarity value of SP is higher than that of SS.", "This supports the third assumption.", "Finally, SC has the highest mean similarity value.", "This also supports the last assumption.", "From these observations, we assume that utterances are clustered by the speakers and addressees.",
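The set-similarity study of section 3.1 can be summarized in a short sketch. The following is a minimal illustration, assuming a `glove` dict mapping words to vectors; the function names are ours, and the paper's Frobenius-norm similarity is approximated here by negating the norm of the vector difference so that larger values mean more similar — an assumption, not the paper's exact formulation.

```python
import numpy as np

def utt_vec(utterance, glove, dim=200):
    """Average the pre-trained GloVe vectors of the words in an utterance."""
    vecs = [glove[w] for w in utterance.lower().split() if w in glove]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def set_similarity(utterances, glove):
    """Mean pairwise similarity among utterances in one set; similarity is the
    negated Frobenius (Euclidean) distance between utterance vectors."""
    vs = [utt_vec(u, glove) for u in utterances]
    sims = [-np.linalg.norm(vs[i] - vs[j])
            for i in range(len(vs)) for j in range(i + 1, len(vs))]
    return float(np.mean(sims))

# Expected ordering under the paper's assumptions:
# set_similarity(rand_A) < set_similarity(ss_A) < set_similarity(sp_A) < set_similarity(sc_A)
```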
"SSREM evaluates a generated response r_hat given a context c and a ground truth response r.", "The output of SSREM is SSREM(c, r_hat, r) = h(f(c, r_hat), g(r, r_hat)) (1), where f(c, r_hat) = tanh(V(c)^T M V(r_hat)) is a parametrized function that measures the similarity between the context c and the generated response r_hat.", "V is a function that converts a sequence of words to a vector.", "M is a matrix that weights the similarity between the two vectors.", "It is the parameter of the f function.", "g(r, r_hat) is another function that measures the similarity between the ground truth response and the generated one.", "h is a function that mixes the values of the f and g functions.", "To normalize each output of the f and g functions, we adopt linear scaling to unit range (Aksoy and Haralick, 2001), which rescales a value x as x' = (x - l) / (u - l) (2), where u is the maximum and l is the minimum of x.", "SSREM is similar to RUBER, which computes the similarities among c, r_hat and r separately and merges them at the end.", "However, SSREM uses speaker sensitive samples, whereas RUBER takes one positive sample and one negative sample.",
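A minimal sketch of the SSREM scoring function of Eqs. (1)-(2) follows. The helper names are illustrative; `g` is left as a pluggable reference-based similarity, `h` is instantiated as the arithmetic average used later in the paper's experiments, and the unit-range bounds are assumed to be estimated beforehand.

```python
import numpy as np

def f_score(c_vec, r_hat_vec, M):
    """Parametrized context-response similarity: tanh(V(c)^T M V(r_hat))  (Eq. 1)."""
    return np.tanh(c_vec @ M @ r_hat_vec)

def unit_range(x, lo, hi):
    """Linear scaling to unit range: x' = (x - l) / (u - l)  (Eq. 2)."""
    return (x - lo) / (hi - lo)

def ssrem_score(c_vec, r_hat_vec, r_vec, M, g, f_range, g_range):
    """h mixes the normalized f and g outputs; arithmetic averaging assumed."""
    f_val = unit_range(f_score(c_vec, r_hat_vec, M), *f_range)
    g_val = unit_range(g(r_vec, r_hat_vec), *g_range)
    return 0.5 * (f_val + g_val)
```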
context c .", "The candidate response set R cand A is given by R cand A = { r A , sc A , sp A , ss A , rand A } (3) where sc A SCA \\ c , sp A SPA \\ c , ss A SSA \\ c and rand A Rand A are the negative samples from speaker sensitive responses.", "Then, the probability of a ground truth response r A given context c and R cand A is as follows: p ( r A c , R cand A ) = exp ( f ( c , r A )) r R candA exp ( f ( c , r )) (4) We maximize this probability among all context-ground truth response pair.", "So the loss function of the classification problem is c log exp ( f ( c , r A )) r R candA exp ( f ( c , r )) (5) This approach is similar to learning the sentence representations (Logeswaran and Lee, 2018), but we use the speaker sensitive negative samples.", "It is also similar to Noise Contrastive Estimation (NCE) (Gutmann and Hyvarinen, 2010; Mnih and Teh, 2012).", "But we set the noise distribution to speaker sensitive distribution and only take the data sample term in the objective function of the NCE.", "Selecting negative samples is important for learning.", "When we choose the noise distribution, it would be close to the data distribution, because otherwise, the classification problem might be too easy to learn the data (Gutmann and Hyvarinen, 2010).", "Mnih and Teh (2012) shows that using samples from the unigram distribution outperforms using samples from a naive uniform distribution for learning a neural probabilistic language model.", "Likewise, we create negative samples from the speaker sensitive utterances.", "sc A is more similar to the r A than any other negative samples.", "We show the patterns by empirical observations in section 3.1 and experimental results in section 6.2.", "These speaker sensitive samples make the classification problem harder and lead to learning the function f better than using the naive uniform distributed random samples.", "To train SSREM, we need a conversation corpus that has many conversations from one speaker.", "We choose the Twitter conversation corpus (Bak and Oh, 2019) as it has 770K conversations with 27K Twitter users.", "We split the data as 80/10/10 for training/validation/test.", "To measure the correlation SSREM with human judgments, we first gather human judgments of responses given a conversation context.", "We use Amazon Mechanical Turk (MTurk) to annotate the scores of the responses.", "We select 300 conversations from a dataset of Twitter conversations.", "And we generate responses for annotation using three conversation models and the ground truth response for each conversation.", "Retrieval model (Pandey et al., 2018): A Human Score 1 2 3 4 5 Twitter 211 258 342 278 71 Movie 279 267 311 217 126 Table 3: Basic statistics of human scores of the responses on Twitter conversation and Movie scripts BM25 retrieval model (Robertson et al., 2009) that uses TF-IDF vector space.", "VHCR (Park et al., 2018): A variational autoencoder model that has a global variable for a conversation.", "VHUCM (Bak and Oh, 2019): A variational autoencoder model that considers the speakers of a conversation.", "Then we ask two questions to the MTurkers.", "(1) How appropriate is the response overall?", "(2) How on-topic is the response?", "These questions are used in (Lowe et al., 2017).", "The authors show that these questions have high inter-annotator agreement among workers.", "They suggest using the first question to annotate the human score, and so we follow the suggestion.", "But we ask the second question to workers to filter out workers who submit 
random answers.", "Each worker answers these questions on a five-point Likert scale.", "We annotate 1,200 responses in total.", "One worker answers ten conversations, four responses per conversation for a total of 40 responses.", "Each response is tagged by five workers for a total of 287 workers of which we retain the responses from 150 workers who passed all the tests.", "We tag the most selected score as the human score for each response.", "The inter-annotator Fleiss' kappa (Fleiss, 1971) is = 0 .", "61 which is consistent with the results in (Lowe et al., 2017).", "Table 3 shows the basic statistics of the annotations.", "This section describes the experiment that looks at the correlation between the model scores and the human scores for given contexts and responses.", "We use a Twitter conversation corpus (Bak and Oh, 2019) to train and validate SSREM and other baseline models.", "For the test, we remove the ground truth responses in human-annotated corpus since it always produces the maximum score on BLEU and ROUGE.", "We compare SSREM with the following response evaluation methods: BLEU (Papineni et al., 2002): We compute the sentence-level BLEU score with the smoothing seven technique (Chen and Cherry, 2014).", "ROUGE (Lin, 2004): We compute the F score of ROUGE-L.", "EMB (Liu et al., 2016): We compute the average cosine similarity between ground truth response and test response in a word embedding 2 .", "We use pre-trained Google news word embedding (Mikolov et al., 2013) to avoid the dependency between the training data and embedding.", "RUBER (Tao et al., 2018): We train with a random negative sample to train unreferenced metric in RUBER.", "And we use arithmetic averaging to hybrid the referenced and unreferenced metrics.", "RSREM: We use the same structure of SSREM, but train with uniformly random negative samples, not speaker sensitive samples.", "We choose functions in SSREM for the experiment.", "For V function, We use the word averaging technique that averages the vectors of words in the sequence.", "We can use advanced methods such as RNN or sentence embeddings (Reimers and Gurevych, 2019).", "But for the fair comparisons with RUBER, we select a similar approach.", "We use GloVe Twitter 200d word embedding (Penning-ton et al., 2014).", "For g function, we use sentence movers similarity that is the state of the art evaluating reference-candidate pair of sentences by using word and sentence embeddings (Clark et al., 2019).", "To avoid dependency between the training data and embedding, we use Elmo embedding (Peters et al., 2018).", "For h function, we use arithmetic averaging that shows good results in (Tao et al., 2018).", "Table 4 shows the Spearman and Pearson correlations between human scores and models scores.", "First, BLEU, ROUGE, and EMB are not correlated 2 We experimented with the greedy and extreme embedding for comparison, but these methods were not better than the average embedding.", "with human scores.", "It means evaluating responses with ground truth only is not useful.", "These results are the same in previous research (Liu et al., 2016; Lowe et al., 2017; Tao et al., 2018).", "RUBER shows a higher correlation with human scores than other baselines but has a high p -value that means low statistically significant.", "RSREM performs better than RUBER and other baselines.", "It shows using multiple negative samples improves the performance of learning the model.", "Finally, SSREM outperforms all other methods for two correlations with low p values.", "It 
"Figure 2 shows scatterplots of the human and model scores.", "A dot is one response, and the red line is a linear regression line.", "The x-axis is the human score, and the y-axis is each automatic evaluation metric.", "To visualize the dots better, we adopt the technique from (Lowe et al., 2017) that adds a random number drawn from N(0, 0.3) to each x-axis value.", "However, we train the linear regression with the original scores.", "First, BLEU and ROUGE have many zero values since there are few overlapping words between the generated response and the ground truth response.", "The dots in EMB, which uses word embeddings to overcome this limitation, are more spread out.", "But there is little relationship with human scores, and the linear regression coefficient is flattened.", "RUBER is better than BLEU, ROUGE, and EMB.", "RSREM, which uses more negative samples, performs better than RUBER.", "Finally, SSREM shows a higher positive correlation with human scores than the other baselines.", "The second experiment examines the performance of the f function in SSREM by comparing it with the baselines.", "RUBER, RSREM, and SSREM compute the score from the context of the conversation and generated responses.", "To investigate the performance of the score, we set up a task that identifies the true and false responses for a given context.", "[Figure 3: Difference of scores on various responses (GT, SC, SP, SS, Rand) for RUBER, RSREM, and SSREM in the Twitter conversation corpus.]", "The data for this experiment is the test data of the Twitter conversation corpus.", "We extract contexts, true and false responses from the data.", "The true response is the ground-truth response (GT).", "And the false responses are of the four types described in section 3.3 (SC, SP, SS, Rand).", "We compare SSREM with RUBER and RSREM, which compute the similarity between a context and a response.", "We take the unreferenced metric score in RUBER.", "And we take the output of the f function in RSREM and SSREM.", "We use the same trained models as in section 5.",
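The identification experiment then amounts to averaging the unreferenced (or f) score per response type and checking the ordering reported in Figure 3. A sketch under the assumption that `f` is the trained scoring function and `candidates[t]` holds one response of type t per context; the helper name is ours:

```python
import numpy as np

def response_type_scores(f, contexts, candidates):
    """Mean score per response type (GT, SC, SP, SS, Rand), mirroring Figure 3;
    a good model should score GT highest and Rand lowest."""
    types = ["GT", "SC", "SP", "SS", "Rand"]
    return {t: float(np.mean([f(c, r) for c, r in zip(contexts, candidates[t])]))
            for t in types}
```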
"Figure 3 shows the results.", "The x-axis is the models, and the y-axis is the output of the unreferenced metric or f function.", "All models perform well at distinguishing between GT utterances and Rand utterances.", "But RUBER performs poorly at identifying SC, SP, and SS.", "And RSREM cannot identify false responses from SC.", "Finally, SSREM outperforms the other two models at identifying all cases.", "It also yields a larger difference between GT and Rand than the other two models.", "This is another clue showing the effectiveness of using speaker sensitive negative samples.", "One interesting result is that the output scores decrease from GT to Rand.", "This matches the observation about the differences of speaker sensitive utterances in section 3.1.", "It also means that distinguishing GT from SC is a harder problem than distinguishing GT from Rand.", "This is further evidence for why we use speaker sensitive negative samples, as we discussed in section 3.3.", "SC consists of the negative samples that are most difficult for the model to distinguish, so it might seem sensible to consider only SC negative samples.", "But we include SP and SS for the following two reasons.", "First, there are only a limited number of SC utterances because they must all come from the same conversation, whereas we need a fairly large number of negative samples to effectively train the model (Mnih and Teh, 2012).", "Second, we also sample from SP and SS because they represent different degrees of similarity to the context utterances.", "Using only SC utterances, which all come from the same conversation, would decrease model generalization.", "In this section, we investigate the applicability of SSREM to a new conversation corpus.", "SSREM takes the speaker sensitive samples from Twitter.", "But there are many open-domain conversation corpora such as movie scripts (Danescu-Niculescu-Mizil and Lee, 2011).", "Tao et al. (2018) run a similar experiment with RUBER, but they use a similar domain of data, Chinese online forums (training on Douban and testing on Baidu Tieba).", "We choose the movie scripts corpus because it is written by script writers, whereas Twitter consists of personal casual online conversations.", "We present the performance of SSREM on the new corpus.", "First, we annotate 1,200 responses on the movie dialogue corpus.", "We use HRED (Sordoni et al., 2015) rather than VHUCM.", "The rest of the annotation procedure is the same as when we created human scores for Twitter conversation responses in section 4. Two hundred forty-four workers tagged all responses.", "But 94 workers failed the attention check question, so we collect the answers of the remaining 150 workers.", "The inter-annotator Fleiss' kappa (Fleiss, 1971) for Movie is kappa = 0.63.", "It is still consistent with the results in Lowe et al. (2017) and the annotated Twitter conversations.", "The bottom row in Table 3 shows the basic statistics of the annotated responses.", "We run two experiments, comparing with human scores and identifying true and false responses.", "We use the same models as in section 5. We use the Twitter conversation corpus to train RUBER, RSREM, and SSREM.", "And we test the models on the annotated movie dialogues.", "Unlike the Twitter conversation corpus,", "[Table 5: Correlation between human and model scores on the Movie corpus, reported as Spearman and Pearson coefficients with p-values in parentheses -- BLEU: 0.036 (0.378), 0.063 (0.124); ROUGE: 0.041 (0.322), 0.054 (0.191); EMB: 0.022 (0.586), 0.010 (0.815); RUBER: 0.004 (0.920), -0.009 (0.817); RSREM: 0.009 (0.817), 0.024 (0.550); SSREM: 0.132 (<0.001), 0.119 (<0.005).]", "In the experiment on comparing with human scores on the movie dialogue corpus, Table 5 shows the results.", "First, BLEU, ROUGE, and EMB are not correlated with human scores.", "RUBER shows worse performance than when testing on the Twitter corpus.", "RSREM performs better than RUBER and the other baselines, but it also shows worse performance than when testing on the Twitter corpus.", "Finally, SSREM outperforms all other methods on both correlations with low p-values.", "This shows the effectiveness of using speaker sensitive negative samples on the new corpus.", "Figure 2 shows similar results in scatterplots.", "In the experiment on identifying true and false responses with the movie dialogue corpus, Figure 5 shows the results of the identification task.", "RUBER performs poorly at distinguishing between GT and Rand.", "RSREM performs better than RUBER.", "And SSREM outperforms the other two models at identifying all cases in the new corpus.", "In this paper, we presented SSREM, an automatic evaluation model for conversational response generation.", "SSREM looks at the context of the conversation and the ground-truth response together.", "We proposed negative sampling with speaker sensitive samples to train SSREM.", "We showed that SSREM outperforms the other metrics including RSREM, which uses random negative samples only.", "We also showed that SSREM is effective in evaluating a movie conversation corpus even when it is trained with Twitter conversations.", "There are several future directions to improve SSREM.", "First, we can make SSREM more robust to adversarial attacks.", "Sai et al. (2019) show limitations of ADEM on adversarial attacks such as removing stopwords and replacing words with synonyms.", "We investigated another type of adversarial attack, named the copy mechanism, that copies one of the utterances in the context as the generated response.", "All existing automatic evaluation methods that compare the context and the response, including RUBER, can be cheated by the copy mechanism.", "SSREM is also susceptible.", "However, SSREM is fooled less than other existing models because SSREM learns with negative samples from the set of utterances in the same conversation.", "SSREM learns to differentiate among utterances in the same context.", "We show this empirically with an experiment to identify true and false responses (Sec 6.2).", "When we compare the mean score for the context utterances under this copy mechanism with the mean score of the ground-truth response (GT), the mean score of the context utterances is 0.07 higher for RUBER, but only 0.01 higher for SSREM.", "SSREM still does not give the context utterances lower scores than GT, but it is not as badly fooled as RUBER.", "We will make SSREM more robust to these attacks.",
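The copy-mechanism check described above can be expressed as a score gap between copied context utterances and the ground truth. A hedged sketch; the `model.score(context, response)` interface is our assumption, not the released API:

```python
import numpy as np

def copy_attack_gap(model, conversations):
    """Mean score of copied context utterances minus mean score of the ground
    truth response; positive values mean the model is fooled by the copy
    mechanism (cf. the reported +0.07 for RUBER vs. +0.01 for SSREM)."""
    copy_scores, gt_scores = [], []
    for conv in conversations:
        context, gt = conv["context"], conv["ground_truth"]
        copy_scores += [model.score(context, u) for u in context]  # copied responses
        gt_scores.append(model.score(context, gt))
    return float(np.mean(copy_scores) - np.mean(gt_scores))
```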
"Second, we can improve SSREM for a higher correlation with human judgment.", "We chose to train SSREM with a classification loss because it is simple and widely used to estimate models that use negative sampling.", "Although the classification loss is simple, SSREM outperforms all existing automatic evaluation models.", "However, as shown in Table 2 and Figure 3, each type of negative sample has a different correlation with the context.", "We will use a ranking loss (Wang et al., 2014; Schroff et al., 2015) to learn the differences among the samples.", "Recently, Zhang et al. (2020) use BERT (Devlin et al., 2019) to evaluate generated candidate sentences by comparing them with reference sentences.", "We used word embeddings to represent an utterance as a vector for simplicity, but contextual embeddings are much better since they generate more context-related representations than word embeddings.", "We will use contextual embeddings to represent utterances.", "Third, we can extend SSREM to various conversation corpora such as task-oriented dialogues.", "We trained and tested SSREM on open-domain conversation corpora.", "However, contextual coherence between the input context and the generated text is important in multi-turn conversations.", "We will apply SSREM to various conversation tasks for evaluating the generated text automatically.", "We will explore these directions in our future work.", "We would like to thank Jeongmin Byun (https://jmbyun.github.io) for building the annotation webpage, and the anonymous reviewers for helpful questions and comments.", "This work was supported by the Institute for Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2017-0-01779, A machine learning and statistical inference framework for explainable artificial intelligence)." ]
[ "abstain", "abstain", "abstain", "objective", "method", "method", "result", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "result", "objective", "result", "result", "method", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "objective", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "result", "result", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "method", "abstain", "abstain", "method", "abstain", "method", "method", "objective", "method", "abstain", "abstain", "method", "objective", "other", "other" ]
[ "Named Entity Recognition (NER) in Few-Shot setting is imperative for entity tagging in low resource domains.", "Existing approaches only learn class-specific semantic features and intermediate representations from source domains.", "This affects generalizability to unseen target domains, resulting in suboptimal performances.", "To this end, we present CONTAINER, a novel contrastive learning technique that optimizes the inter-token distribution distance for Few-Shot NER.", "Instead of optimizing class-specific attributes, CONTAINER optimizes a generalized objective of differentiating between token categories based on their Gaussian-distributed embeddings.", "This effectively alleviates overfitting issues originating from training domains.", "Our experiments in several traditional test domains (OntoNotes, CoNLL'03, WNUT '17, GUM) and a new large scale Few-Shot NER dataset (Few-NERD) demonstrate that, on average, CONTAINER outperforms previous methods by 3%-13% absolute F1 points while showing consistent performance trends, even in challenging scenarios where previous approaches could not achieve appreciable performance.", "The source code of CONTAINER will be available at: https://github.com/ psunlpgroup/CONTaiNER .", "Named Entity Recognition (NER) is a fundamental NLU task that recognizes mention spans in unstructured text and categorizes them into a predefined set of entity classes.", "In spite of its challenging nature, recent deep-learning based approaches (Huang et al., 2015; Ma and Hovy, 2016; Lample et al., 2016; Peters et al., 2018; Devlin et al., 2018) have achieved impressive performance.", "As these supervised NER models require large-scale human-annotated datasets, few-shot techniques that can effectively perform NER in resource constraint settings have recently garnered a lot of attention.", "Few-shot learning involves learning unseen classes from very few labeled examples (Fei-Fei et al., 2006; Lake et al., 2011; Bao et al., 2020).", "To avoid overfitting with the limited available data, meta-learning has been introduced to focus on how to learn (Vinyals et al., 2016; Bao et al., 2020).", "Snell et al. (2017) proposed Prototypical Networks to learn a metric space where the examples of a specific unknown class cluster around a single prototype.", "Although it was primarily deployed in computer vision, Fritzler et al. (2019) and Hou et al. 
(2020) also used Prototypical Networks for few-shot NER.", "Yang and Katiyar (2020), on the other hand, proposed a supervised NER model that learns class-specific features and extends the intermediate representations to unseen domains.", "Additionally, they employed a Viterbi decoding variant of their model as StructShot.", "Few-shot NER poses some unique challenges that make it significantly more difficult than other few-shot learning tasks.", "First, as a sequence labeling task, NER requires label assignment according to the concordant context as well as the dependencies within the labels (Lample et al., 2016; Yang and Katiyar, 2020).", "Second, in NER, tokens that do not refer to any defined set of entities are labeled as Outside (O).", "Consequently, a token that is labeled as O in the training entity set may correspond to a valid target entity in the test set.", "For prototypical networks, this challenges the notion of entity examples being clustered around a single prototype.", "As for nearest neighbor based methods such as Yang and Katiyar (2020), they are initially pretrained with the objective of source class-specific supervision. As a result, the trained weights will be closely tied to the source classes, and the network will project training set O-tokens so that they get clustered in embedding space. This will force the embeddings to drop a lot of useful features pertaining to their true target entities in the test set. Third, in the few-shot setting, there are not enough samples from which we can select a validation set. This reduces the capability of hyperparameter tuning, which particularly affects template based methods where prompt selection is crucial for good performance (Cui et al., 2021). In fact, the absence of a held-out validation set calls into question whether the strategy of a lot of earlier few-shot works is truly few-shot (Perez et al., 2021).", "To deal with these challenges, we present a novel approach, CONTAINER, that harnesses the power of contrastive learning to solve few-shot NER.", "CONTAINER tries to decrease the distance between token embeddings of similar entities while increasing it for dissimilar ones (Figure 1).", "This enables CONTAINER to better capture the label dependencies.", "Also, since CONTAINER is trained with a generalized objective, it can effectively avoid the pitfalls of O-tokens that the prior methods struggle with.", "Lastly, CONTAINER does not require any dataset specific prompt or hyperparameter tuning.", "Standard settings used in prior works (Yang and Katiyar, 2020) work well across different domains in different evaluation settings.", "Unlike traditional contrastive learners (Chen et al., 2020; Khosla et al., 2020) that optimize a similarity objective between point embeddings, CONTAINER optimizes a distributional divergence, effectively modeling Gaussian Embeddings.", "While point embedding simply optimizes sample distances, Gaussian Embedding faces an additional constraint of maintaining the class distribution through the variance estimation.", "Thus Gaussian Embedding explicitly models entity class distributions, which not only promotes generalized feature representation but also helps in few-sample target domain adaptation.", "Previous work in Gaussian Embedding has also shown that mapping to a density captures representation uncertainties (Vilnis and McCallum, 2014) and expresses natural asymmetries (Qian et al., 2021) while showing better generalization, requiring less data to achieve optimal performance (Bojchevski and Günnemann, 2017).",
"Inspired by these unique qualities of Gaussian Embedding, in this work we leverage Gaussian Embedding in contrastive learning for Few-Shot NER.", "A nearest neighbor classification scheme during evaluation reveals that on average, CONTAINER significantly outperforms previous SOTA approaches in a wide range of tests by up to 13% absolute F1-points.", "In particular, we extensively test our model in both in-domain and out-of-domain experiments as proposed in Yang and Katiyar (2020) in various datasets (CoNLL '03, OntoNotes 5.0, WNUT '17, I2B2) .", "We also test our model in a large dataset recently proposed for Few-Shot NER Few-NERD (Ding et al., 2021) where CONTAINER outperforms all other SOTA approaches setting a new benchmark result in the leaderboard.", "In summary, our contributions are as follows: (1) We propose a novel Few-Shot NER approach CONTAINER that leverages contrastive learning to infer distributional distance of their Gaussian Embeddings.", "To the best of our knowledge we are the first to leverage Gaussian Embedding in contrastive learning for Named Entity Recognition.", "(2) We demonstrate that CONTAINER representations are better suited for adaptation to unseen novel classes, even with a low number of support samples.", "(3) We extensively test CONTAINER in a wide range of experiments using several datasets and evaluation schemes.", "In almost every case, our model largely outperforms present SOTAs establishing new benchmark results.", "Given a sequence of n tokens { x 1 , x 2 , . . . x n } NER aims to assign each token x i to its corresponding tag label y i .", "Few-shot Setting For Few-shot NER, a model is trained in a source domain with a tag-set { C s ( i ) } and tested in a data-scarce target domain with a tag-set { C d ( j ) } where i, j are index of different tags.", "Since { C s ( i ) } { C d ( j ) } = , it is very challenging for models to generalize to unseen test tags.", "In an N-way K-shot setting, there are N tags in the target domain |{ C d ( j ) }| = N , and each tag is associated with a support set with K examples.", "Tagging Scheme For fair comparison of CONTAINER against previous SOTA models, we follow an IO tagging scheme where I-type repre-6339 Figure 2: Illustration of our proposed CONTAINER framework based on Contrastive Learning over Gaussian Embedddings:", "Evaluation Scheme To compare with SOTA models in Few-NERD leaderboard (Ding et al., 2021), we adpot episode evaluation as done by the authors.", "Here, a model is assessed by calculating the micro-F1 score over multiple number of test episodes.", "Each episode consists of a K-shot support set and a K-shot unlabeled query (test) set to make predictions .", "While Few-NERD is explicitly designed for episode evaluation, traditional NER datasets (e.g., OntoNotes, CoNLL'03, WNUT '17, GUM) have their distinctive tag-set distributions.", "Thus, sampling test episodes from the actual test data perturbs the true distribution that may not represent the actual performance.", "Consequently, Yang and Katiyar (2020) proposed to sample multiple support sets from the original development set and use them for prediction in the original test set.", "We also use this evaluation strategy for these traditional NER datasets.", "CONTAINER utilizes contrastive learning to optimize distributional divergence between different token entity representations.", "Instead of focusing on label specific attributes, this contradistinction explicitly trains the model to distinguish between different categories of tokens.", 
"Furthermore, modeling Gaussian Embedding instead of traditional point representation effectively lets CONTAINER model the entity class distribution, which incites generalized representation of tokens.", "Finally, it lets us carefully finetune our model even with a small number of samples without overfitting which is imperative for domain adaptation.", "As demonstrated in Figure 2, we first train our model in source domains.", "Next, we finetune model representations using few-sample support sets to adapt it to target domains.", "The training and finetuning of CONTAINER is illustrated in Algorithm 1.", "Finally, we use an instance level nearest neighbor classifier for inference in test sets.", "Figure 2 shows the key components of our model.", "To generate contextualized representation of sentence tokens, CONTAINER incorporates a pretrained language model encoder PLM .", "For proper comparison against existing approaches, we use BERT (Devlin et al., 2018) as our PLM encoder.", "Thus given a sequence of n tokens [ x 1 , x 2 , . . . , x n ] , we take the final hidden layer output of the PLM as the intermediate representations h i R l .", "These intermediate representations are then channeled through simple projection layer for generating the embedding.", "Unlike SimCLR (Chen et al., 2020) that uses projected point embedding for contrastive learning, we assume that token embeddings 6340 follow Gaussian distributions.", "Specifically, we employ projection network f and f for producing Gaussian distribution parameters: i = f ( h i ) , i = ELU ( f ( h i ))+(1+ ) (2) where i R l , i R l l represents mean and diagonal covariance (with nonzero elements only along the diagonal of the matrix) of the Gaussian Embedding respectively; f and f are implemented as ReLU followed by single layer networks; ELU for exponential linear unit; and e 14 for numerical stability.", "For calculating the contrastive loss, we consider the KL-divergence between all valid token pairs in the sampled batch.", "Two tokens x p and x q are considered as positive examples if they have the same label y p = y q .", "Given their Gaussian Embeddings N ( p , p ) and N ( q , q ) , we can calculate their KL-divergence as following: DKL [ N q ||N p ] = DKL [ N ( q , q ) ||N ( p , p )] = 1 2 (cid:18) Tr( 1 p q ) + ( p q ) T 1 p ( p q ) l + log | p | | q | (cid:19) (3) Both directions of the KL-divergence are calculated since it is not symmetric.", "We first train our model in resource rich source domain having training data X tr .", "At each training step, we randomly sample a batch of sequences (without replacement) X X tr from the training set having batch size of b .", "For each ( x i , y i ) X , we obtain its Gaussian Embedding N ( i , i ) by channeling the corresponding token sequence through the model (Algorithm 1: Line 3-6).", "We find in-batch positive samples X p for sample p and subsequently calculate the Gaussian embedding loss of x p with respect to that of all other valid tokens in the batch: X p = { ( x q , y q ) X | y p = y q , p = q } (5) ( p ) = log (cid:80) ( x q ,y q ) X p exp( d ( p, q )) / |X p | (cid:80) ( x q ,y q ) X ,p = q exp( d ( p, q )) (6) In this way we can calculate the distributional divergence of all the token pairs in the batch (Algorithm 1: Line 7-10 ).", "We do not scale the contrastive loss by any normalization factor as proposed by Chen et al. 
"We first train our model in the resource rich source domain having training data X_tr.", "At each training step, we randomly sample a batch of sequences (without replacement) X ⊆ X_tr from the training set with batch size b.", "For each (x_i, y_i) ∈ X, we obtain its Gaussian Embedding N(mu_i, Sigma_i) by channeling the corresponding token sequence through the model (Algorithm 1: lines 3-6).", "We find the in-batch positive samples X_p for sample p, X_p = {(x_q, y_q) ∈ X | y_p = y_q, p ≠ q} (5), and subsequently calculate the Gaussian embedding loss of x_p with respect to all other valid tokens in the batch: l(p) = -log [ Σ_{(x_q, y_q) ∈ X_p} exp(-d(p, q)) / |X_p| ] / [ Σ_{(x_q, y_q) ∈ X, p ≠ q} exp(-d(p, q)) ] (6).", "In this way we can calculate the distributional divergence of all the token pairs in the batch (Algorithm 1: lines 7-10).", "We do not scale the contrastive loss by any normalization factor as proposed by Chen et al. (2020), since we did not find it to be beneficial for optimization.", "After training in source domains, we finetune our model using a small number of target domain support samples following a similar procedure as in the training stage.", "As we have only a few samples for finetuning, we take them in a single batch.", "When multiple few-shot samples (e.g., 5-shot) are available for the target classes, the model can effectively adapt to the new domain by optimizing the KL-divergence of Gaussian Embeddings as in Eq. 4.", "In contrast, in the 1-shot case it turns out to be challenging for models to adapt to the target class distribution.", "If the model has no prior knowledge about the target classes (either from direct training or indirectly from source domain training where the target class entities are marked as O-type), a single example might not be sufficient to deduce the variance of the target class distribution.", "Thus, for the 1-shot scenario, we optimize d(p, q) = ||mu_p - mu_q||_2^2, the squared Euclidean distance between the means of the embedding distributions.", "When the model has direct or indirect prior knowledge about the target classes involved, we still optimize the KL-divergence of the distributions similar to the 5-shot scenario.", "We demonstrate in Table 7 that optimizing with the squared Euclidean distance gives us slightly better performance in the 1-shot scenario.", "Nevertheless, in all cases with a 5-shot support set, optimizing the KL-divergence between the Gaussian Embeddings gives us the best result.", "Early Stopping: Finetuning with a small support set runs the risk of overfitting, and without access to a held-out validation set, due to data scarcity in the target domain, we cannot keep tabs on the saturation point where we need to stop finetuning.", "To alleviate this, we rely on the calculated contrastive loss and use it as our early stopping criterion with a patience of 1 (Algorithm 1: lines 16-17, 24).", "[Algorithm 1: Training and finetuning of CONTAINER. Requires training data X_tr, support data X_sup, train loss d_tr, finetune loss d_ft, f_mu, f_Sigma, and PLM. For each minibatch X sampled without replacement from X_tr, compute mu_i = f_mu(PLM(x_i)) and Sigma_i = ELU(f_Sigma(PLM(x_i))) + (1 + epsilon) for all (x_i, y_i) ∈ X (Eqs. 1-2), calculate l(i) as in Eq. 6 and update the model; the support set X_sup is then used analogously for finetuning with early stopping on the contrastive loss.]",
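Both the training and finetuning loops above minimize the in-batch loss of Eqs. (5)-(6). A sketch follows; turning the divergence d(p, q) into a similarity via exp(-d) and averaging l(p) over the batch are assumptions on our part, since the signs were lost in extraction:

```python
import torch

def container_loss(dists, labels):
    """In-batch Gaussian contrastive loss (Eqs. 5-6).
    dists: (n, n) matrix of d(p, q) for all token pairs in the batch;
    labels: (n,) gold tags. Pairs with the same tag are positives."""
    n = labels.size(0)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    off_diag = ~torch.eye(n, dtype=torch.bool, device=labels.device)
    sims = torch.exp(-dists)          # smaller divergence -> larger similarity
    losses = []
    for p in range(n):
        pos = same[p] & off_diag[p]
        if pos.any():
            num = sims[p][pos].mean()             # average over X_p
            den = sims[p][off_diag[p]].sum()      # all other tokens in batch
            losses.append(-torch.log(num / den))
    if not losses:                                 # no positive pairs in batch
        return dists.new_zeros(())
    return torch.stack(losses).mean()
```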
"After training and finetuning the network with the train and support data respectively, we extract the pretrained language model encoder PLM for inference.", "Similar to SimCLR (Chen et al., 2020), we found that the representations before the projection layers actually contain more information than the final output representation, which contributes to better performance, so the f_mu and f_Sigma projection heads are not used for inference.", "We thus calculate the representations of the test data from the PLM and find the nearest neighbor support set representation for inference (Wang et al., 2019; Yang and Katiyar, 2020).", "The PLM representation h_sup_j of each support token (x_sup_j, y_sup_j) ∈ X_sup can be calculated as in Eq. 1.", "Similarly, for test data X_test, we get the PLM representations h_test_i where x_test_i ∈ X_test.", "Here we assign x_test_i the same label as the support token that is nearest in the PLM representation space: y_test_i = y_sup_k where k = argmin_{(x_sup_k, y_sup_k) ∈ X_sup} ||h_test_i - h_sup_k||_2^2 (7).",
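Nearest neighbor inference (Eq. 7) is a one-liner over pairwise distances. A minimal sketch, assuming the PLM token representations have been precomputed as tensors:

```python
import torch

def nn_inference(test_reprs, support_reprs, support_labels):
    """Nearest neighbor inference (Eq. 7): assign each test token the label of
    the closest support token in PLM representation space (Euclidean metric;
    argmin is unchanged whether the distance is squared or not)."""
    dists = torch.cdist(test_reprs, support_reprs, p=2)  # (n_test, n_support)
    nearest = dists.argmin(dim=1)
    return support_labels[nearest]
```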
"[Table 1: Summary statistics of datasets -- OntoNotes (General, 18 classes, 76K sentences); I2B2'14 (Medical, 23, 140K); CoNLL'03 (News, 4, 20K); WNUT'17 (Social, 6, 5K); GUM (Mixed, 11, 3.5K); Few-NERD (Wikipedia, 66, 188K).]", "Viterbi Decoding: Most previous works (Hou et al., 2020; Yang and Katiyar, 2020; Ding et al., 2021) noticed a performance improvement by using CRFs (Lafferty et al., 2001), which remove false predictions to improve performance.", "Thus we also employ Viterbi decoding in the inference stage with an abstract transition distribution as in StructShot (Yang and Katiyar, 2020).", "For the transition probabilities, the transitions between the three abstract tags O, I, and I-other are estimated by counting their occurrences in the training set.", "Then, for the target domain tag-set, these transition probabilities are evenly distributed into the corresponding target distributions.", "The emission probabilities are calculated from the Nearest Neighbor Inference stage.", "Comparing the domain transfer results (Table 3) against the other tasks (Tables 2, 4, 5), we find that, interestingly, if there is no significant domain shift involved in the test data, contrastive learning allows CONTAINER to automatically extract label dependencies, obviating the requirement of an extra Viterbi decoding stage.", "Datasets: For evaluation, we use datasets across different domains: General (OntoNotes 5.0 (Weischedel et al., 2013)), Medical (I2B2 (Stubbs and Uzuner, 2015)), News (CoNLL'03 (Sang and De Meulder, 2003)), and Social (WNUT'17 (Derczynski et al., 2017)).", "We also test on GUM (Zeldes, 2017), which represents a wide variety of texts: interviews, news articles, instrumental texts, and travel guides.", "The miscellany of domains makes it a challenging dataset to work on.", "Ding et al. (2021) argue that the distribution of these datasets may not be suitable for a proper representation of few-shot capability.", "Thus, they proposed a new large scale dataset, Few-NERD, that contains 66 fine-grained entity types across 8 coarse-grained types, significantly richer than previous datasets.", "A summary of these datasets is given in Table 1.", "Baselines: We compare the performance of CONTAINER with state-of-the-art few-shot NER models on different datasets across several settings.", "We first measure the model performance on traditional NER datasets in tag-set extension and domain transfer tasks as proposed in Yang and Katiyar (2020).", "We then evaluate our model on the Few-NERD (Ding et al., 2021) dataset that is explicitly designed for few-shot NER, and compare it against the Few-NERD leaderboard baselines.", "Similar to Ding et al. (2021), we take the Prototypical Network based ProtoBERT (Snell et al., 2017; Fritzler et al., 2019; Hou et al., 2020), the nearest neighbor based metric method NNShot that leverages the locality of in-class samples in embedding space, and the additional Viterbi decoding based StructShot (Yang and Katiyar, 2020) as the main SOTA baselines.", "A common use-case of few-shot NER is that new entity types may appear in the same existing text domain.", "Thus Yang and Katiyar (2020) proposed to test tag-set extension capability using the OntoNotes (Weischedel et al., 2013) dataset.", "The eighteen existing entity classes are split into three groups: A, B, and C, each having six classes.", "Models are tested on each of these groups with a few-sample support set while being trained on the remaining two groups.", "During training, all test group entities are replaced with the O-tag.", "Since the source and destination domains are the same, the training phase will induce some indirect information about unseen target entities.", "So, during finetuning of CONTAINER, we optimize the KL-divergence between output embeddings as in Eq. 4.", "We use the same entity class splits as used by Yang and Katiyar (2020) and use bert-base-cased as the backbone encoder for all models.", "Since they could not share the sampled support sets for licensing reasons, we sampled five sets of support samples for each group and averaged the results, as done by the authors.", "We show these results in Table 2.", "We see that across the different entity groups, CONTAINER outperforms present SOTAs by up to 12.75 absolute F1 points, a substantial improvement in performance.", "In this experiment a model trained on a source domain is deployed to a previously unseen novel text domain.", "Here we take OntoNotes (General) as our source text domain, and evaluate the few-shot performance in the I2B2 (Medical), CoNLL (News), and WNUT (Social) domains as in Yang and Katiyar (2020).", "We also evaluate the performance on the GUM (Zeldes, 2017) dataset due to its particularly challenging nature.", "We show these results in Table 3.", "While all the other domains have almost no intersection with OntoNotes, target entities in CoNLL are fully contained within the OntoNotes entities, which makes it comparable to supervised learning.",
"For the few-shot setting, Ding et al. (2021) proposed two different settings: Few-NERD (INTRA) and Few-NERD (INTER).", "In Few-NERD (INTRA), the train, dev, and test sets are divided according to coarse-grained types.", "As a result, fine-grained entity types belonging to the People, Art, Product, and MISC coarse-grained types are put in the train set, the Event and Building coarse-grained types in the dev set, and ORG and LOC in the test set.", "So, there is no overlap between train, dev, and test set classes in terms of coarse-grained types.", "On the other hand, in Few-NERD (INTER) coarse-grained types are shared, although all the fine-grained types are mutually disjoint.", "Because coarse-grained types are not shared, Few-NERD (INTRA) is more challenging.", "Since the few-shot performance of any model relies on the sampled support set, the authors also released train, dev, and test splits for both Few-NERD (INTRA) and Few-NERD (INTER).", "We evaluate our model performance using these provided dataset splits and compare the performance on the Few-NERD leaderboard.", "All models use bert-base-uncased as the backbone encoder.", "As shown in Table 4 and Table 5, CONTAINER establishes new benchmark results on the leaderboard in both of these tests.", "We prudently analyze the different components of our model and justify the design choices made in the scheming of CONTAINER.", "We also examine the results discussed in Section 4, which gives some intuitions about few-shot NER in general.", "Tables 2-5 demonstrate that overall, in every scenario, CONTAINER convincingly outperforms all other baseline approaches.", "This improvement is particularly noticeable in challenging scenarios where all other baseline approaches perform poorly.", "For example, Few-NERD (INTRA) (Table 4) is a challenging scenario where the coarse-grained entity types corresponding to the train and test sets do not overlap.", "As a result, other baseline approaches face a substantial performance hit, whereas CONTAINER still performs well.", "In tag-set extension (Table 2), we see a similar performance trend: CONTAINER performs consistently well across the board.", "Likewise, in domain transfer to a very challenging unseen text domain like GUM (Zeldes, 2017), baseline models perform miserably; yet CONTAINER manages to perform consistently, outperforming SOTA models by a significant margin.", "Analyzing these results more closely, we notice that CONTAINER surpasses other baselines in almost every test, most prominently in the 5-shot cases.", "Evidently, CONTAINER is able to make better use of multiple few-shot samples thanks to distribution modeling via contrastive Gaussian Embedding optimization.", "In this context, note that StructShot actually got a marginally higher F1-score in the 1-shot CoNLL domain adaptation and 1-2 shot Few-NERD (INTER) cases.", "In CoNLL, the target classes are subsets of the training classes, so supervised learning based feature extractors are expected to get an advantage in prediction.", "On the other hand, Ding et al.
(2021) carefully tuned the hyperparameters of baselines like StructShot for best performance.", "We could also improve performance in a similar manner; however, for uniformity of the model across different few-shot settings, we use the same model architecture in every test.", "Nevertheless, CONTAINER shows comparable performance even in these cases while significantly outperforming in every other test.", "Traditional contrastive learners usually optimize the cosine similarity of point embeddings (Chen et al., 2020).", "While this has proven to work well on image data, in more challenging NLU tasks like few-shot NER it gives subpar performance.", "We compare the performance of point embeddings with Euclidean distance and cosine similarity to that of CONTAINER using Gaussian Embedding and KL-divergence in OntoNotes tag-set extension.", "We report these performances in Table 8 in the Appendix.", "Essentially, Gaussian Embedding leads to learning generalized representations during training, which are more suitable for finetuning to a few-sample target domain.", "In Appendix C, we examine this aspect by comparing the t-SNE representations from point embedding and Gaussian Embedding.", "Being a contrastive learner, CONTAINER can take advantage of an extremely small support set to refine its representations through fine-tuning.", "To closely examine the effects of fine-tuning, we conduct a case study on the OntoNotes tag-set extension task using the PERSON, DATE, MONEY, LOC, FAC, and PRODUCT target entities.", "As shown in Table 6, we see that finetuning indeed improves few-shot performance.", "Besides, the effect of finetuning is even more marked in 5-shot prediction, indicating that the CONTAINER finetuning process can make the best use of the few samples available in the target domain.", "Analyzing the results, we observe that domain transfer (Table 3) sees some good gains in performance from using Viterbi decoding.", "In contrast, tag-set extension (Table 2) and Few-NERD (Tables 4, 5) get almost no improvement from using Viterbi decoding.", "This indicates an interesting property of CONTAINER.", "During domain transfer the text domains have no overlap between the train and test sets.", "So, an extra Viterbi decoding step actually provides additional information regarding the label dependencies, giving us some nice improvement.", "Otherwise, the train and target domains have substantial overlap in both tag-set extension and Few-NERD.", "Thus the model can indirectly learn the label dependencies through in-batch contrastive learning.", "Consequently, unless there is a marked shift in the target text domain, we can achieve the best performance even without employing additional Viterbi decoding.", "Meta Learning: The idea of few-shot learning was popularized in computer vision through Matching Networks (Vinyals et al., 2016).", "Subsequently, the Prototypical Network (Snell et al., 2017) was proposed, where class prototypical representations are learned.", "Test samples are given labels according to the nearest prototype.", "Later this technique was proven successful in other domains as well.", "Wang et al. (2019), on the other hand, found simple feature transformations to be quite effective in few-shot image recognition.", "These metric learning based approaches have also been deployed in different NLP tasks (Geng et al., 2019; Bao et al., 2020; Han et al., 2018; Fritzler et al., 2019).", "Contrastive Learning: Early progress was made by contrasting positive against negative samples (Hadsell et al., 2006; Dosovitskiy et al., 2014; Wu et al., 2018).", "Chen et al.
(2020) proposed SimCLR, refining the idea of contrastive learning with the help of modern image augmentation techniques to learn robust sets of features.", "Khosla et al. (2020) leveraged this to boost supervised learning performance as well.", "In-batch negative sampling has also been explored for learning representations (Doersch and Zisserman, 2017; Ye et al., 2019).", "Storing instance class representation vectors is another popular direction (Wu et al., 2018; Zhuang et al., 2019; Misra and Maaten, 2020).", "Gaussian Embedding: Vilnis and McCallum (2014) first explored the idea of learning word embeddings as Gaussian distributions.", "Although the authors used a RANK-SVM based learning objective instead of modern deep contextual modeling, they found that embedding densities in a Gaussian space enable a natural representation of uncertainty through variances.", "Later, Bojchevski and Günnemann (2017) leveraged Gaussian Embedding in graph representation.", "Besides state-of-the-art performance, they found Gaussian Embedding to be surprisingly effective in inductive learning, generalizing to unseen nodes with little training data.", "Moreover, KL-divergence between Gaussian Embeddings allows explicit consideration of asymmetric distances, which better represent inclusion, similarity, or entailment (Qian et al., 2021) and preserve the hierarchical structures among words (Athiwaratkun and Wilson, 2018).", "Few-Shot NER: Established few-shot learning approaches have also been applied to Named Entity Recognition.", "Fritzler et al. (2019) leveraged the prototypical network (Snell et al., 2017) for few-shot NER.", "Inspired by the potency of simple feature extractors and nearest neighbor inference (Wang et al., 2019; Wiseman and Stratos, 2019) in few-shot learning, Yang and Katiyar (2020) used supervised learner based feature extractors for few-shot NER.", "Pairing them with abstract transition tag Viterbi decoding, they achieved the current SOTA results in few-shot NER tasks.", "Huang et al. (2020) proposed noisy supervised pre-training for few-shot NER.", "However, this method requires access to a large scale noisy NER dataset such as WiNER (Ghaddar and Langlais, 2017) for the supervised pretraining.", "Acknowledging the shortcomings and evaluation scheme disparity in few-shot NER, Ding et al. (2021) proposed a large scale dataset specifically designed for this task.", "Wang et al.
(2021) explored model distillation for few-shot NER.", "However, this requires access to a large unlabelled dataset for good performance.", "Very recently, prompt based techniques have also surfaced in this domain (Cui et al., 2021).", "However, the performance of these methods relies heavily on the chosen prompt.", "As noted by the authors, the performance delta can be massive (up to 19% absolute F1 points) depending on the prompt.", "Thus, in the absence of a large validation set, their applicability becomes limited in true few-shot learning (Perez et al., 2021).", "We propose a contrastive learning based framework, CONTAINER, that models Gaussian embeddings and optimizes the inter-token distribution distance.", "This generalized objective helps us model a class-agnostic feature extractor that avoids the pitfalls of prior few-shot NER methods.", "CONTAINER can also take advantage of few-sample support data to adapt to new target domains.", "Extensive evaluations on multiple traditional and recent few-shot NER datasets reveal that CONTAINER consistently outperforms prior SOTAs, even in challenging scenarios.", "While we investigate the efficacy of distribution optimization based contrastive learning in few-shot NER, it will be of particular interest to investigate its potency in other domains as well.", "We thank the ACL Rolling Review reviewers for their helpful feedback.", "We also want to thank Nan Zhang, Ranran Haoran Zhang, and Chandan Akiti for their insightful comments on the paper.", "With CONTAINER, we have achieved state-of-the-art few-shot NER performance leveraging Gaussian Embedding based contrastive learning.", "However, the overall performance is still quite low compared to supervised NER, which takes advantage of the full training dataset.", "Consequently, it is still not ready for deployment in high-stakes domains (e.g., the medical domain, I2B2 dataset), leaving a lot of room for improvement in future research." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "other", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "objective", "abstain", "abstain", "objective", "other", "other", "abstain", "abstain", "abstain" ]
[ "We present INSTAMAP , an instance-based method for learning projection-based crosslingual word embeddings.", "Unlike prior work, it deviates from learning a single global linear projection.", "INSTAMAP is a non-parametric model that learns a non-linear projection by iteratively: (1) finding a globally optimal rotation of the source embedding space relying on the Kabsch algorithm, and then (2) moving each point along an instance-specific translation vector estimated from the translation vectors of the point's nearest neighbours in the training dictionary.", "We report performance gains with INSTAMAP over four representative state-of-the-art projection-based models on bilingual lexicon induction across a set of 28 diverse language pairs.", "We note prominent improvements, especially for more distant language pairs (i.e., languages with non-isomorphic monolingual spaces).", "Induction of cross-lingual word embeddings (CLWEs) (Vulic et al., 2011; Mikolov et al., 2013; Xing et al., 2015; Smith et al., 2017; Artetxe et al., 2018) has been one of the key mechanisms for enabling multilingual modeling of meaning and facilitating cross-lingual transfer for downstream NLP tasks.", "Even though CLWEs are recently being contested in cross-lingual downstream transfer by pretrained multilingual language models (Pires et al., 2019; Conneau et al., 2020; Artetxe et al., 2019; Wu and Dredze, 2019; Wu et al., 2020), they are still paramount in word-level translation, that is, bilingual lexicon induction (BLI).", "While earlier work focused on joint induction of multilingual embeddings from multilingual corpora, relying on word(Klementiev et al., 2012; Kocisk`y et al., 2014; Gouws and Sgaard, 2015), sentence(Zou et al., 2013; Hermann and Blunsom, 2014; Luong et al., 2015; Coulmance et al., 2015; Levy et al., 2017), or document-level (Sgaard et al., 2015; Mogadala and Rettinger, 2016; Vulic and Moens, 2016) alignments, most recent efforts focus on post-hoc alignment of independently trained monolingual embeddings (the so-called projection or mapping approaches) (Smith et al., 2017; Artetxe et al., 2018; Conneau et al., 2018; Joulin et al., 2018; Patra et al., 2019, inter alia ).", "Despite some recent evidence that joint CLWE induction may lead to better bilingual spaces (Or-mazabal et al., 2019), projection-based methods still dominate the field (Hoshen and Wolf, 2018; Ruder et al., 2018; Nakashole, 2018; Grave et al., 2019; Zhang et al., 2019, inter alia ) due to their conceptual attractiveness: they operate on top of vectors produced with any embedding model and need at most a few thousand word pairs of supervision (Glavas et al., 2019; Vulic et al., 2019).", "Most projection-based CLWE models induce bilingual spaces by orthogonally projecting one monolingual space to another.", "Since orthogonal projections do not affect the topology of the source space, the performance of these methods is bound by the degree of isomorphism of the two monolingual spaces.", "Yet, evidence suggests that monolingual spaces, especially those of etymologically and typologically distant languages, are far from isomorphic (Sgaard et al., 2018; Vulic et al., 2019; Patra et al., 2019).", "What is more, unsupervised CLWE models (Conneau et al., 2018; Artetxe et al., 2018; Alvarez-Melis and Jaakkola, 2018; Hoshen and Wolf, 2018, inter alia ), which additionally exploit the isomorphism assumption when inducing initial translation dictionaries, have been shown to yield near-zero BLI results for pairs of distant languages (Sgaard et 
al., 2018; Vulic et al., 2019).", "Following these theoretical limitations of effectiveness of orthogonal mapping between non-isomorphic spaces, Joulin et al. (2018) and Patra et al. (2019) relax the orthogonality constraint and report BLI improvements.", "These models, however, still learn only a linear transformation, i.e., an oblique projection matrix.", "While oblique projections may scale or skew the source space, there still exists a strong topological similarity between the original space and its oblique projection.", "In this work, we deviate from learning a linear projection matrix (i.e., a parametric model) and propose a non-parametric model which translates vectors by estimating instance-specific geometric translations.", "Our method, INSTAMAP , iteratively (1) applies the Kabsch algorithm (Horn, 1987) on the full training dictionary to learn a globally optimal rotation of the source space w.r.t. the target space; and then (2) translates each point along the instance-specific translation vector, which we compute from the translation vectors of the point's nearest neighbours from the training dictionary.", "We extensively evaluate INSTAMAP on the benchmark BLI dataset (Glavas et al., 2019) encompassing 28 diverse language pairs.", "Our results show the non-linear mappings with INSTAMAP to be substantially more robust than linear projections, both orthogonal (Smith et al., 2017; Artetxe et al., 2018) and oblique (Joulin et al., 2018; Patra et al., 2019).", "We also show that, unlike INSTAMAP , oblique projection models RCSLS (Joulin et al., 2018) and BLISS (Patra et al., 2019) cannot surpass the performance of the best-performing orthogonal projection model VecMap (Artetxe et al., 2018) for distant languages (i.e., for low isomor-phicity).", "Finally, we report additional significant gains by applying INSTAMAP on top of VecMap.", "The core idea of INSTAMAP is illustrated in Figure 1.", "We iteratively: (1) use the entire training dictionary to learn a single global rotation matrix and then (2) perform an instance-based computation of translation vectors.", "Let X and Y be monolingual embedding spaces of the source and target language, respectively, and let D = { ( w iL 1 , w iL 2 ) } , i = 1 . . . 
N , be the training dictionary.", "We first transform each of the two spaces by (independently) performing a full PCA transformation (i.e., no dimensionality reduction): this way we represent vectors in each of the spaces as combinations of linearly uncorrelated principal components of that space, which facilitates the learning of the optimal rotation between the spaces.", "Let XD = { x iL 1 } Ni =1 X and YD = { y iL 2 } Ni =1 Y be the dictionary-aligned subsets of the two monolingual spaces.", "We aim to learn the optimal rotation matrix between X and Y , i.e., the matrix WR that minimizes the sum of square distances between the source vector projections and corresponding target vectors, WR = arg min W (cid:107) XDW YD (cid:107) .", "If we constrain WR to be orthogonal, the optimal solution is obtained by solving the Procrustes problem (Schonemann, 1966) adopted by most projection-based CLWE models (Smith et al., 2017; Conneau et al., 2018; Artetxe et al., 2018).", "However, our aim is to avoid introducing the orthogonal constraint and learn only the optimal rotation between the spaces.", "To this end, we use the Kabsch algorithm (Horn, 1987), which computes the optimal rotation matrix WR as follows: WR = V IRUT , with (1) UV T = SVD ( XTDYD ) , (2) where IR is a modification of the identity matrix, in which the last element (i.e., last row, last column) is not 1 , but rather the determinant of VUT .", "Upon obtaining WR , we rotate X w.r.t. Y , XR = XWR .", "We then perform localized, instance-specific translations in a rotationally-aligned bilingual space.", "For each point from both XR and Y , we compute a personalized translation vector, as the weighted average of the translation vectors of its closest dictionary entries.", "That is, for some vector x XR let x 1 , . . . , x K be the set of K vectors from XDWR (corresponding to words w 1 L 1 , w 2 L 1 , . . . , w KL 1 in D ) which are closest to x in terms of cosine similarity and let y 1 , y 2 , . . . , y K be the vectors of the corresponding dictionary translations w 1 L 2 , w 2 L 2 , . . . , w KL 2 from D from the target language space.", "We then compute the instance-based translation of x , x (cid:48) , as follows: x (cid:48) = x + (cid:80) Kk =1 cos( x , x k ) ( y k x k ) (cid:80) Kk =1 cos( x , x k ) (3) We perform an instance-specific translation of the vectors from Y analogously.", "Let y 1 , . . . 
, y K be the set of vectors from YD that are closest to some vector y Y .", "The translation y (cid:48) is then as follows: y (cid:48) = y (cid:80) Kk =1 cos( y , y k ) ( y k x k ) (cid:80) Kk =1 cos( y , y k ) (4) dog lazy laufen Hund run faul dog lazy run faul Hund laufen dog/Hund lazy/faul run/laufen", "Because we compute a different translation vector for each point in both vector spaces, the final mapping function between the two spaces is globally non-linear.", "Also, being based on K nearest neighbours in the training dictionary D , INSTAMAP is, unlike all other projection-based CLWE models, a non-parametric model (i.e., the number of model parameters is not fixed, it depends on the number of entries in the training dictionary D ).", "We repeat the two steps global rotation and instance-based translation aiming to obtain an iterative refinement of the non-linear mapping between the two spaces.", "Following the established practice found in other iterative models (Conneau et al., 2018; Artetxe et al., 2018), we augment the training dictionary for the next iteration with the mutual nearest neighbours in the bilingual space induced in the previous iteration.", "Intuitively, with INSTAMAP being a non-parametric model, we expect it to benefit more from the dictionary augmentation than the parametric projection models, which have been shown to saturate in performance when training dictionaries exceed 5K-10K translation pairs (Vulic and Korhonen, 2016; Glavas et al., 2019).", "Data.", "We evaluate on the BLI benchmark dataset introduced by Glavas et al. (2019), containing 28 pairs between eight diverse languages: English (EN), German (DE), Italian (IT), French (FR), Russian (RU), Croatian (HR), Turkish (TR), and Finnish (FI).", "1 Comprising both close and distant language pairs, this dataset allows us to compare model performance in settings with varying degree of isomorphism between monolingual spaces.", "We start from monolingual FastText vectors trained on Wikipedias of respective languages, 2 with vocabularies trimmed to the 200K most frequent words.", "Baselines.", "We compare INSTAMAP to the baseline orthogonal projection solution to the Procrustes problem (PROC ), and three state-of-the-art projection-based models: (1) VecMap (Artetxe et al., 2018) emerged in recent comparative evaluations (Glavas et al., 2019; Vulic et al., 2019) as the best-performing orthogonal-projection model; (2) RCSLS (Joulin et al., 2018) learns an oblique (i.e., non-orthogonal) projection and yields best performance overall in a recent comparative evaluation (Glavas et al., 2019); (3) BLISS (Patra et al., 2019) combines an orthogonal projection objective with an objective based on adversarial learning, inducing a weakly-orthogonal projection matrix.", "Model Variants and Hyperparameter Tuning.", "We evaluate two variants of INSTAMAP : (1) the base model is applied directly on unaligned monolingual vector spaces; (2) IM VM is the variant in which we apply INSTAMAP on top of the bilingual space induced with VecMap (Artetxe et al., 2018): because VecMap induces an orthogonal projection, the topologies of the monolingual subspaces of the VecMap bilingual space are preserved compared to 1 We use the training dictionaries with 5K instances.", "respective original monolingual spaces this holds promise of no undesirable side-effects originating from the composition.", "INSTAMAP has only two hy-perparameters: 3 the number of nearest neighbours K from D , and the number of algorithm iterations T .", "We identified, 
via fixed-split cross-validation on the training dictionaries, that configuration K = 70 and T = 4 works best for most language pairs.", "4 3.2 Results We show BLI performance ( P @1 ), aggregated over several different sets of language pairs, in Table 1.", "5 Overall, INSTAMAP significantly outperforms all competing models 6 Somewhat surprisingly, VecMap, which induces an orthogonal projection (i.e., more strongly relies on the assumption of isomorphism), significantly outperforms RCSLS and BLISS, models that relax the orthogonality constraint and induce oblique linear projections.", "Only INSTAMAP , by removing the constraint of having a global linear projection altogether and by inducing a non-linear mapping, is able to consistently yield improvements over the orthogonal projection (VecMap).", "What is more, the IM VM composition yields even larger performance gains.", "Analysis of results across different groups of language pairs identifies INSTAMAP as particularly beneficial for pairs of distant languages (setups No-EN and HARD) and languages with least reliable monolingual vectors (TR, HR).", "For example, 3 Competing models VecMap, RCSLS, and BLISS all come with much larger sets of hyperparameters.", "language pairs in the supplemental material.", "6 Non-parametric shuffling test (Yeh, 2000) with the Bon-ferroni correction: < 0 .", "05 in comparison with VecMap and < 0 .", "01 in comparison with other models.", "while INSTAMAP alone and IM VM yield gains of 0.9 and 2.6 points, respectively, w.r.t. VecMap across ALL language pairs, these gaps widen to 1.5 and 3.5 points on most challenging language pairs (HARD).", "In contrast, BLISS, a model specifically tailored to improve the mappings between non-isomorphic spaces, appears to be robust only on pairs of close languages (e.g., HR-RU) and pairs involving EN (setup EN-*).", "It exhibits barely any improvement over the baseline orthogonal projection (PROC ) on distant language pairs (HARD) and a significant degradation w.r.t. 
VecMap, a state-of-the-art model based on orthogonal projection.", "RCSLS is more robust than BLISS on difficult language pairs, but still performs worse than VecMap.", "Further Analysis.", "We further analyze the performance of INSTAMAP (applied on top of VecMap) with respect to: (1) size of the training dictionary | D | and (2) number of nearest dictionary neighbours K .", "We analyze the performance of IM VM for three language pairs with lowest BLI scores: DE-TR, TR-FI, and TR-HR.", "We prepare dictionaries with 2.5K to 12.5K entries (with a 2.5K step), following steps described in (Glavas et al., 2019).", "7 Figure 2 shows the performance for different training dictionary sizes.", "We can see that adding INSTAMAP on top of VecMap yields stable improvements for all dictionary sizes.", "On the one hand, this shows that INSTAMAP is equally helpful for any number of available word translations.", "On the other hand, since InstaMap is not constrained to learning a single global projection, we hoped to see bigger gains for larger dictionaries, but this is not the case.", "With larger dictionaries, we are 7 We translate 20K most frequent EN words to DE, TR, FI, and HR and keep for each language pair only word pairs (1) found in respective monolingual FastText vocabularies, (2) not present in the 2K test dictionaries from (Glava s et al., 2019).", "more likely to find more semantically similar dictionary neighbours for each word and this should lead to better performance.", "We speculate, however, that larger dictionaries also increase the likelihood of selecting spurious neighbours due to hubness (Dinu et al., 2015; Conneau et al., 2018) and that this cancels out the positive effect promised by having more candidates to choose the neighbours from.", "This could perhaps be remedied by using hubness-aware similarity scores like CSLS (Conneau et al., 2018) instead of simple cosine similarity.", "Figure 3 illustrates how INSTAMAP performance (on top of VecMap, i.e., IM VM) varies with different values for the number of dictionary neighbours K .", "The best performance is typically reached for values of K between 50 and 90 and there are no further improvements for larger values of K (TR-FI, where K = 130 gives the best score, is an exception).", "For very small K performance drops are substantial and here INSTAMAP even degrades the quality of the input space produced by VecMap.", "We believe this happens because INSTAMAP in this case has too few dictionary neighbours to accurately model the meaning of any given word and, in turn, compute a reliable mapping vector.", "We have proposed INSTAMAP , a simple and effective approach for improving the post-hoc cross-lingual alignment between non-isomorphic monolingual embedding spaces.", "Unlike existing projection-based CLWE induction models, which learn a global linear projection matrix, INSTAMAP couples global rotation with instance-specific translations.", "This way, we learn a globally non-linear projection.", "Our experiments show that (1) INSTAMAP significantly outperforms four state-of-the-art projection-based CLWE models on a benchmark BLI dataset with 28 language pairs and (2) that it yields largest improvements for pairs of distant languages with a lower degree of isomorphism between their respective monolingual spaces.", "We plan to extend this work in two directions.", "First, we will explore mechanisms for instance-specific translation that are more sophisticated than the aggregation of translation vectors of nearest dictionary neighbours.", 
"Second, we plan to couple instance-based mapping with other informative features (e.g., character-level features) in classification-based BLI frameworks (Heyman et al., 2017; Karan et al., 2020).", "The INSTAMAP code is available at: https://github.com/codogogo/instamap .", "We thank the anonymous reviewers for their insightful suggestions.", "GG is supported by the Eliteprogramm of the Baden-Wurttemberg Stiftung (AGREE grant).", "IV is supported by the ERC Consolidator Grant LEXICAL (no 648909)." ]
[ "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "objective", "method", "method", "result", "result", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "result", "objective", "objective", "abstain", "other", "other", "other", "other" ]
[ "We propose a novel approach using representation learning for tackling the problem of extracting structured information from form-like document images.", "We propose an extraction system that uses knowledge of the types of the target fields to generate extraction candidates, and a neural network architecture that learns a dense representation of each candidate based on neighboring words in the document.", "These learned representations are not only useful in solving the extraction task for unseen document templates from two different domains, but are also interpretable, as we show using loss cases.", "In this paper, we present a novel approach to the task of extracting structured information from form-like documents using a learned representation of an extraction candidate.", "Form-like documents like invoices, purchase orders, tax forms and insurance quotes are common in day-to-day business work-flows, but current techniques for processing them largely still employ either manual effort or brittle and error-prone heuristics for extraction.", "The research question motivating our work is the following: given a target set of fields for a particular domain e.g., due date and total amount for invoices along with a small set of manually-labeled examples, can we learn to extract these fields from unseen documents?", "Take, for instance, the domain of invoices, a document type that large enterprises often receive and process thousands of times every week (iPayables, 2016).", "Invoices from different vendors often present the same types of information but with different layouts and positioning.", "Figure 1 shows the headers of invoices from a few different vendors Work done during an internship at Google Research Figure 1: Excerpts from sample invoices from different vendors.", "showing the invoice date (highlighted in green) and number in different layouts.", "Furthermore, invoices from the same supplier even share similar presentation and differ only in specific values.", "We refer to this unit of visual pattern that is similar across a collection of documents as a template , and the fields of information that are common across templates in a domain as the schema .", "The schema consists of fields like invoice_date and total_amount , each associated with a type like date and currency .", "Extracting values for these fields from a given document, particularly one belonging to an unseen template, is a challenging problem for many reasons.", "In contrast to most prior work on information extraction (Sarawagi, 2008), templatic documents do not contain much prose.", "Approaches that work well on natural text organized in sentences cannot be applied directly to such documents where spatial layout elements like tables and grid formatting are commonplace.", "Understanding spatial relationships is critical for achieving good extraction performance on such documents.", "Moreover, these documents are usually in PDF or scanned image formats, so these presentation hints are not explicitly available in a markup language.", "Techniques that are successful on HTML documents such as 6496 web pages, including traditional wrapper induction approaches (Dalvi et al., 2011), are therefore not immediately applicable.", "Recently, there has been a surge in research interest in solving this extraction task adapting techniques in natural language processing (Liu et al., 2019), computer vision (Davis et al., 2019), or combinations thereof (Katti et al., 2018).", "In contrast to this body of work, we propose an approach 
based on representation learning for this task.", "We first generate extraction candidates for each target field using its associated type (e.g., all dates as candidates for invoice_date ).", "We then use a neural network model to learn a dense representation for each extraction candidate independent of the field to which it belongs.", "We also learn a separate representation for the field itself, and use the similarity between the candidate and field representations to score the candidate according to how likely it is to be the true extraction value for that field.", "The design of our extraction system rests on a few observations about how information is often laid out in form-like documents (see Section 2).", "An advantage of our representation learning approach is that it allows us to encode certain priors we developed based on these observations into the architecture of the neural network and its input features (see Section 4).", "In fact, our experiments show that our proposed neural architecture outperforms a more naive MLP baseline using the same input features by about 10 F1 points on the extraction task for two different domains (see Section 6).", "Furthermore, the learned candidate representations are also meaningful and lend themselves to interpretation, as we show by delving into some loss cases.", "Observation 1 Each field often corresponds to a well-understood type .", "For example, the only likely extraction candidates for the invoice_date field in an invoice are instances of dates.", "A currency amount like $25.00 would clearly be incorrect.", "Since there are orders of magnitude fewer dates on an invoice as there are text tokens, limiting the search space by type dramatically simplifies the problem.", "Consequently, we use a library of detectors for several common types such as dates, currency amounts, integers, address portals, emails addresses, etc. 
to generate candidates.", "Observation 2 Each field instance is usually associated with a key phrase that bears an apparent visual relationship with it.", "Consider the invoice excerpt in Figure", "1(c).", "It contains two date instances, only one of which is the true invoice_date , as indicated by the word Date next to it.", "Similarly, in the bottom-right invoice excerpt, we are easily able to distinguish between the invoice number (indicated by Invoice #) and the purchase order number (indicated by PO #).", "We call such indicative words key phrases .", "Proximity is not the only criterion that defines a key phrase.", "For instance, the word Date is not the nearest one to the true invoice_date instance in Figure", "1(c); the document number in the line above and the page number below are clearly closer.", "It is also not the case that the key phrase always occurs on the same line; Figure", "1(a) shows a case where the key phrase DATE occurs just above the true invoice_date .", "An effective solution needs to combine the spatial information along with the textual information.", "Fortunately, in our experience, these spatial relationships exhibit only a small number of variations across templates, and these tend to generalize across fields and domains.", "Observation 3 Key phrases for a field are largely drawn from a small vocabulary of field-specific variants.", "In a corpus of invoices we collected, we observed that, as exemplified by the samples in Figure 1, about 93% of the nearly 8400 invoice date instances were associated with key phrases that included the words date or dated and about 30% included invoice.", "Only about 7% of invoice dates had neither of these words in their key phrases.", "Similarly, 87% of the nearly 2800 due_date instances in our corpus had key phrases that contained the word due and 81% contained date.", "We found similar patterns for all other fields we investigated.", "The fact that there are only a small number of field-specific key phrases suggests that this problem may be tractable with modest amounts of training data.", "While these observations are applicable to many fields across different document types, there are several exceptions which we plan to tackle in future work.", "We leveraged the observations laid out in Section 2 to build a system to solve the information extraction task for form-like documents.", "Given a document 6497 and a target schema, we generate extraction candidates for each field from the document text using the field type.", "We then score each candidate independently using a neural scoring model.", "Finally, we assign at most one scored candidate as an extraction result for each field.", "We discuss the stages of this pipeline here, and delve into the architecture of the scoring model in Section 4.", "Our system can ingest both native digital as well as scanned documents.", "We render each document to an image and use a cloud OCR service 1 to extract all the text in it.", "The text in the OCR result is arranged in the form of a hierarchy with individual characters at the leaf level, and words, paragraphs and blocks respectively in higher levels.", "The nodes in each level of the hierarchy are associated with bounding boxes represented in the 2D Cartesian plane of the document page.", "The words in a paragraph are arranged in reading order, as are the paragraphs and blocks themselves.", "In Section 2, we made the observation that fields in our target schema correspond to well-understood types like dates, integers, currency 
amounts, addresses, etc.", "There are well-known techniques to detect instances of these types in text, ranging from regular expression matching and heuristics to sequence labeling using models trained on web data.", "We associate each field type supported by our system with one or more candidate generators.", "These generators use a cloud-based entity extraction service 2 to detect spans of the OCR text extracted from the documents that are instances of the corresponding type.", "For example, every date in an invoice becomes a candidate for every date field in the target schema, viz. invoice_date , due_date and delivery_date .", "Since the recall of the overall extraction system cannot exceed that of the candidate generators, it is important that their recall be high.", "Precision is, however, largely the responsibility of the scorer and assigner.", "Given a set of candidates from a document for each field in the target schema, the crux of the extraction", "task is to identify the correct extraction candidate (if any) for each field.", "While there are many approaches one could take to solve this problem, we made the design choice to break it down to two steps: first, we compute a score [0 , 1] for each candidate independently using a neural model, then we assign to each field the scored candidate that is most likely to be the true extraction for it.", "This separation of scoring and assignment allows us to learn a representation for each candidate based only on its neighborhood, independently of other candidates and fields.", "It also frees us to encode arbitrarily complex business rules into the assigner if required, for example, that the due date for an invoice cannot (chronologically) precede its invoice date, or that the line item prices must sum up to the total.", "For brevity, we omit the details of the assignment module and report results using a simple assigner that chooses the highest-scoring candidate for each field independently of other fields.", "The scoring module takes as input the target field from the schema and the extraction candidate to produce a prediction score [0 , 1] .", "While the downstream assignement module consumes the scores directly, the scorer is trained and evaluated as a binary classifier.", "The target label for a candidate is determined by whether the candidate matches the ground truth for that document and field.", "An important desideratum for us in the design of the scorer is that it learns a meaningful candidate representation.", "We propose an architecture where the model learns separate embeddings for the candidate and the field it belongs to, and where the similarity between the candidate and field embeddings determines the score.", "We believe that such an architecture allows a single model to learn candidate representations that generalize across fields and document templates.", "We can conceptualize the learned representation of a candidate as encoding what words in its neighborhood form its associated key phrase since, apropos Observation 2, the spatial relationships between candidates and their key phrases are observed to generalize across fields.", "On the other hand, the embedding for a field can be conceptualized as encoding the key phrase variants that are usually indicative of it, apropos Observation 3.", "We would like our model to learn a representation of a candidate that captures its neighborhood.", "Accordingly, the essential features of a candidate are the text tokens that appear nearby, along with their positions.", "We use a 
simple heuristic to determine what OCR text tokens we consider to be the neighbors of a given candidate: we define a neighborhood zone around the candidate extending all the way to the left of the page and about 10% of the page height above it.", "Any text tokens whose bounding boxes overlap by more than half with the neighborhood zone is considered to be a neighbor.", "As shown in Figure 2, we represent the position of a candidate and each of its neighbors using the 2-D Cartesian coordinates of the centroids of their respective bounding boxes.", "These coordinates are normalized by dividing by the corresponding page dimensions so that the features are independent of the pixel resolution of the input documents.", "We calculate the relative position of a neighbor as the difference between its normalized 2-D coordinates and those of the candidate.", "An additional feature we found to be helpful is the absolute position of the candidate itself.", "An important design choice we made is to not incorporate the candidate text into the input.", "Note that this text was already the basis for generating the candidate in the first place.", "Withholding this information from the input to the model avoids accidental overfitting to our somewhat-small training datasets.", "For instance, since the invoices we collected were all dated prior to 2019, it is possible that providing the date itself as input to the model could cause it to learn that true invoice_date instances always occur prior to 2019.", "As shown in Figure 3", "(a)-(d), we embed each of the candidate features separately in the following ways.", "Each neighbor relative position is embedded through a nonlinear positional embedding consisting of two ReLU-activated layers with dropout.", "This nonlinear embedding allows the model to learn to resolve fine-grained differences in position, say between neighbors sharing the same line as the candidate and those on the line above.", "The candidate position feature is embedded using just a linear layer.", "We also use an embedding table for the field to which a candidate belongs.", "In a model with embedding dimension d , the sizes of each neighbor's word and position embeddings are set to be d .", "We experimented with different sizes for the word and position embeddings, but it did not make a significant difference.", "For simplicity of exposition, we use the same value for both.", "Since each candidate is padded to have the same number of neighbors, say N , we denote the neighbor embeddings { h 1 , h 2 , . . . 
, h N } , with each h i R 2 d .", "We also set the sizes of the candidate position embedding as well as the field embedding to be d .", "Neighbor Encodings It is important to note that the initial neighbor embeddings h i (Figure 3", "(d)) are independent of each other.", "In order to capture interactions between neighbors, we employ self-attention (Vaswani et al., 2017), allowing each neighbor to have its embedding affected by all others.", "This is useful, for example, for the model to downweight a neighbor that has other neighbors between itself and the candidate.", "We pack the neighbor embeddings h i into a matrix H RN 2 d , then transform these em-6499 bdeddings into query, key and value embeddings through three different linear projection matrices W q , W k and W v R 2 d 2 d .", "q i = h i W q K = HW k V = HW v For each neighbor i , its query embedding q i and the key embeddings K are used to obtain the attention weight vector i RN as follows.", "The self-attended neighbor encoding h i R 2 d (see Figure", "3(e)) for neighbor i is a linear combination of the value embeddings, V RN 2 d , using the above attention weights for all the neighbors h i = i V .", "As in Vaswani et al. (2017), we use a normalization constant of 2 d to improve stability.", "We project the self-attended neighbor encodings to a larger 4 2 d dimensional space using a linear projection with ReLU nonlinearity, and then project them back to 2 d .", "We combine the N neighbor encodings of size 2 d each to form a single encoding of size 2 d for the entire neighborhood.", "Since we already capture information about the relative positions of the neighbors with respect to the candidates in the embeddings themselves, it is important to ensure that the neighborhood encoding is invariant to the (arbi-trary) order in which the neighbors are included in the features.", "Our experiments indicate that max-pooling the neighbor encodings together was the best strategy, slightly beating out mean-pooling.", "Next, we obtain a candidate encoding (see Figure 3(f, h, i)) by concatenating the neighborhood encoding R 2 d with the candidate position embedding R d and projecting (through a ReLU-activated linear layer) back down to d dimensions.", "Candidate Scoring The candidate encoding is expected to contain all relevant information about the candidate, including its position and its neighborhood.", "By design, it is independent of the field to which said candidate belongs.", "This neural network is, however, trained as a binary classifier to score a candidate according to how likely it is to be the true extraction value for some field and document.", "Drawing inspiration from prior work in metric learning (Kulis, 2013), given a field with embedding f R d and its candidate with encoding c Corpus Split # Docs # Templates Invoices1 Train 11,390 11,390 Validation 2,847 2,847 Invoices2 Test 595 595 Receipts Train 237 141 Validation 71 47 Test 170 46 Table 1: Invoices and Receipts corpora R d , we compute CosineSimilarity( c , f ) [ 1 , 1] .", "Finally, the model's prediction is simply a (con-stant) linear rescaling of this similarity so that the scores lie in [0 , 1] .", "The model is trained using binary cross entropy between this prediction and the target label as the loss function.", "Intuitively, this architecture ensures that the positive candidates for a field cluster together near its field embedding, and that these clusters are set far apart from each other.", "We use TSNE (Maaten and Hinton, 2008) to visualize this phenomenon in 
Section 6.2.", "To analyze the performance of our model, we used datasets belonging to two different domains, summarized in Table", "1. Invoices We collected two corpora of invoices from different sources.", "The first corpus, Invoices1, contains 14,237 single-page invoices.", "Each invoice was from a different vendor, so the documents do not share any common templates.", "Documents from the same vendor are generated from the same template.", "The second corpus, Invoices2, contains 595 documents belonging to different templates, with no templates in common with Invoices1.", "In all of our experiments, we used a 60-40 split of templates in Invoices1 as our training and validation sets, and all the templates in Invoices2 as our test set.", "We asked human annotators to provide us ground truth extraction results for the fields shown in Table", "2. The candidate generator associated with each field type was used to generate examples, which were then labeled using the ground truth.", "About 95% of documents and fields present the training set had at least one positive example produced by our candidate generators.", "The field-level recall of our candidate generators varies from about 87% for invoice_id to about 99% for invoice_date .", "Improving the recall of candidate generators is part of our ongoing effort.", "While the candidate generators have reasonably high recall, their precision varies dramatically from field to field.", "For common fields like invoice_date and total_amount that are present in nearly all documents, we generate fewer than ten negatives for each positive example.", "On the other hand, for rare fields like total_tax_amount as well as for fields with low-precision candidate generators such as the alphanum candidate generator for purchase_order , there can sometimes be dozens of negatives for each positive.", "Overall, since the negatives far outnumber the positives, we found it helpful to randomly downsample negatives in the training set to keep at most 40 negatives for each positive per field.", "The negatives in the validation and test sets were not downsampled.", "We created a vocabulary of the 512 most frequent tokens, case-normalized, taken from the OCR text of the documents in Invoices1.", "The vocabulary also includes special tokens for numbers ( [NUMBER] ), out-of-vocabulary tokens ( [RARE] ) and padding ( [PAD] ).", "Despite the small size of this vocabulary, it covered at least 95% of words that occurred in key phrases across the entire corpus where excluded words were usually OCR errors.", "Receipts We also evaluated our model using a publicly-available corpus of scanned receipts published as part of the ICDAR 2019 Robust Reading Challenge on Scanned Receipts OCR and Information Extraction 3 .", "This corpus contains 626 receipt images with ground truth extraction results for four fields, viz., address , company , date and total .", "Using the company annotation as the template mapping, we found that these documents belong to 234 templates.", "The largest template contains 46 receipts and about half the documents belong to 13 templates with more than 10 documents each.", "On the other hand, nearly 70% of templates only have a single document.", "In all of our experiments, we used a 60-20-20 split of templates as our training, validation and test sets respectively, sampling at most 5 documents from each template.", "Our target schema for this extraction task consists of the date and total fields.", "We generated labeled examples for these two fields using a 
vocabulary created as above from the 512 most frequent terms in the OCR text of the receipts.", "The fields in this dataset did not suffer from the label imbalance problem highlighted above for invoices.", "In this section, we evaluate our scoring model with respect to our two key desiderata.", "First, in Section 6.1, we show that our model is able to help the extraction system generalize to unseen templates.", "Then, in Section 6.2, we probe the model to show that it learns meaningful internal representations.", "In the experiments described below, we trained models using the Rectified Adam (Liu et al., 2020) optimizer with a learning rate of 0.001 for 50 epochs.", "For both the Invoices and Receipts datasets described in Section 5, we used the training split to train the model, the validation split to pick the model with the best hold-out loss, and the test split to report performance metrics.", "We measured the performance of our model's scoring predictions using ROC AUC on the test split.", "We also analyzed its performance in the context of the overall extraction system using the accuracy of the end-to-end extraction results as measured by the maximum F1 score over all decision thresholds, averaged across all fields in the target schema shown in Table", "2. To demonstrate the benefits of our proposed neural architecture over a naive approach, we use two different baseline models for encoding a candidate and scoring it.", "The bag-of-words BoW baseline incorporates only the neighboring tokens of a candidate, but not their positions.", "The MLP baseline uses the same input features as our proposed model, including the relative positions of the candi-date's neighbors, and encodes the candidate using 3 hidden layers.", "Both these baselines follow our representation learning approach, encoding the candidate and the field separately.", "Just as in our model, the final score is the cosine distance between the candidate and field encodings, normalized to [0 , 1] using a sigmoid.", "We chose the dimension size for each model architecture using a grid-based hyperparameter search.", "All the metrics we report were obtained from performing 10 training runs and picking the model with the best validation ROC AUC.", "Table 2 summarizes the results of this performance comparison.", "On both our evaluation datasets, our model showed a significant improvement over the baselines by both metrics.", "For the invoice corpus, our model outperforms the BoW baseline by about 1 point in the scorer ROC AUC, 6501 Corpus Field Field Type Train Test Scorer ROC AUC End-to-End Max F1 # +ves % +ves BoW MLP Ours BoW MLP Ours I nvo i ce s amount_due currency 5,930 4.8% 0.967 0.968 0.973 0.800 0.789 0.801 due_date date 5,788 12.9% 0.977 0.973 0.984 0.835 0.850 0.861 invoice_date date 13,638 57.4% 0.983 0.986 0.986 0.933 0.939 0.940 invoice_id alphanum 13,719 6.8% 0.983 0.988 0.993 0.913 0.937 0.949 purchase_order alphanum 13,262 2.2% 0.959 0.967 0.976 0.826 0.851 0.896 total_amount currency 8,182 12.5% 0.966 0.972 0.980 0.834 0.849 0.858 total_tax_amount currency 2,949 7.5% 0.975 0.967 0.980 0.756 0.812 0.839 Macro-average -14.9% 0.973 0.974 0.982 0.842 0.861 0.878 R ece i p t s date date 258 85.5% 0.748 0.792 0.737 0.885 0.885 0.854 total currency 475 16.7% 0.834 0.796 0.889 0.631 0.607 0.813 Macro-average -51.1% 0.791 0.794 0.813 0.758 0.746 0.833 Table 2: Performance on the test set of unseen templates for Invoices and Receipts.", "which translates to about 3.6 points improvement in the end-to-end Max F1.", "In 
fact, our model beats the baseline in every field in our invoice target schema as well.", "This difference in performance clearly demonstrates the need to incorporate token positions to extract information accurately from form-like documents.", "Using neighbor position information, the MLP baseline is able to outperform the BoW baseline as well, but the improvement in end-to-end Max F1 is only about 2 points.", "This result demonstrates that our proposed architecture is better able to encode position information than a naive MLP.", "Similarly, for the receipt corpus also, our model outperforms both the baselines.", "The improvement is much larger for the total field, more than 20 points.", "For the date field, since there are too few negative candidates in the dataset, all the models have comparable performance end-to-end.", "A close examination of the per-field performance metrics in Table 2 reveals that model performance is greatly affected by both the number of positive training candidates, as well as by the ratio of positives to negatives.", "The best performance is observed for fields that occur frequently in invoices (e.g., invoice_id ) and where the candidate generator emits only a small number of negatives for each positive (e.g., invoice_date ).", "Conversely, the fields that are hardest to extract are those that are relatively rare and have low-precision candidate generators, viz., amount_due and total_tax_amount .", "We also studied our model performance over various ablation setups and found that the relative order in which various features influence generalization performance is: neighbor text > candidate position > neighbor position.", "This result is also borne out by the fact that the BoW baseline, which omits the last of these features, is quite competitive with the other approaches.", "We also compared the performance of our proposed architecture with and without the self-attention layer applied to the neighbor encodings.", "We found that self-attention contributes greatly to model performance for the invoice corpus: not only did self-attention lead to a 1-point improvement in scorer ROC AUC and a 1.7 point improvement in end-to-end max F1, we also observed an improvement in every single field in our invoice schema.", "We investigated the internal representations learned by our model by visualizing their 2-D projections using TSNE.", "Figure", "4(a) shows the representations learned for date candidates.", "They are colored based on the ground truth data indicating if they belong to one of invoice_date , due_date , or delivery_date .", "The learned encodings clearly show three distinct (by color) coherent clusters matching the respective field labels.", "Figure", "4(b) shows the candidate encodings for a sample of positive and negative date candidates for the invoice_date field, along with the embedding for that field.", "It is apparent that the encodings of the positive examples are largely clustered together whereas the sampled negatives show a more uniform and sparse spatial distribution.", "Furthermore, the field embedding lies close to the cluster of positive examples.", "It is interesting to note that the field embedding lies not at the center of the cluster, but rather at its edge, as far away as possible from the clusters of positive examples for other 6502 Figure 4: TSNE visualizations for", "fields.", "This pattern is predicted by the fact that the loss function is essentially trying to minimize the cosine distance between the field embedding and its positives, 
while maximizing its distance from its negatives, most importantly the positives for the other fields.", "We also indicate three cases of misclustered candidate encodings in Figure", "4(a), whose corresponding invoice candidates and their neighborhoods are excerpted below.", "Figure", "4(c) shows a ground truth positive invoice_date example whose encoding is far from the invoice_date cluster.", "It is clear from examining the invoice that this is an error in the ground truth labels provided by the human annotator.", "In fact, this date is the date of purchase and not the invoice date.", "The candidate shown in Figure", "4(d) has a candidate encoding that lies midway between due_date , its true label, and invoice_date .", "We believe this is explained by the fact that this date has both the terms Due Date and date of invoice nearby, which are usually indicative of due_date and invoice_date respectively.", "Finally, Figure", "4(e) shows a true invoice_date example whose encoding is far away from all the field clusters.", "A closer examination of the features of this candidate showed that our OCR engine was unable to detect the word Date just above the date due to scanning noise.", "Since this crucial word was missing from the neighbors of this candidate, the learned neighborhood representation was clearly incorrect.", "Information extraction from plain text documents for tasks like named entity recognition and relation extraction have benefited from recent advances in deep learning (Lample et al., 2016; Peng et al., 2017).", "However, these techniques are not directly applicable to our task on form-like documents.", "Palm et al. (2017) attempts to use RNNs to extract information from form-like documents.", "However, they treat each line as a vector of n-grams limiting the resulting accuracy.", "The importance of understanding visual layout was recognized even in the context of information extraction of webpages in recent work (Cai et al., 2004; Yu et al., 2003; Zhu et al., 2006; Cai et al., 2003).", "The techniques developed by them are, however, not immediately applicable in our context since we do not have access to the source markup representation for the documents we deal with.", "A common approach to solving the problem of extracting information from form-like documents is to register templates in a system, match new documents to an existing template, and use an extractor learnt from said template (Chiticariu et al., 2013; Schuster et al., 2013).", "The learning problem we tackle in this paper is more ambitious; we seek to generalize to unseen templates.", "Our work is most closely related to recent attempts to combine layout features with text signals.", "Liu et al. (2019) use a document graph and intro-6503 duce a graph combination model to combine visual and textual signals in the document.", "Katti et al. (2018) represent a document as a two-dimensional grid of text tokens.", "Zhao et al. 
(2019) show that using grid information can be useful for information extraction tasks.", "Denk and Reisswig (2019) combine the grid-based approach with BERT-based text encodings.", "While an apples-to-apples comparison with these approaches is difficult without a shared benchmark, our system has several advantages: in contrast to the graph-based approaches (Liu et al., 2019) we focus on the harder problem of generalizing to unseen templates rather than dealing with the variations within a template.", "Since we are not starting with raw pixels, our approach is computationally less expensive than grid-based approaches.", "Further, we do not require clever heuristics to construct a multi-scale grid that is required for the image-segmentation style abstraction to work well.", "To the best of our knowledge, our approach of using representation learning for this task is the first of its kind.", "We gain many of the well-known benefits of this approach (Bengio et al., 2013), most notably interpretability.", "In this paper, we presented a novel approach to the task of extracting structured information from templatic documents using representation learning.", "We showed that our extraction system using this approach not only has promising accuracy on unseen templates in two different domains, but also that the learned representations lend themselves to interpretation of loss cases.", "In this initial foray into this challenging problem, we limited our scope to fields with domain-agnostic types like dates and numbers, and which have only one true value in a document.", "In future work, we hope to tackle repeated fields and learn domain-specific candidate generators.", "We are also actively investigating how our learned candidate representations can be used for transfer learning to a new domain and, ultimately, in a few-shot setting.", "Acknowledgements We are grateful to Lauro Costa, Evan Huang, Will Lu, Lukas Rutishauser, Mu Wang, and Yang Xu on the Google Cloud team for their support with data collection, benchmarking, and continuous feedback on our ideas.", "We are also grateful for our research intern, Beliz Gunel, who helped re-run several experiments and fine-tune our training pipeline." ]
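The scorer described in the extraction paper above (self-attended neighbor embeddings, order-invariant max-pooling, and cosine scoring against a field embedding) can be sketched in PyTorch. Layer shapes follow the text: word and position embeddings of size d, neighbor embeddings of size 2d, a 4x2d feed-forward expansion, and a single-head attention whose 1/sqrt(2d) scaling matches the stated normalization constant. The class name and initialization details are illustrative, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CandidateScorer(nn.Module):
    def __init__(self, vocab_size, num_fields, d=64):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, d)
        self.pos_mlp = nn.Sequential(      # nonlinear relative-position embedding
            nn.Linear(2, d), nn.ReLU(), nn.Dropout(0.1),
            nn.Linear(d, d), nn.ReLU(), nn.Dropout(0.1))
        self.cand_pos = nn.Linear(2, d)    # linear candidate-position embedding
        self.field_emb = nn.Embedding(num_fields, d)
        self.attn = nn.MultiheadAttention(2 * d, num_heads=1, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(2 * d, 8 * d), nn.ReLU(),
                                 nn.Linear(8 * d, 2 * d))
        self.project = nn.Sequential(nn.Linear(3 * d, d), nn.ReLU())

    def forward(self, nbr_words, nbr_pos, cand_pos, field_ids):
        # (B, N, 2d): concatenated neighbor word + relative-position embeddings
        h = torch.cat([self.word_emb(nbr_words), self.pos_mlp(nbr_pos)], dim=-1)
        h, _ = self.attn(h, h, h)          # self-attention across neighbors
        h = self.ffn(h)
        neighborhood = h.max(dim=1).values  # order-invariant max-pooling
        cand = self.project(
            torch.cat([neighborhood, self.cand_pos(cand_pos)], dim=-1))
        sim = F.cosine_similarity(cand, self.field_emb(field_ids), dim=-1)
        return (sim + 1) / 2                # rescale [-1, 1] to [0, 1]
```

Trained with binary cross-entropy against match/no-match labels, this reproduces the clustering intuition the paper reports: positives for a field are pulled toward that field's embedding and away from the other fields' embeddings.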
[ "objective", "objective", "objective", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "method", "result", "objective", "objective", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "other", "other", "other", "objective", "other", "method", "method", "other", "other", "other", "other", "abstain", "abstain", "method", "objective", "abstain", "objective", "result", "method", "method", "objective", "abstain", "abstain" ]
[ "Various natural language processing tasks are structured prediction problems where outputs are constructed with multiple interdependent decisions.", "Past work has shown that domain knowledge, framed as constraints over the output space, can help improve predictive accuracy.", "However, designing good constraints often relies on domain expertise.", "In this paper, we study the problem of learning such constraints.", "We frame the problem as that of training a two-layer rectifier network to identify valid structures or substructures, and show a construction for converting a trained network into a system of linear constraints over the inference variables.", "Our experiments on several NLP tasks show that the learned constraints can improve the prediction accuracy, especially when the number of training examples is small.", "In many natural language processing (NLP) tasks, the outputs are structures which can take the form of sequences, trees, or in general, labeled graphs.", "Predicting such output structures (e.g. Smith, 2011) involves assigning values to multiple interdependent variables.", "Certain joint assignments may be prohibited by constraints designed by domain experts.", "As a simple example, in the problem of extracting entities and relations from text, a constraint could disallow the relation married to between two entities if one of the entity is not a person.", "It has been shown that carefully designed constraints can substantially improve model performance in various applications (e.g., Chang et al., 2012; Anzaroot et al., 2014), especially when the number of training examples is limited.", "Designing constraints often requires task-specific manual effort.", "In this paper, we ask the question: can we use neural network methods to automatically discover constraints from data, and use them to predict structured outputs?", "We provide a general framework for discovering constraints in the form of a system of linear inequalities over the output variables in a problem.", "These constraints can improve an already trained model, or be integrated into the learning process for global training.", "A system of linear inequalities represents a bounded or unbounded convex polytope.", "We observe that such a system can be expressed as a two-layer threshold network, i.e., a network with one hidden layer of linear threshold units and an output layer with a single threshold unit.", "This two-layer threshold network will predict 1 or 1 depending on whether the system of linear inequalities is satisfied or not.", "In principle, we could try to train such a threshold network to discover constraints.", "However, the zero-gradient nature of the threshold activation function prohibits using backpropagation for gradient-based learning.", "Instead, in this paper, we show that a construction of a specific two-layer rectifier network represents linear inequality constraints.", "This network also contains a single linear threshold output unit, but in the hidden layer, it contains rectified linear units (ReLUs).", "Pan and Srikumar (2016) showed that a two-layer rectifier network constructed in such a way is equivalent to a threshold network, and represents the same set of linear inequalities as the threshold network with far fewer hidden units.", "The linear constraints thus obtained can augment existing models in multiple ways.", "For example, if a problem is formulated as an integer program (e.g., Roth and Yih, 2004, 2005; Riedel and Clarke, 2006; Martins et al., 2009), the learned constraints 
"The linear constraints thus obtained can augment existing models in multiple ways.", "For example, if a problem is formulated as an integer program (e.g., Roth and Yih, 2004, 2005; Riedel and Clarke, 2006; Martins et al., 2009), the learned constraints will become additional linear inequalities, which can be used directly.", "Alternatively, a structure can be constructed using graph search (e.g., Collins and Roark, 2004; Daume et al., 2009; Doppa et al., 2014; Chang et al., 2015; Wiseman and Rush, 2016), in which case the learned constraints can filter available actions during search-node expansions.", "Other inference techniques that extend Lagrangian Relaxation (Komodakis et al., 2007; Rush et al., 2010; Martins et al., 2011) can also employ the learned constraints.", "Essentially, the learned constraints can be combined with various existing models and inference techniques, and the framework proposed in this paper can be viewed as a general approach to improving structured prediction.", "We report experiments on three NLP tasks to verify the proposed idea.", "The first one is an entity and relation extraction task, in which we aim to label the entity candidates and identify relations between them.", "In this task, we show that the learned constraints can be used while training the model to improve prediction.", "We also show that the learned constraints in this domain can be interpreted in a way that is comparable to manually designed constraints.", "The second NLP task is to extract citation fields like authors, journals and dates from a bibliography entry.", "We treat it as a sequence labeling problem and show that learned constraints can improve an existing first-order Markov model trained using a structured SVM method (Tsochantaridis et al., 2004).", "In the final experiment we consider chunking, i.e., shallow parsing, which is also a sequence labeling task.", "We train a BiLSTM-CRF model (Huang et al., 2015) on training sets of different sizes, and we show that learned constraints are particularly helpful when the number of training examples is small.", "In summary, the contributions of this paper are: 1. We propose that rectifier networks can be used to represent and learn linear constraints for structured prediction problems.", "2. In tasks such as entity and relation extraction, the learned constraints can exactly recover the manually designed constraints, and can be interpreted in a way similar to manually designed constraints.",
"3. When manually designed constraints are not available, we show via experiments that the learned constraints can improve the original model's performance, especially when the original model is trained with a small dataset.", "1 The scripts for replicating the experiments are available at https://github.com/utahnlp/learning-constraints", "2 Representing Constraints", "In this section, we formally define structured prediction and constraints.", "In a structured prediction problem, we are given an input x belonging to the instance space, such as sentences or images.", "The goal is to predict an output $y \in \mathcal{Y}_x$, where $\mathcal{Y}_x$ is the set of possible output structures for the input x.", "The output y has a predefined structure (e.g., trees, or labeled graphs), and the number of candidate structures in $\mathcal{Y}_x$ is usually large, i.e., exponential in the input size.", "Inference in such problems can be framed as an optimization problem with a linear objective function: $y^* = \arg\max_{y \in \mathcal{Y}_x} \theta \cdot \Phi(x, y)$, (1) where $\Phi(x, y)$ is a feature vector representation of the input-output pair $(x, y)$ and $\theta$ are learned parameters.", "The feature representation $\Phi(x, y)$ can be designed by hand or learned using neural networks.", "The feasible set $\mathcal{Y}_x$ is predefined and known for every x at both learning and inference stages.", "The goal of learning is to find the best parameters $\theta$ (and perhaps also the features, if we are training a neural network) using training data, and the goal of inference is to solve the above argmax problem given parameters $\theta$.", "In this paper, we seek to learn additional constraints from training examples $\{(x, y)\}$.", "Suppose we want to learn K constraints, and the k-th one is some Boolean function: $c_k(x, y) = 1$ if $(x, y)$ satisfies the k-th constraint, and $c_k(x, y) = -1$ if it does not.", "2 We use +1 to indicate true and -1 to indicate false.", "Then, the optimal structure $y^*$ is the solution to the following optimization problem: $\max_{y \in \mathcal{Y}_x} \theta \cdot \Phi(x, y)$, (2) subject to $\forall k,\ c_k(x, y) = 1$.", "Boolean functions over inference variables may be expressed as linear inequalities over them (Roth and Yih, 2004).", "In this paper, we represent constraints as linear inequalities over some feature vector $\Psi(x, y)$ of a given input-output pair.", "The k-th constraint $c_k$ is equivalent to the linear inequality $w_k \cdot \Psi(x, y) + b_k \geq 0$, (3) whose weights $w_k$ and bias $b_k$ are learned.", "A Boolean constraint is, thus, a linear threshold function: $c_k(x, y) = \mathrm{sgn}(w_k \cdot \Psi(x, y) + b_k)$. (4)", "The feature representation $\Psi(x, y)$ should not be confused with the original features $\Phi(x, y)$ used in the structured prediction model in Eq. (1) or (2).", "Hereafter, we refer to $\Psi(x, y)$ as constraint features.", "Constraint features should be general properties of inputs and outputs, since we want to learn domain-specific constraints over them.", "They are a design choice, and in our experiments, we will use common NLP features.", "In general, they could even be learned using a neural network.", "Given a constraint feature representation $\Psi(\cdot)$, the goal is thus to learn the parameters $w_k$'s and $b_k$'s for every constraint.", "For an input x, we say the output y is feasible if it satisfies constraints $c_k$ for all $k = 1, \ldots, K$.",
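As a toy illustration of the constrained inference problem in Eq. (2), the following sketch (ours; `score` and `psi` are placeholder callables, not the paper's features) brute-forces the argmax over a small candidate set while filtering structures that violate the learned inequalities; real systems would use ILP or search instead of enumeration:

```python
import numpy as np

def constrained_argmax(candidates, score, psi, W, b):
    """candidates: list of structures y in Y_x; score(y) plays theta . Phi(x, y);
    psi(y): constraint-feature vector; W, b: learned constraint parameters."""
    feasible = [y for y in candidates if np.all(W @ psi(y) + b >= 0)]
    if not feasible:                  # fall back to unconstrained prediction
        feasible = candidates
    return max(feasible, key=score)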
"We can define a Boolean variable $z(x, y)$ indicating whether y is feasible with respect to the input x: $z(x, y) = c_1(x, y) \wedge \cdots \wedge c_K(x, y)$.", "That is, z is a conjunction of all the Boolean functions corresponding to each constraint.", "Since conjunctions are linearly separable, we can rewrite $z(x, y)$ as a linear threshold function: $z(x, y) = \mathrm{sgn}\big(1 - K + \sum_{k=1}^{K} c_k(x, y)\big)$. (5)", "It is easy to see that $z(x, y) = 1$ if, and only if, all $c_k$'s are 1, which is precisely the definition of a conjunction.", "Finally, we can plug Eq. (4) into Eq. (5): $z = \mathrm{sgn}\big(1 - K + \sum_{k=1}^{K} \mathrm{sgn}(w_k \cdot \Psi(x, y) + b_k)\big)$ (6)", "Observe that Eq. (6) is exactly a two-layer threshold neural network: $\Psi(x, y)$ is the input to the network; the hidden layer contains K linear threshold units with parameters $w_k$ and $b_k$; the output layer has a single linear threshold unit.", "This neural network will predict 1 if the structure y is feasible with respect to input x, and -1 if it is infeasible.", "In other words, constraints for structured prediction problems can be written as two-layer threshold networks.", "One possible way to learn constraints is thus to learn the hidden layer parameters $w_k$ and $b_k$, with fixed output layer parameters.", "However, the neural network specified in Eq. (6) is not friendly to gradient-based learning; the $\mathrm{sgn}(\cdot)$ function has zero gradients almost everywhere.", "To circumvent this, let us explore an alternative way of learning constraints using rectifier networks rather than threshold networks.", "We saw in the previous section that a system of linear inequalities can be represented as a two-layer threshold network.", "In this section, we will see a special rectifier network that is equivalent to a system of linear inequalities, and whose parameters can be learned using backpropagation.", "Denote the rectifier (ReLU) activation function as $R(x) = \max(0, x)$.", "Consider the following two-layer rectifier network: $z = \mathrm{sgn}\big(1 - \sum_{k=1}^{K} R(w_k \cdot \Psi(x, y) + b_k)\big)$ (7)", "The input to the network is still $\Psi(x, y)$.", "There are K ReLUs in the hidden layer, and one threshold unit in the output layer.", "The decision boundary of this rectifier network is specified by a system of linear inequalities.",
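A minimal sketch (ours, with invented weights) of the forward pass of Eq. (7); during training the sign output would be replaced by a sigmoid so that gradients can flow, as noted later in the text:

```python
import numpy as np

def rectifier_net(psi, W, b):
    """Eq. (7): z = sgn(1 - sum_k ReLU(w_k . psi + b_k))."""
    hidden = np.maximum(0.0, W @ psi + b)   # K ReLU activations
    return 1.0 if 1.0 - hidden.sum() >= 0 else -1.0
```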
"In particular, we have the following theorem (Pan and Srikumar, 2016, Theorem 1):", "Theorem 1. Consider a two-layer rectifier network with K hidden ReLUs as in Eq. (7).", "Define the set $[K] = \{1, 2, \ldots, K\}$.", "The network output $z(x, y) = 1$ if, and only if, for every subset S of $[K]$, the following linear inequality holds: $1 - \sum_{k \in S} \big(w_k \cdot \Psi(x, y) + b_k\big) \geq 0$ (8)", "The proof of Theorem 1 is given in the supplementary material.", "To illustrate the idea, we show a simple example rectifier network, and convert it to a system of linear inequalities using the theorem.", "The rectifier network contains two hidden ReLUs (K = 2): $z = \mathrm{sgn}\big(1 - R(w_1 \cdot \Psi + b_1) - R(w_2 \cdot \Psi + b_2)\big)$", "Our theorem says that z = 1 if and only if the following four inequalities hold simultaneously, one per subset of $[K]$: $1 \geq 0$; $1 - (w_1 \cdot \Psi + b_1) \geq 0$; $1 - (w_2 \cdot \Psi + b_2) \geq 0$; $1 - (w_1 \cdot \Psi + b_1) - (w_2 \cdot \Psi + b_2) \geq 0$.", "The first inequality, $1 \geq 0$, corresponding to the empty subset of $[K]$, trivially holds.", "The rest are just linear inequalities over $\Psi$.", "In general, $[K]$ has $2^K$ subsets, and when S is the empty set, inequality (8) is trivially true.", "The rectifier network in Eq. (7) thus predicts that y is a valid structure for x if a system of $2^K - 1$ linear inequalities is satisfied.", "It is worth mentioning that even though the $2^K - 1$ linear inequalities are constructed from a power set of K elements, this does not make them dependent on each other.", "For a general choice of $w_k$ and $b_k$, these $2^K - 1$ inequalities are linearly independent.", "This establishes the fact that a two-layer rectifier network of the form of Eq. (7) can represent a system of linear inequality constraints for a structured prediction problem via the constraint feature function $\Psi$.",
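The construction in Theorem 1 is mechanical enough to script. The sketch below (ours) enumerates the non-empty subsets of [K] to produce the 2^K - 1 inequalities of Eq. (8) from a trained network's weights, and spot-checks the claimed equivalence on random feature vectors:

```python
import itertools
import numpy as np

def network_to_inequalities(W, b):
    """Return (A, c) such that z(psi) = 1 iff A @ psi + c >= 0 elementwise.
    One row per non-empty subset S of {1..K}: row = -sum_{k in S} w_k,
    constant = 1 - sum_{k in S} b_k (Eq. (8) rearranged)."""
    K = len(b)
    rows, consts = [], []
    for r in range(1, K + 1):
        for S in itertools.combinations(range(K), r):
            rows.append(-W[list(S)].sum(axis=0))
            consts.append(1.0 - b[list(S)].sum())
    return np.array(rows), np.array(consts)

def rectifier_net(psi, W, b):
    return 1.0 if 1.0 - np.maximum(0.0, W @ psi + b).sum() >= 0 else -1.0

rng = np.random.default_rng(1)
K, d = 3, 5
W, b = rng.normal(size=(K, d)), rng.normal(size=K)
A, c = network_to_inequalities(W, b)          # 2^3 - 1 = 7 inequalities
for _ in range(1000):
    psi = rng.normal(size=d)
    assert (rectifier_net(psi, W, b) == 1.0) == bool(np.all(A @ psi + c >= 0))
```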
"In the previous section, we saw that both threshold and rectifier networks can represent a system of linear inequalities.", "We can either learn a threshold network (Eq. (6)) to obtain constraints as in (3), or we can learn a rectifier network (Eq. (7)) to obtain constraints as in (8).", "The latter offers two advantages.", "First, a rectifier network has non-trivial gradients, which facilitates gradient-based learning 3.", "Second, since K ReLUs can represent $2^K - 1$ constraints, the rectifier network can express constraints more compactly with fewer hidden units.", "We will train the parameters $w_k$'s and $b_k$'s of the rectifier network in the supervised setting.", "First, we need to obtain positive and negative training examples.", "We assume that we have training data for a structured prediction task.", "Positive examples can be directly obtained from the training data of the structured prediction problem.", "3 The output threshold unit in the rectifier network will not cause any trouble in practice, because it can be replaced by a sigmoid function during training.", "Our theorem still follows, as long as we interpret z(x, y) = 1 as the sigmoid output being at least 0.5 and z(x, y) = -1 as it being below 0.5.", "We can still convert the rectifier network into a system of linear inequalities even if the output unit is a sigmoid unit.", "For each training example (x, y), we can apply constraint feature extractors to obtain positive examples of the form $(\Psi(x, y), +1)$.", "Negative examples can be generated in several ways; we use simple but effective approaches.", "We can slightly perturb a structure y in a training example (x, y) to obtain a structure $y'$ that we assume to be invalid.", "Applying the constraint feature extractor to it gives a negative example $(\Psi(x, y'), -1)$.", "We also need to ensure that $\Psi(x, y')$ is indeed different from any positive example.", "Another approach is to perturb the feature vector $\Psi(x, y)$ directly, instead of perturbing the structure y.", "In our experiments in the subsequent sections, we will use both methods to generate negative examples, with detailed descriptions in the supplementary material.", "Despite their simplicity, we observed performance improvements.", "Exploring more sophisticated methods for perturbing structures or features (e.g., using techniques explored by Smith and Eisner (2005), or using adversarial learning (Goodfellow et al., 2014)) is a future research direction.",
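A sketch (ours) of the perturbation-based negative-example generation described above; the label set and the feature extractor `psi` are illustrative placeholders, not the paper's actual features, and `psi` is assumed to return a hashable feature tuple:

```python
import random

LABELS = ["B-NP", "I-NP", "B-VP", "I-VP", "O"]   # a hypothetical tag set

def perturb(y):
    """Randomly change one position of a gold label sequence y."""
    y_neg = list(y)
    i = random.randrange(len(y_neg))
    y_neg[i] = random.choice([l for l in LABELS if l != y_neg[i]])
    return y_neg

def make_pairs(data, psi):
    """data: list of (x, y) training examples; returns (psi, +/-1) pairs."""
    pairs = []
    for x, y in data:
        pairs.append((psi(x, y), +1))
        y_neg = perturb(y)
        if psi(x, y_neg) != psi(x, y):   # ensure it differs from the positive
            pairs.append((psi(x, y_neg), -1))
    return pairs
```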
constraints.", "These models are compared in Table 1. We manually inspected the learned constraints and discovered that they exactly recover the designed constraints, in the sense that the feasible output space is exactly the same regardless of whether we use designed or learned constraints.", "As an additional confirmation, we observed that when a model is trained with designed constraints and tested with learned constraints, we get the same model perfor-Performance Metric Designed Learned entity F-1 84 .", "mance as when tested with designed constraints.", "Likewise, a model that is trained with learned constraints performs identically when tested with learned and designed constraints.", "Below, we give one example of a learned constraint, and illustrate how to interpret such a constraint.", "(The complete list of learned constraints is in the supplementary material.)", "A learned constraint using the source-relation indicator features is 1 .", "98 x 1 + 3 .", "53 x 2 1 .", "90 x 3 + 0 .", "11 x 4 + 2 .", "66 x 5 2 .", "84 x 6 2 .", "84 x 7 2 .", "84 x 8 + 2 .", "58 x 9 + 0 .", "43 x 10 + 0 .", "32 0 (9) where x 1 through x 10 are indicators for labels NoEnt , Person , Location , Organization , NoRel , Kill , LiveIn , WorkFor , LocatedAt , and OrgBasedIn , respectively.", "This constraint disallows a relation labeled as Kill having a source entity labeled as Location , because 1 .", "90 2 .", "84 + 0 .", "32 < 0 .", "Therefore, the constraint Location cannot Kill is captured in (9).", "In fact, it is straightforward to verify that the inequality in (9) captures many more constraints such as NoEnt cannot LiveIn , Location cannot LiveIn , Organization cannot WorkFor , etc.", "A general method for interpreting learned constraints is a direction of future research.", "Note that the metric numbers in Table 1 based on learned constraints are lower than those based on designed constraints.", "Since the feasible space is the same for both kinds of constraints, the performance difference is due to the randomness of the ILP solver picking different solutions with the same objective value.", "Therefore, the entity and relation experiments in this section demonstrate that our approach can recover the designed constraints and provide a way of interpreting these constraints.", "In the citation field extraction task, the input is a citation entry.", "The goal is to identify spans corresponding to fields such as author, title, etc.", "In the example below, the labels are underlined: [ Author A . M . Turing . ] [ Title Computing machinery and intelligence . ] [ Journal Mind , ] [ Volume 59 , ] [ Pages 433-460 . ] [ Date October , 1950 . ] Chang et al. (2007) showed that hand-crafted constraints specific to this domain can vastly help models to correctly identify citation fields.", "We show that constraints learned from the training data can improve a trained model without the need for manual effort.", "Dataset and baseline.", "We use the dataset from Chang et al. (2007, 2012) whose training, development and test splits have 300, 100 and 100 examples, respectively.", "We train a first-order Markov model using structured SVM (Tsochantaridis et al., 2004) on the training set with the same raw text features as in the original work.", "Constraint features.", "We explore multiple simple constraint features ( x , y ) in the citation field extraction experiments as shown in Table 2. 
"In the citation field extraction task, the input is a citation entry.", "The goal is to identify spans corresponding to fields such as author, title, etc.", "In the example below, the labels are underlined: [Author A. M. Turing.] [Title Computing machinery and intelligence.] [Journal Mind,] [Volume 59,] [Pages 433-460.] [Date October, 1950.]", "Chang et al. (2007) showed that hand-crafted constraints specific to this domain can vastly help models to correctly identify citation fields.", "We show that constraints learned from the training data can improve a trained model without the need for manual effort.", "Dataset and baseline.", "We use the dataset from Chang et al. (2007, 2012), whose training, development and test splits have 300, 100 and 100 examples, respectively.", "We train a first-order Markov model using structured SVM (Tsochantaridis et al., 2004) on the training set with the same raw text features as in the original work.", "Constraint features.", "We explore multiple simple constraint features $\Psi(x, y)$ in the citation field extraction experiments, as shown in Table 2.", "Detailed descriptions of these features, including how to develop negative examples for each feature, and the experiment settings are in the supplementary material.", "For each constraint feature template, we trained a rectifier network with 10 ReLUs in the hidden layer.", "We then use Theorem 1 to convert the resulting network to a system of $2^{10} - 1$, or 1023, linear inequalities.", "We used beam search with beam size 50 to combine the learned inequalities with the original sequence model to predict on the test set.", "States in the search space correspond to partial assignments to a prefix of the sequence.", "At each step we predict the label for the next token in the sequence.", "The pretrained sequence model (i.e., the baseline) ranks search nodes based on transition and emission scores, and the learned inequalities prune the search space accordingly 4.", "Table 3 shows the token-level accuracies of the various methods.", "The results show that all versions of constrained search outperform the baselines, indicating that the learned constraints are effective in the citation field extraction task.", "Furthermore, different constraints learned with different features can be combined.", "We observe that combining different constraint features generally improves accuracy.", "It is worth pointing out that the label-existence and label-counts features are global in nature and cannot be directly used to train a sequence model.", "Even if some constraint features can be used in training the original model, it is still beneficial to learn constraints from them.", "For example, the bigram label feature is captured in the original first-order model, but adding constraints learned from it still improves performance.", "As another test, we trained a model with POS features, which also contain punctuation information.", "This model achieves 91.8% accuracy.", "Adding constraints learned with POS improves the accuracy to 92.6%; adding constraints learned with punctuation features further improves it to 93.8%.", "We also observed that our method for learning constraints is robust to the choice of the number of hidden ReLUs.", "For example, for punctuation, learning with 5, 8 and 10 hidden ReLUs results in accuracies of 90.1%, 90.3%, and 90.2%, respectively.", "We observed similar behavior for other constraint features as well.", "Since the number of constraints learned is exponential in the number of hidden units, these results show that learning redundant constraints will not hurt performance.", "4 Since the label-existence and label-counts features are global, pruning by learned inequalities is possible only at the last step of search.", "The other four features admit pruning at each step of the search process.", "Note that carefully hand-crafted constraints may achieve higher accuracy than the learned ones.", "Chang et al. (2007) report an accuracy of 92.5% with constraints specifically designed for this domain.", "In contrast, our method for learning constraints uses general constraint features, and does not rely on domain knowledge.", "Therefore, our method is suited to tasks where little is known about the underlying domain.", "Chunking is the task of clustering text into groups of syntactically correlated tokens or phrases.", "In the instance below, the phrase labels are underlined: [NP An A.P. Green official] [VP declined to comment] [PP on] [NP the filing] [O .]", "We treat the chunking problem as a sequence labeling problem by using the popular IOB tagging scheme.", "For each phrase label, the first token in the phrase is labeled with a B- prefixed to the phrase label, while the other tokens are labeled with an I- prefixed to the phrase label.", "Hence, [NP An A.P. Green official] is represented as [[B-NP An] [I-NP A.P.] [I-NP Green] [I-NP official]].", "This is done for all phrase labels except O.",
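A small helper (ours) showing the IOB conversion just described, applied to the example sentence:

```python
def to_iob(chunks):
    """chunks: list of (phrase_label, [tokens]); returns per-token IOB tags."""
    tags = []
    for label, tokens in chunks:
        if label == "O":
            tags.extend("O" for _ in tokens)
        else:
            tags.append("B-" + label)
            tags.extend("I-" + label for _ in tokens[1:])
    return tags

example = [("NP", ["An", "A.P.", "Green", "official"]),
           ("VP", ["declined", "to", "comment"]),
           ("PP", ["on"]), ("NP", ["the", "filing"]), ("O", ["."])]
print(to_iob(example))
# ['B-NP', 'I-NP', 'I-NP', 'I-NP', 'B-VP', 'I-VP', 'I-VP', 'B-PP', 'B-NP', 'I-NP', 'O']
```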
"Dataset and Baselines.", "We use the CoNLL2000 dataset (Tjong Kim Sang and Buchholz, 2000), which contains 8936 training sentences and 2012 test sentences.", "For our experiments, we consider 8000 of the 8936 training sentences as our training set and the remaining 936 sentences as our development set.", "Chunking is a well-studied problem, and showing performance improvements on the full training dataset is difficult.", "However, we use this task to illustrate the interplay of learned constraints with neural network models, and the impact of learned constraints in the low training data regime.", "We use the BiLSTM-CRF (Huang et al., 2015) for this sequence tagging task.", "We use GloVe for word embeddings.", "We do not use the BERT (Devlin et al., 2019) family of models since tokens are broken down into sub-words during pre-processing, which introduces modeling and evaluation choices that are orthogonal to our study of label dependencies.", "As with the citation task, all our constrained models use beam search, and we compare our results to both exact decoding and beam search baselines.", "We use two kinds of constraint features: (i) n-gram label existence, and (ii) n-gram part of speech.", "Details of the constraint features and the construction of negative samples are given in the supplementary material.", "We train the rectifier network with 10 hidden units.", "A beam size of 10 was chosen for our experiments based on preliminary experiments.", "We report the average results over two different random seeds for learning each constraint.", "Note that the n-gram label existence is a global constraint, while the n-gram POS constraint is a local constraint which checks the validity of label assignments at each token.", "In essence, the latter constraint reranks the beam at each step by ensuring that states that satisfy the constraint are preferred over states that violate it.", "Since the n-gram label existence is a global constraint, we check the validity of the tag assignments only at the last token.", "In the case where none of the states in the beam satisfy the constraint, the original beams are used.", "The results for this set of experiments are presented in Table 4.", "Table 4 (token-level accuracy in %, by percentage of training data used: 1% / 5% / 10% / 25% / 50% / 100%): Label existence (n=2): 81.28 / 88.30 / 89.73 / 91.24 / 90.40 / 92.48; Label existence (n=3): 80.98 / 88.20 / 90.58 / 91.20 / 92.37 / 93.12; Part-of-speech (n=3): 86.52 / 90.74 / 91.80 / 92.41 / 93.07 / 93.84; Part-of-speech (n=4): 84.21 / 90.99 / 92.17 / 92.46 / 93.08 / 93.93; Search without constraints: 81.29 / 88.27 / 90.62 / 91.33 / 92.51 / 93.44; Exact decoding: 82.11 / 88.70 / 90.49 / 92.57 / 93.94 / 94.75.", "We observe that the POS constraint improves the performance of the baseline models significantly, outperforming the beam search baseline on all training ratios.", "More importantly, the results show sizable improvements in accuracy for smaller training ratios (e.g., 4.41% and 5.23% improvements over the exact and search baselines, respectively, with 1% training data).",
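Both the citation and chunking experiments decode with beam search pruned by learned constraints. A schematic sketch (ours; `score_step` and `local_constraint_ok` are placeholders for the base model's transition/emission scores and the learned inequalities) of that decoding loop, including the fallback to the unpruned beam when no state satisfies the constraint:

```python
def beam_search(tokens, labels, score_step, local_constraint_ok, beam_size=10):
    beam = [((), 0.0)]                        # (label prefix, cumulative score)
    for t in range(len(tokens)):
        expanded = [(prefix + (y,), s + score_step(tokens, t, prefix, y))
                    for prefix, s in beam for y in labels]
        # Local constraints prune each expansion; global ones would only be
        # checked at the last step, as the text notes.
        kept = [c for c in expanded if local_constraint_ok(tokens, c[0])]
        if not kept:                          # fall back to the original beam
            kept = expanded
        beam = sorted(kept, key=lambda c: c[1], reverse=True)[:beam_size]
    return beam[0][0]
```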
"When the training ratios get bigger, we expect the models to learn these properties, and hence the impact of the constraints decreases.", "These results (along with the experiments in the previous sections) indicate that our constraints can significantly boost performance in the low data regime.", "Another way to improve performance in low-resource settings is to use better pretrained input representations.", "When we replaced GloVe embeddings with ELMo, we observed 87.09% accuracy on the 0.01 ratio of training data using exact decoding.", "However, this improvement comes at a cost: the number of parameters increases from 3M (190k trainable) to 94M (561k trainable).", "In contrast, our method introduces a much smaller rectifier network with 1000 additional parameters while still producing similar improvements.", "In other words, using learned constraints is computationally more efficient.", "We observe that the label existence constraints, however, do not help.", "We conjecture that this may be due to one of the following three conditions: (i) the label existence constraint might not exist for the task; (ii) the constraint exists but the learner is not able to find it; (iii) the input representations are expressive enough to represent the constraints.", "Disentangling these three factors is a future research challenge.", "Structured prediction is an active field in machine learning and has numerous applications, including various kinds of sequence labeling tasks, parsing (e.g., Martins et al., 2009), image segmentation (e.g., Lam et al., 2015), and information extraction (e.g., Anzaroot et al., 2014).", "The work of Roth and Yih (2004) introduced the idea of using explicitly stated constraints in an integer programming framework.", "That constraints and knowledge can improve models has been highlighted by several lines of work (e.g., Ganchev et al., 2010; Chang et al., 2012; Hu et al., 2016).", "The interplay between constraints and representations has been sharply highlighted by recent work on integrating neural networks with structured outputs (e.g., Rocktäschel and Riedel, 2017; Niculae et al., 2018; Manhaeve et al., 2018; Xu et al., 2018; Li and Srikumar, 2019; Li et al., 2019, and others).", "We expect that constraints learned as described in this work can be integrated into these formalisms, presenting an avenue for future research.", "While our paper focuses on learning explicit constraints directly from examples, it is also possible to use indirect supervision from these examples to learn a structural classifier (Chang et al., 2010), with an objective function penalizing invalid structures.", "Related to our goal of learning constraints is rule learning, as studied in various subfields of artificial intelligence.", "Quinlan (1986) describes the ID3 algorithm, which extracts rules as a decision tree.", "First-order logic rules can be learned from examples using inductive logic programming (Muggleton and de Raedt, 1994; Lavrac and Dzeroski, 1994; Page and Srinivasan, 2003).", "Notable algorithms for inductive logic programming include FOIL (Quinlan, 1990) and Progol (Muggleton, 1995).", "Statistical relational learning addresses learning constraints with uncertainty (Friedman et al., 1999; Getoor and Mihalkova, 2001).", "Markov logic networks (Richardson and Domingos, 2006) combine probabilistic models with first-order logic knowledge, whose weighted formulas are soft constraints and whose weights can be learned from data.",
"In contrast to these directions, in this paper we exploit a novel representational result about rectifier networks to learn polytopes that represent constraints with off-the-shelf neural network tools.", "We presented a systematic way of discovering constraints as linear inequalities for structured prediction problems.", "The proposed approach is built upon a novel transformation from two-layer rectifier networks to linear inequality constraints and does not rely on domain expertise for any specific problem.", "Instead, it only uses general constraint features as inputs to rectifier networks.", "Our approach is particularly suited to tasks where designing constraints manually is hard, and/or the number of training examples is small.", "The learned constraints can be used for structured prediction problems in two ways: (1) combining them with an existing model to improve prediction performance, or (2) incorporating them into the training process to train a better model.", "We demonstrated the effectiveness of our approach on three NLP tasks, each with a different original model.", "We thank members of the NLP group at the University of Utah, especially Jie Cao, for their valuable insights and suggestions, and the reviewers for pointers to related work, corrections, and helpful comments.", "We also acknowledge the support of NSF Cyberlearning-1822877, SaTC-1801446 and gifts from Google and NVIDIA." ]
[ "abstain", "abstain", "abstain", "method", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "result", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "result", "result", "abstain", "abstain", "method", "result", "objective", "objective", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "method", "abstain", "objective", "other", "other", "other", "other", "other", "objective", "method", "abstain", "abstain", "method", "abstain", "objective", "other", "other" ]
[ "Text classification is usually studied by labeling natural language texts with relevant categories from a predefined set.", "In the real world, new classes might keep challenging the existing system with limited labeled data.", "The system should be intelligent enough to recognize upcoming new classes with a few examples.", "In this work, we define a new task in the NLP domain, incremental few-shot text classification , where the system incrementally handles multiple rounds of new classes.", "For each round, there is a batch of new classes with a few labeled examples per class.", "Two major challenges exist in this new task:", "(i) For the learning process, the system should incrementally learn new classes round by round without re-training on the examples of preceding classes;", "(ii) For the performance, the system should perform well on new classes without much loss on preceding classes.", "In addition to formulating the new task, we also release two benchmark datasets 1 in the incremental few-shot setting: intent classification and relation classification.", "Moreover, we propose two entailment approaches, ENTAILMENT and HYBRID , which show promise for solving this novel problem.", "Text classification has achieved great success in the past decades with the development of deep learning techniques (Kowsari et al., 2019; Li et al., 2020).", "However, decent performance highly relies on the availability of large-scale task-specific training data.", "Recently, few-shot text classification (Yu et al., 2018; Geng et al., 2019; Xia et al., 2020a) has attracted increasing attention from the NLP community since it is unlikely to have large-scale labeled data for new classes in the real world.", "Indicates Equal Contribution.", "1 Code & Data are available at https://github.com/ congyingxia/IncrementalFSTC .", "Typically, few-shot text classification is formulated like this: the system first sees a set of base classes C b that have a large number of labeled examples, then a group of new classes C n is provided with k examples per class.", "For a testing instance, the system is required to search for its label in the space of C b C n or merely C n .", "However, this setting might not suitable for real scenarios.", "First, base classes with rich annotations might not be available at the beginning.", "It happens whenever you want to build a system from scratch.", "Second, take the bank's customer service system as an example, queries with new intents are continuously appearing (e.g., by a sequence of rounds) without enough labeled data.", "The system should be able to keep learning and recognizing new intents round by round.", "For each query, the system needs to pick up the most appropriate intent in the incrementally increasing label space or return none of them.", "In this work, we propose a more realistic and challenging task in the low resource scenarios: incremental few-shot text classification .", "In this new task, the system is provided with m rounds of new classes (i.e., C 1 n , C 2 n , , C mn ) without any base classes that have enough annotations.", "For each round, there are a group of new classes, C in ( i = 1 , , m ), and each class has k labeled examples ( k is in the range of [1, 5] and varies for different classes).", "During testing, the system is required to either select the best class from C 1 n C 1 n C mn or output none of them which means no existing class applies to the input.", "As far as we know, this is the first work that studies incremental few-shot learning without 
"All previous few-shot learning models (Snell et al., 2017; Gidaris and Komodakis, 2018; Yin et al., 2020; Xia et al., 2020b; Nguyen et al., 2020) fail to solve this problem, since they rely on the large-scale labeled data of base classes to train a robust system.", "To provide a complete vision of incremental few-shot text classification, we also conduct experiments with additional base classes to compare with these baselines.", "To evaluate the performance of different models, we build two benchmark datasets for this new problem.", "One is intent detection, which aims at understanding the intents under user queries (Liu and Lane, 2016; Xia et al., 2018).", "This benchmark simulates a task like the bank's customer service mentioned above.", "The other is relation classification, which needs to determine the correct relation between two entities in a given sentence (Zeng et al., 2014).", "In reality, the relation types might be unlimited.", "For example, there are fine-grained relations or implicit relations that need entailment.", "In open-domain or open-form relation tasks, there always exists the problem of lack of annotations.", "Another important feature of our benchmark datasets is that we do not provide dev sets.", "Existing systems are commonly evaluated on the dev set to choose the best training model.", "We claim that in real-world (incremental) few-shot applications, we cannot expect extra labeled data other than the k examples.", "This is in line with the observation in Schick and Schütze (2020).", "If a system has to rely on the dev set to find the best parameters, it is not suitable for the incremental few-shot setting.", "Furthermore, we propose a novel approach, ENTAILMENT, to solve this new problem.", "ENTAILMENT models the text classification problem in a textual entailment (Dagan et al., 2013) framework.", "To figure out if an input x belongs to a class y, ENTAILMENT tries to infer the truth value of y (i.e., a hypothesis), given x (i.e., the premise).", "The main benefit of this formulation is that the system learns this task not only from label-specific examples, but, more importantly, from the large-scale entailment datasets.", "In other words, we make use of indirect supervision from textual entailment datasets to address the target few-shot task.", "In summary, our contribution lies in three aspects.", "1) We propose a new task named incremental few-shot text classification with multi-round new classes.", "This task is more challenging and realistic for low-resource scenarios.", "2) We create and release two benchmark datasets to evaluate the performance on this new task.", "3) We propose two novel models, ENTAILMENT and HYBRID, to solve this novel problem.", "Extensive experiments on these two datasets show the effectiveness of our proposed models.", "Incremental few-shot learning.", "As far as we know, there is no prior work in the NLP domain that studies incremental few-shot text classification.", "In this section, we mainly introduce some work in the computer vision domain.", "These works only assume that a single round of new classes $C_n$ is appended to the base classes $C_b$.", "Generally, they learn class representations for classification.", "Different approaches differ in the way of representing base classes and new classes.", "Hereafter, we use $W_b$ and $W_n$ as the representations for $C_b$ and $C_n$, respectively.",
"Snell et al. (2017) propose the Prototypical Network, in which both $W_b$ and $W_n$ are stored as the average embedding of the few-shot support images for a certain class.", "Although the Prototypical Network was not designed for incremental few-shot learning, it can be easily adapted to the incremental setting by providing the representations for all the classes.", "It trains a nearest-neighbor algorithm on the base classes and tests directly on the union of base and new classes.", "Qi et al. (2018) propose an imprinting mechanism: the base representations $W_b$ are learned through supervised pre-training (e.g., the weight matrix in a softmax classifier), and $W_n$ are computed using the averaged representations, like the Prototypical Network.", "In Gidaris and Komodakis (2018), the base representations $W_b$ are learned through supervised pre-training.", "The representation of the i-th novel class ($W_{n,i}$) comes from two origins: (i) prototypical averaging, $w_{avg}$; (ii) an attention-weighted sum over base representations, $w_{att}$.", "Namely, $W_{n,i} = \phi_{avg} \odot w_{avg} + \phi_{att} \odot w_{att}$, where $\phi_{avg}$ and $\phi_{att}$ are learnable weight vectors.", "In the few-shot training stage, the original base classes $C_b$ are split into new base classes and fake novel classes for each episode.", "In testing, the representations of novel classes, $W_n$, are constructed based on the k examples and $W_b$.", "In Ren et al. (2019), both $W_b$ and $W_n$ are learned through supervised training: $W_b$ are classifier parameters pre-trained on base classes, and $W_n$ are classifier parameters learned on new classes.", "During the training, the support set and the query set are constructed differently for new classes.", "The support set consists of examples only from new classes; the query set contains examples from both new classes and base classes (because the training goal is to maximize the performance on all classes).", "The training in this work has two phases.", "The first phase is few-shot episode training, which learns $W_n$; the second phase (called meta-learning training) optimizes the performance on the query set and regularizes the representations for new classes.", "To summarize, compared with Snell et al. (2017) and Qi et al. (2018), both Gidaris and Komodakis (2018) and Ren et al. (2019) build connections between the representations of base classes and new classes.", "However, these methods cannot be directly applied to our problem for the following reasons.", "(i) Despite the claims in some literature that they are dealing with incremental or dynamic few-shot problems, they only considered a single round of new classes (Qi et al., 2018; Gidaris and Komodakis, 2018; Ren et al., 2019).", "It is unclear if these systems can keep their performance when multi-round new classes are considered.", "(ii) During the training for the new classes, they often rely on extra labeled data other than the k examples, such as the query set in Ren et al. (2019).", "(iii) Different from their setting, we have an extra label none-of-them in incremental few-shot text classification.", "It is not guaranteed that the input, such as the customer's utterance, always falls into the range of seen labels.",
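For contrast with the entailment approach developed later, here is a minimal sketch (ours; `embed` is a placeholder sentence encoder) of the prototype-style class representations used by Snell et al. (2017) and by the imprinting-style averaging of Qi et al. (2018):

```python
import numpy as np

def class_prototypes(support, embed):
    """support: dict label -> list of k example texts.
    Each class vector is the mean embedding of its support examples."""
    return {label: np.mean([embed(x) for x in examples], axis=0)
            for label, examples in support.items()}

def predict(query, prototypes, embed):
    """Label a query by its nearest class prototype."""
    q = embed(query)
    return min(prototypes, key=lambda label: np.linalg.norm(q - prototypes[label]))
```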
"Zhang et al. (2020) is a state-of-the-art work on few-shot text classification.", "They propose a clustering-based classifier named discriminative nearest neighbor classification (DNNC).", "DNNC compares whether two examples are in the same class or not.", "A matching model $S(x_i, x_j)$ is trained as a binary classifier, such that $S(x_i, x_j)$ is close to 1.0 if $x_i$ and $x_j$ belong to the same class, and close to 0.0 otherwise.", "Thus, their model can be pre-trained with a large-scale textual entailment dataset.", "Given a test query x, they compare the test query with all the previous examples.", "The final prediction is made by searching for the nearest neighbor, which has the highest matching score $S(x, x_i)$ with the query example.", "Their computational cost is high due to the comparison between all the utterance pairs.", "Moreover, comparing whether two examples are in the same class is different from textual entailment.", "In textual entailment, a person reads a premise to infer whether the hypothesis is true or not.", "The fact that two examples are in the same class does not mean they can entail each other.", "Thus, they cannot fully utilize the pre-trained entailment model.", "Instead, our proposed model, ENTAILMENT, entails the label with a given utterance, which is much more efficient and maximizes the utilization of the pre-trained entailment model.", "Yin et al. (2019) is another work that utilizes textual entailment for zero-shot text classification.", "They convert zero-shot text classification into a problem of filling a label into a hypothesis.", "For example, they combine emotion labels with the question 'this text expresses ?', and ask the model if this hypothesis is true, given the text.", "That work focuses more on zero-shot learning, and they need to propose different questions for different labels.", "In this section, we give a formal description of the incremental few-shot text classification problem without base classes.", "Furthermore, we extend the problem with additional base classes.", "Training data.", "In the incremental few-shot text classification setting, the system is provided with m rounds of new classes sequentially: $\{C^1_n, \ldots, C^m_n\}$.", "Each round $C^i_n$ has h new classes, namely $C^i_n = \{C^i_{n,1}, \ldots, C^i_{n,h}\}$.", "Each new class only has k examples ($k \in [1, 5]$).", "The value of k is not fixed and varies for different new classes in the same round, i.e., $k_{C^i_{n,s}} \neq k_{C^i_{n,t}}$, where $s, t \in [1, \ldots, h]$.", "For the setting with additional base classes, the system can access a set of base classes $C_b = \{C_{b,1}, C_{b,2}, \ldots, C_{b,g}\}$.", "All the base classes in $C_b$ have enough labeled examples for training.", "We create the multi-round setting to mimic the real-world scenario where there is a sequence of new classes coming to the system.", "Since we can only collect a handful of examples for the upcoming classes and the number of examples cannot be guaranteed, we set $k \in [1, 5]$ and allow the flexibility that $k_{C^i_{n,s}} \neq k_{C^i_{n,t}}$ in each round.", "Development data.", "In the incremental few-shot setting, there are only k examples available for each new class.", "Thus, our formulation does not provide any development set to help select the best model.", "It is recommended to select hyper-parameters based on experience or related tasks.", "In the experiments, we choose hyper-parameters like batch size based on the suggestions by Huggingface and other papers like Devlin et al. (2018) and Zhang et al. (2020).",
"Testing data.", "To evaluate the system, the test data consists of examples across all the classes.", "For the setting without base classes, the potential label space is $C^1_n \cup \cdots \cup C^m_n \cup C_o$.", "For the setting with additional base classes, we search among all the classes in $C_b \cup C^1_n \cup \cdots \cup C^m_n \cup C_o$.", "$C_o$ is an extra out-of-distribution (OOD) class that consists of examples falling outside of all the seen classes.", "It gives us a chance to check the system's ability to detect instances that reject all the known classes.", "This is crucial for an open-set problem like incremental learning, since there are always examples from upcoming classes that do not belong to any existing class.", "Requirements.", "(i) For the training of the i-th round $C^i_n$, the system can only access the newly added few-shot examples and label names in this round.", "The system is not allowed to re-train on the (full or partial) examples of preceding classes.", "(ii) For the evaluation, we care about the performance on different types of classes, including base classes, different rounds of new classes, and OOD classes in $C_o$.", "We expect a system that can continuously recognize new classes with few-shot examples.", "In the meantime, the performance on preceding classes should be maintained.", "A system showing more severe catastrophic forgetting is less preferred.", "Our approach ENTAILMENT casts the text classification problem into textual entailment: the input text acts as a premise, and the class name, such as open a bank account in intent detection, acts as a hypothesis.", "Then the question of whether the input belongs to a class is equivalent to asking whether the hypothesis is true, given the premise.", "There are two benefits of transforming the text classification problem into entailment.", "First, we can make use of indirect supervision from a large-scale entailment dataset (Williams et al., 2018) to benefit the few-shot settings.", "Second, this enables us to utilize the few-shot examples as well as the information in the class names.", "Typical text classification approaches treat classes as indices.", "In fact, class names usually contain informative signals.", "Entailment pairs.", "To transform the text classification problem into textual entailment, we construct positive and negative entailment pairs for the training.", "Positive entailment pairs $(x_i, y_i)$ are constructed with an utterance $x_i$ and its gold label name $y_i$, where $y_i \in C_b$ for base classes and $y_i \in C^i_n$ for new classes.", "Negative entailment pairs consist of $(x_i, y_j)$, where $y_j$ is an incorrect label in the current round.", "For base classes, $y_j \in C_b$ but $y_j \neq y_i$; for new classes, $y_j \in C^i_n$ but $y_j \neq y_i$.", "For each entailment pair (x, y), whether it is positive or negative, we concatenate its utterance x with the label y and feed it into the RoBERTa (Liu et al., 2019) encoder.", "Given an utterance $x = (X_1, X_2, \ldots, X_{T_1})$ with $T_1$ words and a label $y = (Y_1, Y_2, \ldots, Y_{T_2})$ with $T_2$ words, we add a special start-of-sequence ([CLS]) token at the beginning of the input and a special end-of-sequence ([SEP]) token at the end of each sentence.", "The whole input is ([CLS], $X_1, X_2, \ldots, X_{T_1}$, [SEP], $Y_1, Y_2, \ldots, Y_{T_2}$, [SEP]).", "We use the [CLS] embedding output from the RoBERTa encoder with a fully connected layer for binary textual entailment: $e = \mathrm{RoBERTa}(x, y)$, (1) $p = \mathrm{softmax}(W e + z)$, (2) where $e \in \mathbb{R}^d$ is the embedding of the [CLS] token, and $W \in \mathbb{R}^{2 \times d}$ and $z \in \mathbb{R}^2$ are parameters.",
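A sketch of Eqs. (1)-(2) using HuggingFace Transformers (our illustration, not the authors' released code; the untrained linear head stands in for W and z, which would be learned during fine-tuning, and RoBERTa's `<s>` token plays the role of [CLS]):

```python
import torch
from transformers import RobertaModel, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
encoder = RobertaModel.from_pretrained("roberta-base")
head = torch.nn.Linear(encoder.config.hidden_size, 2)   # W and z in Eq. (2)

def entail_prob(utterance, label_name):
    """Probability that `label_name` is entailed by `utterance`."""
    inputs = tokenizer(utterance, label_name, return_tensors="pt")
    e = encoder(**inputs).last_hidden_state[:, 0]        # Eq. (1): first-token embedding
    p = torch.softmax(head(e), dim=-1)                   # Eq. (2)
    return p[0, 1].item()

print(entail_prob("I want to open a savings account", "open a bank account"))
```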
"In comparison, Zhang et al. (2020) discriminate whether two utterances $(x_i, x_j)$ are in the same class or not.", "$(x_i, x_j)$ is a positive pair if they belong to the same class; otherwise, it is a negative pair.", "To explore the potential of different combinations, we also propose a hybrid entailment model, HYBRID, that uses both (utterance, label) pairs $(x_i, y_i)$ and (utterance, utterance) pairs $(x_i, x_j)$.", "In other words, we train HYBRID with pairs from both ENTAILMENT and DNNC (Zhang et al., 2020).", "In a round $C^i_n$ which contains h new classes and k examples for each class, ENTAILMENT generates $h \cdot k$ positive entailment pairs and $(h-1) \cdot h \cdot k$ negative entailment pairs, while DNNC generates $h \cdot k \cdot (k-1)$ positive pairs and $h \cdot (h-1) \cdot k^2$ negative pairs.", "HYBRID utilizes pairs from both models.", "For simplicity, we use the same k value for all new classes here; in real datasets, different new classes may have different numbers of few-shot examples.", "In that case, the number of generated pairs changes accordingly.", "Training strategy.", "Both ENTAILMENT and HYBRID are binary classification models that can utilize indirect supervision from textual entailment.", "First, we pre-train these models with a large-scale entailment dataset (Williams et al., 2018).", "For each round, the models are fine-tuned on the new classes in $C^i_n$.", "For the setting with additional base classes, we fine-tune the models on base classes first.", "Then we continuously fine-tune the models on new classes.", "Inference strategy.", "After the training, we use the model to infer the class for a test input.", "For each input utterance, we generate entailment pairs by pairing the utterance with all classes except $C_o$.", "Each pair gets a score in [0, 1] indicating whether this input belongs to the particular class or not.", "A score greater than 0.5 indicates YES, and NO otherwise.", "If there is at least one class labeled with YES, the class with the maximal score is returned; otherwise, the system returns $C_o$.", "We choose the threshold as 0.5 because entailment recognition is a binary classification problem.", "Next, we compare our model with some related systems that can potentially be applied to incremental few-shot text classification.", "ENTAILMENT vs. Prototypical Network.", "The Prototypical Network (Snell et al., 2017) tries to solve few-shot target tasks given a collection of training tasks.", "The few-shot learning problem solved by the Prototypical Network is slightly different from our incremental few-shot setting.", "In the Prototypical Network, the label space for target tasks only contains the new classes.", "However, in the incremental few-shot setting, the target label space is continuously increasing as new classes are added.", "Due to this essential distinction, applying the Prototypical Network to incremental few-shot learning is very likely to cause a performance drop on base classes when fine-tuning on new classes.",
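The inference strategy above reduces to a few lines; a sketch (ours), reusing the hypothetical `entail_prob` scorer from the earlier snippet:

```python
def classify(utterance, seen_classes, entail_prob, ood_label="C_o"):
    """Score the utterance against every seen class name; return the best
    class if any score clears the 0.5 entailment threshold, else OOD."""
    scores = {c: entail_prob(utterance, c) for c in seen_classes}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0.5 else ood_label
```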
"ENTAILMENT vs. incremental few-shot approaches in computer vision.", "In Related Work, we introduced some typical approaches in computer vision that deal with the incremental few-shot problem.", "Those methods consistently try to learn representations for classes and examples separately (i.e., the W_b and W_n in Section 2).", "In our model, there are no individual representation vectors for classes or examples.", "Instead, the model learns an overall representation vector for the whole (input, class) pair.", "Our solution enables the learning of the input and the class to interact with each other, an approach that has widely demonstrated its superiority in modeling the relations between two elements (Yu et al., 2020; Zhang et al., 2020).", "In addition, the approaches in computer vision mostly rely on large-scale labeled data for base classes to train a robust system.", "We would argue that base classes with rich annotations may not be available in real-world applications.", "Our system, which can be pre-trained on an entailment dataset, instead does not rely on base classes.", "This makes our system applicable to a wider range of scenarios.", "Table 1: Statistics of the two datasets. IFS-INTENT: C_b has 20 classes (2088 train / 800 test); each of C_n^1 through C_n^5 has 10 classes (30 train / 400 test); C_o has 7 classes (280 test). IFS-RELATION: C_b has 10 classes (5000 train / 400 test); each of C_n^1 through C_n^5 has 10 classes (30 train / 400 test); C_o has 10 classes (400 test).", "IFS-INTENT.", "This is our benchmark for incremental few-shot intent detection.", "IFS-INTENT is converted from BANKING77 (Casanueva et al., 2020), a single-domain intent detection dataset comprising 13,083 annotated examples over 77 intents (average: 170 examples per intent).", "Each intent class is described by a short name, such as get physical card or lost or stolen card.", "We randomly split the 77 intents into a base group (i.e., C_b, 20 base classes), 5 rounds of new intents (i.e., {C_n^1, ..., C_n^5}, each round with 10 new classes), and a group of out-of-distribution intents (i.e., C_o, 7 OOD classes).", "IFS-RELATION.", "This is the benchmark for incremental few-shot relation classification.", "IFS-RELATION is converted from FewRel (Han et al., 2018), a large-scale relation classification dataset.", "FewRel contains relations from different domains, including Wikipedia (Vrandecic and Krötzsch, 2014), SemEval-2010 (Hendrickx et al., 2019) and Pubmed.", "For the classes in C_b, C_n^1, C_n^2, C_n^3, C_n^4, we randomly sample 10 classes from Wikipedia.", "Classes in C_n^5 come from SemEval-2010 and classes in C_o come from Pubmed.", "Details of the two datasets are reported in Table 1.", "For both benchmarks, we first split the classes into different rounds according to the setting illustrated in Table 1.",
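To make the pair-generation arithmetic above concrete, here is a hedged sketch for one round with h new classes and k examples per class; `data` is an assumed mapping from each class name to its utterances.

```python
import itertools

def entailment_pairs(data):
    # ENTAILMENT: h*k positive and (h-1)*h*k negative (utterance, label) pairs.
    pos, neg = [], []
    for label, utts in data.items():
        for x in utts:
            pos.append((x, label, 1))
            neg += [(x, other, 0) for other in data if other != label]
    return pos + neg

def dnnc_pairs(data):
    # DNNC: h*k*(k-1) positive and h*(h-1)*k^2 negative (utt, utt) pairs.
    pos, neg = [], []
    for label, utts in data.items():
        pos += [(a, b, 1) for a, b in itertools.permutations(utts, 2)]
        for other, other_utts in data.items():
            if other != label:
                neg += [(a, b, 0) for a in utts for b in other_utts]
    return pos + neg

def hybrid_pairs(data):
    # HYBRID simply trains on the union of both pair sets.
    return entailment_pairs(data) + dnnc_pairs(data)
```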
"Then we split the train/test examples provided by the original dataset into different rounds according to the split classes.", "For the new classes in each round, we randomly split the 10 new classes into 5 groups (each with 2 classes) and intentionally let the 5 groups have different numbers of k-shot examples (k ∈ [1, 5]).", "Baselines.", "Since this is the first work that studies the incremental few-shot text classification problem, there is no prior system that deals with exactly the same task.", "In the setting without base classes, most few-shot learning models are not applicable.", "We compare our proposed model ENTAILMENT with the work of Zhang et al. (2020), which also solves text classification as a textual entailment problem and uses large-scale entailment datasets for pre-training.", "The hybrid of the two, HYBRID, is also compared.", "In the setting with additional base classes, we further compare against two few-shot learning models (Snell et al., 2017; Gidaris and Komodakis, 2018) adapted from the computer vision field.", "For these two baselines, we replace their encoders with RoBERTa to fit the text classification task.", "DNNC.", "Zhang et al. (2020) proposed a discriminative nearest neighbor classifier.", "It decides whether two utterances are in the same class or not and makes predictions by assigning the label of the nearest neighbor among all the examples.", "Prototypical Network (Snell et al., 2017).", "We train the Prototypical Network on base classes with the episode training method.", "For each round C_n^i, the representations for new classes are calculated as the average embedding of the k-shot examples.", "Given a query example, the label is predicted by its nearest neighbor among all the class representations.", "DyFewShot (Gidaris and Komodakis, 2018).", "We introduced this baseline in Section 2.",
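A minimal sketch of the Prototypical Network baseline just described: class prototypes are the mean embeddings of the k-shot examples, and a query takes the label of the nearest prototype. `encode` (any sentence encoder, here the RoBERTa replacement mentioned above) is an assumed helper.

```python
import torch

def build_prototypes(support, encode):
    # support: {class_name: [k example utterances]}
    return {c: torch.stack([encode(u) for u in utts]).mean(dim=0)
            for c, utts in support.items()}

def protonet_predict(query, prototypes, encode):
    q = encode(query)
    dists = {c: torch.dist(q, p).item() for c, p in prototypes.items()}
    return min(dists, key=dists.get)   # nearest class representation
```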
"For this baseline, we extend it to address multi-round few-shot classes: for the present round C_n^t, all the preceding classes, including those in C_b and {C_n^1, ..., C_n^{t-1}}, are viewed as base classes.", "Figure 1: Average performance change on new classes across rounds for HYBRID, ENTAILMENT, and DNNC.", "Implementation and setting.", "For DNNC, ENTAILMENT, and HYBRID, we use the MNLI (Williams et al., 2018) dataset to pre-train the models.", "All systems are implemented with the Huggingface Transformers package.", "For both pre-training and fine-tuning, we set the learning rate to 1e-6 and the batch size to 16.", "We run 20 epochs for pre-training.", "For fine-tuning, we run 5 epochs on IFS-INTENT and 50 epochs on IFS-RELATION.", "We run the same program with 3 different seeds and report the average performance.", "Accuracy is reported for {C_b, C_n^1, ..., C_n^5} and F1 score for C_o.", "Following the problem formulation presented in Section 3, we want to investigate two questions.", "Q1: can our system achieve better performance in each round?", "Q2: can our system maintain more stable performance during the incremental learning process?", "We answer these questions separately under the incremental learning settings with and without base classes.", "Incremental learning without base classes.", "Tables 2 and 3 list the results on the two benchmarks, IFS-INTENT and IFS-RELATION, for the setting without base classes.", "For the IFS-INTENT benchmark, we compare ENTAILMENT with DNNC, together with their hybrid model HYBRID, over 5 rounds.", "For the IFS-RELATION benchmark, we only compare ENTAILMENT with DNNC, since HYBRID is not applicable to this dataset.", "The label (relation type) is not compatible with the input instance (an utterance with an entity pair).", "Figure 2: (a) Average performance; (b) performance drop rate.", "Therefore, we cannot mix the pairs from these two models (ENTAILMENT and DNNC) to train a hybrid model.", "As for question Q1, we find that ENTAILMENT and HYBRID outperform all the baselines.", "These results show the effectiveness of formalizing text classification as a textual entailment problem.", "On the IFS-INTENT benchmark, the hybrid model, HYBRID, achieves the best performance since it has the largest number of entailment pairs (by combining the pairs from the two models) for training.", "This shows that in the extreme case where no base classes are available, the more data the better.", "The IFS-RELATION benchmark is much more difficult than intent detection due to the complexity of the training examples (utterances with entity pairs).", "DNNC does not perform well on this task, since comparing two complex examples cannot benefit from the pre-trained entailment model.", "As for Q2, we show the average performance change on new classes in Figure 1.",
"For IFS-INTENT, the average performance on new classes increases in the beginning and then drops over the remaining rounds.", "The initial increase might be due to the lack of training data in the first round.", "For IFS-RELATION, the average performance drops dramatically because this task is much more difficult.", "Incremental learning with base classes.", "The results for the setting with additional base classes are reported in Table 4.", "We compare our systems ENTAILMENT and HYBRID with three baselines: DNNC, ProtoNet, and DyFewShot.", "This setting is evaluated incrementally on base classes, five rounds of new classes, and OOD.", "As for question Q1, we summarize our observations as follows.", "(i) The pre-trained models (ENTAILMENT, HYBRID, and DNNC) work much better than the few-shot learning methods (ProtoNet and DyFewShot), which means pre-training on a large-scale entailment dataset helps a lot in this setting.", "(ii) Our proposed models, ENTAILMENT and HYBRID, obtain comparable performance and consistently outperform all the other baselines on all test classes over the whole timeline.", "This shows the effectiveness of our proposed method of generating (utterance, label) entailment pairs.", "To answer Q2 in this setting, we propose a new evaluation metric, the performance drop rate d, to evaluate the performance change along the timeline, i.e., how fast the performance drops when adding new rounds of classes to the system.", "For example, the performance on base classes decreases as five rounds of new classes are incrementally added to the system.", "Given a list of performance results r = (r_1, r_2, ..., r_m) for a certain subset of classes (for example, the base classes) over m rounds, we calculate the performance drop rate as the average drop rate across rounds: d = \frac{1}{m-1} \sum_{i=1}^{m-1} (r_i - r_{i+1}) / r_i.", "In the experiments, we calculate d for the four methods on base classes and on new classes in round 1 and round 2 separately.", "The average drop rate of DyFewShot is not reported since its performance contains 0.0 values.", "In Figure 2, we show the average performance on all the seen classes in different rounds (including base classes and all the seen new classes) in (a) and the performance drop rate d in (b).", "As shown in Figure 2(a), the average performance drops as the round number increases.", "We can also observe that our proposed models ENTAILMENT and HYBRID achieve the best average performance on all the seen classes, including base classes and new classes.", "Figure 2(b) shows the performance drop rate d for the different models.", "ProtoNet and ENTAILMENT have a higher drop rate than DNNC and HYBRID on base classes.", "For new classes in round 1 and round 2, the drop rate of ProtoNet is much higher than that of all the entailment methods.", "In summary, ENTAILMENT achieves the best average accuracy over all seen classes, while DNNC is more stable and has a lower performance drop rate.", "HYBRID inherits the advantages of both by combining the two models.",
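A small sketch of the performance drop rate d defined above, computed for one subset of classes from its per-round results.

```python
def performance_drop_rate(r):
    # r: per-round results for one class subset, e.g., [80.0, 72.5, 70.1]
    m = len(r)
    assert m >= 2, "need at least two rounds"
    # d = (1 / (m-1)) * sum_i (r_i - r_{i+1}) / r_i
    return sum((r[i] - r[i + 1]) / r[i] for i in range(m - 1)) / (m - 1)

# Example: performance_drop_rate([80.0, 72.5, 70.1]) ≈ 0.063
```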
"In this work, we define a new challenge in the NLP domain: incremental few-shot text classification with multi-round new classes, in two settings, with or without base classes.", "In addition to the problem formulation, we also release two benchmark datasets for this particular challenge: IFS-INTENT and IFS-RELATION.", "Two approaches, ENTAILMENT and HYBRID, are proposed to solve this problem.", "They convert the text classification problem into textual entailment and make maximal use of the pre-trained textual entailment model.", "Extensive experiments are conducted, and the results consistently show the effectiveness of our proposed models.", "We thank the reviewers for their valuable comments.", "This work is supported in part by NSF under grants III-1763325, III-1909323, and SaTC-1930941." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "abstain", "method", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "objective", "objective", "objective", "other", "abstain", "abstain", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "other", "other" ]
[ "Models of narrative schema knowledge have proven useful for a range of event-related tasks, but they typically do not capture the temporal relationships between events.", "We propose a single model that addresses both temporal ordering, sorting given events into the order they occurred, and event infilling, predicting new events which fit into an existing temporally-ordered sequence.", "We use a BART-based conditional generation model that can capture both temporality and common event co-occurrence, meaning it can be flexibly applied to different tasks in this space.", "Our model is trained as a denoising autoencoder: we take temporally-ordered event sequences, shuffle them, delete some events, and then attempt to recover the original event sequence.", "This task teaches the model to make inferences given incomplete knowledge about the events in an underlying scenario.", "On the temporal ordering task, we show that our model is able to unscramble event sequences from existing datasets without access to explicitly labeled temporal training data, outperforming both a BERT-based pairwise model and a BERT-based pointer network.", "On event infilling, human evaluation shows that our model is able to generate events that fit better temporally with the input events when compared to GPT-2 story completion models.", "This paper proposes a single model of events to support inferences in two seemingly different tasks: (1) temporal event ordering and (2) event infilling, or inferring unseen or unmentioned events occurring as part of a larger scenario.", "Figure 1 shows an example illustrating these two goals.", "Unlike prior approaches, we aim to address both with the same model architecture, rather than having to annotate data and build ad-hoc models for each task separately; our goal is to work towards models that capture temporal event knowledge broadly and support a wide range of inferences.", "Figure 1: Diagram of our modeling setup: from scrambled input events (e.g., She bought the present; I opened a present), TemporalBART produces the complete ordered event sequence, capturing both temporal ordering and event co-occurrence to make various event-related inferences.", "We thus need a suitably general modeling framework to capture temporal knowledge about events, which in our case will be a BART-based (Lewis et al., 2020) model we call TemporalBART.", "Note that classic temporal relation extraction models, which model temporal ordering in context for a particular document, may chiefly learn how to use local discourse cues rather than generalizable event knowledge (Chambers et al., 2014; Ning et al., 2018b).", "The goals in this work relate to past work on learning narrative schemas (Mooney and DeJong, 1985; Chambers, 2013; Peng and Roth, 2016; Peng et al., 2017).", "Our approach particularly follows a recent line of work using distributed representations of schemas (Pichotta and Mooney, 2016; Weber et al., 2018b), which support inferences about events without explicitly materializing a discrete schema library.", "The target tasks in this work are directly motivated by downstream applications of schema learning.", "Text generation tasks like story completion rely on understanding what makes narratives plausible and what events might be likely to happen before, after, and between other events (Jain et al., 2017; Yao et al., 2019), motivating our event infilling task.",
"Answering questions about causes, effects, or what might happen next in a scenario requires knowing typical temporal orders of event sequences (Zhou et al., 2019, 2020; Ning et al., 2020), motivating our temporal ordering task.", "Prior work has not combined traditional event co-occurrence with event temporality as we do.", "We propose a conditional generation model to tackle temporal event ordering and event infilling, and train it as a denoising autoencoder over out-of-context temporal event sequences.", "As shown in Figure 1, the encoder of our TemporalBART model reads a temporally scrambled sequence of a subset of input events, obtained by corrupting a temporally-ordered sequence of events from a corpus.", "The decoder, which can be viewed as a conditional event language model (Kiyomaru et al., 2019; Bosselut et al., 2019; Madaan et al., 2020), then reconstructs the complete, temporally-ordered event sequence.", "Such denoising training has been successfully exploited in many applications (Vincent et al., 2010; Lu et al., 2013; Lample et al., 2018; Lewis et al., 2020), and using seq2seq models to reorder and smooth inputs has been explored before (Goyal and Durrett, 2020), but to our knowledge we are the first to apply this in the temporal modeling setting.", "The conditional generation architecture of our model is flexible enough to address a variety of tasks, including our temporal ordering and event infilling tasks, by either sampling from the model or using it to score sequences.", "Capitalizing on the success of recent pre-trained encoder-decoder transformers (Lewis et al., 2020; Raffel et al., 2020), our model itself is based on BART, consuming and producing predicate-argument structures rendered in surface order.", "Gathering large-scale high-quality labeled data with temporal annotations is often expensive and requires specially designed annotation schemes (Pustejovsky et al., 2003a; Cassidy et al., 2014; Ning et al., 2018b; Zhao et al., 2021).", "Here, we instead turn to a corpus of narrative documents, EventsNarratives (Yao and Huang, 2018), and design an automatic method to extract the training data we need.", "In these documents, discourse order is loosely assumed to reflect temporal order, so events extracted from this text can directly provide training data for our models.", "This use of automatic annotation allows us to use broad-domain data, giving us a strong domain-independent temporal model (Zhao et al., 2021).", "To evaluate how well our proposed models capture temporal knowledge and solve the two targeted tasks, we apply them to out-of-domain test sets in a zero-shot manner.", "Specifically, for event ordering, we first extract test temporal event sequences from the CaTeRS (Mostafazadeh et al., 2016b) and MCTaco (Zhou et al., 2019) datasets, which include annotations of the temporal relations between events.", "We then compare the performance of our models with two baselines: a BERT-based pairwise model and a BERT-based pointer network.", "For event infilling, we use the test event sequences from CaTeRS and examine the ability of our models to order unseen events and generate infilled events in comparison with GPT-2 baselines from story generation.", "Our BART-based models significantly outperform the baseline models on the ordering settings we consider, and human evaluation verifies that our models can generate infilled events that are better temporally-ordered with respect to the input.",
"Learning temporal knowledge to order events and generating new events as part of schemas or stories are two problems that have received significant attention, but in contrast to our work, previous work typically focuses on each in isolation.", "Closely related to the temporal ordering aspect of this paper is temporal relation extraction, which orders pairs of events in text in document context (Pustejovsky et al., 2003b; Cassidy et al., 2014; Ning et al., 2018b).", "This problem has been addressed as pairwise classification (Mani et al., 2006; Verhagen et al., 2007; Chambers et al., 2007; Verhagen and Pustejovsky, 2008; Cheng and Miyao, 2017; Tourille et al., 2017; Goyal and Durrett, 2019) or as a structured learning problem to enforce constraints on the output (Do et al., 2012; Ning et al., 2017, 2018a; Leeuwenberg and Moens, 2017; Han et al., 2019a,b).", "However, even in these latter works, the models focus on pairwise relations.", "In contrast, our work views temporal event ordering as a sequence generation problem, which gives models a stronger inductive bias to capture global temporal relations between events.", "One recent effort (Madaan and Yang, 2020) treats this task as a graph generation problem, and so is able to predict more complex structures, but it focuses solely on ordering and is not suitable for our event infilling goals.", "Schema learning systems are often evaluated on their ability to predict unseen events.", "Figure 2: Our event-based denoising autoencoding training scheme (event shuffling and event deletion over the input events, with the autoencoder reconstructing the original sequence), used to encourage our model to learn temporal event knowledge.", "Initial work attempted to use statistical methods to derive a library of schematic information (Mooney and DeJong, 1985; Chambers and Jurafsky, 2008; Jans et al., 2012).", "Another thread exploits event language modeling to learn distributions over events (Pichotta and Mooney, 2016; Peng and Roth, 2016; Weber et al., 2018b), or focuses on learning event representations (Modi, 2016; Weber et al., 2018a) rather than writing down discrete schemas.", "However, most of this work only models the co-occurrence between events instead of directly considering temporal information, and only represents events as a small tuple of S-V-O headwords.", "Another line of work instead directly focuses on extracting coherent narratives from story salads (Wang et al., 2018) or, more broadly, generating narratives given predefined scenarios (Wang et al., 2019; Qin et al., 2020).", "However, without considering temporal ordering, these systems are prone to learning the discourse ordering of events instead of a strong representation of temporal knowledge.", "Our framework involves modeling a conditional distribution P(y | x) over temporal event sequences y = {e_1, ..., e_l}, which are sequences of events taken out of context (i.e., not represented as spans in a document) that are part of the same scenario, involve shared actors, and are temporally ordered.", "The input to the model is a (not necessarily temporal) sequence of events x = {e_1, ..., e_m} that represents incomplete information about the scenario y: a partial set of unordered events.", "Our model should learn distributions over a true underlying order of events, without obvious gaps in the event sequence, given this incomplete information.", "By taking events out of context rather than in the context of a document, we encourage the model to encode temporal knowledge between events rather than superficial cues like surface textual order or discourse connectives that might determine their order.",
"For the definition of events, we follow Chambers and Jurafsky (2008), where an event e is a predicate v_e along with its arguments (Palmer et al., 2005).", "Our model can be formulated as a denoising autoencoder if x is created as a noised version of y.", "Specifically, given a temporal event sequence y as defined above, we first corrupt it to get the required input x by performing two transformation functions consecutively (see Figure 2).", "Event Shuffling.", "We first perform a random shuffling of the events in y to produce x.", "To perfectly reconstruct the original sequence y, the model must capture the temporal relations between events.", "Event Deletion.", "We randomly delete each event in y with probability p to produce x.", "This denoising scheme is similar to the token deletion transformation in Lewis et al. (2020).", "To perfectly reconstruct the original event sequence, the model needs to encode schema-like event knowledge so as to generate events not included in the input x and insert them at the correct positions.", "As a result, this denoising can help the model learn event infilling.", "We train our model to maximize log P(y | x) on this automatically-constructed data.", "To leverage the power of pretrained transformers, we adopt BART (Lewis et al., 2020) as the underlying architecture for our model, and initialize our model with its pretrained weights.", "The overall model, shown in Figure 3, takes a corrupted event sequence x = {e_i} as input, and outputs the true event sequence y = {e_j}.", "To feed the event-based inputs and outputs to BART, we need to represent each event e in a textual format Repr(e).", "We represent e with the concatenation of its predicate and all arguments.", "Unlike previous work which only uses the syntactic heads of the predicate and certain arguments (Pichotta and Mooney, 2016; Weber et al., 2018a,b), our approach preserves complex noun phrase arguments and exposes to the model arguments like temporal modifiers.", "We strike a balance between using enough information to have meaningful event representations and not consuming entire documents (Han et al., 2019a,b), which would result in a model that overly relies on discourse clues.", "Figure 3: Model architecture of the proposed BART-based conditional generation models; e.g., encoder input [E1] I opened the present [E2] She bought it yesterday, decoder output [E2] bought [A] She bought it yesterday, [E] gave [A] She gave me a present (a new event generated), [E1] opened [A] I opened the present (events copied from the input).", "We then consider two variants for input and output.", "TemporalBART.", "This model first encodes each event e_i in x as Repr(e_i), and concatenates them with a special token [E] prepended in front of each event.", "This special token can help the model identify the boundaries between the input events; such placeholder tokens have been used in related tasks like entity tracking in procedural text (Gupta and Durrett, 2019).", "For the output, we instead prepend [E] v_{e_j} [A] in front of each Repr(e_j).", "This setup not only provides an extra supervision signal that encourages the model to predict ordering on the basis of predicates, but also allows us to recover an event sequence post hoc by checking the predicate part of the generation.",
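A hedged sketch of the corruption scheme and linearization just described: shuffle, delete with probability p (0.15 is illustrative; the paper's value is not stated here), and render events with [Ei]/[E] markers. Repr(e) is simplified to a pre-computed string per event.

```python
import random

def corrupt(events, p=0.15, rng=random):
    x = events[:]
    rng.shuffle(x)                               # event shuffling
    kept = [e for e in x if rng.random() > p]    # event deletion
    return kept or x[:1]                         # keep at least one event

def encoder_input(inputs):
    # TemporalBART-style input: [E1] Repr(e_1) [E2] Repr(e_2) ...
    return " ".join(f"[E{i + 1}] {e['repr']}" for i, e in enumerate(inputs))

def decoder_output(target, inputs):
    # Indexed variant: [Ei] v [A] Repr(e) for events copied from input
    # position i, and [E] v [A] Repr(e) for newly generated events.
    index = {e["repr"]: i + 1 for i, e in enumerate(inputs)}
    parts = []
    for e in target:
        tag = f"[E{index[e['repr']]}]" if e["repr"] in index else "[E]"
        parts.append(f"{tag} {e['verb']} [A] {e['repr']}")
    return " ".join(parts)
```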
"TemporalBART-indexed.", "This model, depicted in Figure 3, uses the same input and output format as TemporalBART, except that the special token prepended before each event e_i is [Ei] instead of [E].", "For the output, if e_j is one of the input events, i.e., e_j = e_i, then we also change the tokens prepended to e_j to [Ei] v_{e_j} [A].", "Otherwise, we still use [E] as the special event token.", "Note that the model is not able to cheat using the [Ei] tokens to do the prediction, since the input events are scrambled by the shuffling denoising training scheme described in 3.1.", "Compared to TemporalBART, the use of [Ei] here provides an extra clue for the model to associate input events with output events, which can benefit event ordering.", "It also provides a potential way to focus only on modeling the ordering of the target sequence, rather than also mixing in generation decisions, many of which are copying event arguments and often affect the prediction.", "We experiment with this method, denoted TemporalBART-indexed (tags only), in Appendix A.", "Training details of these BART-based models are described in the Appendix.", "For our framework, the training data we need is event sequences in temporal order.", "Note that most text data occurs in discourse order, which is not the same thing: human annotations of temporal relation datasets like TimeBank (Pustejovsky et al., 2003b) show that many events mentioned earlier in the text occur later in time.", "Existing datasets of temporal relations (Cassidy et al., 2014; Vashishtha et al., 2019) are small-scale, and annotating more data is expensive and prone to low agreement (Ning et al., 2018b).", "To combat this issue, we instead try to automatically gather the training data we need.", "Corpus.", "We use the English-language EventsNarratives corpus (Yao and Huang, 2018), which contains more than 200,000 narrative-structured documents identified from three different source domains: news articles, novel books, and blogs.", "Yao and Huang (2018) use a weakly supervised method to identify narrative texts, which describe a sequence of events in such a way that the discourse order is very likely to reflect the temporal order.", "This gives us an entry point to collect temporal event sequences automatically from each document.", "Here we focus on documents in the novel domain as our source for temporal event sequences.", "Extracting Temporal Event Sequences.", "To obtain the training event sequences, we first use an SRL model from AllenNLP (Gardner et al., 2017) to extract verbs (events) and their arguments.", "Then, temporal event sequences are constructed by connecting only events in different sentences, since the relations between events within the same sentence are unclear even in narrative documents.", "Here, to ensure all the events in a sequence have a strong relation with each other, we only include chains of events that are associated with a common entity (Chambers and Jurafsky, 2008), as determined by checking whether the arguments of two events have some shared non-stopword tokens.", "With this procedure, we are able to collect nearly 2 million temporal event sequences to train on, with nearly 70% of the sequences consisting of three or more events.", "Here we describe the two target tasks of our model and how they can be handled as event-based conditional generation problems.", "A visual of the task formulations is shown in Figure 4.",
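A rough sketch of the chain-extraction heuristic above: events come from SRL over each sentence, and an event extends a chain only if it appears in a later sentence and shares a non-stopword argument token with the chain's last event. The stopword list and the greedy chaining are simplifying assumptions, not the paper's exact pipeline.

```python
STOPWORDS = {"the", "a", "an", "of", "to", "in", "and", "it", "she", "he"}

def content_tokens(event):
    # event: {"verb": str, "args": [str, ...]} from an SRL model
    toks = {t.lower() for arg in event["args"] for t in arg.split()}
    return toks - STOPWORDS

def extract_chains(doc_events):
    # doc_events: per-sentence event lists, in discourse order
    chains = []
    for sent_idx, events in enumerate(doc_events):
        for ev in events:
            extended = False
            for chain in chains:
                last_sent, last_ev = chain[-1]
                # cross-sentence links only, and only via a shared entity
                if last_sent < sent_idx and content_tokens(last_ev) & content_tokens(ev):
                    chain.append((sent_idx, ev))
                    extended = True
            if not extended:
                chains.append([(sent_idx, ev)])
    return [[ev for _, ev in c] for c in chains if len(c) > 1]
```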
"Temporal Event Ordering.", "Given an unordered set of events {e_i}, this task's goal is to produce the temporal ordering of {e_i}, as shown in Figure 4(a).", "We ask the model to generate an ordered sequence of events {e_{f(i)}} given the set {e_i}, where f(·) is a mapping function that determines which event to put at position i.", "This is a conditional generation problem that is directly solved by our proposed models.", "Event Infilling.", "The goal of event infilling is to generate inserted events at some pre-selected insertion positions in a seed event sequence (Wang et al., 2020).", "To simplify the evaluation, here we assume that given an event sequence x = {e_i}, models will only be required to generate one inserted event at one insertion position i*, as shown in Figure 4(b).", "We first feed {e_i} as the input to our model, then generate one event e* using x_prefix = {e_i | i < i*} as the decoding prefix.", "To force our models to produce e* ∉ x, we prevent the model from generating the input predicates {v_{e_i}} during the decoding process.", "As baselines, we consider a BERT-based pairwise model for the ordering task and a pointer network model that directly models event sequence permutations discriminatively.", "BERT-based Pairwise Model + SSVM.", "We follow the architecture of the Deep SSVM model used in Han et al. (2019a) as our first baseline, which tackles event ordering as a pairwise classification problem.", "This network first exploits a BERT-based model (Devlin et al., 2019) to compute pairwise scores for e_i preceding e_j in the output y.", "The final output is then obtained by solving an ILP over all the pairwise scores.", "The overall network is trained with the structured SVM loss so that it can learn to make joint predictions with the transitivity constraint.", "To make this baseline more comparable to our models, we take Repr(e_i) prepended with [E] as the event representation, instead of using the sentence containing v_{e_i} as in Han et al. (2019a).", "Detailed formulas are in Appendix B.", "We denote this baseline as Pairwise+SSVM in the evaluations.", "BERT-based Pointer Network.", "This network first follows the BERT-based Pairwise Model + SSVM to extract the vectorized representation U_{p_i} for each e_i, where U is the final BERT-encoded matrix, and p_i is the position of the first token of e_i in the input sequence.", "These event representations are then fed into an LSTM-based pointer network to model the ordering probability by decomposing it in a sequential fashion: P_seq(y | x) = \prod_j P(j | h_1, ..., U_{p_1}, ...) (1), where h_t is the decoder hidden state in the pointer network.", "Compared to the above pairwise baseline, this model has a stronger inductive bias for exploiting global event relations.", "We train the sequential model with teacher forcing to maximize the probability of the gold ordering.", "We denote this baseline as BERT-based PN in the evaluation section.", "HAQAE.", "HAQAE (Weber et al., 2018b) is a vector-quantized variational autoencoder which encodes schema knowledge with hierarchical latent variables.", "Since HAQAE is also an event-level seq2seq autoencoder, we can easily apply it to our setting.",
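Returning to the event-infilling decoding formulated above, here is a hedged sketch using HuggingFace BART: decode from the linearized prefix while banning the input predicates so the generated event e* is new. The checkpoint, the prefix handling (the decoder start token is omitted for brevity), and the linearization are illustrative, not the authors' code.

```python
from transformers import BartForConditionalGeneration, BartTokenizer

tok = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

def infill_one_event(input_text, prefix_text, input_predicates):
    enc = tok(input_text, return_tensors="pt")
    prefix = tok(prefix_text, return_tensors="pt", add_special_tokens=False)
    banned = [tok(" " + v, add_special_tokens=False).input_ids
              for v in input_predicates]          # {v_{e_i}} to exclude
    out = model.generate(
        **enc,
        decoder_input_ids=prefix.input_ids,       # x_prefix as decoding prefix
        bad_words_ids=banned,                     # force e* not in x
        max_new_tokens=30,
        num_beams=4,
    )
    return tok.decode(out[0], skip_special_tokens=True)
```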
"During training we follow Weber et al. (2018b), except that we use our narrative event sequences for training and represent each event with the predicate-argument format described in 3.2, so that it is more comparable to our BART-based models.", "GPT-2.", "GPT-2 (Radford et al., 2019) is a transformer-based pretrained language model that has been exploited in various generation tasks like story generation (Dathathri et al., 2020; Rashkin et al., 2020).", "However, one issue with the GPT-2 model is that it can only perform uni-directional generation.", "To apply GPT-2 to generate an inserted event e*, we first concatenate {Repr(e_i) | e_i ∈ x_prefix} with periods in between, and treat it as the decoding prefix only.", "We then decode until another period is generated, and take the model's output as the text representation of e*.", "Except where otherwise specified, we use the GPT2-medium pretrained model from HuggingFace's Transformers (Wolf et al., 2020), whose model size is comparable to BART-large.", "Infilling GPT-2.", "To build a stronger GPT-2 baseline that doesn't only condition on the prefix events, we follow the baselines from Qin et al. (2020) to adapt GPT-2 to infilling tasks.", "Infilling GPT-2 generates the infilled events by wrapping the events after the insertion position to the front.", "That is, the decoding prefix fed to infilling GPT-2 becomes the concatenation of {Repr(e_i) | i >= i*}, <SEP>, and {Repr(e_i) | i < i*}, again with a period appended after each event.", "The special token <SEP> is used to help the model differentiate the events before and after the insertion position.", "All the models used in the evaluation are trained on the temporal event sequences automatically collected from EventsNarratives, except GPT-2, since we want to compare the knowledge learned by GPT-2 with our proposed models.", "Although we are able to gather millions of sequences, for efficiency, we train on 100,000 sequences unless specified otherwise.", "For each sequence, we extract 2 distinct permutations from the corruption process.", "This results in 200,000 training examples in total.", "We use two out-of-domain English datasets to extract the test temporal event sequences: CaTeRS and MCTaco.", "As during training, two different permutations are extracted from each sequence.", "Figure 5: An example of the data extracted from MCTaco. Context: In Colombia, the drug-financed guerrillas trying to seize the country and destroy democracy include M-19, which Castro has clearly backed. Candidate answer: they would destroy the democracy.", "CaTeRS (Mostafazadeh et al., 2016b).", "CaTeRS includes annotations of events and their causal and temporal relations on 320 five-sentence short stories sampled from the ROCStories corpus (Mostafazadeh et al., 2016a).", "To extract the evaluation data from CaTeRS, we first apply the SRL model used in 3.3 to each story.", "Then, a directed acyclic graph is constructed whose nodes are events e whose predicates v_e are captured by the SRL model, with an edge (e_i, e_j) indicating that e_i happens temporally before e_j.", "Note that here we treat all types of annotated relations except IDENTITY, DURING and CAUSE_TO_END as BEFORE, as suggested in Mostafazadeh et al. (2016b).",
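As a small illustration of the two GPT-2 decoding prefixes described above, here is a sketch; events are assumed to be pre-rendered Repr(e) strings.

```python
def gpt2_prefix(events, i_star):
    # vanilla GPT-2: condition only on the events before the insertion point
    return " . ".join(events[:i_star]) + " ."

def infilling_gpt2_prefix(events, i_star, sep="<SEP>"):
    # infilling GPT-2: wrap the suffix events to the front, then <SEP>,
    # then the prefix events
    suffix = " . ".join(events[i_star:]) + " ."
    prefix = " . ".join(events[:i_star]) + " ." if i_star > 0 else ""
    return f"{suffix} {sep} {prefix}".strip()

events = ["Mike tried to make a tree", "He painted over the bad tree"]
print(infilling_gpt2_prefix(events, 1))
# -> "He painted over the bad tree . <SEP> Mike tried to make a tree ."
```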
"Test temporal event sequences are then extracted by retrieving all the paths from the source nodes to the sink nodes in the graph.", "With this procedure, we are able to gather 842 event sequences, 60% of which contain 3 or more events.", "With permutations, the final CaTeRS evaluation set has 1684 examples.", "MCTaco (Zhou et al., 2019).", "MCTaco is a multiple-choice QA dataset for evaluating model understanding of 5 different types of temporal commonsense.", "Table 1: Pairwise accuracy of each architecture on CaTeRS, for all sequences and for sequences of length >= 3.", "To extract suitable test data, we focus on questions with the reasoning type of event ordering and their positive candidates.", "Each data point here consists of a sentence describing multiple events {e_i^c}, a question asking what event could happen temporally before/after a particular event e_q ∈ {e_i^c}, and a candidate event e_a.", "Critically, the question itself tells us whether e_a should happen before/after e_q in the temporal event sequence formed by {e_i^c} ∪ {e_a}.", "With this annotation, we evaluate our models by first feeding the randomly shuffled {e_i^c} ∪ {e_a} into a model, then checking the ordering between e_a and e_q in the output sequence.", "Here, we were able to extract 585 test sequences from MCTaco.", "For each sequence, {e_i^c} and e_a are extracted with the SRL model used in 3.3.", "For the question, we first use a set of pre-defined regex templates to extract an event e_q and a temporal relation (before/after).", "We then match e_q to one of {e_i^c} by ROUGE-L scores.", "See Figure 5 for an example of the extracted data.", "Compared to CaTeRS, since the sentences here are from 9 different domains in MultiRC (Khashabi et al., 2018), the types of events are more diverse.", "The event arguments are also more complex.", "We first examine the temporal ordering results on CaTeRS, shown in Table 1.", "We compute the pairwise accuracy of the predicted event sequences, i.e., how many pairs of events in the output are ordered correctly by a model.", "Note that the BART-based models can deviate from generating permutations of the input; however, we found that the most probable generated sequences were almost exact permutations of the input or easily aligned to the input using a heuristic.",
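A minimal sketch of the pairwise accuracy metric just described: the fraction of event pairs whose relative order in the prediction matches the gold sequence.

```python
from itertools import combinations

def pairwise_accuracy(gold, pred):
    # gold, pred: the same events (hashable ids), in gold vs. predicted order
    pos = {e: i for i, e in enumerate(pred)}
    pairs = list(combinations(gold, 2))
    correct = sum(pos[a] < pos[b] for a, b in pairs)
    return correct / len(pairs)

# pairwise_accuracy(["buy", "give", "open"], ["buy", "open", "give"]) == 2/3
```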
Here since we only know the gold temporal relation of one pair of events in the input, i.e e q and e a , the averaged accuracy on predicting the order of e q and e a is computed.", "In addition, since the ratio of before/after questions is significantly unbalanced in MCTaco, with 90% asking about the after relationship, we also compute the macro F1 score as our metric (averaging F1 across these two classes).", "Our two baselines perform worse than just picking the majority label.", "This is possibly due to the high diversity of events in MCTaco, which makes it much harder to apply a zero-shot model.", "In contrast, TemporalBART achieves an F1 score about 3 points higher than the Pairwise+SSVM baseline, and TemporalBART-indexed further performs best among all.", "In Appendix E, we also show that our models are able to learn temporal phenomenon not explicitly annotated in our training data, which is another demonstration of our model's ability to generalize.", "We evaluate our BART-based models on an additional variant of this ordering problem that better tests their capability as generative models.", "Recall that previously, BART conditions on the complete (but possibly scrambled) sequence of events.", "We now consider ordering an event in the decoder that the model does not condition on in the encoder.", "Concretely, for each temporal event sequence in CaTeRS, we randomly select one event e , and treat the rest of the sequence as the seed input event sequence { e 1 , , e N } .", "Then we check if a model can correctly determine where to insert e into the input sequence.", "Specifically, for both the BART-based models and the GPT-2 baselines, we use the generation probability to rank event sequences { e 1 , , e i 1 , e , e i , , e N } for i between 1 and N + 1 (all possible locations).", "If a model correctly ranks the gold candidate higher, it indicates that it can model temporal relations between seen events and new unseen events it may generate.", "The results are shown in Table 3, where we compute the top-1 and top-2 exact match (EM): did the model rank the gold sequence 1st or 2nd highest?", "Our GPT-2 variants are only slightly better than random.", "HAQAE, also using an autoencoder framework, performs worse than infilling GPT-2, likely due to the lack of large-scale pretraining and the loss of information when compressing input into latent variables.", "Our BART-based models are significantly better, with TemporalBART-indexed showing the benefit of using indexed event markers to help the model capture order.", "We also perform an ablation of deletion during training (Figure 2).", "Unsurprisingly for this unseen event evaluation, not deleting events in training (setting p to 0) causes a major drop by 14 EM points.", "Deletion denoising is evidently critical to model new events.", "Now we turn to temporal event infilling: given a CaTeRS sequence, remove a random event at index i , and denote the resulting sequence { e 1 , , e N } .", "We then ask a model to generate one event e at position i so { e 1 , , e i 1 , e , e i , , e N } is temporally ordered with the new event.", "We evaluate the quality of the generated (in-serted) events by human evaluation on Amazon Mechanical Turk.", "Specifically, we randomly sample 30 examples from CaTeRS and have 5 raters judge the coherence and temporality (on a scale from 0 to 2) of the inserted event from each model.", "See Figure 8 in Appendix for our exact prompt.", "The final scores for each model on coherence and temporality are computed by 
"The final scores for each model on coherence and temporality are computed by taking the average of the majority rating on each prediction.", "Here we only include the GPT-2 models as baselines, since HAQAE also uses the autoencoder framework and already performs significantly worse in 5.3.", "The results of this evaluation are shown in Table 4.", "All models achieve reasonable coherence scores.", "However, in terms of temporality, GPT-2 performs worst, as expected, since it can only condition on partial input event sequences while the other three consider the whole event sequence as input.", "Both of the BART-based models achieve better performance than infilling GPT-2.", "The improvements on the temporal score are significant with p < 0.05 according to bootstrap resampling for both TemporalBART models with respect to infilling GPT-2.", "Figure 6 gives examples of infilled events generated by GPT-2, infilling GPT-2, and TemporalBART.", "On this specific test example, GPT-2 generates an event generally about the Apple Watch, which is less relevant to the input scenario about Mike making a tree.", "The event generated by infilling GPT-2 is coherent with the scenario, but doesn't occur in the correct order with respect to the input events.", "The event generated by TemporalBART is the best in terms of coherence and temporality.", "More examples are in Table 7 of the Appendix.", "Figure 7 shows that the performance of both our models on the CaTeRS ordering task improves when increasing the amount of narrative training data.", "Figure 6: Real examples of infilled events; for the input events e_2: Mike tried to make a tree and e_3: He painted over the bad tree with design of his own creation, the inserted event is After breakfast Mike picked a good piece of twine (TemporalBART), You can buy a $25 Apple Watch with the watch face (GPT-2), and He started with a rough looking tree (infilling GPT-2).", "This demonstrates that the automatically extracted temporal event sequences are useful and diverse enough to help the models learn temporal knowledge.", "The TemporalBART-indexed model is effective on surprisingly small amounts of data, but also scales well with data size; however, we observe a plateau in both models, which motivated our decision to use 100k training sequences.", "For comparison, we train our TemporalBART-indexed model on 1266 event sequences gathered from the MATRES dataset, a human-labeled dataset for temporal relation extraction, using the same procedure we applied to CaTeRS.", "However, Figure 7 shows that the resulting performance, 65.6 when training on MATRES, is significantly lower than the best number we get on narrative data.", "Even with the same size of training set, using narrative data achieves over 7 points of improvement over using MATRES.", "This suggests that the small-scale human-labeled dataset is not enough for models to learn generalized temporal knowledge, and even with the same amount of data, narrative data may be a better source for general temporal knowledge.", "This work presents a BART-based conditional generation model and a denoising autoencoder framework to learn temporal event knowledge, and addresses both temporal ordering and event infilling tasks by pretraining on automatically collected data.", "Our experiments demonstrate that our model is able to perform temporal ordering and infilling in a zero-shot manner, without fine-tuning on our target datasets, which suggests that it can also be applied to other settings requiring event schematic and temporal knowledge.",
"Thanks to Mahnaz Koupaee from Stony Brook University for providing directions on our HAQAE baseline, and to the members of the UT TAUR lab for helpful discussion, particularly Yasumasa Onoe and Jiacheng Xu for suggestions on the human evaluation.", "Thanks as well to the anonymous reviewers for their comments.", "This work is based on research that is in part supported by the Air Force Research Laboratory (AFRL), DARPA, for the KAIROS program under agreement number FA8750-19-2-1003.", "The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon.", "The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Air Force Research Laboratory (AFRL), DARPA, or the U.S. Government." ]
[ "abstain", "objective", "method", "method", "abstain", "result", "result", "objective", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "method", "objective", "result", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "objective", "abstain", "method", "method", "result", "objective", "abstain", "other", "other", "objective", "abstain", "other", "method", "other", "other", "method", "other", "other", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "objective", "other", "other", "other", "other", "other" ]
[ "Prior work has proved that Translation Memory (TM) can boost the performance of Neural Machine Translation (NMT).", "In contrast to existing work that uses a bilingual corpus as TM and employs source-side similarity search for memory retrieval, we propose a new framework that uses monolingual memory and performs learnable memory retrieval in a cross-lingual manner.", "Our framework has unique advantages.", "First, the cross-lingual memory retriever allows abundant monolingual data to be used as TM.", "Second, the memory retriever and NMT model can be jointly optimized for the ultimate translation goal.", "Experiments show that the proposed method obtains substantial improvements.", "Remarkably, it even outperforms strong TM-augmented NMT baselines using bilingual TM.", "Owing to its ability to leverage monolingual data, our model also demonstrates effectiveness in low-resource and domain adaptation scenarios.", "Augmenting parametric neural network models with non-parametric memory (Khandelwal et al., 2019; Guu et al., 2020; Lewis et al., 2020a,b) has recently emerged as a promising direction to relieve the demand for ever-larger model sizes (Devlin et al., 2019; Radford et al., 2019; Brown et al., 2020).", "For the task of Machine Translation (MT), inspired by the Computer-Aided Translation (CAT) tools that professional human translators have used to increase productivity for decades (Yamada, 2011), the usefulness of Translation Memory (TM) has long been recognized (Huang et al., 2021).", "In general, a TM is a database that stores pairs of source text and its corresponding translations.", "The work described in this paper is partially supported by a grant from the Research Grant Council of the Hong Kong Special Administrative Region, China (Project Code: 14200719).", "As in human translation, early work (Koehn and Senellart, 2010; He et al., 2010; Utiyama et al., 2011; Wang et al., 2013, inter alia) presents translations of similar source inputs to statistical translation models as additional cues.", "Recent work has confirmed that TM can help Neural Machine Translation (NMT) models as well.", "In a similar spirit to prior work, TM-augmented NMT models do not discard the training corpus after training but keep exploiting it at test time.", "These models perform translation in two stages: in the retrieval stage, a retriever searches for nearest neighbors (i.e., source-target pairs) in the training corpus based on source-side similarity such as lexical overlap (Gu et al., 2018; Zhang et al., 2018; Xia et al., 2019), embedding-based matches (Cao and Xiong, 2018), or a hybrid (Bulte and Tezcan, 2019; Xu et al., 2020); in the generation stage, the retrieved translations are injected into a standard NMT model by attending over them with sophisticated memory networks (Gu et al., 2018; Cao and Xiong, 2018; Xia et al., 2019; He et al., 2021), by directly concatenating them to the source input (Bulte and Tezcan, 2019; Xu et al., 2020), or by biasing the word distribution during decoding (Zhang et al., 2018).",
"Most recently, Khandelwal et al. (2020) propose a token-level nearest neighbor search using the complete translation context, i.e., both the source-side input and the target-side prefix.", "Despite their differences, we identify two major limitations in previous research.", "First, the translation memory has to be a bilingual corpus consisting of aligned source-target pairs.", "This requirement limits the memory bank to bilingual pairs and precludes the use of abundant monolingual data, which can be especially helpful in low-resource scenarios.", "Second, the memory retriever is non-learnable, not end-to-end optimized, and lacks the ability to adapt to specific downstream NMT models.", "Concretely, current retrieval mechanisms (e.g., BM25) are generic similarity search, adopting a simple heuristic.", "That is, the more a source sentence overlaps with the input sentence, the more likely its target-side translation pieces will appear in the correct translation.", "Although this observation is true, the most similar memory does not necessarily serve NMT models best.", "Ideally, the retrieval metric would be learned from the data in a task-dependent way: we wish to consider a memory only if it can indeed boost the quality of the final translation.", "In this work, we propose to augment NMT models with monolingual TM and a learnable cross-lingual memory retriever.", "Specifically, we align source-side sentences and the corresponding target-side translations in a latent vector space using a simple dual-encoder framework (Bromley et al., 1993), such that the distance in the latent space yields a score function for retrieval.", "As a result, our memory retriever directly connects the dots between the source-side input and target-side translations, enabling monolingual data in the target language to be used alone as TM.", "Before running each translation, the memory retriever selects the highest-scored memories from a large collection of monolingual sentences (the TM), which may include but is not limited to the target side of the training corpus, and then the downstream NMT model attends over those memories to help inform its translation.", "We design the memory retriever with differentiable neural networks.", "To unify the memory retriever and its downstream NMT model into a learnable whole, the retrieval scores are used to bias the attention scores over the most useful retrieved memories.", "In this way, our memory retrieval can be end-to-end optimized for the translation objective: a retrieval that improves the gold translation's likelihood is helpful and should be rewarded, while an uninformative retrieval should be penalized.", "One challenge in training our proposed framework is that, when starting from random initialization, the retrieved memories will likely be totally unrelated to the input.", "Since the memory retriever then does not exert a positive influence on the NMT model's performance, it cannot receive a meaningful gradient and improve.", "This causes the NMT model to learn to ignore all retrieved memories.", "To avoid this cold-start problem, we propose to warm-start the retrieval model using two cross-alignment tasks.", "Experiments show that (1) our model leads to significant improvements over a non-TM baseline NMT model, even outperforming strong TM-augmented baselines.", "This is remarkable given that previous TM-augmented models rely on bilingual TM while our model only exploits the target side.", "(2) Our model can substantially boost translation quality in low-resource scenarios by utilizing extra monolingual TM that is not present in the training pairs.",
present in the training pairs.", "(3) Our model gains strong cross-domain transferability by hot-swapping domain-specific monolingual memory.", "TM-augmented NMT This work contributes primarily to the research line of Translation Memory (TM) augmented Neural Machine Translation (NMT).", "Feng et al. (2017) augmented NMT with a bilingual dictionary to tackle infrequent word translation.", "Gu et al. (2018) proposed a model that retrieves examples similar to the test source sentence and encodes the retrieved source-target pairs with key-value memory networks.", "Cao and Xiong (2018); Cao et al. (2019) used a gating mechanism to balance the impact of the translation memory.", "Zhang et al. (2018) proposed guiding models by retrieving n-grams and up-weighting the probabilities of the retrieved n-grams.", "Bulte and Tezcan (2019) and Xu et al. (2020) used fuzzy matching with translation memories and augmented source sequences with retrieved source-target pairs.", "Xia et al. (2019) directly ignored the source side of a TM and packed the target side into a compact graph.", "Khandelwal et al. (2020) ran an existing translation model on large bi-text corpora and recorded all hidden states for later nearest neighbor search at each decoding step, which is very compute-intensive.", "The distinctions between our work and prior work are clear: (1) the TM in our framework is a collection of monolingual sentences rather than bilingual sentence pairs; (2) we use learnable task-specific retrieval rather than generic retrieval mechanisms.", "Retrieval for Text Generation Discrete retrieval as an intermediate step has been shown beneficial to a variety of natural language processing tasks.", "One typical use is to retrieve supporting evidence for open-domain question answering (e.g., Chen et al., 2017; Lee et al., 2019; Karpukhin et al., 2020).", "Recently, retrieval-guided generation has gained increasing interest in a wide range of text generation tasks such as language modeling (Guu et al., 2018; Khandelwal et al., 2019; Guu et al., 2020), dialogue response generation (Weston et al., 2018; Wu et al., 2019; Cai et al., 2019a,b), code generation (Hashimoto et al., 2018) and other knowledge-intensive generation (Lewis et al., 2020b).", "Figure 1: Overall framework. The retrieval model (source and target sentence encoders over a dense TM index queried via MIPS) passes the relevant TM sentences and their relevance scores to the translation model (source encoder, memory encoder, and decoder), whose attention is biased by the relevance scores to produce the output y from the input x.", "It can be observed that there is a shift from using off-the-shelf search engines to learning task-specific retrievers.", "Our work draws inspiration from this line of research.", "However, retrieval-guided generation has so far been investigated mainly for knowledge retrieval in the same language.", "The memory retrieval in this work is more challenging due to the cross-lingual setting.", "NMT using Monolingual Data To our knowledge, the integration of monolingual data for NMT was first investigated by Gulcehre et al. (2015), who separately trained target-side language models using monolingual data, and then integrated them during decoding either through re-scoring the beam or by feeding the hidden state of the language model to the NMT model.", "Jean et al. 
(2015) also explored re-ranking the NMT output with an n-gram language model.", "Another successful method for leveraging monolingual data in NMT is back-translation (Sennrich et al., 2016; Fadaee et al., 2017; Edunov et al., 2018; He et al., 2016), where a reverse translation model is used to translate monolingual sentences from the target language to the source language to generate synthetic parallel sentences.", "Recent studies (Jiao et al., 2021; He et al., 2019) showed that self-training, where the synthetic parallel sentences are created by translating monolingual sentences in the source language, is also helpful.", "Our method is orthogonal to previous work and bears a unique feature: it can use more monolingual data without re-training (see Section 4.3).", "We start by formalizing the translation task as a retrieve-then-generate process in Section 3.1.", "Then, in Section 3.2, we describe the model design for the cross-lingual memory retrieval model.", "In Section 3.3, we describe the model design for the memory-augmented translation model.", "Lastly, in Section 3.4, we show how to optimize the two components jointly using standard maximum likelihood training, and therein we address the cold-start problem via cross-alignment pre-training.", "Our approach decomposes the whole translation process into two steps: retrieve, then generate.", "The overall framework is illustrated in Figure 1.", "The Translation Memory (TM) in our approach is a collection of sentences in the target language, denoted $\mathcal{Z}$.", "Given an input $x$ in the source language, the retrieval model first selects a set of possibly helpful sentences $\{z_i\}_{i=1}^M$ from $\mathcal{Z}$, where $M \ll |\mathcal{Z}|$, according to a relevance function $f(x, z_i)$.", "Then, the translation model conditions on both the retrieved set $\{(z_i, f(x, z_i))\}_{i=1}^M$ and the original input $x$ to generate the output $y$ using a probabilistic model $p(y \mid x, z_1, f(x, z_1), \ldots, z_M, f(x, z_M))$.", "Note that the relevance scores $\{f(x, z_i)\}_{i=1}^M$ are also part of the input to the translation model, encouraging it to focus more on the more relevant sentences.", "During training, maximizing the likelihood of the translation references improves both the translation model and the retrieval model.",
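The decomposition can be summarized in a few lines. The sketch below is illustrative only; the `retriever` and `translator` interfaces are assumptions, not the paper's released API:

```python
# A minimal sketch of the retrieve-then-generate decomposition; the retriever
# and translator interfaces are assumed for illustration.
def translate(x, retriever, translator, M=5):
    # Select the top-M memories {z_i} and their relevance scores {f(x, z_i)}.
    memories, scores = retriever.top_m(x, M)
    # Condition generation on x, the memories, and the scores:
    # p(y | x, z_1, f(x, z_1), ..., z_M, f(x, z_M)).
    return translator.generate(x, memories, scores)
```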
"The retrieval model is responsible for selecting the most relevant sentences for a source sentence from a large monolingual TM.", "This could involve measuring relevance scores between the source sentence and millions of candidate target sentences, which poses a serious computational challenge.", "To address this, we implement the retrieval model using a simple dual-encoder framework (Bromley et al., 1993) such that the selection of the most relevant sentences can be reduced to Maximum Inner Product Search (MIPS).", "With performant data structures and search algorithms (e.g., Shrivastava and Li, 2014; Malkov and Yashunin, 2018), the retrieval can be done efficiently.", "Specifically, we define the relevance score $f(x, z)$ between the source sentence $x$ and the candidate sentence $z$ as the dot product of their dense vector representations: $f(x, z) = E_{src}(x)^\top E_{tgt}(z)$, where $E_{src}$ and $E_{tgt}$ are the source sentence encoder and the target sentence encoder that map $x$ and $z$ to $d$-dimensional vectors, respectively.", "We implement the two sentence encoders using two independent Transformers (Vaswani et al., 2017).", "For an input sentence, we prepend the [BOS] token to its token sequence and then feed it into a Transformer.", "We take the representation at the [BOS] token as the output (denoted $\mathrm{Trans}_{\{src,tgt\}}(\{x, z\})$), and perform a linear projection ($W_{\{src,tgt\}}$) to reduce the dimensionality of the vector.", "Finally, we normalize the vectors to regulate the range of relevance scores.", "The normalized vectors have zero mean and unit length; therefore, the relevance scores always fall in the interval $[-1, 1]$.", "We let $\theta$ denote all parameters associated with the retrieval model.", "In practice, the dense representations of all sentences in the TM can be pre-computed and indexed using FAISS (Johnson et al., 2019), an open-source toolkit for efficient vector search.", "Given a source sentence $x$, we compute the vector representation $v_x = E_{src}(x)$ and retrieve the top $M$ target sentences with vectors closest to $v_x$.",
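A minimal sketch of this dual-encoder retriever with a FAISS index follows; module names and dimensions are assumptions, and an exact inner-product index is used here for simplicity instead of the approximate index described later:

```python
# A minimal sketch of the dual-encoder retrieval model with a FAISS MIPS index.
import torch
import torch.nn.functional as F
import faiss

class SentenceEncoder(torch.nn.Module):
    """Transformer + linear projection + normalization (plays the role of E_src or E_tgt)."""
    def __init__(self, transformer, hidden_dim=512, out_dim=256):
        super().__init__()
        self.transformer = transformer            # assumed to return (batch, seq, hidden)
        self.proj = torch.nn.Linear(hidden_dim, out_dim)

    def forward(self, token_ids):
        hidden = self.transformer(token_ids)      # (batch, seq, hidden)
        bos = hidden[:, 0]                        # representation at the prepended [BOS] token
        vec = self.proj(bos)                      # reduce dimensionality
        return F.normalize(vec, dim=-1)           # unit length, so dot products lie in [-1, 1]

def build_tm_index(tgt_encoder, tm_batches):
    """Pre-compute E_tgt(z) for all TM sentences and index them for MIPS."""
    with torch.no_grad():
        vecs = torch.cat([tgt_encoder(b) for b in tm_batches]).cpu().numpy()
    index = faiss.IndexFlatIP(vecs.shape[1])      # exact inner-product search; the paper
    index.add(vecs)                               # uses an approximate IVF/HNSW index
    return index

def retrieve(src_encoder, index, x_tokens, M=5):
    """Return relevance scores f(x, z_i) and the ids of the top-M TM sentences."""
    with torch.no_grad():
        v_x = src_encoder(x_tokens).cpu().numpy()
    scores, ids = index.search(v_x, M)            # MIPS: top-M by dot product
    return scores, ids
```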
"Given a source sentence $x$, a small set of relevant TM sentences $\{z_i\}_{i=1}^M$, and relevance scores $\{f(x, z_i)\}_{i=1}^M$, the translation model defines the conditional probability $p(y \mid x, z_1, f(x, z_1), \ldots, z_M, f(x, z_M))$.", "Our translation model is built upon the standard encoder-decoder NMT model (Bahdanau et al., 2015; Vaswani et al., 2017): the (source) encoder transforms the source sentence $x$ into dense vector representations, and the decoder generates an output sequence $y$ in an auto-regressive fashion.", "At each time step $t$, the decoder attends over both the previously generated sequence $y_{1:t-1}$ and the output of the source encoder, generating a hidden state $h_t$.", "The hidden state $h_t$ is then converted to next-token probabilities through a linear projection followed by a softmax function, i.e., $P_v = \mathrm{softmax}(W_v h_t + b_v)$.", "To accommodate the extra memory input, we extend the standard encoder-decoder NMT framework with a memory encoder and allow cross-attention from the decoder to the memory encoder.", "Specifically, the memory encoder encodes each TM sentence $z_i$ individually, resulting in a set of contextualized token embeddings $\{z_{i,k}\}_{k=1}^{L_i}$, where $L_i$ is the length of the token sequence $z_i$.", "We compute a cross attention over all TM sentences: $\alpha_{ij} = \frac{\exp(h_t^\top W_m z_{i,j})}{\sum_{i=1}^{M} \sum_{k=1}^{L_i} \exp(h_t^\top W_m z_{i,k})}$ (1) and $c_t = W_c \sum_{i=1}^{M} \sum_{j=1}^{L_i} \alpha_{ij} z_{i,j}$, where $\alpha_{ij}$ is the attention score of the $j$-th token in $z_i$, $c_t$ is a weighted combination of the memory embeddings, and $W_m$ and $W_c$ are trainable matrices.", "The cross attention is used twice during decoding.", "First, the decoder's hidden state $h_t$ is updated by the weighted sum of memory embeddings, i.e., $h_t \leftarrow h_t + c_t$.", "Second, we consider each attention score as a probability of copying the corresponding token (Gu et al., 2016; See et al., 2017).", "Formally, the next-token probabilities are computed as $p(y_t \mid \cdot) = (1 - \lambda_t) P_v(y_t) + \lambda_t \sum_{i=1}^{M} \sum_{j=1}^{L_i} \alpha_{ij} \mathbb{1}_{z_{i,j} = y_t}$, where $\mathbb{1}$ is the indicator function and $\lambda_t$ is a gating variable computed by another feed-forward network, $\lambda_t = g(h_t, c_t)$.", "Inspired by Lewis et al. (2020a), to enable gradient flow from the translation output to the retrieval model, we bias the attention scores with the relevance scores, rewriting Eq. (1) as $\alpha_{ij} = \frac{\exp(h_t^\top W_m z_{i,j} + \beta f(x, z_i))}{\sum_{i=1}^{M} \sum_{k=1}^{L_i} \exp(h_t^\top W_m z_{i,k} + \beta f(x, z_i))}$ (2), where $\beta$ is a trainable scalar that controls the weight of the relevance scores.",
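The biased attention of Eq. (2) and the copy mechanism can be sketched as follows; the parameter names follow the equations above, but the tensor layouts and batching are assumptions, and the gate lambda_t is taken as given (computed elsewhere by g(h_t, c_t)):

```python
# A minimal sketch of relevance-biased memory attention (Eq. 2) plus copying.
import torch

def memory_attention(h_t, mem_embs, mem_mask, rel_scores, W_m, W_c, beta):
    """h_t: (B, d) decoder state; mem_embs: (B, M, L, d) TM token embeddings;
    mem_mask: (B, M, L) bool; rel_scores: (B, M) = f(x, z_i); beta: trainable scalar."""
    logits = torch.einsum("bd,bmld->bml", h_t @ W_m, mem_embs)
    logits = logits + beta * rel_scores.unsqueeze(-1)     # every token of z_i gets +beta*f(x, z_i)
    logits = logits.masked_fill(~mem_mask, float("-inf"))
    alpha = torch.softmax(logits.flatten(1), dim=-1).view_as(logits)  # softmax over all TM tokens
    c_t = torch.einsum("bml,bmld->bd", alpha, mem_embs) @ W_c         # weighted memory summary
    return alpha, c_t

def next_token_probs(P_v, alpha, mem_token_ids, lambda_t):
    """Mix generation and copy distributions: (1 - lambda)*P_v + lambda*copy."""
    copy = torch.zeros_like(P_v)                                      # (B, vocab)
    copy.scatter_add_(1, mem_token_ids.flatten(1), alpha.flatten(1))  # sum alpha_ij per vocab id
    return (1 - lambda_t) * P_v + lambda_t * copy
```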
"We train the two components jointly by minimizing the negative log-likelihood loss function $-\log p(y \mid x, z_1, f(x, z_1), \ldots, z_M, f(x, z_M))$, where $y$ refers to the reference translation.", "As implied by Eq. (2), TM sentences that improve the likelihood of reference translations should receive higher attention scores and higher relevance scores, so gradient descent on the loss function will improve the quality of the retrieval model as well.", "Cross-alignment Pre-training However, if the retrieval model starts from random initialization, all top TM sentences $z_i$ will likely be unrelated to $x$ (or equally useless).", "This leads to a problem: the retrieval model cannot receive meaningful gradients and improve, and the translation model will learn to completely ignore the TM input.", "To avoid this cold-start problem, we propose two cross-alignment tasks to warm-start the retrieval model.", "The first task is sentence-level cross-alignment, which aims to find the right translation for a source sentence given a set of other translations and is directly related to our retrieval function.", "Concretely, we sample $B$ source-target pairs from the training corpus at each training step.", "Let $X$ and $Z$ be the $(B \times d)$ matrices of the source and target vectors encoded by $E_{src}$ and $E_{tgt}$, respectively.", "$S = XZ^\top$ is a $(B \times B)$ matrix of relevance scores, where each row corresponds to a source sentence and each column corresponds to a target sentence.", "Any $(X_i, Z_j)$ pair should be aligned when $i = j$, and should not be otherwise.", "The objective is to maximize the scores along the diagonal of the matrix and consequently reduce the values in the other entries; the loss function can be written as $\mathcal{L}^{(i)}_{snt} = -\log \frac{\exp(S_{ii})}{\exp(S_{ii}) + \sum_{j \neq i} \exp(S_{ij})}$.", "The second task is token-level cross-alignment, which aims to predict the tokens in the target language given the source sentence representation and vice versa.", "Formally, we use bag-of-words losses: $\mathcal{L}^{(i)}_{tok} = -\sum_{w_y \in Y_i} \log p(w_y \mid X_i) - \sum_{w_x \in X_i} \log p(w_x \mid Y_i)$, where $X_i$ ($Y_i$) represents the set of tokens in the $i$-th source (target) sentence and the token probabilities are computed by a linear projection followed by the softmax function.", "The joint loss for pre-training is $\frac{1}{B} \sum_{i=1}^{B} (\mathcal{L}^{(i)}_{snt} + \mathcal{L}^{(i)}_{tok})$.", "In practice, we find that both the sentence-level and token-level objectives are crucial for achieving superior performance.",
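A minimal sketch of the two warm-start objectives, assuming (B, d) batch matrices; the bag-of-words projection heads are assumed auxiliary components:

```python
# A minimal sketch of the sentence-level and token-level cross-alignment losses.
import torch
import torch.nn.functional as F

def sentence_alignment_loss(X, Z):
    """In-batch cross-alignment: X, Z are (B, d) source/target vectors; the
    diagonal of S = X Z^T holds the aligned pairs."""
    S = X @ Z.t()
    targets = torch.arange(S.size(0), device=S.device)
    return F.cross_entropy(S, targets)        # -log softmax(S)[i, i], averaged over i

def token_alignment_loss(X, Z, src_bows, tgt_bows, head_src2tgt, head_tgt2src):
    """Bag-of-words prediction across languages; src_bows/tgt_bows are (B, V)
    multi-hot token sets of each sentence."""
    log_p_tgt = F.log_softmax(head_src2tgt(X), dim=-1)   # predict target tokens from source vec
    log_p_src = F.log_softmax(head_tgt2src(Z), dim=-1)   # and vice versa
    loss = -(tgt_bows * log_p_tgt).sum(-1) - (src_bows * log_p_src).sum(-1)
    return loss.mean()
```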
"Table 1: Data statistics for the JRC-Acquis corpus. En-Es: 679,088 train / 2,533 dev / 2,596 test pairs; En-De: 699,569 train / 2,454 dev / 2,483 test pairs.", "Asynchronous Index Refresh To employ fast MIPS, we must pre-compute $E_{tgt}(z)$ for every $z \in \mathcal{Z}$ and build an index.", "However, the index cannot remain consistent with the running model during training, as $\theta$ will be updated over time.", "One straightforward solution is to fix the parameters of $E_{tgt}$ after the pre-training described above and only fine-tune the parameters of $E_{src}$.", "However, this may hurt performance since $E_{tgt}$ cannot adapt to the translation objective.", "Another solution is to asynchronously refresh the index by re-computing and re-indexing all TM sentences at regular intervals.", "The index is slightly outdated between refreshes; however, we use the fresh $E_{tgt}$ in the gradient estimate.", "We explore both options in our experiments.", "We experiment with the proposed approach in three settings: (1) the conventional setting, where the available TM is limited to the bilingual training corpus; (2) the low-resource setting, where bilingual training pairs are scarce but extra monolingual data is exploited as additional TM; and (3) nonparametric domain adaptation using monolingual TM.", "Note that existing TM-augmented NMT models are only applicable to the first setting; the last two settings only become possible with our proposed model.", "We use the BLEU score (Papineni et al., 2002) as the evaluation metric.", "We build our model using Transformer blocks with the same configuration as Transformer Base (Vaswani et al., 2017): 8 attention heads, 512-dimensional hidden states, and 2048-dimensional feed-forward states.", "The number of Transformer blocks is 3 for the retrieval model, 4 for the memory encoder in the translation model, and 6 for the encoder-decoder architecture in the translation model.", "We retrieve the top 5 TM sentences.", "The FAISS index code is IVF1024_HNSW32,SQ8 and the search depth is 64.", "We follow the learning rate schedule, dropout, and label smoothing settings described in Vaswani et al. (2017).", "We use the Adam optimizer (Kingma and Ba, 2014) and train models with up to 100K steps throughout all experiments.", "Table 2: Experimental results (BLEU scores, dev/test) on four translation tasks. Existing NMT systems (all with source-similarity retrievers): Gu et al. (2018): Es-En 63.16/62.94; Zhang et al. (2018): Es-En 63.97/64.30, En-Es 61.50/61.56, De-En 60.10/60.26, En-De 55.54/55.14; Xia et al. (2019): Es-En 66.37/66.21, En-Es 62.50/62.76, De-En 61.85/61.72, En-De 57.43/56.88. Our NMT systems: #1 (no retriever): Es-En 64.25/64.07, En-Es 62.27/61.54, De-En 59.82/60.76, En-De 55.01/54.90; #2 (source similarity): Es-En 66.98/66.48, En-Es 63.04/62.76, De-En 63.62/63.85, En-De 57.88/57.53; #3 (cross-lingual, fixed): Es-En 66.68/66.24, En-Es 63.06/62.73, De-En 63.25/63.06, En-De 57.61/56.97; #4 (cross-lingual, fixed E_tgt): Es-En 67.66/67.16, En-Es 63.73/63.22, De-En 64.39/64.01, En-De 58.12/57.92; #5 (cross-lingual): Es-En 67.73/67.42, En-Es 64.18/63.86, De-En 64.48/64.62, En-De 58.77/58.42.", "When trained with asynchronous index refresh, the re-indexing interval is 3K training steps.", "1: Our code is released at https://github.com/jcyk/copyisallyouneed.",
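Putting the pieces together, training with asynchronous index refresh might look like the sketch below, reusing `build_tm_index` and `retrieve` from the retrieval sketch above; the `model` interface is an assumption:

```python
# A minimal sketch of the training loop with asynchronous index refresh; the
# intervals follow the text, the model interface is assumed.
def train_with_index_refresh(model, batches, tm_batches,
                             refresh_interval=3_000, max_steps=100_000, M=5):
    index = build_tm_index(model.tgt_encoder, tm_batches)
    for step, batch in enumerate(batches):
        if step >= max_steps:
            break
        if step > 0 and step % refresh_interval == 0:
            # Re-embed and re-index the whole TM with the current E_tgt; the index is
            # slightly stale between refreshes, but gradients always use the fresh E_tgt.
            index = build_tm_index(model.tgt_encoder, tm_batches)
        scores, ids = retrieve(model.src_encoder, index, batch.src_tokens, M=M)
        loss = model.nll(batch, ids, scores)   # -log p(y | x, z_1, f(x, z_1), ...)
        loss.backward()
        model.step_optimizer()
```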
"4.2 Conventional Experiments Following prior work in TM-augmented NMT, we first conduct experiments in a setting where the bilingual training corpus is the only source for TM.", "Data We use the JRC-Acquis corpus (Steinberger et al., 2006) for our experiments.", "The JRC-Acquis corpus contains the total body of European Union (EU) law applicable to the EU member states.", "This corpus was also used by Gu et al. (2018); Zhang et al. (2018); Xia et al. (2019), and we managed to get the datasets originally preprocessed by Gu et al. (2018), making it possible to fairly compare our results with previously reported BLEU scores.", "Specifically, we select four translation directions, namely Spanish-English (Es-En), En-Es, German-English (De-En), and En-De, for evaluation.", "Detailed data statistics are shown in Table 1.", "Models To study the effect of each model component, we implement a series of model variants (models #1 to #5 in Table 2).", "1. NMT without TM: to measure the help from TM, we remove the model components related to TM (including the retrieval model and the memory encoder) and only employ the encoder-decoder architecture for NMT.", "The resulting model is equivalent to the Transformer Base model (Vaswani et al., 2017).", "2. TM-augmented NMT using source similarity search: to isolate the effect of architectural changes in NMT models, we replace our cross-lingual memory retriever with traditional source-side similarity search.", "Specifically, we use the fuzzy match system used in Xia et al. (2019) and many others, which is based on BM25 and edit distance.", "3. TM-augmented NMT using a pre-trained cross-lingual retriever: to study the effect of end-to-end task-specific optimization of the retrieval model, we pre-train the retrieval model using the cross-alignment tasks introduced in Section 3.4 and keep it fixed in the following NMT training.", "4. Our full model using a fixed TM index: after pre-training, we fix the parameters of $E_{tgt}$ during NMT training.", "5. Our full model trained with asynchronous index refresh.", "Results The results of the above models are presented in Table 2.", "We have the following observations: (1) our full model trained with asynchronous index refresh (model #5) delivers the best performance on the test sets across all four translation tasks, outperforming the non-TM baseline (model #1) by 3.26 BLEU points on average and by up to 3.86 BLEU points (De-En).", "This result confirms that monolingual TM can boost NMT performance.", "(2) The end-to-end learning of the retriever model is the key to substantial performance improvement.", "We can see that using a pre-trained, fixed cross-lingual retriever only gives moderate test performance, fine-tuning $E_{src}$ while fixing $E_{tgt}$ significantly boosts the performance, and fine-tuning both $E_{src}$ and $E_{tgt}$ leads to the strongest performance (model #5 > model #4 > model #3).", "Figure 2: Test results with 1/4 bilingual pairs (upper) and 2/4 bilingual pairs (lower) across different TM sizes.", "(3) Cross-lingual retrieval (models #4 and #5) obtains better results than the source similarity search (model #2).", "This is remarkable since the cross-lingual retrieval only requires monolingual TM, while the source similarity search relies on bilingual TM.", "We attribute the success, again, to the end-to-end adaptability of our cross-lingual retriever.", "This is manifested by the fact that model #3 even slightly underperforms model #2 in some of the translation tasks.", "Contrast to Previous Bilingual TM Systems We also compare our results with the best previously reported models.", "We can see that our results significantly outperform the previous state of the art.", "Notably, our best model (model #5) surpasses the best reported model (Xia et al., 2019) by 1.69 BLEU points on average and by up to 2.9 BLEU points (De-En).", "This result verifies the effectiveness of our proposed models.", "In fact, our translation model using traditional similarity search (model #2) already outperforms the best previously reported results, which reveals that the architectural design of our translation model is surprisingly effective despite its simplicity.", "2: Some recent work used datasets other than JRC-Acquis with unspecified data splits, which makes an exhaustive comparison hard.", "However, note that our in-house baseline (model #2) is quite strong.", "The most unique characteristic of our proposed model is that it uses monolingual TM.", "This motivates us to conduct experiments in low-resource scenarios, where we use extra monolingual data in the target language to boost translation quality.", "Data We create low-resource scenarios by randomly partitioning each training set in the JRC-Acquis corpus into four subsets of equal size.", "We set up two series of experiments: (1) we only use the bilingual pairs in the first subset and gradually enlarge the TM by including more monolingual data from the other subsets; (2) similar to (1), but we instead use the bilingual pairs in the first two subsets.",
"Models As shown in Section 4.2, the model trained with asynchronous index refresh (model #5) is slightly better than the model using fixed $E_{tgt}$ (model #4); however, the computational cost of training model #5 is much bigger.", "For simplicity and environmental considerations, we only test model #4 in low-resource scenarios.", "Nevertheless, we note there are still two modeling choices: (1) train the model once with the TM limited to the training pairs and only enlarge the TM during testing; (2) re-train the model with every enlarged TM.", "Note that with the first choice, the model may retrieve a TM sentence that has never been seen during training.", "To measure the performance improvements from additional monolingual TM, we also include a Transformer Base baseline (model #1).", "Table 3: Comparison with back-translation (BT), BLEU dev/test. With 1/4 bilingual + 4/4 monolingual data: Ours: Es-En 61.46/61.02, En-Es 57.86/57.40, De-En 56.77/56.54, En-De 51.11/51.58; BT: Es-En 62.47/61.99, En-Es 60.28/59.59, De-En 57.75/58.20, En-De 52.47/52.96; Ours+BT: Es-En 65.98/65.51, En-Es 62.48/62.22, De-En 62.22/61.79, En-De 56.75/56.50. With 2/4 bilingual + 4/4 monolingual data: Ours: Es-En 65.17/64.69, En-Es 61.31/61.01, De-En 61.43/61.19, En-De 55.55/55.35; BT: Es-En 63.82/63.10, En-Es 61.59/60.83, De-En 59.17/59.26, En-De 54.18/54.29; Ours+BT: Es-En 66.95/66.38, En-Es 63.22/62.90, De-En 63.68/63.10, En-De 57.69/57.40.", "Results Figure 2 shows the main results on the test sets.", "The general patterns are consistent across all experiments: the larger the TM becomes, the better the translation performance the model achieves.", "When using all available monolingual data (4/4), the translation quality is boosted significantly.", "Interestingly, the performance of models without re-training is comparable to, if not better than, that of models with re-training.", "We also observe that when the training pairs are very scarce (only 1/4 of the bilingual pairs are available), a small TM can even hurt model performance.", "The reason could be over-fitting.", "We speculate that better results would be obtained by tuning the model hyper-parameters according to different TM sizes.", "Contrast to Back-Translation We compare our models with back-translation (BT) (Sennrich et al., 2016), a popular way of utilizing monolingual data for NMT.", "We train a target-to-source Transformer Base model using bilingual pairs and use the resulting model to translate monolingual sentences to obtain additional synthetic parallel data.", "As shown in Table 3, our method performs better than BT with 2/4 bilingual pairs but performs worse with 1/4 bilingual pairs.", "Interestingly, the combination of BT and our method yields significant further gains, which demonstrates that our method is not only orthogonal but also complementary to BT.", "Lastly, the plug-and-play property of TM further motivates us to study domain adaptation, where we adapt a single general-domain model to a specific domain by using domain-specific monolingual TM.", "Data To simulate a diverse multi-domain setting, we use the data splits in Aharoni and Goldberg (2020), originally collected by Koehn and Knowles (2017).", "They include German-English parallel data for train/dev/test sets in five domains: Medical, Law, IT, Koran, and Subtitles.", "Similar to the experiments in Section 4.3, we only use one fourth of the bilingual pairs for training.", "The target side of the remaining data is treated as additional monolingual data for building domain-specific TM, and the source side is discarded.", "The data statistics can be found in the upper block of Table 4.",
"The dev and test sets for each domain contain 2K instances.", "Models We first train a Transformer Base baseline (model #1) on the concatenation of the bilingual pairs in all domains.", "As in Section 4.3, we train our model using fixed $E_{tgt}$ (model #4).", "One advantage of our approach is the possibility of training a single model that can be adapted to any new domain at inference time, without any re-training, by just switching the TM.", "When adapting to a new TM, we do not re-train our model.", "As the purpose here is to verify that our approach can tackle domain adaptation without any domain-specific training, we leave the comparison and combination with other domain adaptation techniques (Moore and Lewis, 2010; Chu and Wang, 2018) as future work.", "Results The results are presented in Table 4.", "We can see that when only using the bilingual data, the TM-augmented model obtains higher BLEU scores in domains with less data but slightly lower scores in other domains compared to the non-TM baseline.", "However, as we switch to domain-specific TM, the translation quality is significantly boosted in all domains, improving over the non-TM baseline by an average of 1.85 BLEU points, with improvements as large as 2.57 BLEU points on Law and 2.51 BLEU points on Medical.", "We also attempt to combine all domain-specific TMs into one and use it for all domains (the last row in Table 4).", "However, we do not obtain a noticeable improvement.", "This reveals that the out-of-domain data provide little help, so a smaller in-domain TM is sufficient, which is also confirmed by the fact that about 90.21% of the retrieved sentences come from the corresponding domain in the combined TM.", "With the help of the FAISS in-GPU index, search over millions of vectors can be made incredibly efficient (often in tens of milliseconds).", "In our implementation, the memory search performs even faster than naive BM25 (footnote 3).", "For the results in Table 2, we take the vanilla Transformer Base model (model #1) as the baseline.", "The inference latency of our models (both model #4 and model #5) is about 1.36 times that of the baseline (all using a single Nvidia V100 GPU).", "Note that the corresponding number for the previous state-of-the-art model (Xia et al., 2019) is 1.80.", "As for training cost, the average time cost per training step of model #4 and model #5 is 2.62 times and 2.76 times that of the baseline, respectively, which is on par with traditional TM-augmented baselines (model #2 is 2.59 times) (all using two Nvidia V100 GPUs).", "Table 5 presents the results.", "In addition, we also observe that memory-augmented models converge much faster than vanilla models in terms of training steps.", "3: Elasticsearch implementation: https://www.", "We introduced an effective approach that augments NMT models with monolingual TM.", "We show that a task-specific cross-lingual memory retriever can be learned by end-to-end MT training.", "Our approach achieves new state-of-the-art results on several datasets, leads to large gains in low-resource scenarios where the bilingual data is limited, and can specialize an NMT model for specific domains without further training.", "Future work should aim to build on our proposed framework.", "Two obvious directions are: (1) even though our experiments validated that the whole framework can be learned from scratch using standard MT corpora, it is possible to initialize each model component in our framework with massively pre-trained models for performance enhancement; and (2) the NMT model can 
benefit from aggregating over a set of diverse memories, which is not explicitly encouraged in the current design." ]
[ "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "method", "result", "abstain", "method", "abstain", "objective", "objective", "abstain", "abstain", "objective", "objective", "objective", "method", "abstain", "objective", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "abstain", "other", "abstain", "abstain", "other", "other", "other", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "other", "abstain", "objective", "method" ]
[ "Abstract Predicting the approval odds of a patent application is a challenging problem involving multiple factors.", "The most important factor is arguably the novelty 35 U.S. Code 102 rejects applications that are not sufficiently differentiated from prior art.", "Novelty evaluation distinguishes the patent approval prediction from conventional document classification too-similar newer submissions are considered as not novel and would receive the opposite label, thus confusing standard document classifiers (e.g., BERT).", "To address this issue, we propose a novel framework AISeer that unifies the document classifier with handcrafted features, particularly time-dependent novelty scores.", "Specifically, we formulate the novelty scores by comparing each application with millions of prior art using a hybrid of efficient filters and a neural bi-encoder.", "Moreover, we impose a new regularization term into the classification objective to enforce the monotonic change of approval prediction w.r.t. novelty scores, From extensive experiments on a large-scale USPTO dataset, we find that standard BERT fine-tuning can partially learn the correct relationship between novelty and approvals from inconsistent data.", "However, our time-dependent novelty feature and other handcrafted features offer a significant boost on top of it.", "Also, our monotonic regularization, while shrinking the search space, can drive the optimizer to better local optima, yielding a further small performance gain.", "Intellectual property (IP) is an important and integral to the economy.", "IP-intensive industries directly accounted for 27.9 million jobs in the U.S. (USPTO, 2016) Theoretical and empirical evidence shows that patents are effective in fostering technological progress.", "(Gallini, 2002; Hu and Png, 2013; Hall and Harhoff, 2012)", "Securing patent approvals offers a major shot in the arm to inventors Jingbo Shang is the corresponding author.", "and innovators, increasing the chances of obtaining angel and venture capital investments.", "However, the process of getting a patent approved can cost applicants tens of thousands of dollars in payments to law firms who claim to be helpful in understanding what gets approved and improving the odds of success of a patent application.", "Thus, algorithmic approaches to aid in the patent evaluation process can potentially save precious time and resources for applicants during the patent application phase, as well as benefit patent examiners in government patent offices around the world, accelerating and improving the review process (Ebrahim, 2018).", "The approval of a patent application , according to U.S. patent laws, is determined necessarily and sufficiently by the approval of application claims .", "Patent laws define individual claims as the subject matter of inventions ( 35 U.S. Code 112), on which patentability is defined ( 35 U.S. Code 101, 102, and 103) (refer to Appendix B).", "No overall assessment of a patent application is provisioned.", "In practice, application claims demarcate the scope of legal protection that an applicant is seeking and are the eventual objects for investigation under legal disputes or transfer of commercial rights.", "Patent examiners from the U.S. 
Patent and Trademark Office (USPTO) make decisions on each application claim individually and independently, with the other sections as supporting materials.", "Therefore, we focus on claim texts and use the term patent approval informally and interchangeably to refer to claim approval.", "In particular, we primarily consider 35 U.S. Code 102, which assesses the novelty of application claims.", "To the best of our knowledge, we are the first to try to predict patent (claim) approval, which is an extremely challenging problem for multiple reasons.", "First, patent documents comprise technically nuanced and challenging-to-parse language (intricate legalese).", "Patent texts are usually legal and technical descriptions of objects or processes, which tend to be complex in vocabulary and grammatical structure (Singer and Smith, 1967).", "Claims are examined not only literally, but also for their legal implications.", "Appendix A provides a few example application claims.", "Second, the patent examination process tends to suffer from subjectivity and inconsistencies (O'Neill, 2018a), exemplified by variance across offices and groups (O'Neill, 2018b) and across human examiners.", "In FY17, only 66% of primary examiners were within a 12.5% delta of the average allowance rate (USPTO, 2017).", "Third, at the core of patent examination, the evaluation of novelty is time-dependent.", "Rejections of claims under 35 U.S. Code 102 require examiners to cite prior approved patent claims, i.e., prior art, as evidence.", "More details about the examination process can be found in USPTO (2020).", "The United States Patent and Trademark Office (USPTO) receives thousands of applications a week; thus, an application that is novel at one time may be assessed dramatically differently after a short time period.", "This means that a classifier can pick up a positive label from an earlier approved application but receive a negative label from an application some time later with similar technical content, which is deemed no longer novel.", "Such conflicting information can confuse the classifier and undermine its performance.", "In other words, the data labels are intrinsically noisy and inconsistent due to the nature of the domain problem.", "Although AI/ML approaches are often discussed in the patent domain (Aristodemou and Tietze, 2018), such as in the area of information retrieval (Kang et al., 2007; Fujii, 2007; Shalaby and Zadrozny, 2019), applications of deep NLP methods are mostly concerned with classifying the content domains of patents (Verberne et al., 2010; D'hondt et al., 2013; Hu et al., 2016; Lee and Hsiang, 2019).", "In addition, the extant literature usually explores approved patents rather than applications (Balsmeier et al., 2018).", "Even to simply classify the topics of approved patents, state-of-the-art document classifiers can only achieve an accuracy of about 69.3%, only 2.2% over RoBERTa (Zaheer et al., 2020).", "Due to these issues, the patent approval prediction task is much more challenging for document classifiers than topic classification.", "To mitigate these issues, we first develop several handcrafted features based on domain knowledge, for use alongside the language model for context and control.",
"Figure 1: An overview of our proposed AISeer. Claim texts are encoded by BERT, the resulting representation is integrated with handcrafted features (claim-level novelty, document-level maximum similarity, claim-level structural features, and other application features), and monotonic regularization is applied for the final prediction.", "The time-dependent nature of novelty also makes traditional document classifiers unsuitable here, because they typically assume that similar instances share the same label.", "To address this challenge, we propose a novel framework, AISeer, as shown in Figure 1.", "We formulate a time-dependent novelty score for each patent claim based on its semantic similarity against prior approved claims from patent grants, which are the final versions of approved patents.", "Specifically, inside a comprehensive pool comprising millions of grants, we consider those approved before the filing date of the focal application and then measure the maximum semantic similarity score of the focal patent claim matched against all approved claims in the time-dependent sub-pool.", "To improve computing efficiency, we apply document-level filters to narrow the sub-pool for each claim.", "Integrating such similarity scores with handcrafted features and BERT, we conduct experiments on the large-scale USPTO dataset and find significant performance gains over fine-tuning a standard BERT alone.", "All else equal, a patent claim with a higher similarity score, i.e., one semantically more similar to prior approved claims, should be less likely to be approved.", "Hence, we propose to impose monotonic regularization on the novelty score, so that the loss function has an additional hinge-loss term that penalizes predictions that do not decrease with the similarity score.", "This effectively restricts the search space for the optimizer to prediction mechanisms that are reasonably consistent with the novelty measure.", "In our experiments, this regularization significantly impacts the model outputs.", "Although the performance improvements are limited, it can help the optimizer steer away from unfavorable local optima and further improve AUROC.", "We further discuss the experimental 
findings in depth to illustrate how BERT and the handcrafted features contribute to overcoming the unconventional data issues.", "In summary, our contributions are as follows: we combine patent application metadata, office actions, rejections, and citations data into a massive dataset; we develop a series of handcrafted features to aid the prediction of 35 U.S. Code 102 approval decisions.", "In particular, we design and analyze a time-dependent feature that measures the novelty of patent applications at the time of filing; we incorporate the handcrafted features and impose monotonic regularization on the novelty features to shed light on how the intrinsic data inconsistency issues in the domain problem can be mitigated.", "Reproducibility.", "We will release the benchmark dataset and our code on GitHub: https://github.com/acl-2022-towards-comprehensive/acl-2022-camera-ready.", "In this section, we formally formulate the novelty-based patent approval problem.", "We describe the experimental setup, the dataset, and baseline results with common document classifiers.", "We follow the legal definitions under 35 U.S. Code 102.", "Despite the popular notion of patent approval or issuance, what is actually being approved or rejected are individual claims.", "Each patent application $A_k$, $k \in \{1, \ldots, M\}$, sorted by filing date, comprises a number of application claims.", "Given a text representation $X_i$, $i \in \{1, \ldots, N\}$, of each application claim, there exist indices $\{i_k\}$, $k \in \{0, \ldots, M\}$, such that the claim representations $\{X_{i_{k-1}}, \ldots, X_{i_k}\}$ belong to patent application $A_k$.", "Binary labels $y_i$ indicate approval decisions derived from patent rejections and office actions data, where $y_i = 1$ indicates claim approval.", "We would like to classify application claims according to the approval labels.", "Our data sources include application and grant full texts, application metadata, citations, office actions, and rejections (USPTO, d,e).", "Patent grants are the final versions of approved patent applications.", "Later, we will utilize grants for constructing the application novelty feature.", "To extract labels and create handcrafted features, we utilize both the legacy data system for office actions, rejections, and citations made between 2008 and mid-2017 (Lu et al., 2017), and the newer v2 APIs that cover mid-2018 onward.", "For application metadata, we obtain bulk data from PEDS (Patent Examination Data System) (USPTO, b).", "In order to match all the available labels, we obtain weekly bulk releases of both utility patent applications and utility patent grants in XML format ranging between 2005 and 2019.", "In total, we extract 8.8 million patent applications and 3.7 million patent grants during this time period, totaling around 730 GB of text.", "Dataset Processing.", "According to patent laws, only one version among a possibly large number of revisions is published and available as full-text data.", "Meanwhile, for a considerable number of applications, the entire history of office action data and rejection data is available, where allowances or rejections for each individual claim under each legal clause are formally made.", "Hence, we ought to identify the labels associated with the published version among the patent examination rejection data and office action data.", "We take a \"snapshot\" approach.", "Given the available publication version of each application as a snapshot, the examination decisions of each claim with respect to that snapshot version are processed and attached as classification labels.", "Therefore, with the huge number of snapshots, regardless of the subsequent actions of the applicant, e.g., abandonment, the 
model can be kept agnostic of the status in the application pipeline.", "This way, we allow the model to predict for any version of a patent application, so that attorneys and applicants can evaluate their chances for decision making.", "Technical preparations for publication of an application generally begin 4 months prior to the projected date of publication.", "Hence, we match the closest office action dates with the publication dates minus 4 months, which is taken as the benchmark date for the available version, so that correct labels can be obtained.", "Please refer to the essential publication regulations in Appendix D. Data are merged by application number and ingested into a DBMS.", "We keep the applications under which all corresponding sections of data are available.", "Because of the data size and to control computation times, we choose the most recent roughly 500K applications, comprising around 9.5 million claims, for experiments.", "Dataset Splits.", "We split the data into training, validation, and testing sets by filing date.", "The more recent applications are chosen for testing.", "The size of the final experimental data, including abstracts, claim texts, labels, and handcrafted features, is around 15 GB.", "The dataset is highly imbalanced towards positive labels (see Table 1, Approval %).", "We evaluate the following common document classifiers.", "Log. Reg. refers to logistic regression using TF-IDF features.", "Text-CNN (Kim, 2014) with GloVe (Pennington et al., 2014) embeddings as the input; Adam optimizer with learning rate 0.001; 10 epochs; batch size 1024.", "LSTM (Hochreiter and Schmidhuber, 1997) with GloVe embeddings as the input; AdamW optimizer with learning rate 0.005; 10 epochs; batch size 1024.", "BERT (Devlin et al., 2018) fine-tuning; AdamW optimizer with learning rate 5e-5; 5 fine-tuning epochs; batch size 256.", "This is the same model as in the state-of-the-art model for patent content classification, PatentBERT (Lee and Hsiang, 2019), with a different set of hyper-parameters and balanced class weights.", "The original PatentBERT model is designed for a different task, and its experimental setting is not suitable for predicting patent approvals, hence we make these tweaks.", "In all of the models, we impose class weights in the loss functions inversely proportional to the number of class instances, such that the two classes are treated equally by the optimizer.", "For the details, please refer to Section 3.1.", "Evaluation Metrics.", "Given the imbalanced nature of our dataset, we adopt both the Area Under the ROC Curve (Fawcett, 2004) (AUROC) and the macro F1 score as our evaluation metrics.", "Table 2: Benchmarking Common Document Classifiers.", "With AUROC, the prediction performance on the minority class is taken into consideration with a weight similar to that of the majority class (in our case, the positive class).", "Moreover, the probability-based metric can provide more detailed insights into model performance.", "Therefore, we choose AUROC as our main metric.", "The macro F1 score is a direct average of the F1 scores of the positive class and the negative class and provides an alternative balanced view of both classes' performance.", "We treat it as a secondary metric.", "We compute the maximum macro F1 score (Lipton et al., 2014) by varying the decision threshold for each model.",
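A minimal sketch of this evaluation protocol using scikit-learn; the threshold grid is an assumption:

```python
# A minimal sketch of the evaluation metrics: AUROC (main) and maximum
# macro F1 over decision thresholds (secondary).
import numpy as np
from sklearn.metrics import roc_auc_score, f1_score

def evaluate(y_true, y_score):
    auroc = roc_auc_score(y_true, y_score)
    max_macro_f1 = max(
        f1_score(y_true, (y_score >= t).astype(int), average="macro")
        for t in np.linspace(0.01, 0.99, 99)   # sweep the decision threshold
    )
    return auroc, max_macro_f1
```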
"Other traditional measures that focus on positive-class performance, such as accuracy and recall, have little practical implication due to the data imbalance.", "Benchmark Results.", "Table 2 shows the common document classifiers' performance, with some naive predictions as references.", "Results of the neural models are reported as the median metrics among several runs with different optimizer random states.", "Figure 4 in Appendix F further visualizes the details of the ROC curves of these models.", "One can find that BERT and LSTM are arguably the most effective ones.", "Therefore, we will focus on BERT and LSTM for further comparisons.", "Our AISeer framework unifies the document classifier, handcrafted features, and monotonic regularization, as shown in Figure 1.", "It is compatible with almost all document classifiers.", "In this paper, we choose BERT as the base document classifier to demonstrate the effects, as it is widely adopted and also performs well in our benchmark evaluations.", "After each application claim text is run through the BERT model, the output representation is concatenated with the corresponding handcrafted features.", "Our handcrafted features include a time-dependent claim-level novelty score, claim-level structural features, document-level similarity scores, and other application metadata features.", "We further impose a monotonic regularization on the impact of the claim-level novelty score, so that the loss function has an additional hinge-loss term.", "For self-containedness, we briefly introduce how we use BERT in AISeer.", "We first utilize BERT to transform the $i$-th application claim into a text representation $X_i$ in batches of size $N_b$, which is then passed to a linear layer to obtain the prediction through a softmax layer.", "Approvals (i.e., $y_i = 1$) are much more common than rejections (refer to Table 1), so vanilla training will bias the model towards approvals.", "Therefore, we adopt a weighted loss for training: $\mathcal{L} = -\sum_i w_{y_i} (y_i \log \hat{y}_i + (1 - y_i) \log(1 - \hat{y}_i))$, where $w_{y_i}$ denotes the fixed weight of each of the two classes, inversely proportional to the number of instances of the corresponding class, balancing the training weights of the two classes.",
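A minimal sketch of this weighted objective; the exact normalization of the class weights is one common choice, assumed here:

```python
# A minimal sketch of the class-weighted cross-entropy loss; weights are
# inversely proportional to class counts (the normalization is an assumption).
import torch

def weighted_bce(y_hat, y, n_pos, n_neg):
    """y_hat: predicted approval probabilities; y: 0/1 labels (float tensors)."""
    n = n_pos + n_neg
    w_pos, w_neg = n / (2.0 * n_pos), n / (2.0 * n_neg)
    w = y * w_pos + (1 - y) * w_neg                     # w_{y_i}
    log_lik = y * torch.log(y_hat) + (1 - y) * torch.log(1 - y_hat)
    return -(w * log_lik).mean()
```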
"The backbone of the novelty feature is the time-dependent claim-level maximum similarity score.", "We first index all patent grants with ElasticSearch (NV).", "Given a patent application and a claim under it, we first take advantage of ElasticSearch's fast BM25-based document-level fuzzy matching to obtain the 5 grant documents most similar to the focal application document, as a first-stage pre-filter.", "To account for time-dependence, each focal application is matched against a sub-pool of patent grants that are time-stamped as approved strictly before the filing date of the focal application.", "In application-level matching, all document sections are considered, including the abstract, summary of invention, and details of invention of all claims.", "Among all claims under the top-5 matched grants, we then find the one most similar to the focal claim using Sentence-Transformers (Reimers and Gurevych, 2019) with the stsb-roberta-large pre-trained bi-encoder model.", "Base cross-encoder transformers such as BERT can lack performance on pure semantic similarity tasks.", "Although certain cross-encoders have excellent semantic similarity performance, they can be computationally too demanding for our purpose, since the scale of the claims in all patent grants is more than 100 million, and since each grant claim can be required to be paired many times with a focal application claim.", "The ElasticSearch-based pre-filter process also helps manage the computational need.", "Figure 2 demonstrates how the time-dependent novelty feature is generated: the application that the red-highlighted focal claim belongs to is first matched with 5 patent grants on the application level; then the focal claim is matched against every claim under the 5 matched grants to compute the semantic similarity scores, before the most similar grant claim is identified.", "Our experiments confirm that the claim-level maximum similarity score, as expected, is negatively correlated with 35 U.S. Code 102 labels, as shown in Figure 3.",
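A minimal sketch of this two-stage, time-dependent novelty score; the Elasticsearch index name and field names (grants, full_text, grant_date, claims) are illustrative assumptions:

```python
# A minimal sketch of the claim-level novelty score: BM25 pre-filter over
# prior grants, then bi-encoder max similarity.
from elasticsearch import Elasticsearch
from sentence_transformers import SentenceTransformer, util

es = Elasticsearch()
bi_encoder = SentenceTransformer("stsb-roberta-large")

def claim_novelty_score(app_text, claim_text, filing_date, top_docs=5):
    # Stage 1: BM25 fuzzy match on the application level, restricted to grants
    # approved strictly before the focal application's filing date.
    hits = es.search(index="grants", size=top_docs, query={"bool": {
        "must": {"match": {"full_text": app_text}},
        "filter": {"range": {"grant_date": {"lt": filing_date}}},
    }})["hits"]["hits"]
    grant_claims = [c for h in hits for c in h["_source"]["claims"]]
    # Stage 2: bi-encoder max similarity between the focal claim and all
    # claims under the top matched grants.
    emb_claim = bi_encoder.encode(claim_text, convert_to_tensor=True)
    emb_cands = bi_encoder.encode(grant_claims, convert_to_tensor=True)
    return util.cos_sim(emb_claim, emb_cands).max().item()
```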
"3.3 Application-Level Handcrafted Features Application-Level Similarity.", "We consider the application-level maximum similarity score, denoted $N_{s,doc}$, and the mean similarity score generated by ElasticSearch (NV) as handcrafted features.", "These document-level scores measure how similar, overall, the applications are to the approved grants.", "The document-level similarity scores are positively correlated with 35 U.S. Code 102 labels.", "We believe that they primarily capture the overall writing quality and the common language patterns of approvable applications.", "The USPTO offers a rich collection of metadata about each patent application, of which we use the following two items.", "Patent Classification: the USPC class designated for the application.", "USPC (USPTO, a) is a system for classifying the subject matter of each patent application for recording, publication, and assignment purposes.", "Different classes of patents tend to have varying approval rates (see Table 6 in Appendix C).", "Number of Applicant-Cited References: the number of citations of other patents or articles initiated by the applicant herself.", "In the patent domain, most citations are initiated by the examiners as prior art to reject application claims.", "However, they can also be made by the applicant to demonstrate understanding of related work and to claim contributions.", "The number of applicant-initiated citations is a signal of the effort invested by the applicant.", "Figure 3 (excerpt): positive instance proportion versus the document-level maximum similarity score.", "Max Citation: based on the ElasticSearch pre-filter, the maximum number of total citations among the top 5 patent grant documents most similar to the focal patent application.", "Max Article Citation refers to the maximum number of citations that are research articles (not other patents) among the top matched grants.", "Lexical Diversity: the richness of the vocabulary of the abstract of the patent application.", "We consider two indicators for each claim.", "Component refers to an indicator of whether the application claim describes the components of a system (e.g., a machine, a process, a compound).", "Other claims may describe the properties or utility of particular components.", "This is identifiable by the transitional phrases used in the claim.", "Transitional Phrase refers to an indicator of whether a component claim is open, closed, or half-open, which is determined by which transitional phrase is used.", "Openness or closedness regulates the scope of legal IP protection the applicant enjoys once the patent is approved.", "Often it is a strategic choice by the applicant and the attorney.", "If a claim is open, indicated by the transitional phrase \"comprising\" and its legal synonyms, any additional components later added to the system are also protected, in contrast to closed claims.", "Open claims are more difficult to get approved.", "Other examples of transitional phrases include \"consisting essentially of\" and \"consisting of\".", "These particular language phenomena are well known in the IP communities and sometimes referred to as patentese (Singer and Smith, 1967).", "The patent examination manual explicitly discusses these phrases with case law (USPTO, c; Silverman and Stacey, 1996).", "Now let $H_i$ denote the other handcrafted features in addition to $N_{s,claim}$ and $N_{s,doc}$.", "Figure 3 demonstrates the correlations between some representative handcrafted features and the positive label.", "Let $Z_i = [X_i; H_i; N_{s,claim}; N_{s,doc}; 1]$, $i \in \{1, \ldots, N_b\}$, denote the concatenated input.", "Note that $X_i$ is the representation of the claim and that the document- or application-level handcrafted features are attached to each claim.", "The concatenated $Z_i$ passes through the linear and softmax layers.", "Mathematically, we restrict the search space with respect to $N_{s,claim}$, regularizing predictions to be decreasing in it.", "The optimizer will potentially be able to find alternative paths to avoid undesirable local minima.", "We manipulate the input such that inconsistency with monotonicity in $N_{s,claim}$ can be measured: for a positive constant $C$ ($0 < C < 1$), let $\tilde{N}_{s,claim} = C \cdot N_{s,claim}$ and let $\tilde{Z}_i$ be $Z_i$ with $N_{s,claim}$ replaced by $\tilde{N}_{s,claim}$.", "Since a scaled-down similarity score indicates a more novel claim, a prediction mechanism consistent with the novelty measure should not be worse off on $\tilde{Z}_i$.", "Given the log-likelihood with respect to $Z_i$, $F(Z_i) = y_i \log \hat{y}_i(Z_i) + (1 - y_i) \log(1 - \hat{y}_i(Z_i))$, we constrain $F(Z_i) < F(\tilde{Z}_i)$.", "To implement this, we impose a hinge-loss penalty whenever $F(Z_i) > F(\tilde{Z}_i)$, and return 0 otherwise.", "Therefore, the final objective function becomes $O = \mathcal{L} + \lambda \sum_i \max\{0, F(Z_i) - F(\tilde{Z}_i)\}$, where $\lambda$ determines the regularization strength.",
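A minimal sketch of this regularized objective; it assumes the model outputs approval probabilities and that the novelty score occupies a known column of the input (both assumptions), and it follows the hinge formulation reconstructed above:

```python
# A minimal sketch of the monotonic regularization via a hinge penalty.
import torch

def monotonic_objective(model, Z, novelty_col, y, weighted_loss, C=0.5, lam=5e-4):
    y_hat = model(Z)                                     # approval probabilities on Z_i
    Z_tilde = Z.clone()
    Z_tilde[:, novelty_col] = C * Z[:, novelty_col]      # N~_{s,claim} = C * N_{s,claim}
    y_hat_tilde = model(Z_tilde)                         # probabilities on the perturbed input

    def F(p):  # per-instance log-likelihood
        return y * torch.log(p) + (1 - y) * torch.log(1 - p)

    penalty = torch.clamp(F(y_hat) - F(y_hat_tilde), min=0).sum()  # hinge on violations
    return weighted_loss(y_hat, y) + lam * penalty
```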
"In the experiments, we seek to answer a number of questions.", "[Table 3: Evaluation Results of AISeer, Compared Methods, and Ablations.]", "To begin with, we evaluate how handcrafted features can help the deep language model, i.e. BERT, adapt to a complex domain that differs from typical NLP use cases.", "In particular, we focus on the novelty feature critical to patent approvals.", "We are interested in the extent to which a standard BERT application can learn from highly noisy labels and inconsistent data and find the novelty pattern, i.e. the significance of novelty in determining patent approval outcomes.", "In addition, we study whether the combination of BERT and handcrafted features is adequate for capturing the novelty pattern.", "We also examine whether monotonic regularization boosts the learning process to further overcome the intrinsic data inconsistencies.", "We mainly compare AISeer with two baseline models, BERT and LSTM, as they are the best common document classifiers from our benchmark results.", "For ablation study purposes, we also compare with Log. Reg. Feat. Only, a logistic regression model with handcrafted features only, and AISeer w/o Regu., which is a BERT model integrated with our handcrafted features but not regularized by our monotonic constraints.", "AISeer is trained with the same set of hyper-parameters as BERT: maximum token length of 128, fine-tuning for 5 epochs, batch size of 256, and AdamW with a learning rate of 5e-5 as the optimizer.", "The monotonic regularization parameter C is 1/2 and λ is 5e-4.", "The models are trained on a single Nvidia Quadro RTX 8000 GPU.", "The baseline BERT model gives decent AUC (ROC) and macro F1.", "The full-fledged AISeer, combining the handcrafted novelty feature with the other computed features and monotonic regularization, helps along both metric dimensions: AISeer boosts AUROC by around 2.5% and macro F1 by around 1% compared to the best common document classifiers.", "Figure 5 in Appendix F shows that the AUROC improvement originates consistently from the entire spectrum of prediction scores.", "As mentioned in the introduction, when simply classifying the topics of approved patents, state-of-the-art document classifiers can only achieve an accuracy of about 69.3% (only 2.2% over RoBERTa) (Zaheer et al., 2020).", "Given the difficulty level and subjective nature of the patent approval task, the performance improvement is nontrivial and practically impactful.", "Standard BERT fine-tuning realizes an AUROC increase of only 11.79% over completely random or naive predictions, which also exemplifies the problem's difficulty.", "Our approach achieves an additional gain of 2.35%, which is equivalent to 20% of the total benefit of the original BERT model.", "Given that BERT remains one of the most effective models in a variety of NLP tasks, and especially that handcrafted features have relatively low dimensionality compared to BERT, we believe that a gain equivalent to 20% of BERT's own gain is substantial for this completely new application domain.", "The lower half of Table 3 shows the result of Log. Reg. Feat. Only, indicating the necessity of a language model.", "Neither a language model alone nor handcrafted features alone can yield satisfactory performance.", "Comparing the AISeer w/o Regu. result, also in the lower half of Table 3, with the standard BERT and LSTM results shows that handcrafted features improve on the best common document classifiers by about 2%.", "We believe that the handcrafted features combined, in particular the novelty feature, help in resolving label contradictions and data inconsistency.", "We believe the novelty feature should only be considered in context and will not perform well on its own.", "First, novelty can be a subjective concept and may vary according to different types of claims, openness of claims, the department (category), etc.", "Second, novelty, as practically measured by dissimilarity, can be easily achieved by poorly written random content, thus structural or overall similarity is also important.",
"However, the observations indicate that there are potential conflicts between the novelty feature and the other handcrafted features.", "While the latter help with prediction performance on their own and provide context for the novelty feature, and are thus imperative, they also attenuate the effects of the regularized novelty feature.", "We leave this challenge for future work.", "One may also ask whether the handcrafted features have contributed significantly, given the moderate improvement.", "Granted, application full texts may also contain signals for the patent class and applicant efforts that partially reflect the handcrafted features, and a document classifier such as BERT may pick them up.", "To shed light on how AISeer learns from handcrafted features, we run linear regressions of the model prediction scores on the handcrafted features for interpretable insights and present statistical results in Table 4.", "[Table 4: Regression Analysis of Prediction Scores on Handcrafted Features.]", "In the table, even the prediction scores under BERT are significant in all handcrafted features, showing that BERT does learn knowledge overlapping with the handcrafted features to some extent.", "Overall, the low R²'s indicate that knowledge from the deep neural model and knowledge from the handcrafted features are quite distinct.", "Comparing BERT and AISeer w/o Regu., the significant R² increase from 0.085 to 0.125 shows that AISeer captures handcrafted features much more effectively than BERT.", "The prediction scores of AISeer w/o Regu. see an additional increase of about 4% in explainability by the handcrafted features.", "In Table 3, comparing AISeer and AISeer w/o Regu., the median run result indicates that adding monotonic regularization produces a small magnitude of improvement.", "Table 4 also provides insights with respect to the monotonic regularization.", "According to Table 4, our claim-level novelty feature $N_{s,claim}$ has the most significant impact, i.e. the coefficients are much larger in every column.",
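A sketch of the kind of interpretability regression described above, regressing model prediction scores on the handcrafted features; the use of statsmodels and the feature-frame layout are assumptions.

```python
import statsmodels.api as sm

def explain_predictions(pred_scores, handcrafted_df):
    """Regress a model's prediction scores on the handcrafted features.

    Coefficient signs show what the model learned (e.g., whether predictions
    decrease in N_{s,claim}); R^2 measures how much of the model's knowledge
    is explained by the handcrafted features.
    """
    X = sm.add_constant(handcrafted_df)   # intercept term
    results = sm.OLS(pred_scores, X).fit()
    return results.summary()
```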
"The use of monotonic regularization alone boosts the R² significantly, indicating that the approach also helps the model learn from handcrafted features overall.", "About 19% of the knowledge of AISeer corresponds to handcrafted features, a 10% increase over BERT.", "Also, AISeer corrects incorrect coefficient signs from BERT.", "Intuitively, the approval chance should increase with the number of applicant cited references.", "However, BERT prediction scores are negatively correlated with it, statistically significantly.", "Under AISeer, this direction is reversed to match intuition.", "We also evaluate the Spearman correlation coefficients of the probability prediction scores produced by the models with the claim-level novelty feature, and the Pearson correlations with the document-level similarity score.", "Spearman correlations measure the strength and direction of the monotonic association between two variables.", "According to Table 5, first we can confirm that applying monotonic regularization significantly pushes the prediction scores to be more monotonically decreasing in the core novelty feature: the Spearman correlation shifts from -0.0230 to -0.103.", "However, compared to BERT, the regularization effect is less prominent.", "Observe that adding handcrafted features will actually steer the monotonicity in the opposite direction.", "Our regularized AISeer model manages to both benefit from the novelty feature and incorporate knowledge from the other handcrafted features.", "While Table 4 illustrates the significant effects of applying the monotonic regularization on the prediction scores, we acknowledge that the observed main performance improvement is not very significant.", "In fact, although monotonic regularization raises the performance on average, it does not always yield desirable improvements, depending on the random seed and the hyperparameter setup.", "The BERT model may already have decent learning power to mine the novelty measurement despite the noisy data.", "We observe in Table 4 that the BERT prediction scores, learned from texts only, are significant in the novelty feature and are in the correct direction.", "The relatively small performance gain of using monotonic regularization may also be attributed to the compromised precision of the novelty feature due to the use of the Elasticsearch pre-filter for the sake of computational cost.", "To our knowledge, our work is the first in predicting patent approvals according to the examination procedures at the government patent office.", "Few existing studies attempt to predict decisions at the patent office.", "Winer (2017) studies PTAB (Patent Trial and Appeal Board) hearing decisions at USPTO.", "Other related work addresses patent quality in a general and broad sense (Wu et al., 2016).", "More broadly in the IP/patent domain, although AI/ML applications have been often advocated (Ebrahim, 2018), studied (for a review see (Aristodemou and Tietze, 2018)), or implemented in practice (Lu et al., 2017), most works focus on determining patent content classes to save manpower or concern only patent grants rather than applications (Verberne et al., 2010; D'hondt et al., 2013; Hu et al., 2016; Balsmeier et al., 2018; Lee and Hsiang, 2019).", "Recent studies (Hsu et al., 2020) have emerged aiming at predicting patent transfers and economic value.", "Other streams of related work include those exploring patent similarity.",
"Our approach of constructing the novelty feature with a state-of-the-art neural bi-encoder (Reimers and Gurevych, 2019) is significantly more advanced than the relatively rudimentary approaches in the extant literature, such as text matching and frequency-based methods (Younge and Kuhn, 2016; Arts et al., 2018; Shahmirzadi et al., 2019).", "Studies on semantic analysis and representation of technology (Kim et al., 2016; Strumsky and Lobo, 2015) based on patent data are also related.", "In this paper, we tackle the challenging problem of predicting patent approval decisions as per 35 U.S. Code 102, namely the novelty-based decisions.", "We have prepared a large-scale benchmark dataset by consolidating different data sources from USPTO.", "From the evaluations of the popular document classifiers, BERT and LSTM are arguably the most effective ones.", "We identify the time-dependent challenge of the novelty judgement, and therefore propose AISeer, a novel framework going beyond traditional document classifiers.", "Specifically, we construct a claim-level core novelty feature along with several other handcrafted features and apply them on top of the pre-trained BERT model.", "We further propose to add monotonic regularization on the core novelty feature to resolve the potential label conflicts caused by the mechanism of the patent examination process.", "Experimental results have verified the superiority of AISeer and also the effectiveness of introducing the novelty feature and monotonic regularization.", "We believe that our work is beneficial to various parties, including patent applicants, attorneys, examiners, and regulators.", "While the advantages of our regularization methodology are significant, there is still room for metric improvements; thus, further developing this work will yield opportunities for promising future research and greater contributions to the communities.", "In the future, it is important to extend the scope from claims to the other sections of patent applications.", "Relationships among components and entities described in claims, and relations among claims, are also critical to investigate.", "We want to thank the anonymous reviewers for their insightful comments.", "The research was sponsored in part by National Science Foundation Convergence Accelerator under award OIA-2040727 as well as generous gifts from Google, Adobe, and Teradata.", "Any opinions, findings, conclusions, or recommendations expressed herein are those of the authors and should not be interpreted as necessarily representing the views, either expressed or implied, of the U.S. Government.", "The U.S. Government is authorized to reproduce and distribute reprints for government purposes notwithstanding any copyright annotation hereon.", "This paper focuses on patent approval prediction.", "The data is publicly available from USPTO and we collected it to form a large-scale dataset.", "Our architecture is built upon open-source models and all the datasets are available online.", "Therefore, we do not anticipate any major ethical concerns." ]
[ "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "method", "method", "result", "result", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "objective", "method", "abstain", "other", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "objective", "other", "other", "other", "other", "other", "other", "other", "method", "other", "method", "abstain", "abstain", "objective", "method", "objective", "abstain", "method", "objective", "abstain", "abstain", "other", "other", "other", "other", "method", "method", "method", "abstain" ]
[ "Open Information Extraction (OpenIE) is the task of extracting (subject, predicate, object) triples from natural language sentences.", "Current OpenIE systems extract all triple slots independently.", "In contrast, we explore the hypothesis that it may be beneficial to extract triple slots iteratively: first extract easy slots, followed by the difficult ones by conditioning on the easy slots, and therefore achieve a better overall extraction.", "Based on this hypothesis, we propose a neural OpenIE system, MILIE, that operates in an iterative fashion.", "Due to the iterative nature, the system is also modularit is possible to seamlessly integrate rule based extraction systems with a neural end-to-end system, thereby allowing rule based systems to supply extraction slots which MILIE can leverage for extracting the remaining slots.", "We confirm our hypothesis empirically: MILIE outperforms SOTA systems on multiple languages ranging from Chinese to Arabic.", "Additionally, we are the first to provide an OpenIE test dataset for Arabic and Galician.", "Open Information Extraction (OpenIE) aims to extract structured facts in the form of (subject, relation, object) -triples from natural language sentences (Etzioni et al., 2008).", "For example, given a sentence, \"Barrack Obama became the US President in the year 2008\" , an OpenIE system is expected to extract the following triples: ( Barrack Obama ; became ; US President ) and ( Barrack Obama ; became US President in ; 2008 ).", "We refer to subject, predicate and the object of the triple as slots of a triple.", "OpenIE extractions are schema-free, human understandable intermediate representations of facts in source texts (Mausam, 2016).", "They are useful in a variety of information extraction end tasks such as summarization (Xu and Lap-ata, 2021), question answering (Khot et al., 2017; Yan et al., 2018) and automated schema extraction (Nimishakavi et al., 2016).", "The various slots of a triple are dependent on each other and hence an error in one slot renders the entire extraction unusable.", "We hypothesize that triple extraction errors largely stem from the difficulty of extracting certain slots of a triple and said difficulty may depend on the sentence construction and the language.", "For example, \"Barrack Obama became the US President in the year 2008\" contains two triples (Barrack Obama; became; US President) and (Barrack Obama; became US President in; 2008) .", "Extracting the predicate, \"became US President in\" , for the second triple is tricky, because the object of the first triple (US President) overlaps with the predicate of the second triple.", "But if the extraction system was provided with the object, (2008) , and then asked to extract a triple conditioned on this object, the predicate extraction would be easier.", "This is precisely the hypothesis we wish to investigate is it easier to extract certain slots of a triple, say subjects, compared to other slots, such as objects, and is it possible to improve performance by leveraging specific slot extraction orders?", "Given the hypothesis, we propose MILIE, a M odular & I terative multi L ingual open Information Extraction system, which iteratively extracts the different slots of a triple.", "The iterative nature allows for (1) studying the effects of a slot extractions on the remaining extractions, (2) extracting easier triple slots followed by harder ones, (3) aggregating different slot extraction orders as a mixture of experts, and (4) integrating slots supplied by an 
external rule-based system, resulting in a hybrid system.", "The latter offers a system that combines the best of neural and rule-based systems, e.g. by using a rule-based system to extract high-precision slots on which the neural system is conditioned.", "It proves especially useful for zero-shot multilingual extraction, which we evaluated on five different low-resource languages.", "Additionally, we show how MILIE can leverage rule-based slot extraction by conditioning on it to predict the remaining parts of the triple.", "Therefore MILIE is a boon for existing applications wishing to transition from a rule-based information extraction system to a neural one, because MILIE allows using the rule-based system to compensate for the lack of exhaustive training data.", "Finally, we perform linguistic analyses that uncover useful insights on how different languages either make it easy or difficult for OpenIE systems to extract individual elements of the triple.", "Our contributions are summarized as follows:", "1. We propose MILIE, a multilingual OpenIE system that iteratively extracts the different slots of a triple.", "2. We carry out extensive experiments on a variety of languages (English, Chinese, German, Arabic, Galician, Spanish, and Portuguese) and demonstrate that MILIE outperforms recent SOTA systems by a wide margin, especially on languages other than English.", "3. We perform an extensive analysis based on ablation studies and uncover interesting insights about the nature of the OpenIE task in different languages.", "The backbone of our system is the iterative procedure (Section 2.1), which allows us to investigate our hypothesis.", "The iterative procedure allows us to extract triple slots in various pathway orders, which results in a series of possible aggregation schemes (Section 2.2).", "To create a strong iterative system, the training paradigm (Section 2.3) needs to consider two aspects: (1) it needs to prepare incomplete triple extractions that the system is expected to complete; (2) it creates negative samples that allow for teaching the system when not to continue with an extraction due to a prior error.", "With the iterative nature, we also integrate rule-based systems (Section 2.4) and elegantly handle the specific case of n-ary extractions, where more than 3 slots need to be extracted (Section 2.5).", "On top of this block, we add a total of four neural network blocks in parallel, which we refer to as heads and which are each in charge of extracting a particular triple slot.", "Concretely, we have the heads f_s, f_o, f_p, f_a, which are in charge of predicting subject, object, predicate, and argument, respectively.", "The argument head is an extra feature needed for n-ary extractions that occur in some datasets, where in addition to the triple there might be an argument that modifies the triple, e.g., a temporal phrase.", "Given an input sequence of words of length N, S = w_1, ..., w_N, the task for each extraction head is framed as a BIO tagging problem.", "For this, each output head outputs a label l_i for token w_i, where l_i ∈ {B, I, O}, i = 1, ..., N (see Figure 1 for the architecture).", "The output heads use the final transformer hidden state and predict labels denoted by L^s, L^o, L^p, L^a, where L^(·) = l_1, l_2, ..., l_N.", "By having different extraction heads, we identify extraction slots iteratively.",
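The head layout just described can be sketched as follows; this is a minimal PyTorch illustration assuming an mBERT-style encoder and simple linear heads, not the authors' exact code.

```python
import torch.nn as nn
from transformers import AutoModel

class IterativeSlotTagger(nn.Module):
    """Shared encoder with four BIO tagging heads: subject, predicate,
    object, and n-ary argument (a sketch of the MILIE head layout)."""
    def __init__(self, model_name="bert-base-multilingual-cased", num_bio=3):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.heads = nn.ModuleDict({
            slot: nn.Linear(hidden, num_bio)  # B/I/O logits per token
            for slot in ("subject", "predicate", "object", "argument")
        })

    def forward(self, input_ids, attention_mask, slot):
        # Previous extractions are assumed to be marked in input_ids with
        # special symbols such as <P>, <S>, <O> before this call (see below).
        hidden = self.encoder(input_ids, attention_mask=attention_mask)
        return self.heads[slot](hidden.last_hidden_state)
```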
"At each iteration, along with the input sentence, the model also expects the extractions predicted by the previous iterations.", "To provide this information, we add special symbols to the sentence that explicitly mark the previous extractions in the sentence.", "For example, we surround the predicate with the symbol <P>, the subject with <S>, and the object with <O>.", "For example, for predicting the object given the predicate extracted in the previous iteration, the extracted predicate is marked in the sentence using the <P> symbol, and the sentence is consequently passed through the transformer for predicting the object using the object head.", "We always extract the arguments in the last iteration, therefore we do not mark the arguments in the sentence.", "Finally, we add the option to attach a dependency tag t_i to each word w_i in the sequence.", "This additional information may allow the system to more effectively learn how to extract triples.", "We use a language-specific dependency tagger for obtaining the tags.", "We target languages which are low-resource for OpenIE but could be high-resource for other tasks, such as PoS tagging or dependency parsing.", "For a graphical overview of the MILIE architecture, see Figure 1.", "Preliminary experiments suggested that predicting the argument last leads to better overall results.", "This makes sense intuitively, as the argument can modify the entire triple.", "The order in which the different triple parts are extracted can be varied.", "This allows us to investigate the challenge of extracting triple elements in a specific order in different languages.", "Additionally, different pathways aid different kinds of extractions, and combining them results in a richer set of extractions.", "Choosing a particular order defines a decoding pathway P_uvxy as a sequence of output heads, where u, v, x, y ∈ {s, p, o, a}.", "For example, the decoding pathway P_spoa denotes the sequence of output functions (f_s, f_p, f_o, f_a).", "Fixing the n-ary argument extraction in the final iteration, we obtain the following six decoding pathways: P_spoa, P_sopa, P_psoa, P_posa, P_ospa, P_opsa.", "Let's assume the decoding pathway P_psoa: predicates are extracted first; then for each predicate, subjects are extracted; then for each (predicate, subject) pair, objects are extracted; and finally for every extracted (predicate, subject, object) tuple, all the n-ary arguments are extracted.", "This extraction procedure preserves the relationships between the extracted elements, resulting in correctly extracting multiple triples.", "Figure 2 illustrates this procedure.", "We hypothesize that some triples are easier to predict if, e.g., the predicate is extracted first, while for others subject-first would work well.", "This could differ from triple to triple, but also across languages.", "Consequently, some decoding pathways might be more error-prone than others.", "This leads to two questions: (1) Which pathways are best?", "(2) Can we improve recall by aggregating triples using different decoding pathways?", "We propose a simple algorithm we term Water Filling (WF) for aggregating the extractions.", "This is inspired by the power allocation problem in the communication engineering literature (Kumar et al., 2008).", "Imagine a thirsty person with access to different pots of water with varying levels of purity, with the caveat that the amount of water is inversely proportional to the purity.", "The natural solution is to first drink the high-purity water and move on to the pots with decreasing levels of purity until the thirst is quenched.", "We use the same idea.", "Treating each decoding pathway as an expert, we assume that the triples extracted by all 6 pathways are more accurate than those extracted by only 5 pathways, 4 pathways, and so on.", "This can be thought of as triples obtaining votes from experts.", "Starting with an empty set, for each sentence we add triples to the set in order of decreasing number of received votes.", "The normalized votes a triple receives are used as the confidence value of the triple.", "Although the procedure is explained in a sequential manner, it can be parallelized by running all 6 pathways in parallel.",
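A minimal sketch of this vote-based Water Filling aggregation; the triple representation (plain tuples) and normalization are assumptions.

```python
from collections import Counter

def water_filling(pathway_extractions):
    """Aggregate triples from multiple decoding pathways by expert votes.

    pathway_extractions: list (one entry per pathway) of collections of
    triples, each triple a (subject, predicate, object) tuple.
    Returns triples sorted by decreasing vote count, with normalized votes
    serving as confidence values.
    """
    num_pathways = len(pathway_extractions)
    votes = Counter(t for triples in pathway_extractions for t in set(triples))
    aggregated = []
    # Add triples in order of decreasing number of received votes.
    for triple, n_votes in votes.most_common():
        aggregated.append((triple, n_votes / num_pathways))  # confidence
    return aggregated
```

Because each pathway's extraction is independent, the six calls feeding `pathway_extractions` can indeed run in parallel before the votes are counted.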
"Triple preparation.", "For effectively extracting different triple slots conditioned on other slots, the model needs to see such combinations during training.", "However, enumerating all possible combinations exhaustively is prohibitively expensive.", "We propose a sampling technique that ensures that the model sees varied combinations of different targets and prior extractions.", "This is done by creating a training set that simulates a prior extraction and forces the model to predict the next extraction.", "To ensure that the training dataset size does not explode, we randomly sample one pathway order for each training instance.", "Based on the sampled pathway, we randomly sample the step of the decoding process we are at, then mark the slots prior to this step in the sentence and use the remaining steps as target labels.", "We allow for multiple instances of the target labels; however, there is only one instance of the marked element.", "For example, given one subject, the target could be multiple predicates.", "This procedure trains the model to predict an appropriate label conditioned on a variety of previous predictions.", "At each time step, we update the parameters of the currently used head and the underlying model.", "Given that triples are at different steps in their decoding process, we minimize different log-likelihood functions.", "We describe the log-likelihood functions along with a few examples of the training instances in Table 1.", "[Table 1: Likelihood functions and example training instances; each row gives likelihood | input sentence | head | target:
$L_p = \sum_{i=1}^{N} \log p(l_i^p \mid f_p(\cdot); S)$ | The Taj Mahal was built by Shah Jahan in 1643 | Predicate | built by
$L_s = \sum_{i=1}^{N} \log p(l_i^s \mid f_s(\cdot); S; L^p)$ | The Taj Mahal was <P>built by<P> Shah Jahan in 1643 | Subject | Taj Mahal
$L_o = \sum_{i=1}^{N} \log p(l_i^o \mid f_o(\cdot); S; L^p; L^s)$ | The <S>Taj Mahal<S> was <P>built by<P> Shah Jahan in 1643 | Object | Shah Jahan
$L_a = \sum_{i=1}^{N} \log p(l_i^a \mid f_a(\cdot); S; L^p; L^s; L^o)$ | The <S>Taj Mahal<S> was <P>built by<P> <O>Shah Jahan<O> in 1643 | Argument | in 1643
$L_p = \sum_{i=1}^{N} \log p(l_i^p \mid f_p(\cdot); S; L^s; L^o)$ | The <S>Taj Mahal<S> was built by <O>Shah Jahan<O> in 1643 | Predicate | built by]", "We list additional details in Appendix A.", "Negative Sampling.", "Iterative prediction is prone to error amplification, i.e. if an error is made during the first iteration, then the error propagates and affects subsequent extractions.", "Anticipating this, we train MILIE to recognize extraction errors made in the previous iteration.", "We purposely augment the training data with corrupted data points containing incorrectly marked extractions.", "For each of the incorrect extractions, the model is trained to predict a blank extraction, i.e., predicting the outside label for all tokens.", "We use a similar sampling procedure as described previously.", "For every training data point from a fixed number of training data points, we create negative samples using one of the three techniques below and then choose k negative samples, where k is a hyperparameter.", "We corrupt triples using three techniques: (1) corrupting the predicates by replacing them with randomly chosen tokens from the sentence, (2) corrupting the subject and object by exchanging them, and (3) mismatching the subject-object pairs from different triples.", "We detail the entire procedure in Appendix A.",
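A sketch of the three corruption techniques used for negative sampling; the triple representation and random choices are illustrative assumptions.

```python
import random

def corrupt_triple(triple, sentence_tokens, other_triples):
    """Create one negative training sample from a gold triple; the model is
    trained to predict the outside (O) label for all tokens on such inputs."""
    subj, pred, obj = triple
    techniques = ["predicate", "swap"] + (["mismatch"] if other_triples else [])
    technique = random.choice(techniques)
    if technique == "predicate":
        # (1) Replace the predicate with randomly chosen sentence tokens.
        k = min(2, len(sentence_tokens))
        pred = " ".join(random.sample(sentence_tokens, k=k))
    elif technique == "swap":
        # (2) Exchange the subject and the object.
        subj, obj = obj, subj
    else:
        # (3) Mismatch: pair the subject with an object from another triple.
        _, _, obj = random.choice(other_triples)
    return subj, pred, obj
```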
"2.4 Integrating Linguistic Rule-based Systems", "Crucially, each output head is conditioned on the input and the output labels extracted by the previous head.", "This feature allows MILIE to seamlessly integrate rule-based systems with neural systems, since the conditioning can also be done on extractions obtained from rule-based systems.", "This is advantageous in situations where a linguistic rule-based system works well for, say, extracting objects.", "Then MILIE can complete the missing parts of the triple conditioned on the objects.", "We treat the output of the rule-based system as potential objects paired with subjects and extract the predicate connecting them.", "If the rule-based extraction is incorrect, then MILIE can detect the error and extract nothing.", "This results in more accurate extractions compared to simply post-processing the extracted tokens using linguistic rules.", "We evaluate MILIE on both n-ary and binary triple extraction datasets.", "One simple way to convert the n-ary extractions to binary extractions is to ignore the n-ary arguments.", "However, this will lead to a decrease in recall, because the n-ary arguments may not be part of other extracted triples due to the initial n-ary extraction.", "Another method is to treat the extracted n-ary arguments as objects of the same subject-predicate pair.", "This would ensure that the extracted arguments are not dropped; however, this may result in a drop in precision, since the n-ary argument may not attach to the same predicate.", "For example, consider the extraction (Barack Obama; became; US President; in the year 2008).", "Treating n-ary arguments as objects results in (Barack Obama; became; US President) and (Barack Obama; became; in the year 2008), an incorrect extraction.", "In contrast to the above subpar solutions, the iterative nature of MILIE allows us to elegantly address the problem of converting n-ary extractions into a binary format: we treat the extracted n-ary arguments as hypothesized objects.", "We then provide the extracted subject and hypothesized object pair to the model, which extracts a new predicate conditioned on the previously extracted subject and the hypothesized object, i.e., $p(L^p \mid f_p(\cdot); S; L^s = \text{\"Barack Obama\"}; L^o = \text{\"year 2008\"})$.", "This creates the possibility of extracting the correct predicate, something that is not possible with existing n-ary OpenIE systems.",
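A sketch of this binarization step; `predict_slot` is a hypothetical wrapper around the predicate head, and the rejection behavior is an assumption based on the blank-extraction training above.

```python
def binarize_nary(model, sentence, nary_extraction, predict_slot):
    """Convert an n-ary extraction into binary triples by re-predicting a
    predicate for each n-ary argument treated as a hypothesized object.

    `predict_slot(model, sentence, slot, subject=..., object=...)` is a
    hypothetical helper that marks the given slots (<S>, <O>) in the
    sentence and runs the corresponding head; it returns None when the
    model predicts a blank extraction (all-O), rejecting the hypothesis."""
    subj, pred, obj, *args = nary_extraction
    triples = [(subj, pred, obj)]
    for arg in args:
        new_pred = predict_slot(model, sentence, "predicate",
                                subject=subj, object=arg)
        if new_pred is not None:
            triples.append((subj, new_pred, arg))
    return triples
```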
triples from multiple languages and therefore it is the only available baseline for the non-English evaluations.", "We use the English RE-OIE2016 (Zhan and Zhao, 2020) training dataset used in (Ro et al., 2020).", "This training dataset contains n-ary extractions allowing MILIE to be evaluated on both n-ary as well as binary extraction benchmarks.", "Evaluation on languages other than English is always zero-shot, i.e., the model is trained using only the English Re-OIE2016 dataset and tested on test set of the other languages.", "CaRB benchmark.", "We use the CARB benchmark introduced in (Bhardwaj et al., 2019) for evaluating English OpenIE n-ary extraction.", "However, the CARB benchmark also suffers from serious shortcomings due to its evaluation method based on token overlaps.", "For example, (Gashteovski et al., 2021) discovered that a simple OpenIE system that breaks the sentence into a triple at the verb boundary achieves 0 .", "70 recall and 0 .", "19 precision.", "This is problematic since it indicates that simply adding extraneous words to the extraction results in improved recall.", "BenchIE benchmark.", "Due to the issues identi-fied for CaRB, we also evaluate using BenchIE, which is an exhaustive fact based multilingual OpenIE benchmark proposed by (Gashteovski et al., 2021).", "BenchIE evaluates explicit binary extractions in English, Chinese, and German.", "BenchIE is accompanied by an annotation tool, AnnIE (Friedrich et al., 2021), for extending the benchmark to additional languages.", "For Arabic, we translated 100 sentences from BenchIE-English to Arabic with the help of a native Arabic speaker and then extracted triples using AnnIE.", "Similarly for Galician we translated all 300 sentences to Galician 6943 Chinese German Arabic Galician F1 P R F1 P R F1 P R F1 P R M2OIE 17.1 25.7 12.8 4.0 8.9 2.6 4.9 16.3 2.9 8.7 14.7 6.2 milIE 20.5 25.2 17.3 8.5 13.4 6.3 18.3 23.7 14.8 DEP 19.2 19.8 18.7 8.4 11.3 6.7 7.3 14.2 4.9 13.9 16.6 11.9 NS 17.3 19.6 15.5 10.3 14.3 8.0 4.0 10.8 2.5 13.7 18.5 10.9 Bin 20.0 22.0 18.4 9.0 13.5 6.7 7.5 13.8 5.1 17.3 21.7 14.4 Table 3: MILLIE performance comparison on multilingual BenchIE.", "Multilingual CaRB.", "Additionally we also evaluate MILIE on the Spanish and Portuguese multilingual CaRB datasets introduced in Ro et al. (2020).", "The lexical match evaluation used in this dataset has numerous shortcomings (Bhard-waj et al., 2019), however we include it for a fair comparison to Ro et al. 
(2020)'s Multi2OIE system.", "The CaRB test set was translated into Spanish and Portuguese using the Google Translate API.", "To investigate the quality of these automatic translations, we randomly sampled 100 sentences from the test sets and had them evaluated by native Spanish and Portuguese speakers.", "To our surprise, we discovered that around 70 percent of the sentence or extraction translations were inaccurate.", "Table 2 shows a few examples of the incorrect translations.", "For an accurate and clean comparison with Multi2OIE, we also cleaned up part of the Spanish test set by re-translating 149 sentences and their extractions into Spanish.", "These translations were done by native Spanish speakers.", "On the CaRB English benchmark we use the results for baselines reported in (Ro et al., 2020) and (Kolluru et al., 2020a).", "For evaluating on BenchIE, we run all the baselines on the BenchIE English evaluation benchmark.", "For multilingual BenchIE, we train Multi2OIE using the code and hyperparameters supplied in the paper.", "For hyperparameter tuning we use the CaRB English validation set and use the F1 scores obtained with the CaRB evaluation procedure to compare models with different hyperparameters.", "The MILIE model is trained using negative sampling and includes the dependency tag information and binarization.", "We use the spaCy dependency parser for obtaining dependency tags.", "We were unable to find a dependency parsing tool with universal dependencies for Arabic, and therefore we did not use dependency tags for Arabic.", "For BenchIE, MILIE uses the binarization function described in Section 2.5, but not for CaRB and lexical match, because they evaluate n-ary extractions.", "In Table 5, we compare MILIE with several unsupervised and supervised baselines in English on CaRB and BenchIE.", "MILIE performs much better than the other neural baselines on BenchIE.", "This is not the case for the CaRB dataset, since CaRB penalizes compact extractions and rewards longer extractions (Gashteovski et al., 2021).", "Although rule-based systems like ClausIE and MinIE outperform neural systems, they cannot be used for languages other than English.", "In Table 3, we compare MILIE with Multi2OIE (M2OIE) on the multilingual BenchIE benchmark.", "MILIE performs significantly better than Multi2OIE for all the languages.", "For German and Arabic, both Multi2OIE and MILIE perform significantly worse than for the other languages.", "The presence of separable prefixes in German verbs, which cannot be extracted using BIO tags, results in low performance.", "The BIO tagging scheme assumes continuity of phrases, which is absent for most German verbs present in predicates, resulting in extremely low recall.", "For Arabic, the low scores are due to the Verb-Subject-Object nature of the Arabic language, along with the fact that subjects or objects can be expressed as part of the verb.", "This calls for additional research on framing OpenIE tasks for languages such as German and Arabic.", "MILIE significantly outperforms Multi2OIE for Galician, which is closely related to Portuguese.", "Ablation results in Table 3 also indicate the usefulness of adding the dependency tags, negative sampling, and the binarization mechanism.", "In Table 4, we compare MILIE with Multi2OIE on the CaRB lexical match benchmark.", "MILIE without negative sampling works best on the clean Spanish data.", "This is not due to the language, but due to the lexical match evaluation, which rewards overly long extractions even
if incorrect.", "Not using negative sampling sometimes improves recall, which may improve the F1 score.", "This is observed for the German benchmark.", "MILIE can easily integrate any rule-based system that extracts even a part of the triple.", "To evaluate this, we first simulate a system that only extracts the object, and use MILIE to extract the other parts of the triple.", "We do this by employing ClausIE to extract triples from the BenchIE English data, using only the object and discarding the rest of the triple.", "The reason behind selecting object extraction from ClausIE is the fact that neural systems are not good at extracting objects (Kolluru et al., 2020a).", "This is also seen from additional experiments detailed in Section 4.", "Table 6 indeed confirms that combining rule-based object extraction with MILIE improves performance by over 6% in F1 score.", "This showcases that MILIE's ability to integrate other systems can be a great advantage.", "We would like to analyze whether the ability of MILIE to extract triples using different extraction pathways results in improved performance on multilingual data.", "For this, we compare MILIE with the water filling aggregation against MILIE with individual extraction pathways.", "We also compare with a dynamic decoding scheme where MILIE chooses a decoding pathway based on the sentence.", "To do this, we split off a part of the English training set, and for each sentence in the split we record the extraction pathway that provides the best F1 score as per the CaRB evaluation.", "We then use this as training data to train another mBERT model which classifies each sentence into one of six classes, where each class represents an extraction pathway.",
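A sketch of this dynamic pathway selection; the training loop is omitted, and the label construction from (sentence, best-pathway) pairs is an assumption.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

PATHWAYS = ["spoa", "sopa", "psoa", "posa", "ospa", "opsa"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
classifier = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=len(PATHWAYS))

def choose_pathway(sentence):
    # Predict which of the six decoding pathways to use for this sentence;
    # the classifier is assumed fine-tuned on (sentence, best-pathway) pairs
    # recorded on the held-out English split.
    inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
    logits = classifier(**inputs).logits
    return PATHWAYS[logits.argmax(dim=-1).item()]
```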
errors result from extracting incorrect objects compared to predicates and subjects.", "The percentage sum does not add to hundred because an incorrect triple can contain errors in more than one slot.", "OpenIE systems largely come in two flavors, (1) unsupervised OpenIE systems that use fine grained rules based on dependency parse trees (Del Corro and Gemulla, 2013; Gashteovski et al., 2017; Lauscher et al., 2019), and (2) supervised neural OpenIE systems, trained end-to-end with large training datasets (Stanovsky et al., 2018; Ro et al., 2020; Kolluru et al., 2020a).", "Neural OpenIE systems characterize OpenIE as either a sequence tagging task (Stanovsky and Dagan, 2016; Ro et al., 2020), span prediction task or a sequence generation task (Kolluru et al., 2020b).", "However all these prior approaches extract a triple in a single step, which does not allow us to study the effect of extracting a specific slot and its effect on extracting the rest of the triple.", "Neural generative approaches to OpenIE use sequence-to-sequence models with a copy mechanism for generating triples (Sun et al., 2018; Kolluru et al., 2020b).", "The copy mechanism needs to be learned and is often a source of errors.", "A series of alternative approaches cast OpenIE as a sequence tagging task where each token is tagged as subject, predicate or object using a BIO like tagging scheme (Stanovsky et al., 2018; Ro et al., 2020; Kolluru et al., 2020a).", "In these systems, all triple slots are extracted simultaneously and it is therefore not possible to condition on easier slots.", "More closely related to our work is SpanOIE (Zhan and Zhao, 2020) and Multi2OIE (Ro et al., 2020), which first extracts the predicate and then 6946 all additional arguments.", "Like us, Multi2OIE (Ro et al., 2020) addresses multilinguality by leveraging a pretrained BERT model (Devlin et al., 2019) for transfer learning.", "In contrast, through our iterative nature, it is possible to enrich the extractions in other languages if rule based models or other models (e.g. 
NER recognizers) exist to provide input for a triple slot.", "IMOJIE (Kolluru et al., 2020b) iteratively extracts entire triples from a sentence: first a triple is extracted, which is added to the input to extract the next triple.", "In contrast, our work iteratively extracts the slots of a single triple, which allows us to condition on the easier slots and therefore obtain higher quality triples.", "(Kolluru et al., 2020a) propose OpenIE6, a BERT based system, with iterative grid labelling and linguistic constraint based training.", "Such lingusitic constraints with soft penalties cannot be readily ported to other languages since such constraints use head verb based heuristics.", "Consequently OIE 6 is evaluated only on English.", "We introduced MILIE, a modular & iterative multilingual OpenIE system.", "We confirmed our hypothesis that it is beneficial to extract triple slots iteratively which allows us to extract easier slots first.", "Our experiments on English as well as five low resource languages uncovered that, with the exception of Arabic, triples are easier to extract if the predicate is extracted first followed by the subject and object.", "More importantly we discovered that extracting triples using multiple extraction pathways is superior than the standard single extractions especially in the multilingual setting.", "We also demonstrated how MILIE can be combined seamlessly with rule based systems for improving performance.", "Although our experiments were focused on the OpenIE task, we believe that the insights gained can be translated to other information extraction tasks with coupled extractions.", "We plan to explore such connections in the future." ]
[ "abstain", "abstain", "objective", "objective", "abstain", "result", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "method", "result", "abstain", "objective", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "other", "other", "method", "other", "other", "other", "other", "abstain", "abstain", "objective", "other", "objective", "other", "other", "other", "abstain", "objective", "objective", "method", "objective", "method", "objective" ]
[ "Recent research demonstrates the effectiveness of using fine-tuned language models (LM) for dense retrieval.", "However, dense retrievers are hard to train, typically requiring heavily engineered fine-tuning pipelines to realize their full potential.", "In this paper, we identify and address two underlying problems of dense retrievers:", "i) fragility to training data noise and", "ii) requiring large batches to robustly learn the embedding space.", "We use the recently proposed Condenser pre-training architecture, which learns to condense information into the dense vector through LM pre-training.", "On top of it, we propose coCondenser, which adds an unsupervised corpus-level contrastive loss to warm up the passage embedding space.", "Experiments on MS-MARCO, Natural Question, and Trivia QA datasets show that coCondenser removes the need for heavy data engineering such as augmentation, synthesis, or fil-tering, and the need for large batch training.", "It shows comparable performance to RocketQA, a state-of-the-art, heavily engineered system, using simple small batch fine-tuning.", "1 1 Introduction Building upon the advancements of pre-trained language models (LM; Devlin et al. (2019); Liu et al. (2019)), dense retrieval has become an effective paradigm for text retrieval (Lee et al., 2019; Chang et al., 2020; Karpukhin et al., 2020; Qu et al., 2021; Gao et al., 2021a; Zhan et al., 2022).", "Recent research has however found that fine-tuning dense retrievers to realize their capacity requires carefully designed fine-tuning techniques.", "Early works include iterative negative mining (Xiong et al., 2021) and multi-vector representations (Luan et al., 2020).", "The recent RocketQA system (Qu et al., 2021) sig-nificantly improves the performance of a dense retriever by designing an optimized fine-tuning 1 Our code is available at https://github.com/ luyug/Condenser .", "pipeline that includes", "i) denoising hard negatives, which corrects mislabeling, and", "ii) large batch training.", "While this is very effective, the entire pipeline is very heavy in computation and not feasible for people who do not have tremendous hardware resources, like those in academia.", "In this paper, we ask, instead of directly using the pipeline, can we take the insights of RocketQA to perform language model pre-training such that the pre-trained model can be easily fine-tuned on any target query set.", "Concretely, we ask what the optimized training in RocketQA solves.", "We hypothesize that typical LMs are sensitive to mislabeling, which can cause detrimental updates to the model weights.", "Denoising can effectively remove the bad samples and their updates.", "On the other hand, for most LMs, the CLS vectors are either trained with a simple task (Devlin et al., 2019) or not explicitly trained at all (Liu et al., 2019).", "These vectors are far from being able to form an embedding space of passages (Lee et al., 2019).", "The large training batches in RocketQA help the LM to stably learn to form the full embedding space.", "To this end, we want to pre-train an LM such that it is locally noise-resistant and has a well-structured global embedding space.", "For noise resistance, we borrow the Condenser pre-training architecture (Gao and Callan, 2021), which performs language model pretraining actively conditioning on the CLS vector.", "It produces an information-rich CLS representation that can robustly condense an input sequence.", "We then introduce a simple corpus level contrastive learning objective: given 
a target corpus of documents to retrieve from, at each training step sample text span pairs from a batch of documents and train the model such that the CLS embeddings of two spans from the same document are close and spans from different documents are far apart.", "Combining the two, we propose coCondenser pre-training, which unsupervisedly learns a corpus-aware pre-trained model for dense retrieval.", "In this paper, we test coCondenser pre-training on two popular corpora, Wikipedia and MS-MARCO.", "Both have served as information sources for a wide range of tasks.", "This popularity justifies pre-training models specifically for each of them.", "We directly fine-tune the pre-trained coCondenser using small training batches without data engineering.", "On the Natural Questions, TriviaQA, and MS-MARCO passage ranking tasks, we found that the resulting models perform on par with or better than RocketQA and other contemporary methods.", "Dense Retrieval.", "Transformer LMs have advanced the state-of-the-art of many NLP tasks (Devlin et al., 2019; Liu et al., 2019; Yang et al., 2019; Lan et al., 2020), including dense retrieval.", "Lee et al. (2019) are among the first to demonstrate the effectiveness of Transformer dense retrievers.", "They proposed a simple Inverse Cloze Task (ICT) method to further pre-train BERT (Devlin et al., 2019).", "Follow-up works explored other pre-training tasks (Chang et al., 2020) as well as end-to-end co-training of reader and retriever (Guu et al., 2020).", "Karpukhin et al. (2020) were the first to discover that careful fine-tuning can learn an effective dense retriever directly from BERT.", "Later works then started to investigate ways to further improve fine-tuning (Xiong et al., 2021; Qu et al., 2021).", "Among them, Qu et al. (2021) proposed the RocketQA fine-tuning pipeline, which hugely advanced the performance of dense retrievers.", "Until very recently, pre-training for dense retrieval had been left unexplored.", "A concurrent work, DPR-PAQ (Oguz et al., 2021), revisits pre-training and proposes domain-matched pre-training, using a 65-million-size synthetic QA pair dataset generated with pre-trained Natural Questions and TriviaQA pipelines to pre-train dense retrievers.", "This paper uses a recently proposed dense retrieval pre-training architecture, Condenser (Gao and Callan, 2021).", "Unlike previous works that design pre-training tasks, Condenser explored the idea of designing a special pre-training architecture to improve representation effectiveness.", "One reason why dense retrieval is of immediate great value is that there is a rich literature that studies efficient dense retrieval for first-stage retrieval (Johnson et al., 2017; Guo et al., 2020).", "There are also mature dense retrieval libraries, such as FAISS (Johnson et al., 2017).", "By pre-encoding the corpus into a MIPS index, retrieval can run online with millisecond-level latency (Johnson et al., 2017; Guo et al., 2020).", "Contrastive Learning.", "Contrastive learning has become a very popular topic in computer vision (Chen et al., 2020; He et al., 2020).", "Recent works have brought the idea to natural language processing to learn high-quality sentence representations (Giorgi et al., 2020; Wu et al., 2020).", "In this work, we use contrastive learning for dense retrieval pre-training.", "Different from earlier work, instead of individual representations (Giorgi et al., 2020), we are interested in the full learned embedding space, which we will use to warm-start the retriever.", "The large batch requirement had
been a limiting factor in contrastive learning (Chen et al., 2020) under resource-limited setups where GPU (accelerator) memory is not sufficiently large.", "In general, this extends to any training procedure that uses a contrastive loss, including dense retrieval pre-training (Guu et al., 2020; Chang et al., 2020).", "Gao et al. (2021b) recently devised a gradient cache technique that upper-bounds the peak memory usage of contrastive learning to almost a constant.", "In subsection 3.3, we show how to adapt it for coCondenser pre-training.", "In this section, we first give a brief review of Condenser.", "Then we discuss how to extend it to coCondenser and how to perform memory-efficient coCondenser pre-training.", "In this paper, we adopt a special pre-training architecture, Condenser (Gao and Callan, 2021).", "Condenser is a stack of Transformer blocks.", "As shown in Figure 1, these Transformer blocks are divided into three groups: early backbone encoder layers, late backbone encoder layers, and head layers.", "An input $x = [x_1, x_2, \dots]$ is first prepended with a CLS token, embedded, and run through the backbone layers: $[h^0_{cls}; h^0] = \text{Embed}([\text{CLS}; x])$ (1), $[h^{early}_{cls}; h^{early}] = \text{Encoder}_{early}([h^0_{cls}; h^0])$ (2), $[h^{late}_{cls}; h^{late}] = \text{Encoder}_{late}([h^{early}_{cls}; h^{early}])$ (3).", "The head takes the CLS representation from the late layers but, through a short circuit, the token representations from the early layers.", "This late-early pair then runs through the head's Transformer blocks.", "The head's outputs are then used for masked language model (MLM; Devlin et al. (2019)) training.", "To utilize the capacity of the late layers, Condenser is forced to learn to aggregate information into the CLS token, which will then participate in the LM prediction.", "Leveraging the rich and effective training signal produced by MLM, Condenser learns to utilize the powerful Transformer architecture to generate dense CLS representations.", "We hypothesize that with this LM objective, typically used to train token representations, now put on the dense CLS representation, the learned LM gains improved robustness against noise.", "While Condenser can be trained on a diverse collection of corpora to produce a universal model, it is not able to solve the embedding space issue: while information embedded in the CLS can be non-linearly interpreted by the head, inner products between these vectors still lack semantics.", "Consequently, they do not form an effective embedding space.", "To this end, we augment the Condenser MLM loss with a contrastive loss.", "Unlike previous work that pre-trains on artificial query-passage pairs, in this paper we propose to simply pre-train the passage embedding space in a query-agnostic fashion, using a contrastive loss defined over the target search corpus.", "Concretely, given a random list of $n$ documents $[d_1, d_2, \dots, d_n]$, we extract randomly from each a pair of spans, $[s_{11}, s_{12}, \dots, s_{n1}, s_{n2}]$.", "These spans then form a training batch of coCondenser.", "Given a span $s_{ij}$'s corresponding late CLS representation $h_{ij}$, its corpus-aware contrastive loss is defined over the batch as $\mathcal{L}^{co}_{ij} = -\log \frac{\exp(\langle h_{ij}, h_{i\bar{j}} \rangle)}{\sum_{(k,l) \neq (i,j)} \exp(\langle h_{ij}, h_{kl} \rangle)}$ (6), where $h_{i\bar{j}}$ denotes the other span from the same document $d_i$.", "Familiar readers may recognize this as the contrastive loss from SimCLR (Chen et al., 2020), for which we use random span sampling as augmentation.", "Others may see a connection to noise contrastive estimation (NCE).", "Here we provide an NCE narrative.", "Following the spirit of the distributional hypothesis, passages close together should have similar representations, while those in different documents should have different representations.", "Here we use random spans as surrogates for passages and enforce the distributional hypothesis through NCE, as in word embedding learning in Word2Vec (Mikolov et al., 2013).", "We can also recognize this as a span-level language model objective, or skip-span.",
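A minimal PyTorch sketch of this corpus-aware span contrastive loss (Equation 6); the batch layout (2n span vectors with document-mates adjacent) is an assumption.

```python
import torch
import torch.nn.functional as F

def cocondenser_contrastive_loss(h):
    """h: [2n, d] late CLS vectors, ordered (s_11, s_12, s_21, s_22, ...),
    so spans 2i and 2i+1 come from the same document (the positive pair).
    Returns the mean of L^co_ij over the batch."""
    two_n = h.size(0)
    sims = h @ h.t()                    # dot-product similarities
    sims.fill_diagonal_(float("-inf"))  # exclude self from the softmax
    # The positive for span 2i is span 2i+1, and vice versa.
    targets = torch.arange(two_n, device=h.device) ^ 1  # flip the last bit
    return F.cross_entropy(sims, targets)
```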
"Here we use random spans as surrogates of passages and enforce the distributional hypothesis through NCE, as in word embedding learning in Word2Vec (Mikolov et al., 2013).", "We can also recognize this as a span-level language model objective, or skip-span.", "Denote span $s_{ij}$'s Condenser MLM loss as $\mathcal{L}^{mlm}_{ij}$.", "The batch's loss is defined as the averaged sum of the MLM and contrastive losses, $\mathcal{L} = \frac{1}{2n} \sum_i \sum_j [\mathcal{L}^{mlm}_{ij} + \mathcal{L}^{co}_{ij}]$ (7), or, from an alternative perspective, of word and span LM losses.", "The RocketQA pipeline uses supervision and large-batch training to learn the embedding space.", "We would also like to run large-batch unsupervised pre-training to construct effective stochastic gradient estimators for the contrastive loss in Equation 6.", "To remind our readers, this large-batch pre-training happens only once for the target search corpus.", "We will show that this allows effective small-batch fine-tuning on task query sets.", "However, due to the batch-wise dependency of the contrastive loss, it requires fitting the large batch into GPU (accelerator) memory.", "While this can be done naively with interconnected GPU nodes or TPU pods, which can have thousands of gigabytes of memory, academia and smaller organizations are often restricted to machines with four commercial GPUs.", "To break the memory constraint and perform effective contrastive learning, we adapt the gradient caching technique (Gao et al., 2021b) to our setup.", "We describe the procedure here for readers who want to perform coCondenser pre-training but have limited resources.", "Denoting $\mathcal{L}^{co} = \sum_i \sum_j \mathcal{L}^{co}_{ij}$, we can write Equation 7 as $\mathcal{L} = \frac{1}{2n} \left[ \mathcal{L}^{co} + \sum_i \sum_j \mathcal{L}^{mlm}_{ij} \right]$ (8).", "The spirit of gradient caching is to decouple the representation gradient and encoder gradient computation.", "Before computing the model weight update, we first run an extra backbone forward pass for the entire batch.", "This provides the numerical values of $[h_{11}, h_{12}, ..., h_{n1}, h_{n2}]$, from which we compute $v_{ij} = \frac{\partial}{\partial h_{ij}} \sum_i \sum_j \mathcal{L}^{co}_{ij} = \frac{\partial \mathcal{L}^{co}}{\partial h_{ij}}$ (9), i.e., the contrastive loss gradient with respect to the CLS vector."
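A condensed sketch of the gradient-cached step (Equations 8-9) is given below. It assumes `encoder(chunk)` returns the late CLS matrix for a chunk of spans and `contrastive_loss` implements Equation 6 over the full batch; the MLM term of Equation 8 is omitted for brevity, and this is an illustration of the two-pass idea rather than the released library.

```python
import torch

def gradient_cached_step(encoder, spans, contrastive_loss, sub_batch):
    # Pass 1: representation-only forward over sub-batches, no autograd graph.
    with torch.no_grad():
        h = torch.cat([encoder(spans[i:i + sub_batch])
                       for i in range(0, len(spans), sub_batch)])
    # Equation 9: v_ij = dL_co / dh_ij, stored in the gradient cache C.
    h = h.detach().requires_grad_()
    contrastive_loss(h).backward()
    cache = h.grad
    # Pass 2: re-encode small sub-batches; the surrogate loss <h, v> accumulates
    # the same parameter gradients as the full-batch contrastive loss would.
    for i in range(0, len(spans), sub_batch):
        h_sub = encoder(spans[i:i + sub_batch])
        (h_sub * cache[i:i + sub_batch]).sum().backward()
```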
"We store all these vectors in a gradient cache, $C = [v_{11}, v_{12}, ..., v_{n1}, v_{n2}]$.", "Using $v_{ij}$ and denoting the model parameters $\Theta$, we can write the derivative of the contrastive loss as $\frac{\partial \mathcal{L}^{co}}{\partial \Theta} = \sum_i \sum_j v_{ij}^{\top} \frac{\partial h_{ij}}{\partial \Theta}$.", "We can then write the gradient of Equation 8.", "Since $v_{ij}$ is already in the cache $C$, each summation term now only concerns span $s_{ij}$ and its activations, meaning that we can compute the full batch's gradient in an accumulation fashion over small sub-batches.", "In other words, the full batch no longer needs to reside concurrently on the GPUs.", "At the end of pre-training, we discard the Condenser head, keeping only the backbone layers.", "Consequently, the model reduces to its backbone, effectively a Transformer encoder.", "We use the backbone weights to initialize the query encoder $f_q$ and the passage encoder $f_p$.", "Each outputs its last layer's CLS representation.", "Recall that they have already been warmed up in pre-training.", "The query and passage encoders are fine-tuned with supervision on the target task's training set.", "We train with a supervised contrastive loss, computing for query $q$ the negative log likelihood of a positive document $d^+$ against a set of negatives $\{d^-_1, d^-_2, ..., d^-_l, ...\}$.", "We run a two-stage training as described in the DPR (Karpukhin et al., 2020) toolkit.", "As shown in Figure 2b, in the first stage the retrievers are trained with BM25 negatives.", "The first-stage retriever is then used to mine hard negatives to complement the negative pool.", "The second-stage retriever trains with the negative pool generated in the first round.", "This is in contrast to the multi-stage pipeline of RocketQA shown in Figure 2a."
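A minimal sketch of the supervised contrastive fine-tuning loss described above, assuming CLS inner products as the scoring function (names here are illustrative, not the toolkit's API):

```python
import torch
import torch.nn.functional as F

def dense_retrieval_nll(q_cls: torch.Tensor,    # [d]    query CLS from f_q
                        pos_cls: torch.Tensor,  # [d]    positive passage CLS from f_p
                        neg_cls: torch.Tensor   # [l, d] negative passage CLS from f_p
                        ) -> torch.Tensor:
    scores = torch.cat([pos_cls.unsqueeze(0), neg_cls]) @ q_cls  # [1 + l]
    return -F.log_softmax(scores, dim=0)[0]     # the positive sits at index 0
```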
"In this section, we first describe the implementation details of coCondenser pre-training.", "We then conduct dense retrieval experiments to test the effectiveness of fine-tuned coCondenser retrievers.", "coCondenser pre-training starts with vanilla BERT and proceeds in two stages: universal Condenser pre-training and corpus-aware coCondenser pre-training.", "In the first stage, we pre-train a Condenser, warm-starting the backbone layers with pre-trained 12-layer BERT_base weights (Devlin et al., 2019).", "The backbone uses an equal split: 6 early layers and 6 late layers.", "The Condenser pre-training uses the same data as BERT: English Wikipedia and the BookCorpus.", "The first-stage Condenser pre-training takes roughly a week on 4 RTX 2080 Ti GPUs or 2 days on a v3-8 cloud TPU.", "The Condenser model from stage one, including both backbone and head, is taken to warm-start stage-two coCondenser pre-training on the target corpus (Wikipedia or the MS-MARCO web collection).", "We keep the Condenser architecture unchanged in the second stage.", "We use the AdamW optimizer with a learning rate of 1e-4, weight decay of 0.01, and linear learning rate decay.", "Each model weight update uses 2K documents.", "We train using gradient cache updates, as described in subsection 3.3.", "We used the released Condenser model for the first stage.", "The second stage takes roughly 2 days on 4 RTX 2080 Ti GPUs or 19 hours on a v3-8 cloud TPU.", "Our GPU implementations are based on PyTorch (Paszke et al., 2019) and our TPU implementations on JAX (Bradbury et al., 2018).", "After the second stage finishes, we discard the Condenser head, resulting in a model of the exact same architecture as BERT_base.", "Next, we fine-tune the learned coCondenser to test retrieval performance.", "Following RocketQA, we test on Natural Question and MS-MARCO passage ranking.", "We also report performance on Trivia QA, whose pre-processed version is released with DPR.", "Dataset: We use MS-MARCO passage ranking (Bajaj et al., 2018), Natural Question (NQ; Kwiatkowski et al. (2019)) and Trivia QA (TQA; Joshi et al. (2017)).", "MS-MARCO is constructed from Bing's search query logs and web documents retrieved by Bing.", "Natural Question contains questions from Google search.", "Trivia QA contains a set of trivia questions.", "We report the official metrics MRR@10 and Recall@1000 for MS-MARCO, and Recall at 5, 20, and 100 for NQ and TQA.", "Data Preparation: We use Natural Question, Trivia QA, and Wikipedia as cleaned and released with the DPR toolkit (Karpukhin et al., 2020).", "NQ and TQA each have about 60K training instances after post-processing.", "Similarly, we use the MS-MARCO corpus released with the RocketQA open-source code (Qu et al., 2021).", "For reproducibility, we use the official relevance file, which has about 0.5M training queries, instead of RocketQA's extended one.", "The BM25 negatives for MS-MARCO are taken from the official training triples.", "Training: MS-MARCO models are trained with the Tevatron toolkit (Gao et al., 2022) using AdamW with a 5e-6 learning rate, a linear learning rate schedule, and batch size 64 for 3 epochs.", "NQ and TQA models are trained with the DPR toolkit following the hyperparameters published by Karpukhin et al. (2020).", "All models are trained on one RTX 2080 Ti.", "We added gradient caching to DPR to deal with memory constraints.", "The models are trained only on each task's corresponding training set.", "We note that RocketQA is trained on a concatenation of several datasets (Qu et al., 2021).", "Model Validation: Since validating a dense retrieval checkpoint requires encoding the full corpus, evaluating checkpoints is very costly.", "Due to our computation resource limitation, we follow the suggestion in the DPR toolkit and take the last model training checkpoint.", "We do the same for MS-MARCO.",
"Table 1: Retrieval performance on MS-MARCO dev, Natural Question test, and Trivia QA test.
Method | MS-MARCO Dev MRR@10 | MS-MARCO Dev R@1000 | NQ R@5 | NQ R@20 | NQ R@100 | TQA R@5 | TQA R@20 | TQA R@100
BM25 | 18.7 | 85.7 | - | 59.1 | 73.7 | - | 66.9 | 76.7
DeepCT | 24.3 | 90.9 | - | - | - | - | - | -
docT5query | 27.7 | 94.7 | - | - | - | - | - | -
GAR | - | - | 60.9 | 74.4 | 85.3 | 73.1 | 80.4 | 85.7
DPR | - | - | - | 74.4 | 85.3 | - | 79.3 | 84.9
ANCE | 33.0 | 95.9 | - | 81.9 | 87.5 | - | 80.3 | 85.3
ME-BERT | 33.8 | - | - | - | - | - | - | -
RocketQA | 37.0 | 97.9 | 74.0 | 82.7 | 88.5 | - | - | -
Condenser | 36.6 | 97.4 | - | 83.2 | 88.4 | - | 81.9 | 86.2
DPR-PAQ (BERT_base) | 31.4 | - | 74.5 | 83.7 | 88.6 | - | - | -
DPR-PAQ (BERT_large) | 31.1 | - | 75.3 | 84.4 | 88.9 | - | - | -
DPR-PAQ (RoBERTa_base) | 32.3 | - | 74.2 | 84.0 | 89.2 | - | - | -
DPR-PAQ (RoBERTa_large) | 34.0 | - | 76.9 | 84.7 | 89.2 | - | - | -
coCondenser | 38.2 | 98.4 | 75.8 | 84.3 | 89.0 | 76.8 | 83.2 | 87.3",
"Comparison Systems: We take RocketQA (Qu et al., 2021), the state-of-the-art fine-tuning technique, as our main baseline.", "We borrowed several other baselines from the RocketQA paper, including the lexical systems BM25, DeepCT (Dai and Callan, 2019), DocT5Query (Nogueira and Lin, 2019) and GAR (Mao et al., 2020), and the dense systems DPR (Karpukhin et al., 2020), ANCE (Xiong et al., 2021), and ME-BERT (Luan et al., 2020).", "We also included the concurrent work DPR-PAQ (Oguz et al., 2021), which pre-trains using a 65-million-size synthetic QA pair dataset.", "The pre-training data is created using retriever-reader pairs trained on Natural Question and Trivia QA.", "Designing the synthesis procedure also requires domain knowledge; thus, in the context of this paper, we refer to this as a semi-supervised pre-training method.", "We include 4 DPR-PAQ variants based on the base/large architectures of the BERT/RoBERTa models."
"Finally, we fine-tune a Condenser model produced in the first stage of pre-training.", "Table 1 shows development (dev) set performance for MS-MARCO passage ranking and test set performance for Natural Question and Trivia QA.", "Across the three query sets, dense systems show superior performance compared to sparse systems.", "We also see a big performance margin between systems involving either careful fine-tuning or pre-training (RocketQA, DPR-PAQ, Condenser, coCondenser) and earlier dense systems.", "This result confirms recent findings that low-dimension embeddings possess a strong capacity for dense retrieval, a capacity that is, however, hard to exploit naively.", "coCondenser shows small improvements over RocketQA.", "Importantly, this is achieved with greatly reduced computation and data engineering effort in fine-tuning.", "Notably, on MS-MARCO, coCondenser reduced RocketQA's 4096 batch size to 64 (Table 5).", "A comparison of the two training pipelines of RocketQA and coCondenser can be found in Figure 2.", "The comparison with DPR-PAQ shows several interesting findings.", "Combining large semi-supervised pre-training with the better and larger LM RoBERTa_large, DPR-PAQ achieves the best results on Natural Question.", "On the other hand, when starting from BERT (base/large), DPR-PAQ shows performance similar to coCondenser, which is based on BERT_base.", "This suggests that large-scale semi-supervised pre-training is still the way to go to get the very best performance.", "However, when computational resources are limited and a large pre-training set is missing, the unsupervised coCondenser is a strong alternative.", "On the other hand, as we move to MS-MARCO, where DPR-PAQ's pre-training supervision becomes distant, we observe that DPR-PAQ becomes less effective than RocketQA and coCondenser.", "The comparison between Condenser and coCondenser demonstrates the importance of the contrastive loss in coCondenser: coCondenser can be robustly fine-tuned thanks to its pre-structured embedding space, allowing it to achieve better Recall (fewer false negatives) across all datasets.", "Due to the leaderboard nature of the MS-MARCO Eval set, we cannot run ablation studies on it and have only made two submissions.", "We follow other top-performing systems and add some form of reranker.", "For the passage ranking leaderboard, we rerank the top 1000 retrieved passages with an ensemble of ERNIE (Sun et al., 2020) and RoBERTa.", "We also fine-tuned a coCondenser on the MS-MARCO document ranking dataset.", "As passage retrieval is the focus of this paper, we retrieve based on the first passage of 512 tokens.", "The top 100 are reranked by a fast modular reranker (Gao et al., 2020).", "The performance of the best systems and ours is recorded in Table 2.",
"Table 2: Performance on the MS-MARCO leaderboards with reranking.
MS-MARCO Passage Ranking Leaderboard — Rank | Method | MRR@10
1 | Adaptive Batch Scheduling + CoCondenser | 43.1
2 | coCondenser* | 42.8
MS-MARCO Document Ranking Leaderboard — Rank | Method | MRR@100
1 | UniRetriever | 44.0
2 | coCondenser + MORES+* | 43.6",
"At the time of this paper's submission, both of our systems were the 2nd best on the two leaderboards.", "For passage ranking, we are excited to see that other groups are able to further improve coCondenser with additional fine-tuning techniques.", "For document ranking, we leave the study of retrieval beyond the first passage to future work and refer readers to other leaderboard systems."
"Recall that the two desired properties of coCondenser are local noise resistance to mislabeling and a well-structured, pre-trained embedding space.", "In this section, to investigate the former, we introduce and compare with a knowledge distillation (Hinton et al., 2015) setup in which we substitute the noisy hard labels with soft labels from a cross-encoder.", "For the latter, we measure the quality of the first-stage retriever-mined negatives to see if the coCondenser embedding can help find related but not relevant hard negatives.", "We also provide ablation studies of the loss components and of the pre-training and fine-tuning stages.", "To analyze the local robustness of coCondenser, we introduce a knowledge distillation upper-bound model: instead of training with the noisy labels, we first train a cross-encoder and then fine-tune a coCondenser model using soft labels generated by the cross-encoder.", "Unlike Qu et al. (2021), who use the cross-encoder for filtering, here we directly expose the logits as soft labels.", "Concretely, given cross-encoder $g$, a batch of $M$ queries $\{q_1, q_2, ..., q_M\}$, each paired with $N$ passages (positive and hard negatives) $\{p_{11}, ..., p_{1N}, ..., p_{MN}\}$, for a query $q_l$ we define its soft target distribution $T$ as $T_{ij} = \mathrm{softmax}_j(g(q_l, p_{ij}))$ if $i = l$, else $0$ (15), i.e., the soft labels are the normalized logits from $g$ for the local passages and 0 for the rest.", "Let $S_{ij} = \mathrm{softmax}_{ij}(s(q, p_{ij}))$ be the normalized bi-encoder similarities.", "The loss is defined as the Kullback-Leibler divergence between $S$ and $T$: $\mathcal{L}^{kd} = D_{KL}(S \,\|\, T)$ (16).", "This setup a) focuses on improving labels for local hard negatives and positives, while b) avoiding evaluation of the cross-encoder $g$ for in-batch negatives."
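A sketch of the distillation objective of Equations 15-16 follows. It assumes `bi_scores[i, j] = s(q_l, p_ij)` are the bi-encoder similarities of query q_l to all M×N in-batch passages and `ce_logits[j] = g(q_l, p_lj)` are the cross-encoder logits for its own (local) passage block; the divergence is computed with the soft targets T as the reference distribution.

```python
import torch
import torch.nn.functional as F

def kd_loss(bi_scores: torch.Tensor,  # [M, N] bi-encoder similarities for query q_l
            ce_logits: torch.Tensor,  # [N]    cross-encoder logits for block i = l
            l: int) -> torch.Tensor:
    M, N = bi_scores.shape
    T = torch.zeros_like(bi_scores)
    T[l] = F.softmax(ce_logits, dim=0)                      # Equation 15
    log_S = F.log_softmax(bi_scores.flatten(), dim=0).view(M, N)
    mask = T > 0                                            # drop the 0*log(0) terms
    return (T[mask] * (T[mask].log() - log_S[mask])).sum()  # Equation 16
```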
"In Table 3, we compare coCondenser trained with the original hard labels and with the cross-encoder-generated soft labels.",
"Table 3: Fine-tuning with hard vs. soft labels (MS-MARCO Dev).
Model | Label | MRR@10 | R@100 | R@1K
BERT | Hard | 33.4 | 85.1 | 95.4
coCondenser | Hard | 38.2 | 91.3 | 98.4
coCondenser | Soft | 39.1 | 91.9 | 98.6",
"Using soft labels indeed produces some improvement.", "On the other hand, even without help from the cross-encoder, coCondenser still yields performance within small margins, showing coCondenser's superior local noise resistance.", "Intuitively, a globally better-structured embedding space will be less likely to collapse, producing a more accurate set of mined negatives for the second-stage retriever to learn over.", "In other words, the 2nd-stage retriever will produce fewer unexpected top (hard) negatives.", "To quantitatively measure this, we propose a new metric, top-$n$ neighborhood recall at depth $k$ (nb-recall$_n$@$k$): for a query, the coverage of the 2nd-stage retriever's top $n$ candidates by the 1st-stage retriever's top $k$ candidates, $\text{nb-recall}_n@k = \frac{|\{\text{stage1 top-}k\} \cap \{\text{stage2 top-}n\}|}{n}$ (17); a short computation sketch is given at the end of this section.", "Essentially, this measures how well the mined hard negatives agree with the actual negatives, or simply, how well the 1st-stage retriever locates hard negatives.", "We measure the neighborhood recall of the BERT, Condenser, and coCondenser retrievers, averaged over all MS-MARCO Dev queries, for $n = 50, 100$ and various $k$ values, in Figure 3.", "[Figure 3: MS-MARCO Dev neighborhood recall — nb-recall at n = 50 and n = 100 plotted against depth k for BERT, Condenser, and coCondenser.]", "We see consistently higher nb-recall of Condenser over BERT and of coCondenser over Condenser.", "The former comes from the stronger CLS representation, while the latter is due to the globally better-structured embedding space."
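The neighborhood recall metric of Equation 17 reduces to a few lines; here is a minimal sketch:

```python
from typing import List

def nb_recall(stage1_topk: List[str], stage2_topn: List[str]) -> float:
    """Fraction of the 2nd-stage top-n candidates covered by the 1st-stage top-k."""
    return len(set(stage1_topk) & set(stage2_topn)) / len(stage2_topn)
```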
"In this section, we conduct an ablation study to understand the influence of the second-stage pre-training loss components on the final quality.", "In particular, we consider a Condenser model further pre-trained with only the contrastive loss.", "In Table 4, we see that further pre-training Condenser with only the contrastive loss leads to better recall but similar MRR.", "The contrastive loss learns a better embedding space, but by itself it cannot keep the CLS locally discriminative.", "The original Condenser is able to rank better locally, producing similar MRR with fewer recalled passages.", "When both the contrastive and Condenser MLM losses are used, we see improvements on all metrics.", "This again stresses the importance of the Condenser MLM loss during the second, contrastive learning stage.", "We seek to understand the contribution of each pre-training and fine-tuning stage of the coCondenser retriever.", "We consider the pre-trained Condenser from the first stage and coCondenser from the second stage.", "For each, we consider retrievers trained with and without hard negatives (HN).", "For reference, we also compare with various RocketQA training stages.", "Results are shown in Table 5.",
"Table 5: MS-MARCO Dev performance for various training stages of RocketQA and coCondenser.
Method | Batch Size | MRR@10 | R@1000
RocketQA: Cross-batch negatives | 8192 | 33.3 | -
RocketQA: + Hard negatives | 4096 | 26.0 | -
RocketQA: + Denoising | 4096 | 36.4 | -
RocketQA: + Data augmentation | 4096 | 37.0 | 97.9
coCondenser: Condenser w/o HN | 64 | 33.8 | 96.1
coCondenser: Condenser + Hard negatives | 64 | 36.6 | 97.4
coCondenser: coCondenser w/o HN | 64 | 35.7 | 97.8
coCondenser: coCondenser + Hard negatives | 64 | 38.2 | 98.4",
"We see that each stage of RocketQA is critical.", "As each is added, performance improves steadily.", "On the other hand, this also suggests that the full pipeline has to be executed to get the best performance.", "In comparison, we see that Condenser with hard negatives has performance very close to the full RocketQA system.", "Condenser with hard negatives also has better MRR than coCondenser without hard negatives, meaning that the Condenser from the first pre-training stage is already very strong locally, but the embedding space trained from a relatively cold start is still not optimal, causing global misses.", "Adding the corpus-aware loss, coCondenser without hard negatives has Recall very close to the full RocketQA system, using only a size-64 batch.", "This again confirms our hypothesis that fine-tuning can benefit from a pre-trained passage embedding space.", "Further adding hard negatives, we get the strongest coCondenser system, which is both locally and globally effective.", "Note that all Condenser systems achieve their performance without denoising, showing the superior noise resistance capability learned with the Condenser architecture.", "Practically, our systems also do not require data augmentation, which removes the engineering effort of designing augmentation techniques and defining augmentation data.", "This paper introduces coCondenser, an unsupervised corpus-aware language model pre-training method.", "We demonstrate that proper pre-training can establish not only language understanding ability but also corpus-level representation power.", "Leveraging the Condenser architecture and a corpus-aware contrastive loss, coCondenser acquires two important properties for dense retrieval: noise resistance and a structured embedding space.", "This corpus-aware pre-training needs to be done only once for a search corpus and is query agnostic.", "The learned model can be shared among various types of end-task queries.", "Experiments show that coCondenser can drastically reduce the cost of fine-tuning a strong dense retriever.", "We also find that coCondenser yields performance close to that of semi-supervised pre-trained models that are several times larger.", "Importantly, coCondenser provides a hands-off way to pre-train a very effective LM for dense retrieval.", "In particular, it effectively removes the effort of designing and testing pre-training as well as fine-tuning techniques.", "With our models, practitioners can use limited resources to train dense retrieval systems with state-of-the-art-level performance.", "Future work may also investigate integrating additional pre-training and/or fine-tuning methods to further improve performance.", "This research was supported by NSF grant IIS-1815528.", "Any opinions, findings, and conclusions in this paper are the authors' and do not necessarily reflect those of the sponsors.", "The authors would like to thank Google's TPU Research Cloud (TRC) for access to Cloud TPUs and the anonymous reviewers for the reviews." ]
[ "abstain", "abstain", "method", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "method", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "method", "method", "other", "other", "other", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "abstain", "other", "other", "other" ]
[ "Recent proposed approaches have made promising progress in dialogue state tracking (DST).", "However, in multi-domain scenarios, ellipsis and reference are frequently adopted by users to express values that have been mentioned by slots from other domains.", "To handle these phenomena, we propose a D ialogue S tate T racking with S lot C onnections ( DST-SC ) model to explicitly consider slot correlations across different domains.", "Given a target slot, the slot connecting mechanism in DST-SC can infer its source slot and copy the source slot value directly, thus significantly reducing the difficulty of learning and reasoning.", "Experimental results verify the benefits of explicit slot connection modeling, and our model achieves state-of-the-art performance on MultiWOZ 2.0 and MultiWOZ 2.1 datasets.", "Task-oriented dialogue systems assist users to achieve their certain goals, such as making a restaurant reservation or booking a taxi.", "To fulfill users' goals, dialogue state tracking (DST) is employed to estimate dialogue states at each turn.", "Dialogue states consist of constraints and requests conveyed by user utterances, typically are represented by a set of predefined slots and their corresponding values.", "For instance, the user utterance I am looking for a Korean restaurant in the centre mentions two slots, food and area , whose values are Korean and centre respectively.", "Numerous methods are proposed to tackle the challenge of DST recently, and these methods can be mainly categorized into two types: fixed vocabulary and open vocabulary (Eric et al., 2019).", "Fixed vocabulary models are designed in the paradigm of multi-class classification, relying on a predefined Equal contributions.", "Open vocabulary approaches (Xu and Hu, 2018; Wu et al., 2019; Gao et al., 2019; Ren et al., 2019) break the assumption of predefined ontologies, turning to generate values only given target slots.", "Wu et al. 
"Despite the significant improvements achieved by those open vocabulary models, they suffer from understanding the numerous ellipsis and reference expressions in multi-domain scenarios.", "As shown in Table 1, there are several slot connections across multiple domains and turns.", "For example, at the second turn, the value of the target slot attraction-area is informed by a referring expression, in the same area as the restaurant.", "Thus, the system needs to retrieve the value of its source slot, restaurant-area.", "The last turn shows an obscurer utterance with multiple slot connections, in which the target slots taxi-departure and taxi-destination are implicitly connected to their source slots attraction-name and restaurant-name respectively.", "For those slots that need connections, existing methods attempt to find their values in the lengthy dialogue history, which usually fails because of the high learning complexity.", "In this paper, we formally frame the above challenge as the related-slot problem and propose a novel model, DST-SC (Dialogue State Tracking with Slot Connections), to address it.", "We follow previous work to build a copy-augmented encoder-decoder model.", "Specially, DST-SC is designed with a slot connecting mechanism to explicitly establish the connection between the target slot and its source slot.", "Thus it can take advantage of the source slot value directly instead of reasoning from preceding turns.", "The contributions of this work are two-fold: To the best of our knowledge, this work is the first to discuss the related-slot problem in multi-domain DST and address it by explicitly modeling slot connections across domains.", "We demonstrate that DST-SC is more effective at handling the related-slot problem and outperforms state-of-the-art baselines.", "In this section, we describe the DST-SC model in detail.", "DST-SC is an open vocabulary model based on the encoder-decoder architecture.", "As shown in Figure 1, three components contribute to obtaining the target slot value: (1) word generation from the vocabulary; (2) word copying from the dialogue history; (3) value copying from the source slot.", "To reduce the burden on the decoder, DST-SC is also equipped with a slot gate (Wu et al., 2019) to predict the slot values none and dontcare.", "Our model uses a bi-directional GRU (Cho et al., 2014) to encode the dialogue history $x = \{w_1, w_2, ..., w_m\}$, where $m$ is the number of tokens in the dialogue history.", "Each input token is first embedded using a word embedding function $emb$ and then encoded into a fixed-length vector $h_i$.", "We employ another GRU to decode slot values.", "Each slot is comprised of a domain name and a slot name, e.g., hotel-area.", "While decoding the $j$-th slot $s_j$, its summed embedding is fed as the first input.", "The last hidden state of the encoder initializes the decoder hidden state.", "At decoding step $t$, the hidden state is represented as $\tilde{h}^j_t$.", "(The superscript $j$ will be omitted for simplicity.)", "Following the vanilla attention-based decoder architecture (Bahdanau et al., 2014), $\tilde{h}_t$ is used to apply attention over the encoder outputs and aggregate them into the context vector $c_t$: $a_{ti} = \mathrm{softmax}(f_{mlp}([\tilde{h}_t, h_i]))$ (2), $c_t = \sum_{i=1}^{m} a_{ti} h_i$ (3).", "The distribution for generating token $y_t$ is given by $P_{gen}(y_t) = \mathrm{softmax}(W_{gen}[\tilde{h}_t, c_t])$ (4)."
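A minimal sketch of one decoding step implementing Equations 2-4 is shown below; the module name and layer shapes are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class AttnGenStep(nn.Module):
    def __init__(self, hidden: int, vocab: int):
        super().__init__()
        # f_mlp scores how well decoder state h_t matches encoder output h_i
        self.f_mlp = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.Tanh(),
                                   nn.Linear(hidden, 1))
        self.w_gen = nn.Linear(2 * hidden, vocab)

    def forward(self, h_t: torch.Tensor, H: torch.Tensor):
        # h_t: [hidden] decoder state; H: [m, hidden] encoder outputs
        scores = self.f_mlp(torch.cat([h_t.expand_as(H), H], dim=-1)).squeeze(-1)
        a_t = torch.softmax(scores, dim=0)              # Eq. 2
        c_t = (a_t.unsqueeze(-1) * H).sum(dim=0)        # Eq. 3: context vector
        p_gen = torch.softmax(self.w_gen(torch.cat([h_t, c_t], dim=-1)), dim=-1)  # Eq. 4
        return p_gen, a_t, c_t
```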
"The copy mechanism has been shown to be effective in DST (Lei et al., 2018; Xu and Hu, 2018; Wu et al., 2019).", "Here, we follow Wu et al. (2019) to augment the vanilla attention-based decoder with pointer-generator copying, enabling it to capture slot values that explicitly occur in the dialogue history.", "A soft gate $g_1$ is used to combine the word copying distribution and the generative distribution.", "As claimed in Section 1, connecting the target slot with its source slot helps to decrease the reasoning difficulty.", "Therefore, we enhance the copy-augmented encoder-decoder model with a slot connecting mechanism to model slot correlations directly.", "When decoding the target slot $s_j$, DST-SC infers its source slot from the last dialogue states, then copies its value into the final distribution.", "The last dialogue states are represented by (slot, value) tuples: $\{(s_1, v_1), (s_2, v_2), ..., (s_n, v_n)\}$.", "We use $\tilde{h}_0$ as the query to attend to the potential source slots, where $s_k$ is the summed slot embedding, $k \in \{1, 2, ..., n\} \setminus \{j\}$.", "[Figure 1: DST-SC model architecture (best viewed in color).]", "The attention score $a_k$ measures how related $s_k$ is to the target slot $s_j$.", "It is computed only once, at the first decoding step, and kept consistent for the subsequent tokens in the value $v_k$.", "At the $t$-th decoding step, the $t$-th token $v_{kt}$ contributes to forming the value copying distribution $P_{vc}(y_t)$.", "Similar to the copy-augmented decoder, we combine the value copying distribution and the original distributions using a soft gate $g_2$ to get the final output distribution."
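As described above, the final distribution mixes three components under the two gates; a minimal sketch is given below, assuming scalar gate values in (0, 1) produced elsewhere in the model (the mixing direction of each gate is an assumption).

```python
import torch

def final_distribution(p_gen: torch.Tensor, p_wc: torch.Tensor,
                       p_vc: torch.Tensor, g1: torch.Tensor,
                       g2: torch.Tensor) -> torch.Tensor:
    # g1 blends word copying (P_wc) with generation (P_gen),
    # g2 then blends in value copying from the source slot (P_vc).
    p_context = g1 * p_wc + (1 - g1) * p_gen
    return g2 * p_vc + (1 - g2) * p_context
```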
"To evaluate the effectiveness of DST-SC, we conducted experiments on the MultiWOZ 2.0 (Budzianowski et al., 2018) and MultiWOZ 2.1 (Eric et al., 2019) datasets.", "MultiWOZ 2.0 is a multi-domain dialogue corpus; some annotation errors were corrected in MultiWOZ 2.1.", "We compare DST-SC with several baseline methods.", "FJST and HJST (Eric et al., 2019) apply a separate feed-forward network to classify every single state slot.", "HyST (Goel et al., 2019) is a hybrid approach that combines the joint-tracking fixed vocabulary approach and the open vocabulary approach.", "COMER (Ren et al., 2019) adopts three hierarchically stacked decoders to generate dialogue states.", "In our experiments, we used GloVe (Pennington et al., 2014) and character embeddings (Hashimoto et al., 2017) to initialize word embeddings; each word is represented by a 400-dimensional vector.", "The hidden sizes of all GRU layers are set to 400.", "In the training phase, we used the ground truth prior-turn dialogue states in the slot connecting mechanism.", "The Adam optimizer (Kingma and Ba, 2015) is applied with an initial learning rate of 0.001.", "The learning rate is then reduced by a factor of 0.2, and training stops early when the performance on the validation set has not improved for 6 consecutive epochs.", "We used a batch size of 32 and a dropout rate of 0.2.", "A greedy search strategy is used for decoding, with a maximum of 10 decoded tokens and a 50% probability of teacher forcing.", "Following previous work, we also applied word dropout (Wu et al., 2019) by masking input tokens with a 20% probability.", "All experiments are averaged across 3 seeds.", "We follow previous work and compare performance using joint goal accuracy.", "The joint goal is counted as correct if the predicted state exactly matches the ground truth state for every slot.", "As shown in Table 2, open vocabulary approaches achieve higher accuracy than fixed vocabulary approaches.", "DST-SC achieves state-of-the-art performance on MultiWOZ 2.0 and MultiWOZ 2.1, with joint goal accuracies of 52.24% and 49.58%.",
"Table 2: Joint goal accuracy on MultiWOZ 2.0 and MultiWOZ 2.1.
Model | MultiWOZ 2.0 | MultiWOZ 2.1
FJST | 40.20% | 38.00%
HJST | 38.40% | 35.55%
HyST | 42.33% | 38.10%
COMER | 48.79% | -
TRADE | 50.83% | 48.29%
DST-SC | 52.24% | 49.58%",
"We conducted further related-slot tests to verify the effectiveness of DST-SC in solving the related-slot problem.", "The dataset for the related-slot tests is constructed by manually extracting dialogues with the related-slot problem from the MultiWOZ 2.1 test set.", "We observed that slot connections are common at target slots such as attraction-area, hotel-area, hotel-book day, and so on.", "We only need to focus on the target slot accuracy of turns with slot connections.", "However, some target slots occur infrequently in the extracted dataset.", "Considering that target slots from different domains with the same slot type always correspond to similar slot connection expressions, we can neglect their domains and calculate the accuracy of each slot type instead.", "For example, we can calculate the accuracy of the slot type price instead of calculating the accuracies of hotel-price range and restaurant-price range separately.", "Table 3 lists the slot types and their corresponding target slots.", "To make the tests more convincing, we performed data augmentation to get more samples for each slot type.", "We used two heuristic rules to augment the extracted data and obtained 100 dialogues for each slot type.", "(1) Paraphrasing: we rewrote some utterances to obtain multiple phrasings with the same intent.", "For example, the phrase in the same area as the restaurant can be rewritten as close to the restaurant.", "(2) Replacing values: we replaced some slot values to exclude the influence of overfitting.", "For example, the phrase stay in the east can be replaced with stay in the west.", "As shown in Table 4, DST-SC outperforms TRADE by a large margin on most slot types.", "Case 1 in Table 5 illustrates the advantage of DST-SC explicitly.", "We find that both generation and word copying miss the correct token.", "However, the slot connecting mechanism in DST-SC helps to find the correct source slot and merges its value into $P$ under the control of gate $g_2$.", "Note that there are no obvious improvements on the slot types departure and destination.", "We suspect that this is caused by many missing annotations for attraction-name, hotel-name, and restaurant-name, which usually act as source slots for departure and destination.", "The absence of this critical information makes DST-SC pay less attention to values from source slots.", "As shown in case 2 in Table 5, even when the slot connection mechanism has inferred the correct source slot, the unconfidence of $g_2$ leads to a final incorrect output.", "Traditional approaches for dialogue state tracking (Henderson et al., 2014b; Sun et al., 2014; Zilka and Jurccek, 2015; Mrksic et al., 2015) rely on manually constructed semantic dictionaries to extract features from input text, known as delexicalisation.", "These methods are vulnerable to linguistic variations and difficult to scale."
"To overcome these problems, Mrksic et al. (2017) propose the first data-driven model for DST; the deep learning approaches it employs provide stronger representation learning ability.",
"Table 4: Slot type accuracy on the related-slot tests.
Model | area | day | departure | destination | people | price | time
TRADE | 49.33% | 16.00% | 49.66% | 48.33% | 12.00% | 26.33% | 86.66%
DST-SC | 86.33% | 92.00% | 46.66% | 48.66% | 87.00% | 53.33% | 87.33%",
"By sharing parameters among slots (Ren et al., 2018; Zhong et al., 2018; Nouri and Hosseini-Asl, 2018), the model is further improved to track rare slot values.", "These approaches are all designed in the paradigm of multi-class classification over predefined slot value candidates and are usually referred to as fixed vocabulary approaches.", "Fixed vocabulary approaches always require a predefined ontology, which is usually impractical.", "Their applications are usually limited to a single domain.", "Therefore, several open vocabulary approaches in the generative fashion (Xu and Hu, 2018; Wu et al., 2019; Gao et al., 2019; Ren et al., 2019) have been proposed to handle unlimited slot values in more complicated dialogues.", "Open vocabulary models show promising performance in multi-domain DST.", "However, the ellipsis and reference phenomena among multi-domain slots are still underexplored in the existing literature.", "In this paper, we highlight a regularly appearing yet rarely discussed problem in multi-domain DST, namely the related-slot problem.", "We propose a novel dialogue state tracking model, DST-SC, which is equipped with a slot connecting mechanism to build slot connections across domains.", "Our model achieves significant improvements on two public datasets and shows effectiveness on related-slot problem tests.", "Annotation complements for the MultiWOZ dataset in the future might enable DST-SC to handle the related-slot problem more effectively and further improve the joint accuracy.", "We would like to thank the anonymous reviewers for their constructive comments.", "This work is supported by the National Natural Science Foundation of China (Nos. 61976114 and 61936012) and the National Key R&D Program of China (No. 2018YFB1005102)." ]
[ "abstain", "abstain", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "result", "objective", "result", "abstain", "other", "other" ]
[ "Generative models for dialog systems have gained much interest because of the recent success of RNN and Transformer based models in tasks like question answering and summarization.", "Although the task of dialog response generation is generally seen as a sequence to sequence (Seq2Seq) problem, researchers in the past have found it challenging to train dialog systems using the standard Seq2Seq models.", "Therefore, to help the model learn meaningful utterance and conversation level features, Sordoni et al. (2015b); Serban et al. (2016) proposed Hierarchical RNN architecture, which was later adopted by several other RNN based dialog systems.", "With the transformer-based models dominating the seq2seq problems lately, the natural question to ask is the applicability of the notion of hierarchy in transformer based dialog systems.", "In this paper, we propose a generalized framework for Hierarchical Transformer Encoders and show how a standard transformer can be morphed into any hierarchical encoder, including HRED and HIBERT like models, by using specially designed attention masks and positional encodings.", "We demonstrate that Hierarchical Encoding helps achieve better natural language understanding of the contexts in transformer-based models for task-oriented dialog systems through a wide range of experiments.", "The code and data for all experiments in this paper has been open-sourced 1 2 .", "Dialog systems are concerned with replicating the human ability to make conversation.", "In a generative dialog system, the model aims at generating coherent and informative responses given a dialog Equal Contributions 1 Experiments in this paper: https://github.com/ bsantraigi/HIER 2 PyTorch implementation of Hierarchical Transformer Encoder: https://github.com/bsantraigi/ hier-transformer-pytorch context and, optionally, some external information through knowledge bases (Wen et al., 2017) or annotations e.g. belief states, dialog acts etc. 
"A dialog is usually represented as a series of utterances.", "However, it is not sufficient to view each utterance independently when engaging in a conversation.", "In a dialogue between humans, the speakers communicate both utterance-level and dialog-level information.", "E.g., dialog intent often cannot be detected by looking at a single utterance, whereas dialog acts are specific to each utterance and change throughout a conversation.", "Intuitively, we can instruct the model to achieve utterance-level and dialog-level understanding separately through a hierarchical encoder (Serban et al., 2016).", "There has been a lot of interest in the past in using the Hierarchical Encoder-Decoder (HRED) model for encoding utterances in many RNN-based dialog systems.", "However, since the rise of Transformers and self-attention (Vaswani et al., 2017), the use of hierarchy has not been explored further for transformer-based dialog models.", "Past research and user studies have also shown that hierarchy is an important aspect of human conversation (Jurafsky, 2000).", "But most previous transformer-based works have focused on training models either as language models (Budzianowski and Vulic, 2019; Zhang et al., 2020b) or as standard (non-hierarchical) Seq2Seq models (Chen et al., 2019; Zhang et al., 2020a; Wang et al., 2020) with certain task-specific extensions.", "This paper bridges these two popular approaches, transformers and hierarchical encoding for dialog systems, to propose a family of Hierarchical Transformer Encoders.", "Although, arguably, the self-attention mechanism of standard encoders might automatically learn such a scheme during the training process, our empirical results show that forcing this inductive bias by manual design, as proposed here, leads to better performing models.", "Our contributions in this paper include: We propose a generalized framework for hierarchical encoders in transformer-based models that covers a broader range of architectures, including existing encoding schemes like HRED/HIBERT (Zhang et al., 2019) and possibly other novel variants.", "We call a member of this family of hierarchical transformer encoders an HT-Encoder.", "Then, we formulate a straightforward algorithm for converting an implementation of the standard transformer encoder into an HT-Encoder by changing the attention mask and the positional encoding.", "Building upon that, we show how an HRED/HIBERT-like hierarchical encoder (HIER-CLS) can be implemented within our HT-Encoder framework.", "We also showcase a novel HT-Encoder-based model, called HIER, with a context encoding mechanism different from HRED's.", "We show that these simple HT-Encoder-based baselines achieve at-par or better performance than many recent models with more sophisticated architectures or training procedures.", "We make a thorough comparison with many recently proposed models in four different experimental settings for the dialog response generation task.", "We further apply HT-Encoder to a state-of-the-art model, Marco (Wang et al., 2020), for task-oriented dialog systems and obtain improved results.", "Formally, the task of a dialog system is to predict a coherent response, $r$, given a dialog context $c$."
"In the case of a goal-oriented dialog system, the context $c$ might consist of the dialog history, $C_t = [U_1, S_1, ..., U_t]$, and optionally a belief state (dialog act, slot values, intent, etc.) $b_t$, when available.", "Here, $U_i$ and $S_i$ represent the user and system utterances at turn $i$, respectively.", "The actual target response following $C_t$ is the system utterance $S_t$.", "Like the original HRED architecture, HT-Encoder also has two basic components, a shared utterance encoder and the context encoder.", "The shared utterance encoder, or Shared Encoder in short, is the first phase of the encoding process, where each utterance is processed independently to obtain utterance-level representations.", "In the second phase, the Context Encoder is used to process the full context together.", "These context-level representations are then used for tasks like dialog state tracking or response generation.", "We propose two different types of hierarchical encoding schemes for the transformer model.", "1. HIER-CLS: When Serban et al. (2016) employed a hierarchical encoder for dialog contexts, they obtained a single representative embedding, usually the final hidden state of an RNN, for each utterance.", "Similarly, in HIER-CLS, the context encoder utilizes only a single utterance embedding for each utterance.", "We do this by taking the contextual embedding of the first token (often termed the CLS token in transformer-based models) of each utterance.", "2. HIER: Recent works have shown the importance of contextual word embeddings.", "In HIER, we consider the contextual embeddings of all utterance tokens as input to the context encoder.", "We simply concatenate the whole sequence of contextual embeddings and forward it to the context encoder.", "In this section, we show how the two-step process of hierarchical encoding can be achieved using a single standard transformer encoder.", "If we want to have an $M$-layer utterance encoder followed by an $N$-layer context encoder, we start with an $(M + N)$-layer standard encoder.", "Then, by applying two separate masks as designed below, we convert the standard encoder into an HT-Encoder.", "First, we need to encode the utterances independently.", "Within the self-attention mechanism of a transformer encoder, which token gets to attend to which other tokens is controlled by the attention mask.", "If we apply a block-diagonal mask, with each block sized to the length of an utterance (as shown in Figure 2, bottom-left), to the concatenated sequence of tokenized utterances, we effectively achieve the same process of utterance encoding.", "We call this block-diagonal mask for utterance encoding the UT-Mask.", "Similarly, another attention mask (CT-Mask) governs the context encoding phase, allowing tokens to attend beyond their respective utterance boundaries.", "See the two matrices on the right of Figure 2 for examples of such CT-Masks.", "From here, it can be quickly concluded that if we apply the UT-Mask for the first few layers of the encoder and the CT-Mask in the remaining layers, we effectively have a hierarchical encoder.", "The CT-Mask also gives us more freedom over what kind of global attention we want to allow during context encoding.", "Positional encoding is applied once before the utterance encoder (local PE) and once more before the context encoder (global PE).", "UT-Mask and Local Positional Encoding: The steps for obtaining the UT-Mask and the positional encoding for the utterance encoder are given below and are accompanied by Figure 2.", "$C$ is the dialog context to be encoded, and $w_{ij}$ is the $j$-th token of the $i$-th utterance.", "In $CI$, each index $i$ is repeated $|u_i|$ (length of $u_i$) times, and $CIR$ is a square matrix created by repeating $CI$.", "$PI$ has the same dimensions as $CI$, and it stores the position of each token $w_{ij}$ in context $C$, relative to utterance $u_i$.", "$P: I \mapsto \mathbb{R}^d$ is the positional encoding function that takes an index (or indices) and returns their $d$-dimensional positional embedding.", "$A$ is the UT-Mask for the given context $C$ and its utterance indices $CI$, and $\mathbb{1}(\cdot)$ is an indicator function that returns true when the input logic holds, applied to a matrix or vector element-wise: $C = [w_{11}, w_{12}, ..., w_{T l_T}]$, $CI = [0, ..., 0, 1, ..., 1, ..., T]$, $PI = [0, 1, ..., l_1 - 1, 0, ..., l_2 - 1, ..., l_T - 1]$, $CIR = \mathrm{repeat}(CI, \mathrm{len}(CI), 0)$, $A = \mathbb{1}\big(2 \cdot CIR == (CIR^{\top} + CIR)\big)$, $P_c = P[PI, :]$.", "An example instance of this process is given in Figure 2."
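The UT-Mask equations above translate directly into a few lines of tensor code; here is a runnable sketch (function name is illustrative), assuming `lengths` holds the token count l_i of each utterance in the context:

```python
import torch

def ut_mask_and_positions(lengths):
    # CI: utterance index of every token; PI: token position inside its utterance
    CI = torch.cat([torch.full((l,), i, dtype=torch.long)
                    for i, l in enumerate(lengths)])
    PI = torch.cat([torch.arange(l) for l in lengths])
    CIR = CI.unsqueeze(0).repeat(len(CI), 1)        # CIR = repeat(CI, len(CI), 0)
    A = 2 * CIR == (CIR.t() + CIR)                  # A = 1(2*CIR == CIR^T + CIR)
    return A, PI                                    # P_c is then P[PI, :]

# Example: utterance lengths [2, 3, 2] give a 7x7 block-diagonal boolean mask.
A, PI = ut_mask_and_positions([2, 3, 2])
```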
"[Figure: HT-Encoder architecture — utterances encoded independently by the shared utterance encoder, followed by the context encoder and the response decoder.]", "CT-Masks for Models: The attention masks for context encoding depend on the choice of model architecture.", "We provide the details of the architectures and their attention masks used in our experiments in the subsequent section.", "Other masks are possible, but these are the ones we found to work best in their respective settings.", "We propose several model architectures to test the effectiveness of the proposed HT-Encoder in various experimental settings.", "These architectures are designed to fit well with the four experimental settings (see Section 3.1) of the response generation task of the MultiWOZ dataset in terms of input and output.", "The tested model architectures are as follows.", "Using the HIER encoding scheme described in Section 2.1, we test two model architectures for response generation, namely HIER and HIER++.", "HIER: HIER is the most straightforward model architecture, with an HT-Encoder replacing the encoder in a Transformer Seq2Seq.", "The working of the model is shown in Figure 3a.", "First, in the utterance encoding phase, each utterance is encoded independently with the help of the UT-Mask.", "In the second half of the encoder, we apply a CT-Mask as depicted by the figure's block attention matrix.", "Block $B_{ij}$ is a matrix which, if all ones, means that utterance $i$ can attend to utterance $j$'s contextual token embeddings.", "The local and global positional encodings are applied as explained in Section 2.2.", "A standard transformer decoder follows the HT-Encoder for generating the response.", "The CT-Mask for HIER was obtained experimentally after trying a few other variants.", "The intuition behind this mask is that the model should reply to the last user utterance in the context.", "Hence, we design the attention mask to apply cross attention between all the utterances and the last utterance (see Figure 3a)."
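Putting the two masks together, a high-level sketch of the HT-Encoder forward pass follows (the interface is an assumption for illustration, not the released implementation):

```python
import torch
import torch.nn as nn

def ht_encode(layers: nn.ModuleList, x: torch.Tensor, ut_mask: torch.Tensor,
              ct_mask: torch.Tensor, local_pe: torch.Tensor,
              global_pe: torch.Tensor, M: int) -> torch.Tensor:
    # Note: PyTorch's src_mask marks *disallowed* positions with True, so we
    # pass the negation of the allow-masks built earlier.
    h = x + local_pe                     # local PE, restarting at each utterance
    for layer in layers[:M]:             # phase 1: shared utterance encoder
        h = layer(h, src_mask=~ut_mask)
    h = h + global_pe                    # global PE over the whole context
    for layer in layers[M:]:             # phase 2: context encoder
        h = layer(h, src_mask=~ct_mask)
    return h

# Example wiring with standard layers (x: [seq_len, batch, d_model]):
# layers = nn.ModuleList([nn.TransformerEncoderLayer(d_model=256, nhead=4)
#                         for _ in range(M + N)])
```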
"HIER++: HIER++ is the extended version of the HIER model, as shown in Figure 3b, that also takes the dialog act label as input.", "The dialog act representation proposed in Chen et al. (2019) consists of the domain, act, and slot values.", "A linear feed-forward layer (FFN) acts as the embedding layer for converting their 44-dimensional multi-hot dialog act representation.", "The output embedding is added to the input token embeddings of the decoder in the HIER++ model.", "Similar to HDSA, we also use ground truth dialog acts during training, and predictions from a fine-tuned BERT model during validation and testing.", "HIER++ is applied to the Context-to-Response generation task of the MultiWOZ dataset.", "HIER-CLS: As described in Section 2.1, the encoding scheme of HIER-CLS is more akin to the HRED (Serban et al., 2016) and HIBERT (Zhang et al., 2019) models.", "It differs from HIER++ only with respect to the CT-Mask.", "Ablations: To understand the individual impact of the UT-Mask and CT-Mask, we ran the same experiments with the following model ablations.", "1. SET: HIER without the context encoder.", "Each utterance is encoded independently.", "This shows the importance of context encoding.", "Effectively, this model is only the shared utterance encoder (SET) applied to each utterance independently.", "2. MAT: HIER without the utterance encoder.", "This model uses only the context encoder, following the context attention mask of Figure 3a.", "As this is equivalent to a simple transformer encoder with a special attention mask, we call it the Masked Attention Transformer, or MAT.", "3. SET++: An alternative version of SET with dialog-act input to the decoder, similar to HIER++.", "HIER-Joint: Finally, we propose the HIER-Joint model, suitable for the end-to-end response generation task of the MultiWOZ dataset (a block diagram of the HIER-Joint model is provided in the supplementary material).", "The HIER-Joint model comprises an HT-Encoder and three transformer decoders for decoding the belief state sequence, the dialog act sequence, and the response.", "It is jointly trained to predict all three sequences simultaneously.", "As belief state labels can help dialog-act generation, and similarly both belief and act labels can assist response generation, we pass the token embeddings from the belief decoder and the act decoder to the response decoder.", "The act decoder receives the mean token embedding from the belief decoder, too.", "Our implementation is based on the PyTorch library.", "All the models use a vocabulary of size 1,505.", "We generate responses using beam search (https://github.com/OpenNMT/OpenNMT-py/tree/master/onmt/translate) with beam width 5."
"The model optimizes a cross-entropy loss.", "Full details of the model parameters are given in the supplementary material.", "Dataset: We use MultiWOZ v2.0 (Budzianowski et al., 2018; https://github.com/budzianowski/multiwoz/blob/master/data/MultiWOZ_2.0.zip), a multi-domain task-oriented dataset.", "It contains a total of 10,400 English dialogs divided into training (8,400), validation (1,000) and test (1,000) sets.", "Each turn in the dialog is considered a prediction problem, with all utterances up to that turn as the context (see the supplementary material for more details).", "Baselines: To fully grasp the effectiveness of our proposed approaches, we consider several baseline models with varying complexity and architectures.", "Token-MoE (Pei et al., 2019) is a token-level mixture-of-experts (MoE) model.", "It builds upon the base architecture of an LSTM Seq2Seq with soft attention.", "In the decoding phase, it employs $k$ expert decoders and a chair decoder network that combines the outputs from the experts.", "Attn-LSTM (Budzianowski et al., 2018) uses an LSTM Seq2Seq model with attention over the encoded context utterances, the oracle belief state, and DB search results.", "The HRED model (Serban et al., 2017) is based on the same idea of hierarchical encoding in RNN Seq2Seq networks (results source: Peng et al., 2019, 2020b).", "The transformer-based baseline (Vaswani et al., 2017) concatenates the utterances in the dialog context to obtain a single source sequence and treats the task as a sequence transduction problem.", "HDSA (Chen et al., 2019) uses a dialog act graph to control the state of the attention heads of a Seq2Seq transformer model.", "Zhang et al. (2020a) propose to augment the training dataset by building a one-to-many state-to-action map, so that the system can learn a more balanced distribution for the action prediction task.", "Using this method, they train a domain-aware multi-decoder (DAMD) network for predicting belief, action, and response jointly.", "As each agent response may cover multiple domains, acts, or slots at the same time, Marco (Wang et al., 2020) learns to generate the response by attending over the predicted dialog act sequence at every step of decoding.", "SimpleTOD (Hosseini-Asl et al., 2020) and SOLOIST (Peng et al., 2020a) are both based on the GPT-2 (Radford et al., 2019) architecture.", "The main difference between these two architectures is that SOLOIST further pre-trains the GPT-2 model on two more dialog corpora before fine-tuning on the MultiWOZ dataset.", "Following the literature (Zhang et al., 2020a; Peng et al., 2020a), we now consider four different settings for evaluating the strength of hierarchical encoding.", "1. No Annotations: First, to gauge the benefit of using a hierarchical encoder in a Transformer Seq2Seq model, we compare the performance of HIER to other baselines, including HRED and the vanilla Transformer, without any belief state or dialog act annotations.", "2. Oracle Policy: In this setting, several recently proposed model architectures for the response generation task of MultiWOZ are compared against each other in the presence of ground truth belief state and dialog act annotations.", "This experiment helps us understand the models' capabilities for generating good responses (BLEU score) when true belief states and/or dialog acts are available to them.", "3. Context-to-Response: The model is given true belief states and DB search results in this experiment, but it needs to generate the dialog act and response during inference."
"3. Context-to-Response: The model is given true belief states and DB search results in this experiment, but it needs to generate the dialog act and response during inference.", "Some of the baselines generate the dialog act as an intermediate step in their architecture, whereas others use a fine-tuned BERT model.", "4. End-to-End: This is the most realistic evaluation scheme, where a model has to predict both belief states and dialog acts (or one of these, as per the model's input requirements) for searching the DB or generating the response.", "We used the official evaluation metrics released by the authors of the MultiWOZ dataset (Budzianowski et al., 2018): Delexicalized-BLEU score, INFORM rate (measures how often the entities provided by the system are correct), SUCCESS rate (reflects how often the system is able to answer all the requested attributes), Entity-F1 score (Wen et al., 2017) (measures the entity coverage accuracy), and Combined Score (S = BLEU + 0.5 * (Inform + Success); see the worked example below) to measure the overall quality.", "Training: Cross-entropy losses over the ground-truth response and/or belief and act sequences are used for training the models.", "We did a hyperparameter search using the Optuna library (Akiba et al., 2019) by training the model for up to 5 epochs.", "Final models were trained for up to 30 epochs with early stopping.", "For the four different experimental settings discussed in Section 3.1, we showcase results in Tables 2 through 5.", "Table 2 shows the results from our experiments when no oracle is present.", "By comparing the performance of the Transformer, SET, and MAT baselines against that of HIER, we can see that in each case HIER improves in terms of BLEU, Success, and overall Score.", "HIER being better than SET and MAT implies that the UT-Mask or the CT-Mask alone is not sufficient; the full HT-Encoder scheme is necessary for the improvement.", "The exception to these improvements is the SET model, which has the highest Inform score of 76.80.", "However, we observe that it is the combination of the BLEU and Inform scores that depicts the real quality of the responses.", "As BLEU measures the precision of n-grams and Inform measures the recall of task-related entities, only when both metrics increase do we get a better-performing model.", "This is reflected to some extent in the Entity-F1 score (the harmonic mean of entity recall and precision), but it too ignores tokens other than task-related entities.", "So SET having only a higher Inform score may mean that it is over-predicting some entities, leading to improved recall.", "In the Context-to-Response generation task with oracle policy (Table 3), our HIER++ and HIER-CLS models show very strong performance: they beat the HDSA model (in terms of Inform and Success rates) and even the GPT-2-based baseline SimpleTOD (in terms of BLEU and Success rate).", "This shows that, without the intricacies of the baselines, just by applying a hierarchical-encoder-based model we are able to perform almost at the level of the state-of-the-art model.", "Compared to HIER, SimpleTOD utilizes GPT-2's pretraining, and DAMD uses attention over previous belief states and action sequences, whereas HIER's access to the oracle policy is only through the average embedding of its tokens.", "Further, in Table 5, we compare the end-to-end generation performance of HIER-Joint with baseline models that can perform belief-state and/or dialog act generation.", "In terms of BLEU and Combined Score, HIER-Joint performs better than the baselines.", "With respect to Inform and Success, the model outperforms the DAMD baseline.", "While the above experiments focus on establishing the base performance of the proposed response generation models (HIER, HIER++, HIER-CLS, and ablations), HT-Encoder can be applied to any model that uses a standard transformer encoder.", "Hence, in a final experiment (Table 6), we integrate HT-Encoder with an existing state-of-the-art model, Marco.", "We replace the standard transformer in Marco with an HT-Encoder and rerun the context-to-response generation experiment.",

Table 2: Simplest baselines in the absence of both belief and policy / dialog act annotations.
Model       | BLEU  | Entity-F1 | Inform | Success | Score
HRED        | 17.50 | -         | 70.7   | 60.9    | 83.3
TokenMoE    | 16.81 | -         | 75.30  | 59.70   | 84.31
Transformer | 19.1  | 55.1      | 71.1   | 59.9    | 84.60
SET         | 18.67 | 51.61     | 76.80  | 57.69   | 85.92
MAT         | 18.86 | 54.89     | 71.9   | 52.5    | 81.06
HIER        | 20.91 | 54.45     | 73.60  | 60.10   | 87.76

Table 3: Context-to-Response generation with oracle policy.
Model     | Pretraining | Belief | DB     | Policy | BLEU  | Entity-F1 | Inform | Success | Score
SimpleTOD | GPT-2       | Oracle | Oracle | Oracle | 17.78 | -         | 93.4   | 83.2    | 106.08
SimpleTOD | GPT-2       | Oracle | -      | Oracle | 18.61 | -         | 92.3   | 85.8    | 107.66
HDSA      | -           | Oracle | Oracle | Oracle | 30.4  | 86.2      | 87.9   | 78.0    | 113.4
DAMD      | -           | Oracle | Oracle | Oracle | 27.3  | -         | 95.4   | 87.2    | 118.5
SET++     | -           | -      | -      | Oracle | 25.56 | 82.27     | 85.7   | 74.3    | 105.56
HIER++    | -           | -      | -      | Oracle | 29.54 | 85.01     | 88.3   | 85.4    | 116.39
HIER-CLS  | -           | -      | -      | Oracle | 29.29 | 84.23     | 88.3   | 85.9    | 116.39

"Introducing HT-Encoder into Marco improves the Inform (marginally), Success, and Combined Score metrics.", "The results of this experiment show that HT-Encoder is suitable for any model architecture.", "Overall, our experiments show how useful the proposed HT-Encoder module can be for dialog systems built upon the transformer encoder-decoder architecture.", "It is also applicable to tasks where the input sequence can be split into an abstract set of subunits (e.g., search history in Sordoni et al.'s application).", "We believe that our proposed approach for hierarchical encoding in transformers, and the algorithm for converting the standard transformer encoder, make it an invaluable but accessible resource for future researchers working on dialog systems or similar problem statements with transformer-based architectures.",

Table 5: End-to-End: belief state predicted by the model itself.
Model      | Pretraining | Belief | DB     | Policy | BLEU  | Entity-F1 | Inform | Success | Score
DAMD       | -           | Gen*   | Oracle | Gen    | 16.60 | -         | 76.40  | 60.40   | 85.00
SimpleTOD  | GPT-2       | Gen    | -      | Gen    | 15.01 | -         | 84.4   | 70.1    | 92.26
SOLOIST    | GPT-2, DC   | Gen    | -      | Gen    | 16.54 | -         | 85.50  | 72.90   | 95.74
HIER-Joint | -           | Gen    | -      | Gen    | 19.74 | 53.94     | 80.5   | 71.7    | 95.84

"Task-Oriented Dialog Systems: Researchers identify four different subtasks for any task-oriented dialog system (Wen et al., 2017): natural language understanding (NLU), dialog state tracking (DST), dialog act or policy generation, and natural language generation (NLG).", "Before the advent of large-scale Seq2Seq models, researchers focused on building feature-rich models with rule-based pipelines for both natural language understanding and generation.", "This usually required separate utterance-level and dialog-level NLU feature extraction modules.", "These NLU features decide the next dialog act that the system should follow.", "This act is then converted into a natural language response using the NLG module.",
"Young et al. (2013) modeled this problem as a Markov Decision Process whose state comprised various utterance and dialog features detected by an NLU module.", "However, such models had the usual drawback of any pipelined approach: error propagation.", "Wen et al. (2017) proposed using neural networks for extracting features like intent, belief states, etc., and training the NLU and NLG modules end-to-end with a single loss function.", "Marco (Wang et al., 2020) and HDSA (Chen et al., 2019) used a fine-tuned BERT model as their act predictor, as it often outperforms other ways of training the dialog policy network (even joint learning).", "HDSA is a transformer Seq2Seq model with act-controllable self-attention heads (in the decoder) to disentangle the individual tasks and domains within the network.", "Marco uses soft attention over the act sequence during the response generation process.", "Hierarchical Encoders: The concept of hierarchical encoders has been used in many different contexts in the past.", "It is best known in the area of dialog response generation through the HRED model.", "Many open-domain dialog systems have used the hierarchical recurrent encoding scheme of HRED for various tasks and architectures.", "The hierarchical encoder was first proposed by Sordoni et al. (2015a) for use in a query suggestion system.", "They used it to encode the user history, comprising multiple queries, with a hierarchical LSTM network.", "Serban et al. (2016) extended this work to open-domain dialog generation problems and proposed the HRED network.", "HRED captures the high-level features of the conversation in a context RNN.", "Several later models have adopted this approach, e.g., VHRED (Serban et al., 2017), CVAE (Zhao et al., 2017), and DialogWAE (Gu et al., 2018).", "Another area in which researchers have proposed the use of hierarchical encoders is the processing of paragraphs or long documents.", "Li et al. (2015) used a hierarchical LSTM network for training an autoencoder that can encode and decode long paragraphs and documents.", "Zhang et al. (2019) proposed HIBERT, which introduces hierarchy into the BERT architecture to remove the limitation on the length of the input sequence.", "HIBERT samples a single vector for each sentence or document segment (usually the contextual embedding of the CLS or EOS token) from the sentence encoder to be passed on to the higher-level transformer encoder.", "Liu and Lapata (2019) apply a similar approach for encoding documents in a multi-document summarization task.", "This paper explored the use of hierarchy in transformer-based models for task-oriented dialog systems.", "We started by proposing a generalized framework for Hierarchical Transformer Encoders (HT-Encoders).", "Using that, we implemented two models: one new model called HIER, and another, HIER-CLS, obtained by adapting the existing HIBERT architecture to our framework.", "We thoroughly experimented with these models in four different response generation tasks of the MultiWOZ dataset.", "We compared the proposed models with an exhaustive set of recent state-of-the-art models to thoroughly analyze the effectiveness of HT-Encoders.", "We empirically show that the basic transformer Seq2Seq architecture, when equipped with an HT-Encoder, outperforms many of the state-of-the-art models in each experiment.", "We further prove its usefulness by applying it to an existing model, Marco.", "This work opens up a new direction for hierarchical transformers in dialogue systems, where complex dependencies exist between the utterances.", "It would also be beneficial to explore the effectiveness of the proposed HT-Encoder when applied to various other tasks." ]
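The UT-Mask/CT-Mask scheme behind SET, MAT, and HIER can be illustrated with a short PyTorch sketch. This is a minimal illustration under stated assumptions, not the authors' released code: the function names, tensor layout, and the particular CT-Mask pattern shown (every token may additionally attend to the final token of each utterance) are assumptions chosen for concreteness and may differ from the exact pattern in the paper.

```python
# Minimal sketch of hierarchical encoding via attention masks (UT-Mask / CT-Mask).
# Assumption: each token carries an utterance id; the CT-Mask variant below is
# one plausible choice, not necessarily the one used by HIER or HIER-CLS.
import torch
import torch.nn as nn

def ut_mask(utt_ids: torch.Tensor) -> torch.Tensor:
    # utt_ids: (S,) utterance index of each token in the flattened dialog.
    # Returns an (S, S) bool mask where True means attention is NOT allowed
    # (PyTorch's convention for a boolean src_mask).
    same_utt = utt_ids.unsqueeze(0) == utt_ids.unsqueeze(1)
    return ~same_utt  # tokens may only attend within their own utterance

def ct_mask(utt_ids: torch.Tensor) -> torch.Tensor:
    S = utt_ids.shape[0]
    allowed = utt_ids.unsqueeze(0) == utt_ids.unsqueeze(1)
    # Treat the last token of each utterance as that utterance's summary position.
    counts = torch.unique_consecutive(utt_ids, return_counts=True)[1]
    is_last = torch.zeros(S, dtype=torch.bool)
    is_last[counts.cumsum(0) - 1] = True
    allowed |= is_last.unsqueeze(0)  # every token may also attend to summary tokens
    return ~allowed

utt_ids = torch.tensor([0, 0, 0, 1, 1, 2, 2, 2, 2])  # 3 utterances, 9 tokens
layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
utt_enc = nn.TransformerEncoder(layer, num_layers=2)  # stage 1: shared utterance encoder
ctx_enc = nn.TransformerEncoder(layer, num_layers=2)  # stage 2: context encoder

x = torch.randn(1, 9, 64)                 # (batch, seq, d_model)
h = utt_enc(x, mask=ut_mask(utt_ids))     # UT-Mask only = the SET ablation
h = ctx_enc(h, mask=ct_mask(utt_ids))     # CT-Mask only = the MAT ablation; both = HIER-style
```

As the comments indicate, dropping the second stage recovers a SET-like model, dropping the first recovers a MAT-like model, and chaining both approximates the full HT-Encoder idea of converting a standard transformer encoder via masks alone.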
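The Combined Score used throughout the tables above is simple arithmetic, so a tiny helper can reproduce any row; the function name is illustrative only. The check below recomputes HIER's entry from Table 2.

```python
def combined_score(bleu: float, inform: float, success: float) -> float:
    # S = BLEU + 0.5 * (Inform + Success), the standard MultiWOZ combined metric.
    return bleu + 0.5 * (inform + success)

# HIER's row in Table 2: 20.91 + 0.5 * (73.60 + 60.10) = 87.76
assert round(combined_score(20.91, 73.60, 60.10), 2) == 87.76
```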
[ "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "abstain", "abstain", "method", "result", "objective", "result", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "objective", "objective", "objective", "method", "objective", "result", "method", "objective", "abstain" ]
[ "A typical conversation comprises of multiple turns between participants where they go back-and-forth between different topics.", "At each user turn, dialogue state tracking (DST) aims to estimate user's goal by processing the current utterance.", "However, in many turns, users implicitly refer to the previous goal, necessitating the use of relevant dialogue history.", "Nonetheless, distinguishing relevant history is challenging and a popular method of using dialogue recency for that is inefficient.", "We, therefore, propose a novel framework for DST that identifies relevant historical context by referring to the past utterances where a particular slot-value changes and uses that together with weighted system utterance to identify the relevant context.", "Specifically, we use the current user utterance and the most recent system utterance to determine the relevance of a system utterance.", "Empirical analyses show that our method improves joint goal accuracy by 2.75% and 2.36% on WoZ 2.0 and MultiWoZ 2.0 restaurant domain datasets respectively over the previous state-of-the-art GLAD model.", "Dialog state tracking (DST) is a vital component in the task-oriented dialog systems which is used to estimate user's goals and requests in order to plan next action and respond accordingly.", "At each turn, DST aims to identify the set of goals that a user aims to achieve and requests that are represented as slot-value pairs.", "Typically, this decision is made by considering user utterance in the current turn or system actions in the previous turn.", "However, in many cases, the considered user utterance or system actions do not present enough information and refers to a previous utterance.", "As shown through an example in Figure 1, while exploring different available options, user User: hello, i'm looking for a restaurant with fair prices System: There are 31 places with moderate price range.", "Can you please tell me what kind of food you would like?", "Sys Act: food Turn Label: price range = moderate User: well I want to eat in the North, what's up that way?", "System: I have two options that fit that description, Golden Wok chinese restaurant and Nirala which serves Indian food.", "Do you have a preference?", "Sys Act: food Turn Label: area = north User: Can I have the address and phone number for the Golden Wok chinese restaurant?", "System: The phone number is 01223 350688.", "Turn Label: request = address, phone; food = chinese User: thank you.", "what is the address?", "System: The address is 191 Histon Road Chesterton.", "Turn Label: request = address User: Okay, what about Nirala, what's the address and phone of that?", "System: 7 Milton Road Chesterton and the number is 01223 360966 Turn Label: request = address, phone; food = indian Figure 1: An example dialog from WoZ 2.0 dataset.", "can go back-and-forth between the currently and previously discussed facts.", "For instance, when offered with two different restaurant options namely Nirala (food=indian) and Golden Wok (food=chinese) in the second turn, user first inquires about the details of Golden Wok .", "And after getting relevant details about the Golden Wok in the following two turns, user refers back to the second option provided in second turn and asks about Nirala restaurant.", "To predict the correct slot-value pair food=indian in the dialog state of the fifth turn, the system is required to refer back to the second turn again to find information about Nirala , as the context obtained from the current dialog turn is 
insufficient.", "Identifying such implicitly referenced historical turns is challenging since implicit references are not local and most recent turns are often not informative.", "Therefore, the traditional approach of modeling dialogue recency (El Asri et al., 2017) may not suffice.", "Instead, we propose to model implicit references by storing links to the past turn where each of the slots was modified.", "Then at each turn, we look up though the stored links to find the previous turn which may provide additional cues for predicting the appropriate slot-value.", "Moreover, the dialogue system often asks polar questions with yes-no answers.", "For instance, the DST system should update the dialogue state with food=indian when a user replies Yes to a system utterance Do you want Indian food?", ".", "In such cases, neither the user utterance nor system acts ( food in this example) contain any information about the actual slot-value.", "This makes utilization of both system and user utterance eminent for dialog state tracking.", "However, utilizing the previous system utterance together with the current user utterance always at each turn may add noise.", "Therefore, we use a gating mechanism based on both utterances to determine the relevance of the previous system utterance in the current turn.", "The evaluation shows that identifying the relevant context is essential for dialogue state tracking.", "Our novel model that discerns important details in non-adjacent dialogue turns and the previous system utterance from a dialog history is able to improve the previous state-of-the-art GLAD (Zhong et al., 2018) model on all evaluation metrics for both WoZ and MultiWoZ (restau-rant) datasets.", "Furthermore, we empirically show that a simple self-attention based biLSTM model, using only one-third of the number of parameters as GLAD, outperforms GLAD by identifying and incorporating the relevant context.", "Early work for DST relied on separate Spoken Language Understanding (SLU) module (Hender-son et al., 2012) to extract relevant information from user utterances in a pipelined approach.", "Such systems are prone to error accumulation from a separate SLU module, in absence of necessary dialog context required to interpret the user utterance.", "Thus, later work on DST moved away from separate SLU modules and inferred the dialog state directly from user utterance and dialog history (Hen-derson et al., 2014b,c; Zilka and Jurcicek, 2015).", "These models depend on delexicalization, using generic tags to replace specific slot types and values, and handcrafted semantic dictionaries.", "In practice, it is difficult to scale these models for every slot type and recent state-of-the-art models for DST use deep learning based methods to learn general representations for user and system utterances and previous system actions, and predict the turn state (Henderson et al., 2013, 2014b; Mrksic et al., 2015, 2017; Hori et al., 2016; Liu and Lane, 2017; Dernoncourt et al., 2017; Chen et al., 2016).", "However, these systems are found to perform poorly on rare and unknown slot-value pairs which was recently addressed through local slot-specific encoders (Zhong et al., 2018) and pointer network (Xu and Hu, 2018).", "A crucial limitation to all these approaches lies in the modeling of appropriate historical context, which is simply ignored in most of the works.", "Since user's goal may change back-and-forth between previous values, incorporating relevant historical context is useful in monitoring implicit goal 
references.", "In a recent work, El Asri et al. (2017) discussed on similar limitations of current DST task and introduced a new task of frame tracking that explicitly tracks every slot-values that were introduced during the dialogue.", "However, that significantly complicates the task by maintaining multiple redundant frames that are often left unreferenced.", "Our proposed model, that explicitly track relevant historical user and system utterances, can be easily incorporated into any known DST or frame tracking systems such as Schulz et al. (2017) to replace the recency encoding.", "Similar to previous works, we decompose the multi-label classification problem to binary classification where we score each slot-value pair and select the ones that receive a score above a threshold to be included in the current dialog state.", "To predict the score for a candidate slot-value pair, the model uses the relevant past user utterance (refer-ential utterance), a fused utterance composed using the current user utterance and the system utterance of the previous turn, as well as previous system actions as evidence.", "Shown in Figure 2, our model comprises of: Lookup module: retrieves a link to the turn where each of the slots changes.", "At each step, our system refers to the lookup module that returns the past user utterance (the antecedent user utterance ) Slot-valueEncoder Slot-value Lookup SigmoidWeightedSum User Utt Encoder Sys Utt Encoder Past Utt Encoder System Act Encoder FusionScorer System Act Scorer Past Utt Scorer Past slot-valueScorer Referential Context Scorer Figure 2: The Architecture of Context Aware Dialogue State Tracker.", "GLE modules: Each of the five green modules in Figure 2 is a global-locally self-attentive encoder (GLE module) (Zhong et al., 2018) that encodes each type of evidence into a vector representation ( c ).", "Each input is represented as a sequence of words which is encoded to a vector representation via global-local self-attentive encoder (GLE) module (Zhong et al., 2018).", "Specifically, GLE employs local slot-specific bidirectional LSTMs and a global bidirectional LSTM (Hochreiter and Schmidhuber, 1997) that is shared across all slots for encoding the input sequence into a sequence of hidden states ( H ), followed by a self-attention layer (Lin et al., 2016) to obtain a fixed dimension vector representation ( c ).", "The GLE modules are used to encode the antecedent user utterance ( H up , c up ), the current user utterance ( H u , c u ), the previous system utterance ( H s , c s ), each of the system acts ( H a i , c a i ), as well as the previous slot-value ( H vp , c vp ) and the candidate ( H v , c v ) slot-value.", "Referential Context Scorer: uses the candidate slot value ( c v ), the antecedent user utterance as well as the previous slot-value to determine if the candidate slot value was referenced in the antecedent utterance.", "Specifically, the scorer uses the representation of the candidate slot value c v to attend over hidden states of the antecedent user utterance and the previous slot-value, H up and H vp , and then computes attention weights for each of the hidden states.", "Next, the scorer sums up the hidden states weighed with the calculated attentions to get the summary context (Equation 1).", "Finally, the scorer applies a linear neural layer to calculate the scores y vp and y up representing the likelihoods that the candidate slot-value is different from the previous slot-value and the candidate slot-value was unreferenced in the 
antecedent utterance (Equation 2).", "Q ( H, c ) : a j = ( H j ) (cid:62) c ; p = softmax ( a ) Q ( H, c ) = (cid:88) i p i H i (1) y up = W up Q ( H up , c v ) + b up y vp = W vp Q ( H vp , c v ) + b vp (2) Fusion Scorer: leverages necessary details in the previous system utterance to enrich the current user utterance.", "First, we use a gating mechanism based on c s and c u that determines the relevance of the previous system utterance in the current turn.", "We concatenate c s and c u and use a linear layer with sigmoid activation to calculate the score (Equation 3).", "Then, we use attention from c v over H s and H u to calculate context summaries ( l s , l u ), and combine the summary vectors by taking their normalized weighted sum based on .", "We finally apply a single linear layer to calculate the score y f that determines the likelihood of the candidate slot-value based on both the current user utterance and the previous system utterance (Equation 4).", "f c = W fc ( c s c u ) + b fc = ( W tanh ( f c ) + b ) (3) l s = Q ( H s , c v ) ; l u = Q ( H u , c v ) l f = l s + (1 ) l u ; y f = W lf l f + b lf (4) System Act Scorer: is the same as the action scorer proposed by (Zhong et al., 2018).", "Specifically, The scorer uses attention from c u over C a to calculate action summary followed by a linear layer with sigmoid activation to calculate the score y a that determines the relevance of the candidate slot-value based on the previous system actions (Equation 5).", "It then calculates the final score of the candidate slot-value by taking weighted sum of the four scores ( y up , y vp , y f , y a ) followed by a sigmoid layer, where weights are learned in the network.", "We primarily use WoZ 2.0 (Wen et al., 2017) restaurant reservation task dataset that consists of 1200 dialogues for training and evaluation.", "Each dialogue has an average of eight turns, where each turn contains system utterance transcript , user utterance transcript , turn label and belief state .", "All the dialogue states and actions are based on a task ontology that supports three different informable slot-types namely price range with 4 values, food with 72 values, area with 7 values, and requests of 7 different types like address and phone .", "Following the standard settings, we use 600 dialogues for training, 200 for validation and the remaining 400 for testing.", "We also use dialogues from restaurant domain in MultiWoZ 2.0 dataset (Budzianowski et al., 2018) for secondary evaluation.", "It banks on a significantly complex ontology covering seven informable slot types with 276 different values ( food, price range, restaurant name, area, book time, book day and book people with 97, 6, 105, 8, 43, 8 and 9 values respectively).", "We use standard training, validation and test splits of 1199, 50 and 61 dialogues respectively.", "All the models on WoZ 2.0 are evaluated on the two standard metrics introduced in Henderson et al. 
(2014a).", "First, Joint Goal Accuracy is the percentage of turns in a dialogue where the user's informed joint goals are identified correctly.", "Joint goals are accumulated turn goals up to the current dialog turn.", "Second, Turn Request Accuracy calculates the percentage of turns in a dialogue where the user's requests were correctly identified.", "Models on MultiWoZ 2.0 dataset are evaluated using joint goal and turn inform accuracies, as used by Nouri and Hosseini-Asl (2018).", "We use pretrained GloVe word embeddings (Pen-nington et al., 2014) concatenated with charac-WoZ", "ter n-gram embeddings (Hashimoto et al., 2017) which are kept fixed during the training.", "Each of bi-LSTMs use 200 hidden dimensions.", "All the models are trained using ADAM optimizer (Kingma and Ba, 2014) with the initial learning rate of 0.001.", "Dropout rate (Srivastava et al., 2014) is set to 0.2 for all biLSTM modules and the embedding layer.", "The models are trained for a maximum of 100 epochs with a batch size of 50.", "The validation data was used for early stopping and hy-perparameter tuning.", "Table 1 compares the performance of our proposed models with different baselines, including delexalisation-based model + SD (Wen et al., 2017), DNN and CNN variants of neural belief tracker (Mrksic et al., 2017) and the previous state-of-the-art GLAD systems (Zhong et al., 2018) on WoZ 2.0 dataset.", "We also implement a simplified variant of GLAD, Global BiLSTM Model Approx.", "based GLE , by removing slot-specific local biL-STMs from the GLE encoder.", "We then successively combine it with referential context ( Global biLSTM based GLE + RC ) and the fused previous system utterance ( Global biLSTM based GLE + RC + FS ).", "Finally, we directly incorporate the referential context and gate selected system utterance into the GLAD system ( GLAD + RC + FS ).", "Irrespective of the underlying system, utilizing appropriate context from the previous turns improves the overall performance of a dialogue state tracker on both joint goal and turn request accuracies on WoZ 2.0 dataset.", "First, incorporating relevant referential utterances to identify implicitly mentioned slot-value improves the accuracy of global biLSTM based GLE model on joint goal task by 2.4%.", "Then, gating based mechanism to augment user utterance with relevant information from the previous system utterance further improves the joint goal accuracy by 1.0%.", "Together, they improve joint goal and request accuracy of the global biLSTM based GLE model by 3.4% and 0.2% respectively.", "Furthermore, as evident from the results in Table 2, both referential context and fused system utterance proportionally improve performance on MultiWoZ 2.0 dataset as well with overall improvement of 2.36% and 1.77% on joint goal and turn inform accuracies respectively.", "Performances of all models on MultiWoZ 2.0 are significantly inferior compared to WoZ 2.0 owing to higher complexity, with richer and longer utterances and considerably more slot-values in the former dataset.", "The utilization of relevant context results in sig-nificant reduction in the number of learnable parameters in the model as shown in Table 3.", "Relevant context with the baseline model is able to outperform GLAD while using only one third of the number of learnable parameters.", "The parameters added due to using relevant context are the parameters for encoding the antecedent referential user utterance and the previous system utterance as well as the past utterance and past slot-value 
scorers.", "However, we also observe high variance in the joint goal accuracy.", "Since joint goal is calculated by accumulating turn goals, an error in predicting a turn goal is propagated to all the downstream turns.", "We have presented a novel method for identifying the relevant historical user utterance as well as determining the relevance of the system utterance from the last turn to enrich the current user utterance and improve goal tracking in dialogue systems.", "The experimental results show that discerning relevant context from the dialog history is crucial for tracking dialog states.", "We want to thank our anonymous reviewers for providing insightful review comments." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "other", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "method", "method", "other", "other", "other", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "other" ]
[ "The framing of political issues can influence policy and public opinion.", "Even though the public plays a key role in creating and spreading frames, little is known about how ordinary people on social media frame political issues.", "By creating a new dataset of immigration-related tweets labeled for multiple framing typologies from political communication theory, we develop supervised models to detect frames.", "We demonstrate how users' ideology and region impact framing choices, and how a message's framing influences audience responses.", "We find that the more commonly-used issue-generic frames obscure important ideological and regional patterns that are only revealed by immigration-specific frames.", "Furthermore, frames oriented towards human interests, culture, and politics are associated with higher user engagement.", "This large-scale analysis of a complex social and linguistic phenomenon contributes to both NLP and social science research.", "Framing selects particular aspects of an issue and makes them salient in communicating a message (Entman, 1993).", "Framing can impact how people understand issues, attribute responsibility (Iyengar, 1991), and endorse possible solutions, thus having major implications for public opinion and policy decisions (Chong and Druckman, 2007).", "While past work has studied framing by the news media and the political elite, little is known about how ordinary people frame political issues.", "Yet, framing by ordinary people can influence others' perspectives and may even shape elites' rhetoric (Russell Neuman et al., 2014).", "To shed light on this important topic, we focus on one issueimmigrationand develop a new methodology to computationally analyze its framing on Twitter.", "social media content enables us to compare framing strategies across countries and political ideologies.", "Furthermore, social media provides unique insights into how messages resonate with audiences through interactive signals such as retweets and favorites.", "By jointly analyzing the production and reception of frames on Twitter, we provide an in-depth analysis of immigration framing by and on the public.", "Political communications research has identified numerous typologies of frames, such as issue-generic policy , immigration-specific , and narrative .", "Each of these frame types can significantly shape the audience's perceptions of an issue (Iyengar, 1991; Chong and Druckman, 2007; Lecheler et al., 2015), but prior NLP work seeking to detect frames in mass media (e.g. 
Card et al., 2016; Field et al., 2018; Kwak et al., 2020) has largely been limited to a single issue-generic policy typology.", "Multiple dimensions of framing must be considered in order to better understand the structure of immigration discourse and its effect on public opinion and attitudes.", "We thus create a novel dataset of immigration-related tweets containing labels for each typology to facilitate more nuanced computational analyses of framing.", "This work combines political communication theory with NLP to model multiple framing strategies and analyze how the public on Twitter frames immigration.", "Our contributions are as follows: (1) We create a novel dataset of immigration-related tweets labeled for issue-generic policy, immigration-specific, and narrative frames.", "(2) We develop and evaluate multiple methods to detect each type of frame.", "(3) We illustrate how a message's framing is influenced by its author's ideology and country.", "(4) We show how a message's framing affects its audience by analyzing favoriting and retweeting behaviors.", "Finally, our work highlights the need to consider multiple framing typologies and their effects.", "Framing serves four functions:", "(i) defining problems,", "(ii) diagnosing causes,", "(ii) making evaluative judgments, and (iv) suggesting solutions (Entman, 1993).", "Framing impacts what people notice about an issue, making it a key mechanism by which a text influences its audience.", "Framing Typologies We draw upon distinct typologies of frames that can be applied to the issue of immigration: (1) issue-specific, which identify aspects of a particular issue, or (2) issue-generic, which appear across a variety of issues and facilitate cross-issue comparison (de Vreese, 2005).", "Issue-generic frames include policy frames that focus on aspects of issues important for policymaking, such as economic consequences or fairness and equality (Boydstun et al., 2013).", "Other generic frames focus on a text's narrative; news articles use both episodic frames, which highlight specific events or individuals, and thematic frames, which place issues within a broader social context.", "The use of episodic versus thematic frames can influence the audience's attitudes.", "For example, episodic frames lead audiences to attribute responsibility for issues such as poverty to individual citizens while thematic frames lead them to hold the government responsible (Iyengar, 1991).", "Issue-specific frames for immigration focus on the portrayal of immigrants.", "Our analysis uses Benson (2013)'s set of issue-specific frames, which represent immigrants as heroes (cultural diversity, integration, good workers), victims (humanitarian, global economy, discrimination), and threats (to jobs, public order, taxpayers, cultural values).", "Both issue-specific and generic frames provide unique insights but present advantages and drawbacks.", "While issue-specific frames analysis are specific and detailed, they are hard to generalize and replicate across studies, which is a key advantage for generic frames (de Vreese, 2005).", "Framing effects Studies of framing typically focus on either frame-building or frame-setting (Scheufele, 1999; de Vreese, 2005).", "Frame-building is the process by which external factors, such as a journalist's ideology or economic pressures, influence what frames are used; frame-building studies thus treat framing as the dependent variable.", "Frame-setting studies treat frames as independent variables that impact how an audience interprets 
and evaluates issues.", "news highlight region and ideology as particularly important factors.", "Right-leaning media from conservative regions are more likely to frame immigrants as intruders (van Gorp, 2005), and as threats to the economy and public safety (Fryberg et al., 2012).", "Framing also differs across countries; while the US press emphasizes public order, discrimination, and humanitarian concerns, the French press more frequently frames immigrants as victims of global inequality (Benson, 2013).", "Frame-setting has also been studied in the context of immigration.", "For example, experimental work has shown that frames eliciting angry or enthusiastic emotions impact participants' opinions on immigration (Lecheler et al., 2015).", "While past work has analyzed linguistic framing in Twitter immigration discourse (e.g., de Saint Laurent et al., 2020), little is known about how such framing affects users' interactive behaviors such as resharing content, which is a key objective of frame setting.", "Because many people now generate and consume political content on social media, scholars have increasingly used automated techniques to study framing on social media.", "Large-scale research of framing on Twitter has commonly focused on unsupervised approaches.", "(e.g., Russell Neuman et al., 2014; Meraz and Papacharissi, 2013; de Saint Laurent et al., 2020).", "Such approaches, including those focused on hash-tag analysis, can reveal interesting framing patterns.", "For instance, Siapera et al. (2018) shows that frame usage varies across events.", "Similarly, topic models have been used to compare refugee crisis\" media discourses across the European countries (Heiden-reich et al., 2019), and to uncover differences in attitudes towards migrants (Hartnett, 2019).", "Although lexicon analysis and topic models can provide insights about immigration discourse, here, we adopt a supervised approach to ground our work in framing research and to enable robust evaluation.", "We draw inspiration from a growing body of NLP research that uses supervised approaches to detect issue-generic policy frames in news articles, a task popularized by the Media Frames Corpus (Card et al., 2015), which contains issue-generic policy frame labels for articles across several issues (Boydstun et al., 2013).", "Using this corpus, prior work has detected frames with techniques including logistic regression (Card et al., Frame Type Frame Description Issue-Generic Economic Financial implications of an issue Policy Capacity & Resources The availability or lack of time, physical, human, or financial resources Morality & Ethics Perspectives compelled by religion or secular sense of ethics or social responsibility Fairness & Equality The (in)equality with which laws, punishments, rewards, resources are distributed Legality, Constitutionality & Jurisdiction Court cases and existing laws that regulate policies; constitutional interpretation; legal processes such as seeking asylum or obtaining citizenship; jurisdiction Crime & Punishment The violation of policies in practice and the consequences of those violations Security & Defense Any threat to a person, group, or nation and defenses taken to avoid that threat Health & Safety Health and safety outcomes of a policy issue, discussions of health care Quality of Life Effects on people's wealth, mobility, daily routines, community life, happiness, etc.", "2016), recurrent neural networks (Naderi and Hirst, 2017), lexicon induction (Field et al., 2018), and fine-tuning 
pretrained language models (Khane-hzar et al., 2019; Kwak et al., 2020).", "Roy and Goldwasser (2020) further extracted subcategories of issue-generic policy frames in newspaper coverage using a weakly-supervised approach.", "Finally, issue-generic frames have also been computationally studied in other media, including online fora and politicians' tweets (Johnson et al., 2017; Hartmann et al., 2019).", "We build upon this literature by incorporating additional frame typologies that re-flect important dimensions of media discourse with real-world consequences (Iyengar, 1991; Gross, 2008; Eberl et al., 2018).", "Beyond detecting frames, we computationally analyze frame-building and frame-setting among social media users; though well-studied in traditional news media, little is known about how social media users frame immigration or its effects (Eberl et al., 2018).", "studied issue-specific frames in news media for issues such as missile defense and gun violence (Morstat-ter et al., 2018; Liu et al., 2019a).", "We extend issue-specific frame analyses to immigration by adopting an immigration-specific typology developed by political communication scholars (Benson, 2013).", "In contrast to prior NLP work focused on traditional media or political elites (Johnson et al., 2017; Field et al., 2018), we highlight the role that social media publics play in generating and propagating frames.", "Furthermore, we provide a new computational model of narrative framing (Iyengar, 1991), that together with models for issue-generic policy and issue-specific frames, provides complementary views on the framing of immigration.", "Finally, our large-scale analysis of frame-setting illustrates the potential for using NLP to understand how a message's framing shapes its audience behavior.", "Data Collection We extract all English-language tweets in 2018 and 2019 from the Twitter Decahose containing at least one of the following terms: immigration , immigrant(s) , emigration , emigrant(s) , migration , migrant(s) , illegal alien(s) , illegals , and undocumented 1 .", "We focus on content creation and thus exclude retweets from our dataset, though we consider retweeting rates when analyzing the social influence of different frames.", "We further restrict our dataset to tweets whose authors are identified as being located in the United States (US), United Kingdom (GB), and European Union (EU) by an existing location inference tool (Compton et al., 2014).", "To compare framing across political ideologies, we obtain ideal point estimates for nearly two-thirds of US-based users with Barber (2015)'s Bayesian Spatial Following model.", "Our full dataset contains over 2.66 million tweets, 86.2% of which are from the United States, 10.4% from the United Kingdom, and 3.4% from the European Union.", "Data Annotation Tweets are annotated using three frame typologies:", "(i) issue-generic policy,", "(ii) immigration-specific, and", "(iii) narrative frames, where a tweet may use multiple frames simultaneously.", "We use Boydstun et al. 
"We use Boydstun et al. (2013)'s Policy Frames Codebook to formulate our initial guidelines for coding policy frames.", "We use Benson (2013)'s immigration-specific frames, but follow Hovden and Mjelde (2019) in including an additional category for framing immigrants as victims of war.", "Finally, we code for narrative frames using definitions from Iyengar (1991).", "All frames and descriptions can be found in Table 1, with a complete codebook in the Supplementary Materials.", "Because annotation guidelines from prior work focus on elite communications, we first adjusted our codebook to address challenges posed by Twitter content.", "Changes were made based on feedback from four trained annotators who labeled 360 tweets from 2018, split between the EU, GB, and US.", "Even for humans, identifying frames in tweets is a difficult task.", "Defining the boundaries of what constitutes a message is not trivial.", "Beyond the text, frames could be identified in hashtags, images, videos, and content from linked pages.", "Furthermore, tweets are often replies to other users or part of a larger thread.", "This additional context may influence an issue's framing.", "Footnote 1: We obtained this list by starting with the seed terms immigrants, immigration, and illegal aliens.", "For simplicity, we treat each tweet as a standalone message and label frames based only on the text (including hashtags).", "Unlike news stories, where frames are clearly cued, tweets often implicitly allude to frames due to character limitations.", "For example, a tweet expressing a desire to \"drive immigrants out\" with no additional context may suggest a criminal frame, but criminality is not explicit.", "To minimize errors, we avoid making assumptions about intended meaning and interpret all messages literally.", "Training, development, and test data were annotated using two procedures after four annotators completed four rounds of training.", "The dataset contains equal numbers of tweets from the EU, UK, and US.", "Training data was singly annotated and includes 3,600 tweets, while the development and test sets each contain 450 tweets (10% of the full dataset) and were consensus-coded by pairs of trained annotators.", "We opt for this two-tier approach due to (i) the inherent difficulty of the task (footnote 2) and (ii) the need to maximize the diversity seen in training.", "During annotator training, pilot studies attained moderate agreement, suggesting that to attain high reliability, consensus coding with adjudication would be needed (Krippendorff, 2013), which comes at the cost of substantially increased time.", "Because a large dataset of unique, singly-coded documents is preferable to a small dataset of documents coded by multiple annotators for text classification (Barbera et al., 2021), we decided to increase corpus diversity in the training data by singly annotating, at the expense of potentially noisier annotation, and to consensus-code all evaluation data.", "On the doubly annotated data, annotators attained Krippendorff's alpha = 0.45.", "Additional details are provided in the Supplementary Material (B, Figures 6 and 7).", "Results: We observe differences across frame typologies in coverage rates within the annotated dataset.", "While 84% of tweets are labeled with at least one issue-generic policy frame and 85% with at least one narrative frame, only 51% are labeled with at least one issue-specific frame.", "This difference is due to immigration-specific frames being more narrowly defined, as they require an explicit judgment of immigrants as heroes, victims, or threats.", "Footnote 2: For example, in identifying just the primary issue-generic frame of a document, the Media Frames Corpus attained a Krippendorff's alpha of 0.6 (Card et al., 2015, Fig. 4), whereas we ask annotators to identify all frames across three typologies.", "Further details about the frame distributions in our annotations can be found in the Supplementary Material (A, Figure 5).",

Table 3: Test set performance on each frame typology.
Frame Type           | Precision | Recall | F1 Score | LRAP
Issue-Generic Policy | 0.727     | 0.721  | 0.711    | 0.750
Issue-Specific       | 0.593     | 0.531  | 0.552    | 0.806
Narrative            | 0.757     | 0.887  | 0.808    | 0.894

"While the precision of issue-specific frames can reveal patterns otherwise obscured by the broader issue-generic frames, this lack of coverage presents two challenges: 1) automated detection is more challenging given this sparsity, and 2) analyses of issue-specific frames do not capture a large portion of immigration-related discourse.", "By incorporating multiple framing strategies, we leverage both the coverage of issue-generic frames and the precision and interpretability of issue-specific frames.", "We formulate frame detection as a multilabel classification problem for each of the three typologies, using our dataset to train supervised models (see the classifier sketch below).", "Experimental Setup: Our proposed model is a RoBERTa model (Liu et al., 2019b) trained using binary cross-entropy on the CLS token.", "We consider both (i) a model trained using the roberta-base parameters and (ii) a second model that has first been fine-tuned on our full set of immigration tweets using masked language modeling.", "Fine-tuning was performed for 60 epochs.", "In both models, early stopping is used to avoid overfitting.", "Models are compared with two baselines: random prediction, and logistic regression with unigram and bigram features.", "Each model was trained five times with different random seeds, and we report bootstrapped mean performance.", "Results: The fine-tuned RoBERTa model significantly outperforms all baselines (Table 2).", "RoBERTa has the most substantial gains over logistic regression for low-frequency frames (Supplementary Material C, Figure 8).", "These gains for rare frames are essential for analyzing immigration discourse on social media in order to capture diverse perspectives and arguments.", "Table 3 shows several evaluation metrics separated by frame type.", "Precision, recall, and F1 are calculated as unweighted averages over all frames belonging to each category.", "Overall, issue-generic policy and narrative frames can be detected more effectively than issue-specific frames.", "This difference reflects that issue-specific frames were sparser in the training data, but also that detecting these frames is inherently more challenging because it requires jointly reasoning about immigration-related topics and how these topics affect immigrants.", "For example, tweets about immigrants committing crimes and tweets about hate crimes committed against immigrants have distinct issue-specific frames (threat: public order and victim: discrimination), even though these texts can be linguistically quite similar.", "Given some thematic similarities between typologies, we tested an additional model that jointly predicted frames from all three typologies using the fine-tuned RoBERTa model; however, the resulting model offered worse performance than any single-typology model, suggesting minimal benefits of cross-typology learning.", "Supplementary Section C contains additional model performance analyses by frame and region.",
"Hegemonic Framing Conservative media's framing of political issues is known to be more consistent, coordinated, and hegemonic than mainstream media, which has been vital to the success of the American conservative movement (Hemmer, 2016; Speakman and Funk, 2020).", "If the same pattern holds for social media, we would expect automated Error Type Description Example Plausible interpretation These instances highlight the challenges of annotation; there are convincing arguments that model's predicted frames can be appropriate labels.", "frame detection to achieve higher performance on conservative tweets due to more linguistic regularities across messages.", "Indeed, we find that issue-generic and issue-specific classifiers achieve higher F1 scores on tweets written by conservative authors compared to liberal authors (Figure 1), even though there are fewer conservative tweets in the training data (334 conservative vs 385 liberal tweets).", "Higher model performance on conservative tweets suggests that, like political and media elites, conservatives on social media are more consistent than liberals in their linguistic framing of immigration.", "Error Analysis We identify classification errors by qualitatively analyzing a random sample of 200 tweets that misclassified at least one frame.", "Table 4 shows the most common categories of errors.", "In writing about an issue, individuals are known to select particular framesa process known as frame-buildingbased on numerous factors, such as exposure to politicians' rhetoric or their own identity (Scheufele, 1999).", "Here, we focus on two specific identity attributes affecting frame building:", "(i) political ideology and", "(ii) country/region.", "The political, social, and historical contexts of an one's nation-state can impact how they frame immigration (Helbling, 2014).", "Immigration has a long history in the USA relative to Europe, and former European colonial powers (e.g. the UK) have longer immigration histories than other countries (e.g. 
Norway) (Thorbjrnsrud, 2015; Eberl et al., 2018).", "Cross-country variation in news framing also arise from differences in immigration policies (Helbling, 2014; Lawlor, 2015), media systems (Thorbjrnsrud, 2015), journalistic norms (Papacharissi and De Fatima Oliveira, 2008), geographic proximity to immigrant populations or points of entry (Grimm and And-sager, 2011; Fryberg et al., 2012), and immigrants' race/ethnicity (Grimm and Andsager, 2011).", "At the same time, increased globalization may result in a uniform transnational immigration discourse (Hel-bling, 2014).", "Framing variations across countries has implications for government policies and initiatives, particularly in determining what solutions could be applied internationally or tailored to each country (Caviedes, 2015).", "Prior studies on the role of ideology in frame-building have focused on the newspapers or political movements, showing patterns in frames like morality and security by political affiliation in European immigration discourse (Helbling, 2014; Hogan and Haltinner, 2015) or in use of economic frames by American newspapers (Fryberg et al., 2012; Abrajano et al., 2017).", "However, it remains unclear whether these patterns observed for elite groups can generalize to the effect of individual people's political dispositions.", "Experimental Setup We detect frames for all 2.6M immigration-related tweets using the fine-tuned RoBERTa model with the best-performing seed on development data.", "Using this labeled data, we estimate the effects of region and ideology by fitting separate mixed-effects logistic regression models to predict the presence or absence of each frame.", "We treat region (US, UK, and EU) as a categorical variable, with US as the reference level.", "Ideology is estimated using the method of Barber (2015), which is based on users' connections to US political elites; as such, we restrict our analysis of ideology to only tweets from the United States.", "To account for exogenous events that may impact framing, we include nested random effects for year, month, and date.", "We further control for user characteristics (e.g. the author's follower count, friends count, verified status and number of prior tweets) as well as other tweet characteristics (e.g. tweet length, if a tweet is a reply, and whether the tweet contains hashtags, URLs, or mentions of other users).", "We apply Holm-Bonferroni corrections on p-values before significance testing to account for multiple hypothesis testing.", "Ideology Ideology is strongly predictive of framing strategies in all three categories, as shown in Figure", "2. Our results reveal three broad themes.", "First, prior work has argued that liberals and conservatives adhere to different moral foundations, with conservatives being more sensitive to in-group/loyalty and authority than liberals, who are more sensitive to care and fairness (Graham et al., 2009).", "Our results agree with this argument.", "Liberals are more likely to frame immigration as a fairness and morality issue, and immigrants as victims of discrimination and inhumane policies.", "More conservative authors, on the other hand, focus on frames with implications for the in-group.", "They express concerns about 1) immigrants imposing a burden on taxpayers and governmental programs and 2) immigrants being criminals and threats to public safety.", "We qualitatively observe three distinct, though unsubstantiated, conservative claims contributing to the latter: (i.) 
"We qualitatively observe three distinct, though unsubstantiated, conservative claims contributing to the latter: (i) immigrants commit violent crimes (Light and Miller, 2018), (ii) undocumented immigrants illegally vote in US elections (Smith, 2017; Udani and Kimball, 2018), and (iii) immigrants are criminals simply by virtue of being immigrants (Ewing et al., 2015).", "Figure 2 shows a clear ideological stratification for issue-specific frames: liberals favor hero and victim frames, while conservatives favor threat frames.", "This finding is consistent with prior work on the role perceived threats play in shaping white American attitudes towards immigration (Brader et al., 2008), and the disposition of political conservatism to avoid potential threats (Jost et al., 2003).", "Second, while all frame categories show ideological bias, issue-specific frames are the most extreme.", "Most notably, our analysis shows that focusing solely on issue-generic policy frames would obscure important patterns.", "For example, the issue-generic cultural identity frame shows a slight liberal bias; yet, related issue-specific frames diverge: hero: cultural diversity is very liberal while threat: national cohesion is very conservative.", "Similarly, the issue-generic economic policy frame is slightly favored by more conservative authors, but the related issue-specific frames threat: jobs and hero: worker reveal ideological divides.", "This finding highlights the importance of using multiple framing typologies to provide a more nuanced analysis of immigration discourse.", "Third, more liberal authors tend to use episodic frames, while conservative authors tend to use thematic frames.", "This difference is consistent with Somaini's (2019) finding that a local liberal newspaper featured more episodic framing in immigration coverage, but a comparable conservative newspaper featured more thematic framing.", "Other efforts that examine the relationship between narrative frames and cognitive and emotional responses provide some clues for the observed pattern.", "For instance, Aarøe (2011) shows that thematic frames are stronger when emotional responses are weak or absent, and that the opposite is true for episodic frames.", "The divergence of findings could be driven by partisans' differing emotional responses.", "Our findings also highlight important consequences for opinion formation.", "Iyengar (1990) shows that episodic framing diverts attention from societal and political party responsibility; our results suggest that liberal Twitter users are likely to produce (and, due to partisan self-segregation, consume) social media content with such effects.", "Region Immigration framing depends heavily on one's geopolitical entity (US, UK, and EU), as shown in Figure 3.",
"Several notable themes emerge.", "First, many ideologically extreme frames in the US, including crime & punishment, security & defense, threat: public order, and threat: fiscal, are all significantly more likely to be found in US-based tweets relative to the UK and EU.", "This pattern suggests that region and ideology, and likely many other factors, interact in intricate ways to shape how ordinary people frame political issues.", "Second, cultural identity is more strongly associated with both the UK and EU than with the US.", "Perhaps immigrants' backgrounds are more marked in European discourse than in US discourse because the UK and EU have longer histories of cultural and ethnic homogeneity (Thorbjørnsrud, 2015).", "This finding also reflects that Europeans' attitudes towards immigration depend on where immigrants are from, and parallels how European newspapers frame immigration differently depending on migrants' countries of origin (Eberl et al., 2018).", "Finally, the bottom of Figure 3 shows that users from the UK are more likely to invoke labor-related frames.", "This prevalence of labor and economic frames has also been found in British traditional media (Caviedes, 2015; Lawlor, 2015), and has been attributed to differences in the labor market.", "Unlike migrants in the US, Italy, and France, who often work clandestinely in different economic sectors than domestic workers, UK migrants have proper authorization and are thus viewed as competition for British workers because they can work in the same industries (Caviedes, 2015).", "Chong and Druckman (2007, p. 116) assert that a challenge for future work concerns the identification of factors that make a frame strong.", "Studies of frame-setting, i.e., how a message's framing affects its audience's emotions, beliefs, and opinions, have largely been restricted to small-scale experimental studies because responses to news media framing cannot be directly observed (Eberl et al., 2018).", "However, Twitter provides insight into the frame-setting process via interactive signals: favorites and retweets.", "While related, these two actions can have distinct underlying motivations: favoriting often indicates positive alignment between the author and the reader; in contrast, retweeting may also be driven by other motivations, such as the desire to inform or entertain others (boyd et al., 2010).", "Different audience interactions have been shown to exhibit distinct patterns in political communication on Twitter (Minot et al., 2020).", "Here, we test how a message's framing impacts both the favorites and retweets that it receives.", "Experimental Setup We fit hierarchical linear mixed-effects models with favorites and retweets (log-transformed) as the dependent variables on US tweets with detected author ideology.", "[Figure 4: Effects of framing on two audience responses: favorites and retweets.]", "The presence of a frame is treated as a binary fixed effect.", "We control for all temporal, user-level, and tweet-level features as in the prior section, as well as ideology.",
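The engagement models just described can be sketched in the same spirit. Below is a minimal statsmodels version with a single random intercept per calendar month standing in for the paper's nested temporal effects, log1p-transformed favorites as the response, and hypothetical column names; it is an illustrative sketch, not the authors' exact specification.

```python
# Sketch of the engagement model: log-transformed favorites as the
# response, frame presence as a binary fixed effect, and a random
# intercept per month approximating the nested year/month/date
# effects (column names are hypothetical).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_engagement_model(tweets: pd.DataFrame, frame: str):
    tweets = tweets.copy()
    tweets["log_favorites"] = np.log1p(tweets["favorite_count"])
    formula = (f"log_favorites ~ {frame} + ideology + follower_count"
               " + tweet_length + is_reply + has_hashtag + has_url")
    model = smf.mixedlm(formula, data=tweets, groups=tweets["month"])
    return model.fit()
```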
"Results The framing of immigration has a significant impact on how users engage with the content via retweets and favorites (Figure 4).", "Many issue-specific frames have a stronger effect on audience responses than either of the other typologies.", "As recent NLP approaches have adopted issue-generic frames for analysis (e.g., Kwak et al., 2020), the strength of issue-specific frames highlights the importance of expanding computational analyses beyond issue-generic frames, as other frames may have larger consequences for public opinion.", "Most frames impact favorites and retweets differently, suggesting that the strength of a frame's effects is tied to the specific engagement behavior.", "Cultural frames (e.g. hero: integration) and frames oriented around human interest (e.g. morality, victim: discrimination) are particularly associated with more endorsements (favorites), perhaps due to their increased emotional appeal to readers (Semetko and Valkenburg, 2000).", "On the other hand, political factors & implications is most highly associated with increased retweets.", "As the political frame emphasizes competition and strategy (Boydstun et al., 2013), this result mirrors similar links between the \"horse-race\" frame in news reports and engagement (Iyengar et al., 2004); users may prefer amplifying political messages via retweeting to help their side win.", "Similarly, frames about security and safety (e.g. crime & punishment, victim: humanitarian) are highly associated with more retweets, but not necessarily favorites.", "While security and safety frames may not lead audience members to endorse such messages, perhaps they are more likely to amplify these messages due to perceived urgency or the desire to persuade others of such concerns.", "Finally, Figure 4 shows how a message's narrative framing impacts audience response, even after controlling for all other frames.", "Both episodic and thematic frames are significantly associated with increased engagement (retweets), but less strongly than issue frames.", "Having a clear narrative is important for messages to spread, but the underlying mechanisms driving engagement behaviors may differ for episodic and thematic frames; prior work on mainstream media has found that news stories using episodic frames tend to be more emotionally engaging, while thematic frames can be more persuasive (Iyengar, 1991; Gross, 2008).", "Users' exposure to political information on social media can have immense consequences.", "By leveraging multiple theory-informed typologies, our computational analysis of framing enables us to better understand public discourses surrounding immigration.", "We furthermore show that framing on Twitter affects how audiences interact with messages via favoriting and retweeting behaviors.", "This work has implications for social media platforms, which may wish to improve users' experiences by enabling them to discover content with a diversity of frames.", "By exposing users to a wide range of perspectives, this work can help lay foundations for more cooperative and effective online discussions.", "All code, data, annotation guidelines, and pretrained models are available at https://github.com/juliamendelsohn/framing.", "Our analysis of frame-building involves inferring the political ideology and region of users with existing tools, so we aggregated this information in our analysis in order to minimize the risk of exposing potentially sensitive personal data about individuals.",
"Our dataset includes tweet IDs along with frame labels, but no additional social information.", "However, there are also ethical consequences of categorizing people along these social dimensions.", "We acknowledge that reducing people's social identities to region and ideology obscures the wide range of unobservable and non-quantifiable predispositions and experiences that may impact framing and attitudes towards immigration.", "We emphasize that our dataset is not fully representative of all immigration discourse and should not be treated as such.", "Twitter's demographics are not representative of the global population (Mislove et al., 2011).", "Furthermore, our dataset only includes tweets with authors from particular Western countries.", "All tweets were automatically identified by Twitter as being written in English, thus additionally imposing standard language ideologies on the data that we include (Milroy, 2001).", "Furthermore, language choice itself can be a socially and politically meaningful linguistic cue that may have unique interactions with framing (e.g., Gal, 1978; Shoemark et al., 2017; Stewart et al., 2018; Ndubuisi-Obi et al., 2019).", "Although we do not focus on abusive language, our topical content contains frequent instances of racism, Islamophobia, antisemitism, and personal insults.", "We caution future researchers about the potentially traumatic psychological effects of working with this dataset.", "We aim to support immigrants, an often marginalized group, by shedding light on their representation on social media.", "However, there is a risk that malicious agents could exploit our frame-setting findings by disseminating harmful content packaged in more popular frames.", "We thank Anoop Kotha, Shiqi Sheng, Guoxin Yin, and Hongting Zhu for their contributions to the data annotation effort.", "We also thank Libby Hemphill and Stuart Soroka for their valuable comments and feedback.", "This work was supported in part through funding from the Volkswagen Foundation." ]
[ "abstain", "abstain", "objective", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "method", "abstain", "other", "abstain", "objective", "method", "objective", "objective", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "other", "other", "other" ]
[ "The widespread use of word embeddings is associated with the recent successes of many natural language processing (NLP) systems.", "The key approach of popular models such as word2vec and GloVe is to learn dense vector representations from the context of words.", "More recently, other approaches have been proposed that incorporate different types of contextual information, including topics, dependency relations, n-grams, and sentiment.", "However, these models typically integrate only limited additional contextual information, and often in ad hoc ways.", "In this work, we introduce attr2vec, a novel framework for jointly learning embeddings for words and contextual attributes based on factorization machines.", "We perform experiments with different types of contextual information.", "Our experimental results on a text classification task demonstrate that using attr2vec to jointly learn embeddings for words and Part-of-Speech (POS) tags improves results compared to learning the embeddings independently.", "Moreover, we use attr2vec to train dependency-based embeddings and we show that they exhibit higher similarity between functionally related words compared to traditional approaches.", "Neural network-based methods have been successful in advancing the state-of-the-art in a wide range of NLP tasks, such as dependency parsing (Chen and Manning, 2014), sentence classification (Kim, 2014), machine translation (Sutskever et al., 2014; Luong and Manning, 2016), and information retrieval (Zhang", "et al., 2017).", "In all these approaches, vectorial distributed word representations, known as word embeddings , have become a fundamental building block.", "The use of word embeddings is considered a secret sauce for contributing to the success of many of these algorithms in recent years (Luong et al., 2013).", "Popular models for learning such word embeddings include word2vec (Mikolov et al., 2013a,b,c), GloVe (Pen-nington et al., 2014) and fastText (Bojanowski et al., 2017; Joulin et al., 2017).", "The main idea behind these techniques is to represent a word by means of its context.", "The most popular forms of context are neighboring words in a window of text (Mikolov et al., 2013b; Pennington et al., 2014), though examples of additional contextual information might also include document topics (Li et al., 2016), dependency relations (Levy and Goldberg, 2014), morphemes (Lu-ong et al., 2013), n -grams (Bojanowski et al., 2017), and sentiment (Tang et al., 2014).", "The embedding idea was originally devised to help overcome problems associated with the high dimensionality of sparse vector representations of words, particularly in the case of neural network modeling, though embeddings have since been used in a variety of machine learning approaches.", "However, existing models generally exploit just a small portion of the available contextual information, and they tend to do so in ad hoc ways.", "The main purpose of context in these models is to shape the word vector space (that is, to associate a representation to the word), but contextual information is not usually represented in this space.", "For instance, Li and Jurafsky (2015) used document topics to derive multiple vectors for the same word, each capturing a different sense, but 453 their method does not represent topics in the vector space, that is, it does not generate topic vectors.", "Such contextual representations, jointly learned with the word representations, could potentially be useful for multiple tasks.", "For instance, 
pre-trained contextual vectors could be used as additional features, together with pre-trained word vectors, to improve the performance of existing models.", "In this paper, we propose attr2vec, a novel framework for learning word embedding models that jointly associate distributed representations with words and with generic contextual attributes.", "attr2vec is inspired by the GloVe approach of Pennington et al. (2014) and can mimic it when no additional contextual attribute is considered.", "In contrast with GloVe, attr2vec uses Factorization Machines (FMs) (Rendle et al., 2011; Rendle, 2012).", "FMs are a generalization of matrix factorization approaches, such as GloVe, and can combine different generic feature types, even when the input data is sparse.", "Moreover, FMs do not consider input features as independent but model their interactions by factorizing their latent representations in a pairwise fashion.", "Here, we conduct an experimental study to assess whether the proposed embedding model can lead to better performance for a text classification task on the Reuters-21578 dataset, using trained vectors as input to a convolutional neural network.", "The results show that word and Part-of-Speech (POS) embeddings jointly learned with attr2vec can achieve higher F1 and precision scores compared to embeddings learned independently.", "Moreover, we use attr2vec to train dependency-based word embeddings and show, using the publicly available WordSim353 dataset, that such embeddings yield more functional similarities than embeddings trained using a linear bag-of-words approach (such as GloVe).", "We also performed a qualitative analysis that provides insights on how contextual attributes affect the distribution of words in the vector space.", "Summing up, the main contributions of our work are the following: (1) we extend the GloVe model to consider additional contextual information; to the best of our knowledge, this is the first work to present a general model able to jointly train dense vector representations for words and multiple arbitrary contextual attributes; (2) we define a novel loss function based on factorization machines to jointly learn word and contextual attribute embeddings; (3) we show how to model the input data and compute co-occurrence statistics using either a linear bag-of-words approach or syntactic dependency relations.", "We provide the source code for the attr2vec model at https://github.com/thomsonreuters/attr2vec.", "The remainder of this paper is organized as follows.", "Section 2 provides an overview of related work and Section 3 introduces the attr2vec model.", "In Section 4, we present the experimental results, and close this paper with some concluding remarks in Section 5.", "We have already introduced some of the main related approaches, including word2vec and GloVe.", "Essentially, the GloVe model (Pennington et al., 2014) derives word representations by factorizing the word co-occurrence count matrix.", "The skip-gram and continuous bag-of-words (CBOW) models of Mikolov et al. (2013a), instead, build the vector space by trying to predict a word given its neighbouring words.", "Mnih and Kavukcuoglu (2013) proposed a closely related model that works in the opposite way, trying to predict neighbouring words given a word.", "Facebook's fastText model (Bojanowski et al., 2017; Joulin et al., 2017) augments word embeddings with subword-level information using character n-grams.", "Other examples of word embedding models include the work of Levy et al. 
(2014), where an explicit word vector space representation is derived using a PPMI metric, and WordRank of Ji et al. (2016), which learns word representations by adopting a ranking-based loss function.", "However, none of these models includes any contextual information beyond the neighbouring words.", "Several forms of contextual information have been successfully integrated into word embedding models.", "For instance, Luong et al. (2013), Cotterell and Schütze (2015), and Bhatia et al. (2016) capture morphological information in word representations; Bojanowski et al. (2017) and Wieting et al. (2016) include character n-grams in their embedding models; Tang et al. (2014) learn sentiment-specific word embeddings by integrating sentiment information in the loss function; Li et al. (2016) combine word embedding and topic modeling to jointly learn a representation for topics and words.", "In addition, several works in recent years focused on learning separate embeddings for multiple senses of a word (Neelakantan et al., 2015; Iacobacci et al., 2015; Pilehvar and Collier, 2016).", "However, all these techniques target a particular type of context.", "Our attr2vec model differs in that it can jointly represent generic contextual attributes and words in the embedding model.", "To do so, it makes use of factorization machines (Rendle, 2012), which have been successfully used to exploit contextual information in relation extraction tasks (Petroni et al., 2015) and recommender systems (Rendle et al., 2011).", "It is well known that contextual information can improve the performance of a wide range of NLP tasks, such as machine translation (Koehn and Hoang, 2007; García-Martínez et al., 2017), named entity typing (Corro et al., 2015), or sentiment analysis (Weichselbraun et al., 2013).", "In addition, Melamud et al. (2016) observed that different contextual attributes work well for different tasks and that simple concatenation of embeddings, learned independently with different models, can yield further performance gains.", "Our attr2vec model can jointly learn embeddings for words and contextual attributes, and we show (see Section 4) that using such jointly learned embeddings yields better performance on a text classification task compared to embeddings learned independently.", "This section presents the attr2vec model.", "We first describe how we model the input data in terms of a feature matrix and a target vector (Section 3.1) and then how to factorize those using a factorization machines-based formulation (Section 3.3) to obtain word and contextual attribute embeddings.", "We consider as input a large corpus of text.", "Let the vocabulary of considered words be denoted by $W = \{w_1, \ldots, w_{|W|}\}$ and the set of all contextual variables by $C = \{c_1, \ldots, c_{|C|}\}$.", "We denote by $V = W \cup C$ the set of all considered words and contextual variables.", "In the rest of the paper we will refer to the elements of V as variables.", "We model the input data in terms of a target vector $Y \in \mathbb{R}^m$ and a feature matrix $X \in \mathbb{R}^{m \times n}$, in which each row $x_i \in \mathbb{R}^n$ corresponds to a feature vector and there are as many columns as the number of variables (i.e., $|V| = n$).", "We group columns according to the type of the variables; e.g., there is a group of word columns and a group of columns for each type of contextual information considered.", "Each target value $y_i \in Y$ represents the number of times the feature vector $x_i$ has been observed in the input (i.e., the co-occurrence count).",
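As a concrete illustration of this input representation, the sketch below accumulates sparse feature rows over V = W ∪ C and their observation weights into the target vector. The row encoding (a mapping from variable id to value) is an assumption made for the example, not the paper's data format.

```python
# Sketch of building the feature matrix X and target vector y of
# Section 3.1: rows with identical active variables are collapsed,
# and their (possibly fractional) observation weights accumulate in y.
from collections import defaultdict

def build_training_data(observations):
    """observations: iterable of (active_vars, weight) pairs, where
    active_vars maps a variable id in V = W u C to its value in the row."""
    targets = defaultdict(float)
    for active_vars, weight in observations:
        key = tuple(sorted(active_vars.items()))
        targets[key] += weight
    X = [dict(key) for key in targets]   # sparse feature rows
    y = list(targets.values())           # co-occurrence counts
    return X, y
```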
"We consider two ways to compute the co-occurrence count of a feature vector $x_i$: (1) linear bag-of-words and (2) dependency-based.", "Linear Bag-of-Words The approach used in Pennington et al. (2014) is to compute the co-occurrence count under a linear bag-of-words assumption.", "The idea is to use a window of size k around the target word w, considering the k words before and the k words after w to compute co-occurrence statistics (in this paper we focus on symmetric windows; however, the same model can be extended to asymmetric windows).", "Note that a small window size may miss some information, while a large window size might result in the algorithm capturing accidental pairs.", "A decay factor is commonly used to weight the contribution of an observation to the total co-occurrence count according to the distance to the target word.", "Here we consider a fractional decay, where the importance of a word is assumed to be inversely proportional to its distance from the target word.", "To build the feature matrix X we set the values of the variables associated with the word pair, and with each contextual variable observed in correspondence with that pair, to 1.", "Note that there could be multiple rows in X referring to the same pair of words but associated with different contextual variables.", "When contextual variables can assume multiple values for a single observation (e.g., a document with multiple topics) we evenly distribute the unitary weight across all variables (e.g., across all document topics).", "The target values represent the number of times the corresponding combination of features (i.e., the pair of words and the contextual variables) has been observed in the input corpus, weighting each contribution with the fractional decay factor described above.", "An example is shown in Figure 1.", "The first row in the figure corresponds to the observation of the pair of words brothers and lehman in a text window, with POS tags nnp and nnps, respectively, referring to the named entity Lehman Brothers, from documents published in 2008 with topic economy.", "Such a combination of features (i.e., brothers-lehman-nnp-nnps-Lehman Brothers-2008-economy) has been observed in the input $10^5$ times (i.e., $y = 10^5$).", "The fourth row of Figure 1 conveys the information that the same pair of words (i.e., brothers-lehman) has also been observed 10 times in the input associated with different contextual information, in particular in documents published in 2017 with the multi-topic economy and story.", "When contextual information is not considered (i.e., $c = \emptyset$), y is equal to the co-occurrence count of the pair of words, as in Pennington et al. (2014).",
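A minimal sketch of the linear bag-of-words counting with fractional decay follows. For brevity it attaches a single tuple of contextual variables per sentence rather than distributing the unitary weight over multi-valued attributes, and the data layout is an assumption for illustration only.

```python
# Sketch of fractional-decay co-occurrence counting: a context word at
# distance d from the target word contributes 1/d to the pair's count.
from collections import defaultdict

def window_cooccurrences(tokens, context_vars=(), k=10):
    counts = defaultdict(float)
    ctx = tuple(sorted(context_vars))
    for i, target in enumerate(tokens):
        for d in range(1, k + 1):
            if i + d < len(tokens):
                counts[(target, tokens[i + d], ctx)] += 1.0 / d
            if i - d >= 0:
                counts[(target, tokens[i - d], ctx)] += 1.0 / d
    return counts
```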
(2014).", "Dependency-Based Our attr2vec model can learn dependency-based embedding as well.", "In order to do so we adopted a similar strategy of Levy and Goldberg (2014).", "The main idea is to parse each sentence in the input corpus and to use the dependency tree to derive the co-occurrence count.", "In particular, for a target word w with modifiers m 1 , ..., m o and a head h , we considered the dependency labels ( m 1 , lbl 1 ) , ..., ( m o , lbl o ) , ( h, lbl 1 h ) , where lbl is the type of the dependency relation between the head and the modifier and lbl 1 is used to mark the inverse relation.", "Moreover, edges that include a preposition are collapsed by connecting the head and the object of the proposition, and subsuming the preposition itself into the dependency label (see the example in the top part of Figure 2).", "The feature matrix X in this case consists of two group of columns, one for the words and one for dependency labels, plus one group of columns for each additional contextual information considered.", "The co-occurrence count is driven by the dependency tree: each target value represents the number of times the corresponding word and dependency label (and all considered additional contextual information) appear in the dependency trees representing the sentences in the input corpus.", "An example is shown in Figure", "2. The first row in the matrix corresponds to the observation of the word ganymede connected to the word discovered through the inverse relation dobj in the dependency tree (i.e., discovered/dobj 1 ).", "Notice that using this approach it is possible to capture relevant relations between words far apart in the text including long-distance dependencies (e.g., telescope is not in the window of text around discovered for k = 3 ), and also filter out accidental neighbouring words (e.g., his is in the window of text of discovered for k = 3 ).", "An additional advantage of this approach with respect to a linear bag-of-words solution is that each observation is typed, indicating, for instance, that Ganymede is the object of discovered and Galileo is the subject.", "Before introducing the attr2vec model we briefly describe the GloVe factorization approach (Pen-nington et al., 2014).", "In particular, GloVe employs a matrix factorization model using as input the word-word co-occurrence count.", "In our example of Figure 1 this corresponds to considering in input just the first group of columns (i.e., the pair of words).", "Starting from the observation that the ratio of co-occurrence probabilities is more appropriate for learning word representations as opposed to the probabilities of the words themselves, Pennington et al. 
"[Figure 2: Example of representing input data with attr2vec using a dependency-based approach to compute the co-occurrence count.]", "Before introducing the attr2vec model we briefly describe the GloVe factorization approach (Pennington et al., 2014).", "In particular, GloVe employs a matrix factorization model using as input the word-word co-occurrence count.", "In our example of Figure 1 this corresponds to considering as input just the first group of columns (i.e., the pair of words).", "Starting from the observation that the ratio of co-occurrence probabilities is more appropriate for learning word representations than the probabilities of the words themselves, Pennington et al. propose the following weighted least squares model: $J = \sum_{i,j} f(y_{ij})\,(w_i^{\top}\tilde{w}_j + b_i + \tilde{b}_j - \log y_{ij})^2$ (1), with the weighting function $f(y) = (y/y_{max})^{\alpha}$ if $y < y_{max}$ and $f(y) = 1$ otherwise (2),", "where $y_{max}$ and $\alpha$ are hyperparameters of the model.", "The model we propose, attr2vec, employs a matrix factorization model based on factorization machines.", "In particular, we associate with each variable $v \in V$ a bias term $b_v \in \mathbb{R}$ and a latent factor vector $f_v \in \mathbb{R}^d$, where the dimensionality d of the latent factor space is a hyperparameter of the model.", "For each input feature vector $x \in X$, we denote by $x_v$ the value of variable $v \in V$ in the corresponding row of the feature data matrix.", "We employ a weighted least squares model that is based on the formulation of Pennington et al. (2014) (Equation 1).", "In contrast to GloVe, we define a novel score s(x) that takes into account both words and contextual attributes, computed as follows: $s(x) = \sum_{v \in V} x_v b_v + \sum_{v_1 \in V} \sum_{v_2 \in V \setminus \{v_1\}} x_{v_1} x_{v_2} f_{v_1}^{\top} f_{v_2}$ (4).", "Here, the bias terms $b_v$ model the contribution of each individual variable to the final score, whereas the latent factor vectors $f_v$ model the contribution of all pairwise interactions between variables.", "Rendle (2012) has shown that score computation is fast, since s(x) can be computed in time linear in both the number of nonzero entries in x and the dimensionality d.", "Each latent factor vector can be interpreted as a low-dimensional representation of the corresponding variable, both for variables that refer to words and for variables that refer to contextual information.", "Note that, when contextual information is not considered, the formulation of our factorization model is equivalent to the formulation of Pennington et al. (2014).",
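Rendle's linear-time evaluation of the pairwise term is what makes this score tractable. A minimal NumPy sketch of the score in Equation (4) and of a GloVe-style weighted least squares loss follows; the 0.5 factor corresponds to counting each unordered pair of variables once, and the loss form is a sketch based on the weighting function above rather than the authors' released code.

```python
# Sketch of the FM score s(x) and a weighted least squares loss.
# Identity: sum_{v1<v2} x1*x2*<f1,f2>
#         = 0.5 * (||sum_v x_v f_v||^2 - sum_v x_v^2 ||f_v||^2),
# linear in the nonzeros of x and in the dimensionality d.
import numpy as np

def fm_score(idx, val, b, F):
    """idx/val: indices and values of the nonzero features of x;
    b: bias terms, shape (|V|,); F: latent factors, shape (|V|, d)."""
    val = np.asarray(val, dtype=float)
    linear = float(b[idx] @ val)
    xf = F[idx] * val[:, None]                         # (nnz, d)
    pairwise = 0.5 * (np.sum(xf.sum(axis=0) ** 2) - np.sum(xf ** 2))
    return linear + pairwise

def weighted_ls_loss(rows, y, b, F, y_max=100.0, alpha=0.75):
    loss = 0.0
    for (idx, val), target in zip(rows, y):
        weight = min(1.0, (target / y_max) ** alpha)   # GloVe-style f(y)
        loss += weight * (fm_score(idx, val, b, F) - np.log(target)) ** 2
    return loss
```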
(2014).", "The model parameters = { b v , f v | v V } are estimated by minimizing J , for instance, through stochastic gradient descent.", "Each f v can be interpret as a dense vector representation of variable v V .", "We conducted an experimental study on real-world data to compare our attr2vec model with other state-of-the-art approaches.", "For a training corpus to learn embeddings, we used the Reuters News Archive, in particular, the collection of all news stories published by the Reuters News Agency from 2003 to 2017 .", "We 457 embedding input logistic regression convolutional neural network static non-static random ~w r 70.5 (69.3) 75.7 (77.5) 74.4 (76.2) random ~w r_ ~p r 74.0 (73.9) 77.9 (79.7) 77.8 (79.8) GloVe ~w i 77.5 (77.5) 79.7 (81.5) 82.7 (84.3) GloVe ~w i_ ~p r 80.2 (85.4) 82.5 (84.1) 84.5 (86.1) GloVe ~w i_ ~p i 79.3 (83.3) 84.3 (85.8) 84.9 (86.4) attr2vec ~w j 77.5 (77.3) 80.6 (82.3) 82.8 (84.5) attr2vec ~w j_ ~p j 80.1 (83.1) 84.9 (86.1) 85.5 ( 86.8 ) Table 1: Average F1 score (and precision in parentheses) for topic prediction on the Reuters-21578 dataset.", "first applied a heuristic filtering approach to exclude non-textual documents, resulting in a collection of 8 M news articles ( 3 B tokens).", "We then performed tokenization, part-of-speech tagging, and syntactic dependency parsing on the corpus using NLP4J 2 (Choi et al., 2015; Choi, 2016).", "The POS tagger achieves an accuracy score of 97 .", "64% (Choi, 2016), the dependency parser achieve a label accuracy score of 94 .", "94% (Choi et al., 2015).", "As a baseline we considered 200 dimensional GloVe vectors trained on the corpus using the code and hyperparameters of Pennington et al. (2014).", "In particular, we used y max = 100 and = 3 / 4 for all our experiments.", "Experimental Setup For this experiment we trained 200 -dimensional attr2vec vectors using part-of-speech tags as additional contextual information and a linear bag-of-words approach i.e., each row in the feature matrix consists of a pair of words and the corresponding pair of POS tags (see the first two groups of columns for the example in Figure 1).", "We used the same hyperparameters as in GloVe.", "To make a fair comparison we trained two independent GloVe models, one to obtain word vectors ( ~w i ) and one to obtain POS tag vectors ( ~p i ).", "The latter model is trained by substituting each word in the corpus with the corresponding POS tag.", "Note that our attr2vec model can jointly learn a representation for both words ( ~w j ) and POS tags ( ~p j ).", "As a baseline we also considered randomly initialized vectors for words ( ~w r ) and POS tags ( ~p r ).", "To train attr2vec we 2 https://emorynlp.github.io/nlp4j used a modified version 3 of tffm (Trofimov and Novikov, 2016), an open-source TensorFlow implementation of Factorization Machines.", "To evaluate the performance of the attr2vec model, we used the trained vectors as input for a convolutional neural network (CNN).", "We used the CNN architecture described by Kim (2014), in particular a modified version of the TensorFlow implementation in Britz (2015), where we add support for pre-trained embeddings.", "As hyperparameters, we used a batch size of 128 training samples, no dropout, one layer, filter windows of 3 , 4 , 5 with 100 feature maps each.", "We trained using the Adam optimizer and a learning rate of 0 .", "001 and let the models train for 100 epochs (an epoch is an iteration over all the training points).", "We executed three independent runs for each experiment and we 
"As a benchmark we used the following text classification task: predict all the topic codes associated with an article using the first tokens in the article.", "We used the Reuters-21578 dataset (http://www.nltk.org/book/ch02.html).", "This corpus contains 10788 news documents classified into 90 topics.", "We used the provided training/test split.", "For each document, we considered the first 250 tokens as input text for the CNN.", "Note that, in contrast to other previous work (Li et al., 2016), we consider all topics and formulate a multi-label classification problem.", "For each test article we computed precision, recall, and F1 score comparing the actual topic codes and those predicted by the CNN.", "As evaluation metrics we used the average F1 score and the average precision across all test articles.", "We trained multiple CNN models, using either word vectors (w) or the concatenation of word and POS tag vectors (w_p) as inputs, and keeping these vectors static throughout training or allowing the CNN to update them via backpropagation (non-static).", "We also considered logistic regression as a baseline method, using averaged vectors calculated over the input text as features, as in Zhang and Wallace (2015).", "As this is a multi-label task, we used the one-vs-all formulation of logistic regression, which fits one classifier per class, with each class being fitted against all other classes.", "L1 regularization was applied with a weight of 0.005.", "Results Table 1 reports the results of our experiments.", "Each entry shows the average F1 score and the average precision in parentheses.", "First note that the CNN model consistently outperforms logistic regression for all considered settings.", "The CNN performance improves if it receives as input pre-trained vectors as opposed to random ones, consistent with other works (Kim, 2014).", "The performance is comparable when GloVe or attr2vec word vectors are used as input.", "The key advantage of our attr2vec model over GloVe is demonstrated when additional contextual information is considered in the CNN model.", "The performance of the CNN model improves if POS vectors are considered together with GloVe word vectors in input, both when such POS vectors are randomly initialized (w_i_p_r) and when they are independently trained with the GloVe model (w_i_p_i).", "However, the best performance is achieved when word and POS tag vectors are jointly trained with our attr2vec model (w_j_p_j).", "Note that the aim of the paper was not to show that POS tags help for text classification tasks (to that end, an exhaustive exploration of the parameter space would have been needed); instead, the goal of this work is to introduce a new embedding model that jointly learns a representation for words and POS tags, capturing the interaction between them, and to show that such a representation is beneficial for a CNN with respect to embeddings learned in an independent fashion, given the same network settings.", "Standard embedding models (like GloVe), in fact, can capture either the interactions between words or between POS tags, only in an independent fashion.", "[Figure 3: Recall-precision curve when attempting to rank the similar words above the related ones on the WordSim353 dataset.]", "Our attr2vec model, in addition, captures the cross-interaction between words and contextual attributes, jointly learning their representation, and our results suggest that this additional information is beneficial for the performance of the CNN model.",
"Moreover, note that our attr2vec algorithm, unlike GloVe, can handle generic contextual information.", "In our second experiment we wanted to assess whether our attr2vec model was able to produce dependency-based embeddings that exhibit more functional similarity than GloVe embeddings (which usually yield broad topical similarities).", "To this end, we trained 200-dimensional attr2vec vectors using a dependency-based approach, i.e., each row in the feature matrix consists of a word and a dependency label (see the example in Figure 2).", "Our evaluation closely follows the one in Levy and Goldberg (2014).", "In particular, we used the WordSim353 dataset (Finkelstein et al., 2001; Agirre et al., 2009) containing pairs of similar words that reflect either relatedness (topical similarity) or similarity (functional similarity) relations.", "The pairs are ranked according to the cosine similarity between the corresponding word vectors.", "The idea is that a model that focuses on functional similarity should rank the similar pairs in the dataset above the related ones.", "For instance, such a model should rank the pair money-currency (i.e., functionally similar words) above the pair money-laundering (i.e., topically similar words).", "We drew a recall-precision curve by considering a related pair as a miss and a similar pair as a hit.", "In this way we aimed to capture the embeddings' affinity towards the similarity subset over the relatedness one.", "Figure 3 reports the result of the experiment.", "The attr2vec curve (orange solid line) is higher than the GloVe one (blue dashed line) and the area under the curve is larger (0.74 with respect to 0.57), suggesting that attr2vec yields more functional similarities than GloVe.", "Note that a similar behaviour has been observed in Levy and Goldberg (2014) for context-predictive models (i.e., the skip-gram model with negative sampling).", "To the best of our knowledge, attr2vec is the first model that incorporates syntactic dependency relations in a co-occurrence-count-based model (such as GloVe).", "Moreover, attr2vec is a general model that can handle additional arbitrary contextual information.", "Our final evaluation is qualitative.", "We trained 200-dimensional attr2vec embeddings using news topics as additional contextual information and a linear bag-of-words approach, i.e., each row in the feature matrix consists of a pair of words and the topic of the news article where that pair has been observed (see the first and the last group of columns in the example in Figure 1).", "In particular, we used the same collection of 8M news articles presented in Section 4.1 and we considered the following two article topics: general news stories (G) and sport news (SPO).", "Figure 4 shows a two-dimensional projection of the 200-dimensional vector space where word and topic representations lie, obtained using the t-SNE visualisation technique (Maaten and Hinton, 2008); we used the TensorBoard implementation of t-SNE.", "Here the two topic points (G on the left and SPO on the right of the figure) seem to metaphorically act as magnets, modifying the space and forming two clusters of words.", "The left cluster around the representation of topic G includes general words not related to sports such as mars, sound, warranty, finance, train, while the right cluster around the representation of topic SPO contains words related to sports such as football, coach, game, stadium, cricket.",
cricket.", "Words that are related with both general news stories and sport news lie somewhere in the middle between these two clusters.", "Examples of such words include penalties, transfer, medical, goal, supporters.", "Note that there are other attractive and repulsive forces in the vec-5 We used the TensorBoard implementation of t-SNE .", "tor space driven by word similarity, and that a two-dimensional representation is only able to capture a small portion of all relations that take place in the higher dimensional space.", "In this paper, we proposed attr2vec, a novel embedding model that can jointly learn a distributed representation for words and contextual attributes.", "Our model is general and can handle multiple arbitrary contextual information simultaneously.", "To do so, we defined a novel loss function based on factorization machines.", "Moreover, attr2vec can mimic existing word embedding algorithms when no additional contextual information is considered.", "In particular, GloVe is a special case of our model.", "We have presented an experimental study where we considered POS tags as additional contextual information, and fed a convolutional neural network (CNN) with both word and POS tag vectors.", "The results suggest that the CNN prediction performance improves when word and context vectors are jointly learned by our attr2vec model.", "In addition, we described how to train dependency-based attr2vec embeddings and showed that they produce different kinds of similarities.", "We also provided some insights into how the vector space is affected by contextual attributes, which seem to act like magnets that attract or repulse words, that are themselves subject to attractive or repulsive forces driven by similarity.", "creased, and the computational cost is increased compared to a model that does not use contextual information.", "Each additional attribute may furthermore introduce its own noise (component-specific errors) into the process.", "Nevertheless, the overall improvement can help in tasks where quality is of the utmost importance and high-quality annotation components are available.", "In future work, we aim to investigate the effect of adding different contextual information, and we plan to test the resulting models in various applications." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "method", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "result", "method", "objective", "other", "other", "method", "abstain", "result", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "method", "result", "result", "method", "abstain", "abstain", "abstain", "objective" ]
[ "We consider event extraction in a generative manner with template-based conditional generation.", "Although there is a rising trend of casting the task of event extraction as a sequence generation problem with prompts, these generation-based methods have two significant challenges, including using suboptimal prompts and static event type information.", "In this paper, we propose a generative template-based event extraction method with dynamic prefix (GTEE-DYNPREF ) by integrating context information with type-specific prefixes to learn a context-specific prefix for each context.", "Experimental results show that our model achieves competitive results with the state-of-the-art classification-based model ONEIE on ACE 2005 and achieves the best performances on ERE.", "Additionally, our model is proven to be portable to new types of events effectively.", "Event extraction is an essential yet challenging task for natural language understanding.", "Given a piece of text, event extraction systems need to recognize event triggers with specific types and the event arguments with the correct roles in each event record according to an event ontology, which defines the event types and argument roles (Dod-dington et al., 2004; Ahn, 2006).", "As an example, the context in Figure 1 contains two event records, a Transport event triggered by returned and an Arrest-Jail event triggered by capture .", "In the Transport event, the Artifact is the man , the Destination is Los Angeles and the Origin is Mexico .", "In the Arrest-Jail event, the Person is the man , the Time is Tuesday and the Agent is bounty hunters .", "In this work, we focus on the task setting of extracting Corresponding author.", "Most of the event extraction work treats the extraction of event triggers and event arguments as several classification tasks, either learned in a pipelined framework (Ji and Grishman, 2008; Liu et al., 2020; Du and Cardie, 2020; Li et al., 2020) or a joint formulation (Li et al., 2013; Yang and Mitchell, 2016; Nguyen et al., 2016; Liu et al., 2018; Wadden et al., 2019; Lin et al., 2020).", "There is a rising trend of casting the task of event extraction as a sequence generation problem by applying special decoding strategies (Paolini et al., 2021; Lu et al., 2021) or steering pretrained language models to output conditional generation sequences with discrete prompts (Li et al., 2021; Hsu et al., 2021).", "Compared with classification-based methods, this line of work is more data-efficient and flexible, which requires less annotated data to achieve acceptable model performances, being easier to extend to new event types by slightly modifying the designed prompts and decoding strategies.", "However, these generation-based methods have two significant challenges, which impede achieving competitive results with the classification-based methods.", "(1) suboptimal prompts : First, they manually design prompts for each event type (Li et al., 2021; Hsu et al., 2021), which are suboptimal without tuning and largely affect the model performances.", "(2) static event type information : Second, when extracting events of a particular type, recent generation-based methods will receive the same event type information concerning only the running event type, regardless of the associations between other possible event types.", "To alleviate the above two challenges, we propose a generative template-based event extraction method with dynamic prefixes, denoted as GTEE-DYNPREF .", "As demonstrated in Figure 1, we follow 5216 Our 
the previous work (Li et al., 2021; Hsu et al., 2021), extracting event records one type at a time, using the pretrained encoder-decoder language model BART (Lewis et al., 2020) for conditional generation.", "For each event type, we first initialize a type-specific prefix consisting of a sequence of tunable vectors as transformer history values (Li and Liang, 2021).", "The type-specific prefix offers tunable event type information for one single type.", "Then we integrate context information with all type-specific prefixes to learn a context-specific prefix, dynamically combining all possible event type information.", "We evaluate our model on two widely used event extraction benchmarks, ACE 2005 and ERE.", "Experimental results show that our model achieves competitive results with the state-of-the-art classification-based model ONEIE on ACE 2005 and achieves the best performance on ERE.", "Additionally, according to the transfer learning results, our model can also be adapted to new types of events effectively.", "This paper is related to the following lines of work.", "Event extraction is usually formulated as a sequence labeling classification problem (Nguyen et al., 2016; Wang et al., 2019; Yang et al., 2019; Wadden et al., 2019; Liu et al., 2018).", "Some of these methods incorporate global features and apply joint inference (Lin et al., 2020; Li et al., 2013; Yang and Mitchell, 2016) to collectively model event dependencies.", "Additionally, recent work casts event extraction as a machine reading comprehension (MRC) problem (Liu et al., 2020; Du and Cardie, 2020; Li et al., 2020) by constructing questions to query event triggers and arguments.", "Our work treats event extraction as a conditional generation task, which is more flexible and portable and reduces the burden of annotation.", "There is a rising line of work casting event extraction as a sequence generation problem, such as transforming it into translation tasks (Paolini et al., 2021), generating with constrained decoding methods (Lu et al., 2021), and template-based conditional generation (Li et al., 2021; Hsu et al., 2021).", "The two closest methods above (Li et al., 2021; Hsu et al., 2021) both utilize manually designed discrete templates, which causes the suboptimal prompt problem.", "Besides, the static type instruction they apply does not consider the connections between events within the same context.", "We replace the static type instructions with dynamic prefixes, which are continuous and tunable vectors during training, combining the manual event templates and alleviating the suboptimality problem.", "There is a line of work using specific sentence templates with pre-trained models to solve natural language understanding tasks.", "It is natural to come up with prefix-style (Brown et al., 2020) or cloze-style (Petroni et al., 2019) prompts based on human introspection; these are called discrete prompts.", "Existing work on discrete prompt tuning (Shin et al., 2020; Gao et al., 2021; Schick et al., 2020) depends on verbalizers to map from class labels to answer tokens.", "These methods are proven to be 
effective in the few-shot setting for text classification and conditional text generation tasks (Schick and Schütze, 2021b,a,c).", "There are also methods that explore continuous prompts operating directly in the embedding space of the model, such as tuning on vectors (Li and Liang, 2021; Lester et al., 2021; Tsimpoukelli et al., 2021), initializing with discrete prompts (Zhong et al., 2021; Qin and Eisner, 2021; Hambardzumyan et al., 2021), and hybrid prompt tuning (Liu et al., 2021b,a; Han et al., 2021).", "[Figure 2: The framework of our base model GTEE-BASE, with colors differentiating the context, the template, the type instruction, the encoder-decoder language model, and the answered prompt as output.]", "We revisit the task of event extraction as a process of conditional generation and present our base model (GTEE-BASE) as illustrated in Figure 2.", "3.1 Problem Statement In the conditional generation task formulation for event extraction, the whole extraction process for a textual context is divided into several subtasks according to event types.", "Specifically, given an event ontology O with an event type set $\mathcal{E} = \{e_i \mid i \in [1, |\mathcal{E}|]\}$, the input in each subtask $S_{e_i, C}$ for event type $e_i$ consists of a context C and a designed prompt $P_{e_i}$.", "The output is the answered prompt $A_{e_i}$, containing the extracted event records.", "We take one single conditional generation subtask $S_{e_i, C}$ for event type $e_i$ as an example to explain the following content.", "As shown in Figure 2, the conditional generation subtask is modeled by a pretrained encoder-decoder language model (LM) such as BART (Lewis et al., 2020) or T5 (Raffel et al., 2020).", "In the generation process, the encoder-decoder LM models the conditional probability of selecting a new token $y_i$ given the previous tokens $y_{<i}$ and the encoder input $\mathcal{X}$.", "Therefore, the entire probability $p(\mathcal{Y} \mid \mathcal{X})$ of generating the output sequence $\mathcal{Y}$ given the input sequence $\mathcal{X}$ is calculated as $p(\mathcal{Y} \mid \mathcal{X}) = \prod_{i=1}^{|\mathcal{Y}|} p(y_i \mid y_{<i}, \mathcal{X})$, with $\mathcal{X} = [P_{e_i}; \text{[SEP]}; C]$ and $\mathcal{Y} = A_{e_i}$ (1), where $[\cdot;\cdot]$ denotes the sequence concatenation operation and [SEP] is the corresponding separate marker in the applied encoder-decoder LM (in this paper, we use [*] to represent special tokens of the pretrained LM and <*> to indicate user-defined special tokens).", "Similar to the state-of-the-art end-to-end generative method DEGREE-E2E (Hsu et al., 2021) for event extraction, the prompt $P_{e_i}$ for subtask $S_{e_i, C}$ in our base model GTEE-BASE contains the type instruction $I_{e_i}$ and the template $T_{e_i}$.", "Type Instruction. A short natural language sequence $I_{e_i}$ describing the event type $e_i$ in the subtask.", "We use the pattern Event type is [MASK]. to construct type instructions for the event type set $\mathcal{E}$.", "For example, the type instruction for event type Meet is Event type is Meet..",
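Equation (1) can be realized with an off-the-shelf encoder-decoder LM. The sketch below uses HuggingFace BART and approximates the [SEP] marker with BART's own separator token; it is a simplified illustration under those assumptions, not the authors' exact implementation.

```python
# Sketch of Equation (1) at inference time: the encoder input
# concatenates the prompt and the context, and the decoder generates
# the answered prompt with beam search.
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

def extract(prompt: str, context: str, max_len: int = 128) -> str:
    # X = [P ; [SEP] ; C], using BART's </s> as the separator here.
    x = tokenizer(prompt + " </s> " + context, return_tensors="pt")
    y = model.generate(**x, num_beams=6, max_length=max_len)
    return tokenizer.decode(y[0], skip_special_tokens=True)
```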
"For example, the type instruction for event type Meet is 'Event type is Meet.'", "Template.", "A type-specific pattern $T_{e_i}$, which contains several placeholders, reflecting how the arguments participate in the event.", "We use two types of placeholders, <trg> and <arg>s, for representing the trigger and arguments, respectively.", "The template consists of a trigger part and an argument part.", "The two parts are concatenated by a new separator marker <IN_SEP>.", "As illustrated in Figure 2, the trigger part is 'Trigger <trg>', which is identical for all event types.", "The argument part is specific to event type $e_i$.", "To avoid the manual effort of designing and searching for an optimal template, we follow Li et al. (2021) and reuse the pre-defined argument templates² in the ontology O.", "(Footnote 1: In this paper, we use [*] to represent the special tokens used in the pretrained LM and <*> to indicate user-defined special tokens.)", "(Footnote 2: The argument templates and all the used ontologies can be accessed at https://github.com/raspberryice/gen-arg except for ERE. Since the ERE event types are subsets of the RAMS AIDA ontology and the KAIROS ontology, following Li et al. (2021), we also reuse the argument templates from these ontologies.)", "The original pre-defined argument templates natively contain numeric labels for each <arg> placeholder (such as <arg1>) and the slot mappings M to the corresponding argument roles.", "We also follow Li et al. (2021) and remove these numeric labels.", "Ground Truth Construction.", "For each event type $e_i$ in the context C, we construct the ground truth sequence $G_{e_i, C}$ for conditional generation by filling the gold event records into the template $T_{e_i}$.", "If there is no event record of event type $e_i$, the generation ground truth will be 'Trigger <trg>'.", "Otherwise, the event records are filled into the template $T_{e_i}$ as the output in Figure 2.",
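Ground-truth construction is deterministic template filling. Below is a minimal sketch under an assumed record format (trigger text and span, plus role-to-arguments mappings ordered as in the template); the joining conventions for same-role arguments and multiple records follow the description that continues below.

```python
# Minimal sketch: fill gold records into the argument template.
def fill_template(template, records, in_sep="<IN_SEP>", out_sep="<OUT_SEP>"):
    if not records:                          # no record of this event type
        return "Trigger <trg>"
    filled = []
    for rec in sorted(records, key=lambda r: r["trigger_span"]):
        trg_part = f"Trigger {rec['trigger']}"
        arg_part = template.split(in_sep, 1)[1].strip()
        for role, args in rec["roles"].items():  # assumed ordered as in template
            args = sorted(args, key=lambda a: a["span"])
            text = " and ".join(a["text"] for a in args)  # same-role arguments
            arg_part = arg_part.replace("<arg>", text, 1)
        filled.append(f"{trg_part} {in_sep} {arg_part}")
    return f" {out_sep} ".join(filled)       # multiple records, sorted by trigger
```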
"If several arguments are categorized as the same role, these arguments are first sorted by their spans and then concatenated by 'and'.", "If there are multiple event records, they are sorted by the spans of their triggers, and the filled sequences are concatenated by a new separator marker <OUT_SEP>.", "Training.", "The trainable parameters of our base model GTEE-BASE are only those of the encoder-decoder LM.", "We use $\theta$ to denote all the trainable parameters.", "Therefore, the training target is to minimize the negative log-likelihood over all subtasks $S_{e_i, C_j}$ in the training set $\mathcal{D}$, where $C_j$ denotes the j-th context in $\mathcal{D}$: $\mathcal{L}(\mathcal{D}) = -\sum_{j=1}^{|\mathcal{D}|} \sum_{i=1}^{|\mathcal{E}|} \log p(G_{e_i, C_j} \mid \mathcal{X}_{e_i, C_j})$, with $\mathcal{X}_{e_i, C_j} = [P_{e_i}; \text{[SEP]}; C_j]$ (2).", "Inference.", "In the inference stage, our base model generates sequences by beam search with beam size 6.", "The maximum sequence length is set according to dataset statistics, a bit larger than the length of the longest ground truth.", "Parsing.", "Basically, we parse the event records by template matching and slot mapping according to the ontology O.", "Please note that not all generated output sequences are valid.", "For each generated sequence, we first try to parse a trigger.", "If this fails, we skip the sequence.", "Then, if we fail to match <IN_SEP> or the argument part of the template $T_{e_i}$, we skip the argument parsing and only keep the trigger.", "By investigating the parsed event records, we find that our model is biased toward generating event records even for irrelevant event types.", "This is fatal when the input context does not contain any event record, as it largely hurts the precision and F1 scores.", "In ACE 2005 and ERE, 80.28% and 71.02% of sentences, respectively, do not contain any event records.", "Therefore, we propose a simple yet effective solution to alleviate this problem by separately training an irrelevance classifier IC.", "With the context C as input, we finetune a BERT model (Devlin et al., 2019) by feeding the encoded [CLS] vector to an MLP as a binary classifier to decide whether the context contains any event records or is entirely irrelevant for the ontology O.", "It is worth noting that there may exist other ways to avoid the problem; for example, Cui et al. (2021) formulate the NER task as a ranking task to avoid irrelevant entity types in a similar conditional generation setting.", "We propose dynamic prefix-tuning with a task-specific prefix and a context-specific prefix to alleviate the two main challenges in generation-based event extraction.", "The framework of our model with dynamic prefix-tuning, GTEE-DYNPREF, is shown in Figure 3.",
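Before turning to the dynamic prefix, here is a minimal sketch of the parsing step just described: first try to recover a trigger, then match the argument part of the template. The regex construction, record format, and the convention that an unfilled placeholder is emitted verbatim are illustrative assumptions.

```python
import re

def parse_output(seq, template, roles, in_sep="<IN_SEP>"):
    m = re.match(r"Trigger (.+?)(?: " + re.escape(in_sep) + r" (.*))?$", seq)
    if m is None or m.group(1).strip() in ("", "<trg>"):
        return None                           # cannot parse a trigger: skip
    record = {"trigger": m.group(1).strip(), "args": {}}
    if m.group(2) is None:
        return record                         # no <IN_SEP>: keep trigger only
    arg_template = template.split(in_sep, 1)[1].strip()
    # turn "<arg> met with <arg> in <arg> place" into a regex with groups
    pattern = re.escape(arg_template).replace(re.escape("<arg>"), "(.+?)")
    m2 = re.fullmatch(pattern, m.group(2).strip())
    if m2 is None:
        return record                         # argument part fails: keep trigger
    for role, text in zip(roles, m2.groups()):        # slot mapping M
        if text.strip() != "<arg>":           # placeholder left unfilled
            record["args"][role] = [t.strip() for t in text.split(" and ")]
    return record
```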
"We will introduce the dynamic prefix-tuning step by step.", "Inspired by PREFIX-TUNING (Li and Liang, 2021), we use an event type-specific prefix STAPREF, which is a pair of transformer activation sequences $\{sp, sp'\}$, each containing L continuous D-dimensional vectors serving as history values for the encoder and decoder, respectively.", "From the view of the encoder and decoder input, in the subtask $S_{e_i, C}$, the prefix is virtually prepended to the sequences $\mathcal{X}$ and $\mathcal{Y}$ in an encoder-decoder LM.", "The main advantage of these transformer activation sequences is that they provide trainable context for both the encoder and the decoder while remaining computationally affordable.", "We first initialize a pair of task-specific prefixes $\{sp_{e_i}, sp'_{e_i}\}$ for each event type $e_i$ in the ontology O.", "In the conditional generation subtask $S_{e_i, C}$, we then prepend the corresponding pair of task-specific prefixes $\{sp_{e_i}, sp'_{e_i}\}$ as transformer activations for the encoder and decoder.", "Following Li and Liang (2021), we use a trainable embedding tensor $P \in \mathbb{R}^{|\mathcal{E}| \times L \times D}$ to model the type-specific prefix $sp$.", "[Figure 3: The framework of GTEE-DYNPREF. The type-specific prefixes of all event types (e.g., Meet, Attack, Phone-Write) are combined into the dynamic prefixes that steer the encoder-decoder LM, using the same template, context, and output example as Figure 2.]", "For the event type $e_i$ in the ontology O, the prefix vector $sp^t_{e_i}$ at index t is $sp^t_{e_i} = P[e_i, t, :]$ (4).", "The reason we call the task-specific prefix static is that for subtasks of the same event type, the type instructions are the same.", "In other words, such prefixes only preserve context concerning one single type of event, ignoring the associations between different event types.", "Aiming to capture the associations between different event types when constructing trainable prefixes, we present DYNPREF, which constructs a dynamic prefix with context-specific information when prompting pretrained language models.", "As shown in Figure 4, $dp_C$ has the same sequence length L as $sp$.", "For each position t, the prefix vector $dp^t_C$ is computed by dynamically integrating all the prefix vectors $sp^t_{e_i}$ of the event types $e_i$ in the ontology O according to the context-specific information c via multi-head attention (Vaswani et al., 2017).", "To calculate the context-specific information c, we apply a BERT model (Devlin et al., 2019) as the context encoder, feeding the context C as input and taking the [CLS] vector as c.", "The context-specific prefix $dp_C$ is dynamic because it takes both the type-specific information in the ontology O and the unique context information into account when steering LMs.", "Following Li and Liang (2021), we compute the decoder transformer activation vector $h_i$, which is a concatenation of all layers, at time step i in the encoder-decoder LM recurrently.", "The computation of the encoder transformer activation vectors is similar.", "Apart from the LM parameters $\theta$, the additional trainable parameters $\phi$ of DYNPREF include the embedding tensor P and the BERT encoder modeling the context information.", "Specifically, we follow the training suggestions of Li and Liang (2021) and reparametrize the embedding tensor P with an MLP and another embedding tensor $P' \in \mathbb{R}^{|\mathcal{E}| \times L \times D'}$ with a smaller dimension $D' < D$.", "In the end, P is computed as $P[e_i, t, :] = \text{MLP}(P'[e_i, t, :])$ (7).",
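A minimal sketch of how such a dynamic prefix could be assembled: the reparametrized embedding tensor of Eq. (7) provides the type-specific vectors sp, and a per-position multi-head attention keyed by the context vector c mixes them into dp_C. Dimensions, module layout, and the use of the current PyTorch API (rather than the authors' v1.7.1 implementation) are assumptions.

```python
import torch
import torch.nn as nn

class DynamicPrefix(nn.Module):
    def __init__(self, n_types, L=80, D=1024, D_small=512, n_heads=8):
        super().__init__()
        # P'[e_i, t, :] with D' < D, mapped up to D by an MLP (Eq. 7)
        self.P_small = nn.Embedding(n_types * L, D_small)
        self.mlp = nn.Sequential(nn.Linear(D_small, D), nn.Tanh(),
                                 nn.Linear(D, D))
        self.attn = nn.MultiheadAttention(D, n_heads, batch_first=True)
        self.n_types, self.L = n_types, L

    def forward(self, c):
        # c: [batch, D] context vector (the [CLS] output of a BERT encoder)
        P = self.mlp(self.P_small.weight)        # [n_types * L, D]
        P = P.view(self.n_types, self.L, -1)     # sp_{e_i}^t = P[e_i, t, :]
        dp = []
        for t in range(self.L):                  # dp_C^t, one position at a time
            keys = P[:, t, :].unsqueeze(0).expand(c.size(0), -1, -1)
            q = c.unsqueeze(1)                   # the context acts as the query
            out, _ = self.attn(q, keys, keys)    # mix sp^t over all event types
            dp.append(out)
        return torch.cat(dp, dim=1)              # [batch, L, D] dynamic prefix
```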
"[Table 1: Dataset statistics (#Sents / #Events / #Roles). ACE05-E: Train 17,172 / 4,202 / 4,859; Dev 923 / 450 / 605; Test 832 / 403 / 576. ACE05-E+: Train 19,216 / 4,419 / 6,607; Dev 901 / 468 / 759; Test 676 / 424 / 689. ERE-EN: Train 14,736 / 6,208 / 8,924; Dev 1,209 / 525 / 730; Test 1,163 / 551 / 822.]", "The training objective is still to minimize the negative log-likelihood in equation (2), now over both the LM parameters $\theta$ and the DYNPREF parameters $\phi$.", "However, in our preliminary experiments, we find that jointly learning the LM parameters and the DYNPREF parameters requires different scales of training hyperparameters, making it difficult to learn the ability to extract event arguments.", "Therefore, we train them separately in three steps: (1) First, we train $\theta$ using GTEE-BASE to learn the task information.", "(2) Then we fix the LM parameters and mask all other event types except for $e_i$ in each subtask $S_{e_i, C}$, optimizing only $\phi$ to learn the type-specific information for each event type.", "(3) Last, we remove the masking of event types, keep the LM parameters fixed, and optimize only $\phi$ using DYNPREF to capture the associations between related event types.", "We conducted experiments on two widely used event extraction benchmarks, ACE 2005 (LDC2006T06) and ERE (LDC2015E29, LDC2015E68, and LDC2015E78).", "The ACE 2005 dataset has 599 annotated English documents, 33 event types, and 22 argument roles.", "ERE contains 458 English documents, 38 event types, and 21 argument roles.", "We preprocess the datasets following previous work (Zhang et al., 2019; Wadden et al., 2019; Du and Cardie, 2020; Lin et al., 2020; Lu et al., 2021; Hsu et al., 2021) and obtain three datasets: ACE05-E, ACE05-E+, and ERE-EN.", "Statistics of the datasets are shown in Table 1.", "Compared to ACE05-E, both ACE05-E+ and ERE-EN contain pronoun roles and multi-token event triggers.", "We use the same evaluation criteria as previous work (Zhang et al., 2019; Wadden et al., 2019; Lin et al., 2020; Lu et al., 2021; Hsu et al., 2021) and report the precision P, recall R, and F1 score of trigger classification (Trg-C) and argument classification (Arg-C).", "Trg-C: a trigger is correctly classified if its offset and event type match the ground truth.", "Arg-C: an argument is correctly classified if its offset, event type, and role label all match the ground truth.", "Following Lu et al. (2021), we obtain the offset of each extracted trigger by string matching in the input context one by one.", "For the predicted argument, we take the matched string nearest to the predicted trigger as the predicted offset.",
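A minimal sketch of the offset-recovery heuristic just described, assuming plain substring search over the context; the helper names are illustrative.

```python
def trigger_offsets(context, triggers):
    offsets, start = [], 0
    for trg in triggers:                     # match one by one, left to right
        pos = context.find(trg, start)
        if pos == -1:                        # fall back to the first occurrence
            pos = context.find(trg)
        offsets.append((pos, pos + len(trg)) if pos != -1 else None)
        if pos != -1:
            start = pos + len(trg)
    return offsets

def nearest_argument_offset(context, arg, trigger_offset):
    # choose the occurrence of `arg` closest to the predicted trigger
    cands, i = [], context.find(arg)
    while i != -1:
        cands.append(i)
        i = context.find(arg, i + 1)
    if not cands or trigger_offset is None:
        return None
    best = min(cands, key=lambda p: abs(p - trigger_offset[0]))
    return (best, best + len(arg))
```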
"We compare GTEE-DYNPREF with two groups of event extraction work.", "The first group contains classification-based event extraction methods.", "DYGIE++ (Wadden et al., 2019): a BERT-based model which captures both within-sentence and cross-sentence context.", "GAIL (Zhang et al., 2019): a reinforcement learning model jointly extracting entities and events.", "ONEIE (Lin et al., 2020): an end-to-end IE system which employs global features and beam search, and which is the state of the art.", "BERT_QA (Du and Cardie, 2020): an MRC-based model using multiple turns of separated QA pairs to extract triggers and arguments.", "MQAEE (Li et al., 2020): a multi-turn question answering system.", "The second group contains generation-based event extraction methods.", "TANL (Paolini et al., 2021): a method using translation tasks to model event extraction in a trigger-argument pipeline.", "BART-GEN (Li et al., 2021): a template-based conditional generation method.", "TEXT2EVENT (Lu et al., 2021): a sequence-to-structure generation method.", "DEGREE-E2E (Hsu et al., 2021): an end-to-end conditional generation method with discrete prompts.", "We use the Huggingface implementation of BART-large as the encoder-decoder LM, and BERT-large as the binary irrelevance classifier IC in Section 3.5 and as the context encoder in Section 4.2.", "We optimize our models with AdamW (Loshchilov and Hutter, 2019).", "The hyperparameters we use are shown in Table 2.", "Each experiment is conducted on an NVIDIA A100 Tensor Core GPU (40GB).", "For simplicity, we randomly initialize³ the embedding tensor $P'$.", "(Footnote 3: The random initialization is implemented with the torch.nn.Embedding class in PyTorch v1.7.1.)", "As mentioned in Section 3.5, there is an overwhelming number of negative samples compared with positive samples.", "Therefore, we sample only 4% of the negative samples in the train and dev splits of the three datasets, keeping all samples in the test split.", "We evaluate the proposed model GTEE-DYNPREF under the supervised learning setting.", "Table 3 shows the comparison results on ACE05-E against all baseline methods, and Table 4 shows the results compared with the state of the art of each research line on ACE05-E+ and ERE-EN.", "New state-of-the-art.", "As we can see from Table 3, GTEE-DYNPREF achieves the highest F1 scores for Trg-C and Arg-C on ACE05-E compared with all the generation-based baselines.", "Besides, GTEE-DYNPREF is competitive with the state-of-the-art classification-based method ONEIE, outperforming the others.", "In Table 4, GTEE-DYNPREF achieves an Arg-C F1 score competitive with ONEIE on ACE05-E+, while obtaining 7.5% and 4.6% F1 gains for Trg-C and Arg-C, respectively, achieving a new state of the art on ERE-EN.", "Trainable prompts boost the performance.", "Compared with DEGREE, the event extraction method using fixed templates, and TEXT2EVENT, the generative event extraction method without prompts, GTEE-DYNPREF outperforms them on all the datasets, showing the effectiveness of the trainable dynamic prefix with prompts.", "GTEE-DYNPREF utilizes the event type templates and optimizes them with context-specific information in the dynamic prefix, which makes it easy to extend to a new type of event.", "Therefore, aiming to verify the ability of GTEE-DYNPREF to learn from new event types, we conduct experiments under
the transfer learning setting following Lu et al. (2021).", "Specifically, we divide the event mentions whose context contains no fewer than eight tokens in ACE05-E+ into two subsets, denoted src and tgt.", "src contains the top-10 most frequent event types, and tgt contains the remaining 23 event types.", "We then randomly split each subset into a train set and a test set with a ratio of 4:1.", "Specifically, for transfer learning, we first pre-train on src-train to learn the task information and then fine-tune on tgt-train for extracting the new types of events.", "Table 6 shows the evaluation results on tgt-test under the transfer learning setting and when solely training on tgt-train without transferring knowledge.", "We choose the state-of-the-art classification-based model ONEIE and the generation-based method TEXT2EVENT as the baselines.", "We can see that GTEE-DYNPREF achieves the highest Trg-C F1 and Arg-C F1 scores, which indicates that, with the help of the dynamic prefix, GTEE-DYNPREF can be adapted to new types of events more effectively.", "Additionally, compared with solely training on tgt, transferring the knowledge from src allows GTEE-DYNPREF to achieve higher F1 scores than ONEIE and TEXT2EVENT.", "[Table 4: Results on ACE05-E+ and ERE-EN for event extraction in the supervised learning setting (P / R / F1). ACE05-E+ Trg-C: ONEIE 72.1 / 73.6 / 72.8; TEXT2EVENT 71.2 / 72.5 / 71.8; DEGREE-E2E - / - / 72.7; GTEE-DYNPREF 67.3 / 83.0 / 74.3. ACE05-E+ Arg-C: ONEIE 55.4 / 54.3 / 54.8; TEXT2EVENT 54.0 / 54.8 / 54.4; DEGREE-E2E - / - / 55.0; GTEE-DYNPREF 49.8 / 60.7 / 54.7. ERE-EN Trg-C: ONEIE 58.4 / 59.9 / 59.1; TEXT2EVENT 59.2 / 59.6 / 59.4; DEGREE-E2E - / - / 57.1; GTEE-DYNPREF 61.9 / 72.8 / 66.9. ERE-EN Arg-C: ONEIE 51.8 / 49.2 / 50.5; TEXT2EVENT 49.4 / 47.2 / 48.3; DEGREE-E2E - / - / 49.6; GTEE-DYNPREF 51.9 / 58.8 / 55.1.]", "The reason may be that ONEIE relies on multi-task annotated information, and TEXT2EVENT requires learning the structural information of new types of events.", "In contrast, GTEE-DYNPREF only requires an easy-to-acquire template, which can be further optimized during training.", "In this section, we study the effectiveness of each proposed module by adding them one by one to our base model GTEE-BASE, finally obtaining our full model GTEE-DYNPREF.", "The results on ACE05-E, ACE05-E+, and ERE-EN are presented in Table 5.", "Continuous Prompt vs. Discrete Prompt.", "We first compare GTEE-STAPREF with GTEE-BASE.", "Based on GTEE-BASE with discrete prompts, GTEE-STAPREF further combines type-specific prefixes to form continuous prompts.", "It can be observed that there is a 0.8%, 0.7%, and 0.9% gain in Trg-C F1 score on ACE05-E, ACE05-E+, and ERE-EN, respectively.", "Additionally, there is a 0.6%, 1.0%, and 0.8% improvement in Arg-C F1 score, demonstrating the effectiveness and flexibility of STAPREF in modeling the type-specific information.", "Dynamic Prefix vs. Static Prefix.", "Next, we compare GTEE-DYNPREF with GTEE-STAPREF to study the advantages of constructing a dynamic prefix.", "On the basis of GTEE-STAPREF, integrating context-specific information leads to a consistent gain in Trg-C F1 score on all the datasets of 0.8%, 0.6%, and 0.5%, respectively.", "There is also a 1.5%, 0.5%, and 0.8% increase in the Arg-C F1 scores, respectively.", "This indicates that integrating context-specific information into type-specific information, thereby turning the static prefix into a dynamic one, is beneficial for generative template-based event extraction.", "The goal of our irrelevance classifier IC is to recognize contexts that do not contain any event records under a given ontology O.", "According to Section 3.5, we train an IC and use it for each dataset separately.",
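A minimal sketch of such an irrelevance classifier: a BERT encoder whose [CLS] vector feeds an MLP binary head. The checkpoint name and the MLP shape are assumptions.

```python
import torch.nn as nn
from transformers import BertModel

class IrrelevanceClassifier(nn.Module):
    def __init__(self, name="bert-large-uncased", hidden=1024):
        super().__init__()
        self.bert = BertModel.from_pretrained(name)
        self.mlp = nn.Sequential(nn.Linear(hidden, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 2))  # relevant / irrelevant

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]   # the encoded [CLS] vector
        return self.mlp(cls)                # logits for the binary decision
```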
"Please note that on one specific dataset, we will use the same IC for all the experiments corresponding to that dataset.", "The accuracy of IC is 95.4%, 93.5% and 94.2% for ACE05E, ACE05E + and ERE-EN, respectively.", "To further study the influence of IC, we compare the performances of using no IC, trained IC, and gold IC.", "The compared F1 scores are listed in Table 7.", "First, we find that with the help of our trained ICs on each dataset, the Trg-C and Arg-C F1 scores have been improved a lot by more than ten percentage points, indicating the necessity of IC.", "Second, 5223 20 40 60 80 100 120 Prefix length L 73.0 73.4 73.8 74.2 74.6 75.0 T r g -CF 1 53.4 53.6 54.0 54.3 54.6 54.9 A r g -CF 1 Trg-C F1 Arg-C F1", "by replacing the trained IC with the oracle gold IC results, we can still observe possible increasements for F1 scores, suggesting the existence of likely chances for further optimizing IC performances.", "We leave the optimization for IC as future work.", "We study the intrinsic characteristics of GTEE-DYNPREF by showing the influences of model hyperparameters on ACE05-E + .", "Prefix length L .", "We first study the impact of prefix length L by grid search in { L | L = 10 k, k N k 12 } .", "Figure", "5(a) shows the Trg-C and Arg-C F1 scores.", "We can observe that both Trg-C and Arg-C F1 scores increase as the prefix length L increases to 80, afterward, a slight fluctuation.", "We think the longer L introduces more trainable parameters and a more vital ability to model the context-specific type information.", "Therefore, we choose 80 as the prefix length in GTEE-DYNPREF .", "Embedding dimension D (cid:48) .", "Similarly, we study the impact of the dimension D (cid:48) of the embedding tensor P (cid:48) by increasing from 64 to 1024.", "The results of Trg-C and Arg-C F1 scores are illustrated in Figure", "5(b).", "We find that although the bigger embedding dimension D (cid:48) theoretically provides expressive type-specific information and improves the F1 scores when D (cid:48) < = 512 , the continual improvement is interrupted when the embedding dimension is around 512.", "In this paper, we studied event extraction in the template-based conditional generation manner.", "We proposed the dynamic prefix tuning model GTEE-DYNPREF for event extraction.", "On the one hand the method constructs tunable prefixes to model type-specific information and on the other hand GTEE-DYNPREF captures the associations between event types and calculates a context-specific prefix when steering pretrained language models.", "Experimental results show that our model achieves competitive results with the state-of-the-art on ACE 2005, which is also proven to be portable to new types of events effectively.", "Event extraction is a standard task in NLP.", "We do not see any significant ethical concerns.", "Our work is easy to adapt to new event types by offering some examples and pre-defined templates.", "Therefore, the expected usages of our work is to identify interesting event records from user textual input such as a piece of sentence or document.", "We thank the anonymous reviewers for their valuable comments and suggestions.", "This work is supported by National Natural Science Foundation of China (No. U19B2020 and No. 62106010)." ]
[ "method", "abstain", "objective", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "other", "abstain", "abstain", "abstain", "method", "method", "result", "objective", "method", "other", "other", "other", "method", "other", "other", "other", "method", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "other" ]
[ "We show that a general algorithm for efficient computation of outside values under the minimum of superior functions framework proposed by Knuth (1977) would yield a subexponential time algorithm for SAT, violating the Strong Exponential Time Hypothesis (SETH).", "Weighted deduction systems are used in a number of NLP applications, including parsing for context-free grammars (Shieber et al., 1995; Sikkel, 1997; Nederhof, 2003; Eisner and Satta, 1999) and machine translation (Melamed et al., 2004; Lopez, 2009).", "In these applications, the inside-outside algorithm enables efficient calculation of the total weight of all derivations passing through a specific item in the weighted deduction system by computing tables of inside and outside values.", "Goodman (1999) develops a generalized inside-outside algorithm that can be used with any commutative semiring.", "Applying the sum-product semiring results in the standard inside-outside algorithm used as a subroutine in Expectation Maximization (Dempster et al., 1977).", "Applying the max-product semiring results in an efficient algorithm for finding, for example, the best tree that incorporates a specified constituent covering a specified span of the input string.", "The minimum of superior functions framework of Knuth (1977) is an alternative to the semiring framework for analyzing weighted deduction systems.", "Knuth's framework is more general than semirings in that it allows more general functions to be used for combining the weights of subderivations.", "Knuth's framework has the advantage that it allows for best-first search with a generalization of Dijkstra's algorithm, as well as for A* search (Nederhof, 2003).", "This raises the question of whether Knuth's framework also allows for efficient outside computation, enabling a generalized inside-outside algorithm.", "In this paper, we answer this question in the negative.", "We prove that a general algorithm for efficient outside computation in this framework would imply the existence of a subexponential time algorithm for satisfiability of boolean formulas in conjunctive normal form (SAT), violating the Strong Exponential Time Hypothesis (SETH) (Impagliazzo and Paturi, 1999), which postulates that no such algorithms exist.", "This result may be counterintuitive, because one might expect efficient outside computation to be possible whenever efficient inside computation is possible.", "We believe this result to be the first formal hardness proof for outside computation in weighted deduction systems.", "A weighted deduction system (Nederhof, 2003) has rules of the form $\frac{X_1 \;\cdots\; X_n}{Y}$, where $X_1, \ldots, X_n$ are items forming the antecedents of the rule and item Y is the consequent of the rule.", "A derivation of item X is a tree of rules where the antecedents of each rule are the consequents of its children, and X is the consequent of the root of the tree.", "The leaves of this tree are rules with zero antecedents, called axioms.", "Each rule R is also associated with a rule weight function $F_R$ which takes as input the weights of the antecedents and calculates a new weight for the consequent.", "The weight of a derivation is the weight of the rule serving as the root of the tree, calculated by recursively evaluating the rule weight functions $F_R$; that is, for a derivation D formed by applying rule R to antecedent derivations $D_1, \ldots, D_n$: $\text{weight}(D) = F_R(\text{weight}(D_1), \ldots, \text{weight}(D_n))$.", "To show both the rule and the weights of the antecedents and consequent, we use a notation where each item's weight is written to its left.", "This is exemplified in Figure 1, which shows an example rule for CFG parsing with items of the form $[A, i, j]$, representing a subtree rooted by nonterminal A and spanning input tokens $i+1$ through $j$.",
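A minimal sketch of this recursive weight computation, with a derivation represented as a (rule weight function, children) pair; the representation is an illustrative assumption.

```python
def derivation_weight(rule_fn, children):
    # children: list of (rule_fn, children) pairs; axioms have no children.
    # weight(D) = F_R(weight(D_1), ..., weight(D_n)), evaluated recursively.
    return rule_fn(*(derivation_weight(f, c) for f, c in children))

# Example: an axiom of weight 0.0 combined by a superior function.
axiom = (lambda: 0.0, [])
add_two = lambda w: w + 2.0          # superior: result >= its argument
print(derivation_weight(add_two, [axiom]))   # 2.0
```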
"One item in the weighted deduction system is designated as the goal item, and the fundamental problem is to calculate the total weight of all derivations of this item, where the total weight is calculated using a generalized sum operation, written $\oplus$.", "An extension of this is to calculate the total weight of all derivations of the goal item G that also contain item X, written $\beta(X)$ (Goodman, 1999): $\beta(X) = \bigoplus_{D : X, G \in D} \text{weight}(D)$, where $X, G \in D$ means that item X and goal item G are each the consequent of some rule in D (for G, this is specifically the root rule).", "These values are a core component of the inside-outside Expectation Maximization (EM) algorithm for unsupervised probabilistic context-free grammar (PCFG) induction (Baker, 1979), where $\beta(X)$ is calculated by combining a corresponding inside value (total weight of subtrees rooted at X) and outside value (cost of completing a derivation containing X).", "For the purposes of the EM algorithm, the $\oplus$ operation is standard addition, and $F_R$ computes the product of its arguments.", "If we define the $\oplus$ operation to be max, $\beta(X)$ corresponds to the value of the best parse tree subject to the constraint that a particular constituent X be included.", "This value can be found by combining an inside and outside value, using the same procedure as is used for EM, but substituting max for addition.", "Gildea (2020) discussed classes of weighted deduction systems where computation of outside values (and by extension, $\beta$ values) can be done efficiently.", "Formally, they were interested in systems where $\beta(X)$ can (or cannot) be calculated for every item X in time $O(\omega |E|)$, where $|E|$ is the number of rules in the system and $\omega = \max_X |\beta(X)|$ is the largest number of bits required to represent the total weight of an item.", "One important class of weighted deduction systems is the minimum of superior functions (Knuth, 1977).", "In this framework, each rule weight function $F_R$ is a superior function, meaning that it is monotonically increasing in each argument and the result is always greater than or equal to each of its arguments.", "The generalized sum $\oplus$ in this framework, used for calculating the total weight, is the minimum operation: $\beta(X) = \bigoplus_{D : X, G \in D} \text{weight}(D) = \min_{D : X, G \in D} \text{weight}(D)$.", "Best-first search is possible in this framework using a generalization of Dijkstra's algorithm (Nederhof, 2003).", "It is interesting to ask whether efficient outside computation is always possible within this framework, and even more generally, whether the conditions necessary for best-first search are sufficient for efficient outside computation.", "The A* parsing system of Klein and Manning (2003) is an instance of the minimum of superior functions framework¹ that uses best-first search.", "Outside values are of particular interest for A* parsing because they can be used as admissible search heuristics (Pauls and Klein, 2009a), and to efficiently find the k best parses (Pauls and Klein, 2009b).", "When the function $F_R$ simply takes a product of its arguments, as in Pauls and Klein (2009b), efficient outside computation is possible.", "In this paper, we address the question of whether this is guaranteed by the minimum of superior functions framework or merely an artifact of this particular system.",
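To make the best-first claim concrete, here is a minimal sketch of the generalization of Dijkstra's algorithm for settling the minimum weight of every item under superior rule weight functions; the rule encoding is assumed, and the repeated rule scan is kept naive for clarity.

```python
import heapq

def knuth_values(rules):
    # rules: list of (antecedents, consequent, weight_fn); axioms have
    # an empty antecedent tuple. Items are hashable (e.g., strings).
    best, agenda = {}, []
    for ants, cons, fn in rules:
        if not ants:
            heapq.heappush(agenda, (fn(), cons))      # axioms seed the agenda
    while agenda:
        w, x = heapq.heappop(agenda)
        if x in best:
            continue                  # already settled with a smaller weight
        best[x] = w
        # Superiority of every fn means settled weights are final, so any
        # rule whose antecedents are all settled can fire safely.
        for ants, cons, fn in rules:
            if cons not in best and ants and all(a in best for a in ants):
                heapq.heappush(agenda, (fn(*(best[a] for a in ants)), cons))
    return best
```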
"Gildea (2020) pointed out that there is no known efficient algorithm for outside computation in the minimum of superior functions framework.", "However, they did not present a formal hardness result.", "In this work, we prove that general efficient outside computation in this framework would yield a subquadratic time algorithm for the Orthogonal Vectors Problem, violating the Orthogonal Vectors Conjecture (Williams, 2005; Vassilevska Williams, 2015), which states that no such algorithms exist because their existence would violate the Strong Exponential Time Hypothesis (SETH) (Impagliazzo and Paturi, 1999) and yield a subexponential time algorithm for SAT.", "(Footnote 1: To see this, simply negate the log probabilities and replace max with min in their system. The superior function $F_R$ is addition for all R.)", "[Figure 2: Graphical representation of the different paths through the weighted deduction system.]", "The Strong Exponential Time Hypothesis, a somewhat stronger assumption than $P \neq NP$, is widely conjectured to be true, and has been used as an assumption in a number of recent hardness results, including the result that string edit distance cannot be computed in strongly subquadratic time unless SETH is false (Backurs and Indyk, 2015).", "3 Reduction.", "We begin with the Orthogonal Vectors Problem: given sets $A, B \subseteq \{0,1\}^d$ where $|A| = |B| = n$, determine whether there exist vectors $a \in A$ and $b \in B$ such that their dot product $a \cdot b = \sum_{k=1}^{d} a_k b_k$ is 0.", "We now reduce this problem to a weighted deduction system in the minimum of superior functions framework.", "First, define n axiom items $X_i$, $i \in [1, n]$, and construct n corresponding rules $R^A_i$ leading from $X_i$ to item Y: $\frac{w : X_i}{F^A_i(w) : Y}$, where $F^A_i(w) = w + (d+1)i$.", "The weight for each axiom item $X_i$ is defined to be 0.", "The intuition here is that the index i refers to a specific vector $A_i \in A$, and the resulting weight will allow later rule weight functions to identify the starting point for the derivation and thus which vector in A to compare to a vector in B.", "This is possible because all derivations from Y to the goal item will add no more than d to $F^A_i(\text{weight}(X_i)) = (d+1)i$, making the value of i uniquely recoverable.", "Next, we construct n rules $R^B_{j,1}$, $j \in [1, n]$, of the form $\frac{w : Y}{F^B_{j,1}(w) : Z_{j,1}}$, where each $F^B_{j,1}$ is the rule weight function corresponding to the first dimension of vector $B_j \in B$.", "We define the rule weight functions used here and those in the upcoming rules in the following way: $F^B_{j,k}(w) = w + A_{\text{index}(w),k} \cdot B_{j,k}$, where $\text{index}(w) = \lfloor \frac{w}{d+1} \rfloor$.
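A minimal sketch of the rule weight functions just defined and of the quantity beta(X_i) they induce, computed here by brute force over the n chains rather than through the deduction-system machinery. Vectors are 0-indexed Python lists while item indices i run from 1 to n.

```python
def F_A(i, d):                      # rule X_i -> Y: adds (d+1)*i
    return lambda w: w + (d + 1) * i

def index(w, d):                    # recover the chosen i from the weight
    return w // (d + 1)

def F_B(A, B, j, k, d):             # chain j, dimension k (Y -> Z_{j,1} etc.)
    return lambda w: w + A[index(w, d) - 1][k] * B[j][k]

def beta(A, B, d, i):               # min over the n chains through X_i
    best = None
    for j in range(len(B)):
        w = F_A(i, d)(0)            # weight(X_i) = 0
        for k in range(d):
            w = F_B(A, B, j, k, d)(w)
        best = w if best is None else min(best, w)
    return best                     # = min_j (A_i . B_j + (d+1)*i)

# An orthogonal pair exists iff some beta(X_i) is divisible by d + 1.
A = [[1, 0], [1, 1]]
B = [[0, 1], [1, 1]]
d = 2
print(any(beta(A, B, d, i) % (d + 1) == 0 for i in range(1, len(A) + 1)))
# -> True, witnessed by A_1 = [1, 0] and B_1 = [0, 1]
```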
"Intuitively, these functions look up the choice of which vector $A_i$ was used to begin the computation using $\text{index}(w)$, and multiply the k-th dimension of that vector with the k-th dimension of $B_j$.", "Note that while item Y could be removed by defining a rule deriving each $Z_{j,1}$ from each $X_i$ directly with an appropriately-defined rule weight function, this would require $n^2$ rules, whereas introducing the intermediate item Y provides the same connectivity with only $2n$ rules while using the weight to keep track of which $X_i$ was chosen.", "This is important because our proof requires that the deduction system used for the reduction be constructed in subquadratic time.", "Each chain is then extended with rules of the form $\frac{w : Z_{j,k-1}}{F^B_{j,k}(w) : Z_{j,k}}$, where $F^B_{j,k}$ was defined above.", "The intuition is that each family of $Z_{j,k}$ items for a particular j forms a chain that eventually covers all d dimensions of $B_j$.", "So far we have not covered the final dimension of the B vectors, so we do so now by constructing n rules $R^B_{j,d}$ of the form $\frac{w : Z_{j,d-1}}{F^B_{j,d}(w) : G}$, where G is the goal item of the weighted deduction system.", "We now discuss properties of the resulting system, a graphical representation of which is presented in Figure 2.", "Every computation begins at one of the axiom items $X_i$ corresponding to $A_i$ and always passes through Y.", "The computation then proceeds down one of n chains, each corresponding to a vector in B.", "Because the rule weight function $F^B_{j,k}(w)$ applied at each edge adds at most 1 to w and each chain from Y to G consists of exactly d edges, the weight of any item in the chosen chain will never be more than d greater than the weight of the edge from the chosen $X_i$ to Y.", "Because each of those edges' weights is a distinct multiple of $d+1$, the choice of the starting point (and corresponding vector in A) can be recovered by each $F^B_{j,k}$ in the chain using the $\text{index}(w)$ function.", "This allows each chain to effectively calculate the dot product between its respective vector in B and the chosen vector in A.", "In the superior functions framework of Knuth (1977), the total weight of an item C (referred to as $\beta(C)$) is the minimum weight over complete derivations D containing C and the goal item G: $\beta(C) = \min_{D : C, G \in D} \text{weight}(D)$, where $\text{weight}(D)$ is the result of the rule weight function for the (unique) rule producing G in derivation D.", "For the purposes of the reduction, we are interested in the n total weights $\beta(X_i)$.", "Note that every derivation containing $X_i$ defines a path from $X_i$ to G, and there are exactly n such paths for a given $X_i$: one for each chain from Y to G, each corresponding to a vector $B_j$.", "Recalling that $\text{weight}(X_i) = 0$, we can rewrite $\beta(X_i)$ as follows: $\beta(X_i) = \min_j \big[ \big( \bigcirc_{k=0}^{d-1} F^B_{j,d-k} \big)\big( F^A_i(\text{weight}(X_i)) \big) \big] = \min_j \big[ \big( \bigcirc_{k=0}^{d-1} F^B_{j,d-k} \big)\big( F^A_i(0) \big) \big] = \min_j \big[ \big( \bigcirc_{k=0}^{d-1} F^B_{j,d-k} \big)\big( (d+1)i \big) \big] = \min_j \big[ \big( \sum_{k=0}^{d-1} A_{i,d-k} \cdot B_{j,d-k} \big) + (d+1)i \big] = \min_j \big[ (A_i \cdot B_j) + (d+1)i \big]$, where $\bigcirc$ represents repeated function composition.", "We can use the values of $\beta(X_i)$ to solve the Orthogonal Vectors Problem.", "Because $A_i \cdot B_j$ is at most d, $\beta(X_i)$ is evenly divisible by $d+1$ if and only if there is a vector in B that is orthogonal to $A_i$: $\beta(X_i) \equiv 0 \pmod{d+1} \iff \exists j \, (A_i \cdot B_j = 0)$.", "The complete algorithm for solving the problem using this technique is as follows:",
"1. Construct the deduction system as described above ($O(nd)$ time).", "2. Calculate $\beta(X_i)$ for all $i \in [1, n]$.", "3. Check whether $\beta(X_i) \equiv 0 \pmod{d+1}$ for any i.", "If yes, then there exist vectors $A_i \in A$ and $B_j \in B$ such that $A_i \cdot B_j = 0$.", "Otherwise, no such vectors exist ($O(n)$ time).", "If all values $\beta(X_i)$ could be calculated in linear time with respect to the number of edges $|E| \in O(nd)$, then the Orthogonal Vectors Problem could be solved in time $O(nd)$, violating the Orthogonal Vectors Conjecture, which states that there is no strongly subquadratic time algorithm for this problem, and by extension violating the Strong Exponential Time Hypothesis (SETH) (Impagliazzo and Paturi, 1999).", "Because the proposed deduction system is an instance of the superior functions framework of Knuth (1977), we conclude that efficient outside computation is not possible in general under that framework unless SETH is false.", "This work provides a formal proof that efficient outside computation is not possible in general for the minimum of superior functions framework (Knuth, 1977) (unless the Strong Exponential Time Hypothesis is false).", "This indicates that the conditions necessary for best-first search are not sufficient for efficient outside computation.", "It remains an open problem to characterize the class of functions for which best-first search and efficient outside computation are both always possible.", "Acknowledgments: This work was funded by NSF awards IIS-1813823 and CCF-1934985." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "method", "other", "other", "other", "objective", "abstain", "other", "other", "other", "other", "other", "other", "objective", "other", "method", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "method", "abstain", "abstain", "abstain" ]
[ "Guangnan Ye, IBM Research, New York, USA", "Abstract", "With the recent advances in natural language processing, the scale of state-of-the-art models and datasets is usually extensive, which challenges the application of sample-based explanation methods in many aspects, such as explanation interpretability, efficiency, and faithfulness.", "In this work, for the first time, we improve the interpretability of explanations by allowing arbitrary text sequences as the explanation unit.", "On top of this, we implement a Hessian-free method with a model faithfulness guarantee.", "Finally, to compare our method with the others, we propose a semantics-based evaluation metric that can better align with humans' judgment of explanations than the widely adopted diagnostic or retraining measures.", "The empirical results on multiple real datasets demonstrate the proposed method's superior performance to popular explanation techniques such as Influence Functions or TracIn on semantic evaluation.", "As complex NLP models such as the Transformers family (Vaswani et al., 2017; Devlin et al., 2019) become an indispensable tool in many applications, there is growing interest in explaining the working mechanism of these black-box models.", "Among the vast array of existing techniques for explaining machine learning models, Influence Functions (Hampel, 1974; Koh and Liang, 2017), which use training instances as explanations of a model's behavior, have gained popularity in NLP very recently.", "(Equal contribution. Wei Zhang did this work while a research scientist at IBM T.J.
Watson Research Center, Yorktown Heights, NY, USA; Ziming Huang was a research scientist at the IBM Research Lab in Beijing, China.)", "Different from other methods such as input erasure (Li et al., 2016), saliency maps, or attention matrices (Serrano and Smith, 2019; Jain and Wallace, 2019; Wiegreffe and Pinter, 2019) that only look at how a specific input or input sequence impacts the model decision, explaining with training instances can cast light on the knowledge a model has encoded about a problem, by answering questions like 'what knowledge did the model capture from which training instances so that it makes decisions in such a manner during test?'.", "Very recently, the method has been applied to explain BERT-based (Devlin et al., 2019) text classification (Han et al., 2020; Meng et al., 2020b) and natural language inference (Han et al., 2020) models, as well as to aid text generation for data augmentation (Yang et al., 2020a) using GPT-2 (Radford et al., 2019).", "Although useful, Influence Functions may not be entirely bullet-proof for NLP applications.", "First, following the original formulation (Koh and Liang, 2017), the majority of existing works use entire training instances as explanations.", "However, for the long natural language texts that are common in many high-impact application domains (e.g., healthcare, finance, or security), it may be difficult, if not impossible, to comprehend an entire instance as an explanation.", "For example, a model's decision may depend only on a specific part of a long training instance.", "Second, for modern NLP models and large-scale datasets, the application of Influence Functions can lead to prohibitive computing costs due to inverse Hessian matrix approximation.", "Although Hessian-free influence scores such as TracIn (Pruthi et al., 2020b) were introduced very recently, they may not be faithful to the model in question and can result in spurious explanations due to the involvement of sub-optimal checkpoints.", "Last, the evaluation of explanation methods, in particular the training-instance-based ones, remains an open question.", "Previous evaluation is either under an over-simplified assumption on the agreement of labels between training and test instances (Hanawa et al., 2020; Han et al., 2020) or is based on indirect or manual inspection (Hooker et al., 2019; Meng et al., 2020b; Han et al., 2020; Pruthi et al., 2020a).", "A method to automatically measure the semantic relations at scale that highly correlates with human judgment is still missing from the evaluation toolset.", "To address the above problems, we propose a framework to explain model behavior that includes both a set of new methods and a new metric that can measure the semantic relations between the test instance and its explanations.", "The new methods allow for arbitrary text spans as the explanation unit and are Hessian-free while being faithful to the final model.", "Our contributions are:", "1. We propose a new explanation framework that can use arbitrary explanation units as explanations and be Hessian-free and faithful at the same time;",
"2. A new metric to measure the semantic relatedness between a test instance and its explanation for BERT-based deep models.", "Suppose a model parameterized by $\theta$ is trained on a classification dataset $D = \{D_{train}, D_{test}\}$ by empirical risk minimization over $D_{train}$.", "Let $z = (x, y) \in D_{train}$ and $z' = (x', y') \in D_{test}$ denote a training and a test instance respectively, where x is a token sequence and y is a scalar label.", "The goal of training-instance-based explanation is to provide, for a given test instance $z'$, an ordered list of training instances as explanation.", "Two notable methods to calculate the influence score are IF and TracIn.", "IF (Koh and Liang, 2017) assumes the influence of z can be measured by perturbing the loss function L with a fraction of the loss on z, obtaining $I_{pert,loss}(z, z'; \hat\theta) = -\nabla_\theta L(z'; \hat\theta)^\top H_{\hat\theta}^{-1} \nabla_\theta L(z; \hat\theta)$ (1), where $H_{\hat\theta}$ is the Hessian matrix calculated on the entire training dataset, a potential computation bottleneck for a large dataset D and a complex model with high-dimensional $\theta$.", "TracIn (Pruthi et al., 2020b) instead assumes the influence of a training instance z is the sum of its contributions to the overall loss all through the entire training history, which conveniently leads to $\text{TracIn}(z, z') = \sum_i \eta_i \nabla_\theta L(\theta_i, z) \cdot \nabla_\theta L(\theta_i, z')$ (2), where $\theta_i$ iterates through the checkpoints saved at different training steps and $\eta_i$ is a weight for each checkpoint.", "TracIn does not involve the Hessian matrix and is more efficient to compute.", "We can summarize the key differences between them according to the following desiderata of an explanation method.", "Efficiency: for each $z'$, TracIn requires $O(CG)$, where C is the number of models and G is the time spent on gradient calculation, whereas IF needs $O(N^2 G)$, where N is the number of training instances, and $N \gg C$ in general.¹", "(Footnote 1: Some approximation such as the Hessian-inverse-vector-product (Baydin et al., 2016) may improve efficiency to $O(NSG)$, where S is the number of approximation steps and $S < N$.)", "Faithfulness: IF is faithful to $\hat\theta$ since all its calculation is based on a single final model, yet TracIn may be less faithful to $\hat\theta$ since it obtains gradients from a set of checkpoints.²", "(Footnote 2: We may say TracIn is faithful to the data rather than to the model. And in the case where checkpoint averaging can be used as model prediction, the number of checkpoints may be too few to justify Eq. 2.)", "Interpretability: both methods use the entire training instance as an explanation.", "Explanations with a finer-grained unit, e.g., phrases, may be easier to interpret in many applications where the texts are lengthy.", "To improve on the above desiderata, a new method should be able to: 1) use any appropriate granularity of span(s) as the explanation unit; and 2) avoid the need for the Hessian while maintaining faithfulness.", "We discuss the solutions for both in Sections 3.1 and 3.2, and combine them into one formulation in Section 3.3, followed by critical implementation details.", "To achieve 1), we start with influence functions (Koh and Liang, 2017) and consider an arbitrary span of a training sequence x to be evaluated for its qualification as an explanation.³", "(Footnote 3: The method can be trivially generalized to multiple spans.)",
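For reference, a minimal sketch of the checkpoint-based TracIn score of Eq. (2); loss_fn is an assumed helper returning the scalar loss of a single instance, and gradients are flattened before the dot product.

```python
import torch

def grad_vector(model, loss):
    g = torch.autograd.grad(loss, [p for p in model.parameters()
                                   if p.requires_grad])
    return torch.cat([x.reshape(-1) for x in g])

def tracin(checkpoints, etas, loss_fn, z, z_test):
    score = 0.0
    for model, eta in zip(checkpoints, etas):   # one term per checkpoint
        g_train = grad_vector(model, loss_fn(model, z))
        g_test = grad_vector(model, loss_fn(model, z_test))
        score += eta * torch.dot(g_train, g_test).item()
    return score
```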
"Our core idea is to see how the model loss on test instance $z'$ changes with the training span's importance.", "The more important a training span is to $z'$, the greater this influence score should be.", "We derive it in the following three steps.", "First, we define the training span from token i to token j to be $x_{ij}$; the sequence with $x_{ij}$ masked is $x_{\setminus ij} = [x_0, \ldots, x_{i-1}, \text{[MASK]}, \ldots, \text{[MASK]}, x_{j+1}, \ldots]$, and its corresponding training instance is $z_{\setminus ij}$.", "We use the logit difference (Li et al., 2020) as the importance score, based on the empirical-risk-estimated parameter $\hat\theta$ obtained from $D_{train}$: $\text{imp}(x_{ij} \mid z, \hat\theta) = \text{logit}_{\hat y}(x; \hat\theta) - \text{logit}_{\hat y}(x_{\setminus ij}; \hat\theta)$, where every term on the right-hand side (RHS) is the logit output evaluated at the model prediction $\hat y$ from model $\hat\theta$ right before applying the SoftMax function.", "This equation tells us how important a training span is.", "It is equivalent to the loss difference $\text{imp}(x_{ij} \mid z; \hat\theta) = L(z_{\setminus ij}; \hat\theta) - L(z; \hat\theta)$ (3) when the cross-entropy loss $L(z; \theta) = -\sum_{y_i} I(y = y_i)\, \text{logit}_{y_i}(x; \theta)$ is applied.", "Then, we measure $x_{ij}$'s influence on model $\hat\theta$ by adding a fraction of $\text{imp}(x_{ij} \mid z; \theta)$ scaled by a small value $\epsilon$ to the overall loss, obtaining $\hat\theta_{\epsilon, x_{ij} \mid z} := \arg\min_\theta \mathbb{E}_{z_i \in D_{train}}[L(z_i, \theta)] + \epsilon L(z_{\setminus ij}; \theta) - \epsilon L(z; \theta)$.", "Applying the classical result of (Cook and Weisberg, 1982; Koh and Liang, 2017), the influence of up-weighting the importance of $x_{ij}$ on $\hat\theta$ is $\frac{d\hat\theta_{\epsilon, x_{ij} \mid z}}{d\epsilon}\big|_{\epsilon=0} = H_{\hat\theta}^{-1} \big( \nabla L(z; \hat\theta) - \nabla L(z_{\setminus ij}; \hat\theta) \big)$.", "Finally, applying the above equation and the chain rule, we obtain the influence of $x_{ij}$ on $z'$ as: $\text{IF}^+(x_{ij} \mid z, z'; \hat\theta) := \nabla_\epsilon L(z'; \hat\theta_{\epsilon, x_{ij} \mid z}) \big|_{\epsilon=0} = \nabla L(z'; \hat\theta)^\top H_{\hat\theta}^{-1} \big( \nabla L(z; \hat\theta) - \nabla L(z_{\setminus ij}; \hat\theta) \big)$.", "IF+ measures the influence of a training span on an entire test sequence.", "Similarly, we also measure the influence of a training span on a test span $x'_{kl}$ by applying Eq. 3, obtaining $\text{IF}^{++}(x_{ij} \mid z, x'_{kl} \mid z'; \hat\theta) := \nabla_\epsilon \big( L(z'_{\setminus kl}; \hat\theta_{\epsilon, x_{ij} \mid z}) - L(z'; \hat\theta_{\epsilon, x_{ij} \mid z}) \big) \big|_{\epsilon=0} = \big( \nabla L(z'_{\setminus kl}; \hat\theta) - \nabla L(z'; \hat\theta) \big)^\top H_{\hat\theta}^{-1} \big( \nabla L(z; \hat\theta) - \nabla L(z_{\setminus ij}; \hat\theta) \big)$.", "On the Choice of Spans.", "Theoretically, IF+ and IF++ can be applied to any text classification problem and dataset with an appropriate choice of the span.", "If no information about valid spans is available, shallow parsing tools or sentence-splitting tools can be used to shatter an entire text sequence into chunks, and each chunk can be used as a span candidate.", "In this situation, the algorithm can work in two steps: 1) use the masking method (Li et al., 2020) to determine the important test spans; and 2) for each span, apply IF++ to find training instances/spans as explanations.", "Usually, we can choose the top-K test spans, and can even choose K = 1 in some cases.", "In this work, we look at the latter case without loss of generality, adopt two aspect-based sentiment analysis datasets in which a deterministic span can be conveniently identified for each text sequence, and frame the span selection task as a Reading Comprehension task (Rajpurkar et al., 2016).", "We discuss the details in Section 5.", "Note that the discussion can be trivially generalized to the case where K > 1 using a Bayesian approach such as $\text{imp}(x_{ij}) = \mathbb{E}_{P(x'_{kl})}[\text{imp}(x_{ij} \mid x'_{kl})]$, which can be explored in future work.",
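A minimal sketch of the masking-based importance score above, assuming a Huggingface-style sequence classification model that exposes .logits; the helper name and the token-level masking convention are illustrative.

```python
import torch

@torch.no_grad()
def span_importance(model, tokenizer, tokens, i, j):
    mask_id = tokenizer.mask_token_id
    full = torch.tensor([tokens])
    masked = full.clone()
    masked[0, i:j + 1] = mask_id                 # mask the span x_{ij}
    logits_full = model(full).logits[0]
    y_hat = logits_full.argmax().item()          # model prediction y-hat
    logits_masked = model(masked).logits[0]
    # imp(x_ij | z) = logit_{y_hat}(x) - logit_{y_hat}(x_masked)
    return (logits_full[y_hat] - logits_masked[y_hat]).item()
```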
"To achieve 2), we start with the method of TracIn (Pruthi et al., 2020b) described in Eq. 2, which is Hessian-free by design.", "TracIn defines the contribution of a training instance to be the sum of its contributions (losses) throughout the entire training life cycle, which eliminates the need for the Hessian.", "However, this assumption is drastically different from IF's, where the contribution of z is obtained solely from the final model $\hat\theta$.", "By nature, IF is a faithful method whose explanation is faithful to $\hat\theta$, whereas TracIn in its vanilla form is arguably not a faithful method.", "Proposed Treatment.", "Based on the assumption that the influence of z on $\hat\theta$ is the sum of the influences of all variants close to $\hat\theta$, we define a set of faithful variants satisfying the constraint $\{\hat\theta + \Delta_i \mid 1 > \delta \gg \|\Delta_i\|_2\}$, namely $\delta$-faithful to $\hat\theta$.", "The smaller $\delta$ is, the more faithful the explanation method is.", "In contrast, the $\delta$ for TracIn can be arbitrarily large without faithfulness guarantees, as some checkpoints can be far from the final $\hat\theta$.", "Thus, we construct a $\delta$-faithful explanation method that mirrors TracIn as: $\text{TracInF}(z, z') = \sum_i \nabla L(\hat\theta + \Delta_i, z) \cdot \nabla L(\hat\theta + \Delta_i, z')$.", "The difference between TracIn and TracInF is that the checkpoints used in TracIn are correlated in time, whereas all variants of TracInF are conditionally independent.", "Finding a proper $\Delta_i$ can be tricky.", "If ill-chosen, $\Delta_i$ may diverge so much that it hurts gradient estimation.", "In practice, we estimate $\Delta_i = \eta_i g(z_i \mid \hat\theta)$, obtained from a single-step gradient descent $g(z_i \mid \hat\theta)$ with some training instance $z_i$ on model $\hat\theta$, scaled by an i-specific weighting parameter $\eta_i$, which in the simplest case is uniform for all i.", "Usually $\eta_i$ should be small enough that $\hat\theta + \Delta_i$ stays close to $\hat\theta$.", "In this paper, we set $\eta_i$ to the model learning rate as a proof of concept.", "Is TracInF faithful?", "First, any $\hat\theta + \Delta_i$ is close to $\hat\theta$.", "Under the assumption of Lipschitz continuity, there exists a $k \in \mathbb{R}^+$ such that $L(\hat\theta + \Delta_i, z)$ is bounded around $L(\hat\theta, z)$ by $k |\eta_i g^2(z_i \mid \hat\theta)|$, the second-derivative term, because $|L(\hat\theta + \Delta_i, z) - L(\hat\theta, z)| < k |\eta_i g^2(z_i \mid \hat\theta)|$.", "A proper $\eta_i$ can be chosen so that the right-hand side (RHS) is sufficiently small to bound the loss within a small range.", "Thus the gradient of the loss, and in turn the TracInF score, can stay $\delta$-faithful to $\hat\theta$ for a sufficiently small $\delta$, which TracIn cannot guarantee.",
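A minimal sketch of constructing the delta-faithful variants used by TracInF (and below by TracIn++): each variant is one scaled gradient step away from the final model; batch_loss_fn is an assumed helper that samples a random mini-batch and returns its loss. The resulting variants plug into the same gradient dot-product sums as the checkpoints of Eq. (2).

```python
import copy
import torch

def make_variant(model, batch_loss_fn, eta=1e-4):
    variant = copy.deepcopy(model)        # start from the final theta-hat
    loss = batch_loss_fn(variant)         # loss on one random mini-batch
    loss.backward()
    with torch.no_grad():
        for p in variant.parameters():
            if p.grad is not None:
                p -= eta * p.grad         # Delta_i = eta_i * g(z_i | theta-hat)
        variant.zero_grad()
    return variant
```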
the last two layers of RoBERTa-large (Liu et al., 2019) plus the output head, a total of 12M parameters per gradient vector.", "To stabilize the process, we take three approaches: 1) applying gradient clipping (set to 100) to avoid accumulating the extreme gradient values; 2) adopting early termination when the norm of VHP stabilizes (usually < 1000 training instances, i.e., the depth); and 3) slowly decaying the accumulated VHP with a factor of 0.99 (i.e., the damp) and update with a new vhp() estimate with a small learning rate (i.e., the scale) of 0.004.", "Please refer to our code for more details.", "Once obtained, the VHP is first cached and then retrieved to perform the dot-product with the last term.", "The complexity for each test instance is O ( dt ) where d is the depth of estimation and t is the time spent on each vhp() operation.", "The time complexity of different IF methods only vary on a constant factor of two.", "For each of TracIn , TracIn + and TracIn ++ , we need to create multiple model variants.", "For TracIn , we save three checkpoints of the most recent training epochs; For TracIn + or TracIn ++ , we start with the same checkpoint and randomly sample a mini-batch 3 times and perform one-step training (learning rate 1E-4) for each selection to obtain three variants.", "We do not over-tune those hyper-parameters for replicability concerns.", "Intuitively, a rational explanation method should rank explanations that are semantically related to the given test instance relatively higher than the less relevant ones.", "Our idea is to first define the semantic representation of a training span x ij of z and measure its similarity to that of a test span x (cid:48) kl of z (cid:48) .", "Since our method uses BERT family as the base model, we obtain the embedding of a training span by the difference of x and its span-masked version x ij as emb ( x ij ) = emb ( x ) emb ( x ij ) , (4) where emb is obtained from the embedding of sentence start token such as [CLS] in BERT (Devlin et al., 2019) at the last embedding layer.", "To obtain embedding of the entire sequence we can simply use the emb ( x ) without the last term in Eq.", "4.", "Thus, all spans are embedded in the same semantic space and the geometric quantities such as cosine or dot-product can measure the similarities of em-beddings.", "We define the semantic agreement Sag as: Sag ( z (cid:48) , { z }| K 1 ) = 1 K (cid:88) z cos ( emb ( x ij | z ) , emb ( x (cid:48) kl | z (cid:48) )) , (5) Intuitively, the metric measures the degree to which top-K training spans align with a test span on semantics.", "Label Agreement ( Lag ) label agreement (Hanawa et al., 2020) assumes that the label of an explanation z should agree with that of the text case z (cid:48) .", "Accordingly, we retrieve the top-K training instances from the ordered explanation list and calculate the label agreement ( Lag ) as follows: Lag ( z (cid:48) , { z }| N 1 ) = 1 K (cid:88) k [1 ,K ] I ( y (cid:48) == y k ) , where I ( ) is an indicator function.", "Lag measures the degree to which the top-ranked z agree with z (cid:48) on class label, e.g., if the sentiment of the test z (cid:48) and explanation z agree.", "Re-training Accuracy Loss ( Ral ) Ral measures the loss of test accuracy after removing the top-K most influential explanations identified by an explanation method (Hanawa et al., 2020; Hooker et al., 2019; Han et al., 2020).", "The assumption is that the higher the loss the better the explanation method is.", "Formally, Ral ( f, ) = Acc ( ) Acc 
"Notice that the re-training uses the same set of hyper-parameter settings as training (Section 6.1).", "To obtain {z}|K_1, we combine the explanation lists for all test instances (by score addition) and then remove the top-K from this list.", "Our criteria for dataset selection are twofold:", "1. The dataset should have relatively high classification accuracy, so that the trained model can behave rationally; and", "2. The dataset should allow for easy identification of critical/useful text spans to compare span-based explanation methods.", "We chose two aspect-based sentiment analysis (ABSA) datasets; one is ATSA, a subset of MAMS (Jiang et al., 2019) for product reviews, where aspects are the terms in the text.", "The other is sentihood (Saeidi et al., 2016), a dataset of location reviews.", "In both datasets, we can identify the relevant span of an aspect term semi-automatically and train models with high classification accuracy (see Section 6.1 for details).", "Data statistics and instances are in Tables 1 and 2.", "Table 1: Data statistics.
            Train   Dev   Test
MAMS        11186   1332  1336
sentihood    2977    747  1491", "Automatic Span Annotation As shown in the colored text in Table 2, we extract the spans for each term to serve as explanation units for IF+, IF++, TracIn+ and TracIn++.", "To reduce annotation effort, we convert span extraction into a question answering task (Rajpurkar et al., 2016), where we use aspect terms to formulate questions such as How is the service?, which are concatenated with the text before being fed into pre-trained machine reading comprehension (RC) models.", "The output answer is used as the span.", "When the RC model fails, we use heuristics to extract the words before and after the term word, up to the closest sentence boundary.", "See the appendix for more details.", "We sampled a subset of 100 annotations and found that the RC model has about 70% Exact Match (Rajpurkar et al., 2016), and the overall annotation has a high recall of over 90% but low EM due to the involvement of heuristics.", "For example, as shown in Table 2, if the span of location2 is annotated as I love it, span-based explanation methods will use it to find wrong examples for explanation.", "Thus, test instances with incorrectly annotated spans are omitted, i.e., there is no tolerance to annotation error for test instances.", "On the contrary, for training instances, we do not correct the annotation errors.", "The major reason is that the explanation methods have a chance to rank the wrongly annotated spans lower (their importance score imp(·) of Eq. 3 can be lower, and in turn so can their influence scores).", "Also, it would be labor-intensive to do so.",
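A sketch of the automatic span annotation step above, assuming an off-the-shelf extractive QA model from the transformers pipeline (the checkpoint name is our assumption; the paper only specifies a pre-trained RC model), with a simplified sentence-boundary fallback standing in for the paper's heuristics:

```python
import re
from transformers import pipeline

# Any extractive QA checkpoint works here; this one is a common public choice.
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

def annotate_span(text: str, aspect: str, min_score: float = 0.3) -> str:
    """Extract the explanation span for an aspect term via reading comprehension."""
    result = qa(question=f"How is the {aspect} ?", context=text)
    if result["score"] >= min_score and result["answer"].strip():
        return result["answer"]
    # Fallback heuristic: take the sentence containing the aspect term,
    # approximating "words around the term up to the closest sentence boundary".
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        if aspect in sentence:
            return sentence
    return text

span = annotate_span(
    "Been here a few times and food has always been good "
    "but service really suffers when it gets crowded.", "service")
print(span)
```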
"We train two separate models for MAMS and sentihood.", "The model's input is the concatenation of the aspect term and the entire text, and the output is a sentiment label.", "The two models share similar settings: 1) they both use RoBERTa-large (Liu et al., 2019) from Huggingface (Wolf et al., 2019), loaded through the BertForSequenceClassification class for initialization.", "We fine-tune the parameters of the last two layers and the output head, using a batch size of 200 for ATSA and 100 for sentihood, and a maximum of 100 epochs.", "We use the AdamW optimizer (Loshchilov and Hutter, 2019) with weight decay 0.01 and learning rate 1E-4.", "Both models are written in PyTorch, are trained on a single Tesla V100 GPU, and took less than 2 hours each to train.", "The models are selected on dev set performance, and both trained models are state-of-the-art: 88.3% on MAMS and 97.6% on sentihood at the time of writing.", "We compare the six explanation methods on two datasets and three evaluation metrics in Table 3, from which we can draw the following conclusions:", "1) The TracIn family outperforms the IF family according to the Sag and Lag metrics.", "We see that both metrics are robust against the choice of K.", "It is worth noting that the TracIn family methods are not only efficient, but also effective for extracting explanations compared to the IF family, as per Sag and Lag.", "2) Span-based methods (with +) outperform vanilla methods (w/o +).", "This is good news, because an explanation can be much easier to comprehend if we can highlight essential spans in text, and IF++ and TracIn++ show us that such highlighting can be justified by their superiority on the Sag and Lag evaluations.", "3) Sag and Lag show a consistent trend of TracIn++ and IF++ being superior to the rest of the methods, while the Ral results are inconclusive, which resonates with the findings of Hooker et al. (2019), who also observed randomness after removing examples under different explanation methods.", "This suggests that the re-training method may not be a reliable metric, due to the randomness and intricate details involved in the re-training process.", "4) The fact that Sag scores TracIn+ differently than Lag does shows that Lag may be an over-simplistic measure: it assumes that the label y can represent the entire semantics of x, which may be problematic.", "Sag, in contrast, looks into x for semantics and can properly reflect and align with human judgments.", "The Impact of K on Metrics One critical parameter for the evaluation metrics is the choice of K for Sag and Lag (we do not discuss K for Ral due to its randomness).", "Here we use 200 MAMS test instances as subjects to study the influence of K, as shown in Figure 1.",
"[Figure 1: the influence of K on the evaluation metrics for IF, IF+, IF++, TracInF, TracIn+, and TracIn++.]", "We found that as K increases, all methods except for IF and TracInF decrease on Sag and Lag.", "The decrease is favorable, because it means the explanation method is putting useful training instances before less useful ones.", "In contrast, an increase suggests the explanation method fails to rank useful ones on top.", "This again confirms that span-based explanation can take into account the useful information in x and reduce the impact of the noisy information involved in IF and TracInF.", "Faithful to θ? How faithful is our proposed TracIn++ to θ?", "To answer this question, we first define the notion of a strictly faithful explanation and then test an explanation method's faithfulness against it.", "Note that none of the discussed methods is strictly faithful, since IF++ uses an approximated inverse-Hessian, and TracIn++ is a step away from being strictly faithful.", "To obtain a ground truth, we modify TracIn++ to use a single checkpoint as the ultimately faithful explanation method. 4", "(4: The choice of ground truth could also be the exact computation of the inverse-Hessian in IF (our future work). Faithfulness does not equal correctness; there is no guarantee that the ground truth is a valid explanation method, but it can be a valid benchmark for faithfulness.)", "Then, we obtain an explanation list for each test instance and compute its Spearman correlation with the list obtained from the ground truth.", "The higher the correlation, the more faithful the method is.", "In Table 4 we discovered that TracIn++ has a similar mean to IF++ but a much lower variance, showing its stability over IF++.", "This aligns with the finding of Basu et al. (2021), which argues that in deep non-convex networks, the influence function is usually non-stable across test instances.", "The TracIn family arguably may be a promising direction toward stability.", "Both methods are more faithful to the ground truth than the Control that uses checkpoints, showing that the model ensemble around θ may be a better choice than checkpoint averaging for model explanations.", "[Table 4: Spearman correlation (Mean and Var.) of each method against the ground truth.]", "Further explorations may be needed, since there are many variables in this comparison.", "Table 5 demonstrates the differences between the explanation methods.", "In action, TracIn++ shows both the test span and the explanation span to a user; TracIn+ shows only the training span, and TracIn does not show spans.", "Interestingly, we can observe that the top-1 explanation found by TracIn++ is more semantically related than the others in this example, a common pattern among the test cases.",
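A sketch of the Spearman-based faithfulness check described above, assuming each method is represented by its per-test-instance influence scores over the same training set; scipy's spearmanr serves as the correlation measure, and the toy arrays at the end only illustrate the call shape:

```python
import numpy as np
from scipy.stats import spearmanr

def faithfulness(method_scores, ground_truth_scores):
    """Per-test-instance Spearman correlation between a method's explanation
    ranking and the ground truth's; returns Table-4-style mean and variance."""
    correlations = [
        spearmanr(m, g).correlation
        for m, g in zip(method_scores, ground_truth_scores)
    ]
    return float(np.mean(correlations)), float(np.var(correlations))

# method_scores[i][j]: influence of training example j on test instance i.
gt = [np.random.rand(50) for _ in range(10)]          # hypothetical ground truth
method = [g + 0.1 * np.random.rand(50) for g in gt]   # hypothetical method scores
mean, var = faithfulness(method, gt)
print(f"Spearman mean={mean:.3f}, var={var:.4f}")
```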
"Popular explanation methods include gradient-based (Sundararajan et al., 2017), attention-based (Clark et al., 2019; Jain and Wallace, 2019; Wiegreffe and Pinter, 2019), as well as sample-based (Koh and Liang, 2017; Yeh et al., 2018; Pruthi et al., 2020b) methods.", "Methods There have been a series of recent efforts to explain black-box deep neural nets (DNN), such as LIME (Ribeiro et al., 2016), which approximates the behavior of a DNN with an interpretable model learned from local samples around a prediction; Influence Functions (Koh and Liang, 2017; Koh et al., 2019), which pick training samples as explanations via their impact on the overall loss; and Exemplar Points (Yeh et al., 2018), which can assign weights to training samples.", "TracIn (Pruthi et al., 2020b) is the latest breakthrough that overcomes the computational bottleneck of Influence Functions, at the cost of faithfulness.", "The Discussion of Explanation Faithfulness in NLP The issue of faithfulness of explanations was primarily discussed in the explanation generation context (Camburu et al., 2018), where there is no guarantee that a generated explanation will be faithful to a model's inner workings (Jacovi and Goldberg, 2020).", "In this work, we discuss faithfulness in the sample-based explanation framework.", "Faithfulness to the model either can be guaranteed only in theory but not in practice (Koh and Liang, 2017), or cannot be guaranteed at all (Pruthi et al., 2020b).", "Sample-based explanation methods for NLP Han et al. (2020) applied IF to sentiment analysis and natural language inference, and also studied its utility for detecting data artefacts (Gururangan et al., 2019).", "Yang et al. (2020b) used Influence Functions to filter generated texts.", "The work closest to ours is Meng et al. (2020a), where a single word is used as the explanation unit.", "Their formulation uses gradient-based methods for single words, while ours can be applied to any text-unit granularity using text masking.", "Explanation of NLP Models by Input Erasure Input erasure has been a popular trick for measuring input impact in NLP models, by replacing the input with a zero vector (Li et al., 2016) or by marginalization over all possible candidate tokens (Kim et al., 2020), which arguably deals with the out-of-distribution issue introduced by using zero as the input mask.", "Similar to Kim et al. (2020), Li et al. (2020) and Jacovi and Goldberg (2021), we also use the [MASK] token, with the difference that we allow masking spans of arbitrary length in an input sequence.", "Evaluations of Sample-based Methods A benchmark for evaluating sample-based explanation methods has not been agreed upon.", "For diagnostic purposes, Koh et al. (2017) proposed a self-explanation method that uses the training instances to explain themselves; Hanawa et al. (2020) proposed label and instance consistency as a way of model sanity checking.", "In the non-diagnostic setting, sample removal and re-training (Han et al., 2020; Hooker et al., 2019) assumes that removing useful training instances causes a significant accuracy loss; the input enhancement method assumes that useful explanations can also improve a model's decision making at the input side (Hao, 2020); and manual inspections (Han et al., 2020; Meng et al., 2020a) were also used to examine whether the explanations appear reasonable to humans.", "[Table 5 example, test case: been here a few times and food has always been good but service really suffers when it gets crowded.]", "TracIn++ opens some new questions: 1) How can we generalize TracIn++ to cases where test spans are unknown?", "2) Can we understand the connection between IF and TracIn, which may spark discoveries on sample-based explanation methods?", "3) How can we apply TracIn++ to understand sequence generation models?", "This work is supported by the MIT-IBM Watson AI Lab.", "The views and conclusions are those of the authors and should not be interpreted as representing the official policies of the funding agencies.", "We thank the anonymous reviewers for their valuable feedback.", "We also thank our families for their support during this special time." ]
[ "abstain", "abstain", "abstain", "objective", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "objective", "objective", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "method", "other", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "abstain", "other", "other", "other", "method", "other", "other", "method", "other", "other", "other", "abstain", "abstain", "method", "other", "other", "other", "other" ]
[ "When fine-tuning pretrained models for classification, researchers either use a generic model head or a task-specific prompt for prediction.", "Proponents of prompting have argued that prompts provide a method for injecting task-specific guidance, which is beneficial in low-data regimes.", "We aim to quantify this benefit through rigorous testing of prompts in a fair setting: comparing prompted and head-based fine-tuning in equal conditions across many tasks and data sizes.", "By controlling for many sources of advantage, we find that prompting does indeed provide a benefit, and that this benefit can be quantified per task.", "Results show that prompting is often worth 100s of data points on average across classification tasks.", "The main paradigm for adapting pretrained models for classification (Radford, 2018; Dong et al., 2019; Devlin et al., 2018) is fine-tuning via an explicit classifier head.", "However, an alternative approach has arisen: adapting the pretrained language model directly as a predictor through autoregressive text generation (Radford et al., 2019) or completion of a cloze task (Trinh and Le, 2018).", "This method is notably used in T5 fine-tuning (Raffel et al., 2019) leading to state-of-the-art results on the SuperGLUE benchmark (Wang et al., 2019).", "One argument made for classification by direct language generation is that it allows us to pick custom prompts for each task (McCann et al., 2018).", "While this approach can be used for zero-shot classification (Puri and Catanzaro, 2019) or priming (Brown et al., 2020), it can also be used in fine-tuning to provide extra task information to the classifier, especially in the low-data regime (Schick and Schtze, 2020a,b).", "If this argument is indeed true, it is natural to ask how it impacts the sample efficiency of the model, or more directly, how many data points is a prompt worth?", "As with many low-data and pretraining-based problems, this question is complicated by the fine-tuning setup, training procedure, and prompts themselves.", "We attempt to isolate these variables through diverse prompts, multiple runs, and best practices in low-training data fine-tuning.", "We introduce a metric, the average data advantage , for quantifying the impact of a prompt in practice.", "Our experiments find that the impact of task-targeted prompting can nicely be quantified in terms of direct training data, and that it varies over the nature of different tasks.", "On MNLI (Williams et al., 2018), we find that using a prompt contributes approximately 3500 data points.", "On SuperGLUE, it adds approximately 280 data points on RTE (Dagan et al., 2005) and up to 750 on BoolQ (Clark et al., 2019).", "In lowto medium-data settings, this advantage can be a real contribution to training a model.", "Prompting has been used both for zero-shot and fine-tuning based methods.", "Zero-shot approaches attempt to answer a task with a prompt without fine-tuning through generation (Radford et al., 2019).", "GPT3 (Brown et al., 2020) extends this approach to a supervised priming method by taking in training data as priming at inference time, so it can attend to them while answering.", "T5 (Raffel et al., 2019) and other sequence-to-sequence pretrained models use standard word-based fine-tuning with a marker prompt to answer classification tasks with strong empirical success.", "Our setting differs in that we are interested in using task-based prompts and fine-tuning, in-between the T5 and GPT2 setting.", "Our setting most closely resembles PET 
"Our setting most closely resembles PET (Schick and Schütze, 2020a,b), which claims that task-specific prompting helps transfer learning, especially in the low-data regime.", "However, in order to reach the best possible results on SuperGLUE, PET introduces several other extensions: semi-supervision via additional pseudo-labeled data, ensembling models trained with several different prompts, and finally distilling the ensemble into a linear classifier rather than a language model.", "Our aim is to isolate the specific contributions of prompting within supervised fine-tuning.", "Finally, recent papers have experimented with discovering prompts through automated processes tailored to the language model (Jiang et al., 2020; Schick et al., 2020).", "We limit ourselves to human-written prompts, as we are interested in whether prompting itself specifically adds information to the supervised task.", "It is an interesting question as to whether automatic prompts can have this same impact (relative to the training data they require).", "Consider two transfer learning settings for text classification: head-based, where a generic head layer takes in pretrained representations to predict an output class; and prompt-based, where a task-specific pattern string is designed to coax the model into producing a textual output corresponding to a given class.", "Both can be utilized for fine-tuning with supervised training data, but prompts further allow the user to customize patterns to help the model.", "For the prompt model we follow the notation from PET (Schick and Schütze, 2020a) and decompose a prompt into a pattern and a verbalizer.", "The pattern turns the input text into a cloze task, i.e. a sequence with a masked token or tokens that need to be filled.", "Let us use as an example an excerpt from the SuperGLUE task BoolQ (Clark et al., 2019), in which the model must answer yes-or-no binary questions.", "In order to let a language model answer the question in italics, our pattern is in bold (Schick and Schütze, 2020b): \"Posthumous marriage Posthumous marriage (or necrogamy) is a marriage in which one of the participating members is deceased. It is legal in France and similar forms are practiced in Sudan and China. Since World War I, France has had hundreds of requests each year, of which many have been accepted. Based on the previous passage, can u marry a dead person in france ? <MASK> \"", "The masked word prediction is mapped to a verbalizer which produces a class (here, \"Yes\": True; \"No\": False 1 ).", "Several pattern-verbalizer pairs (PVPs) could be used for a single task, differing either through the pattern, the verbalizer, or both.", "Fine-tuning is done by training the model to produce the correct verbalization.", "The loss is the cross-entropy loss between the correct answer and the distribution of probabilities amongst the tokens in the verbalizer.",
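A minimal sketch of the pattern-verbalizer setup and cloze loss described above, assuming roberta-large from transformers; the BoolQ pattern and the Yes/No verbalizer follow the example in the text, while the helper names and the leading-space token convention are our own assumptions:

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
model = AutoModelForMaskedLM.from_pretrained("roberta-large")

VERBALIZER = {" Yes": True, " No": False}  # leading space matters for BPE vocab

def pattern(passage: str, question: str) -> str:
    """BoolQ pattern: passage + question + a mask token to be verbalized."""
    return f"{passage} Based on the previous passage, {question} ? {tokenizer.mask_token}"

def prompt_loss(passage: str, question: str, label: bool) -> torch.Tensor:
    """Cross-entropy over the verbalizer tokens at the masked position."""
    inputs = tokenizer(pattern(passage, question), return_tensors="pt", truncation=True)
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    logits = model(**inputs).logits[0, mask_pos]
    verb_ids = [tokenizer.convert_tokens_to_ids(tokenizer.tokenize(v)[0])
                for v in VERBALIZER]
    verb_logits = logits[verb_ids]  # restrict prediction to verbalizer tokens
    target = torch.tensor([list(VERBALIZER.values()).index(label)])
    return torch.nn.functional.cross_entropy(verb_logits.unsqueeze(0), target)
```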
"We re-use pattern choices from Schick and Schütze (2020b); examples are available in Appendix A.", "4 Experimental Setting We run all experiments with the same pretrained checkpoint, roberta-large (355M parameters) from RoBERTa (Liu et al., 2019), which we load from the transformers (Wolf et al., 2020) library. 2", "(2: After experimenting with RoBERTa, AlBERT (Lan et al., 2019) and BERT (Devlin et al., 2018), we found roberta-large to have the most consistent performance.)", "In line with previous observations (McCoy et al., 2019; Dodge et al., 2020; Lee et al., 2020), head-based fine-tuning performance varies considerably.", "We follow the recommendations of Mosbach et al. (2020) and Zhang et al. (2020) to train at a low learning rate (10^-5) for a large number of steps (always at least 250, possibly for over 100 epochs).", "We perform our evaluation on SuperGLUE and MNLI (Williams et al., 2018).", "These datasets comprise a variety of tasks, all in English, including entailment (MNLI, RTE (Dagan et al., 2005), CB (de Marneffe et al., 2019)), multiple-choice question answering (BoolQ (Clark et al., 2019), MultiRC (Khashabi et al., 2018)), and commonsense reasoning (WSC (Levesque et al., 2012), COPA (Roemmele et al., 2011), WiC (Pilehvar and Camacho-Collados, 2018)).", "We do not include ReCoRD (Zhang et al., 2018) in our comparisons, as there is no head model to compare with, since it is already a cloze task.", "Data sizes range from 250 data points for CB to 392,702 for MNLI.", "As test data is not publicly available for SuperGLUE tasks, we set aside part of training (from 50 for CB, COPA and MultiRC to 500 for BoolQ) to use for development, and evaluate on their original validation sets.", "For MNLI, we use the available matched validation and test sets.", "We compare models across a scale of available data, starting with 10 data points and increasing exponentially (as high-data performance tends to saturate) to the full dataset.", "(1: The correct answer here is, of course, yes. Originated in 1803 as Napoleon rose to power, this practice was mainly to the benefit of war widows.)", "For example, for MultiRC, which has 969 data points initially, we start by reserving 50 data points for development.", "This leaves us with 919 training points, and we train models with 10, 15, 20, 32, 50, 70, 100, 150, 200, 320, 500, 750, and 919 training points.", "We run every experiment 4 times in order to reduce variance, for a total of 1892 training runs across all tasks.", "At every point, we report the best performance that has been achieved at that amount of data or lower.", "Full graphs are presented in Appendix B.", "5 Results Figure 1 shows the main results comparing head- and prompt-based fine-tuning with the best-performing pattern on that task.", "Prompting enjoys a substantial advantage on every task except for WiC, as reported in previous results (Schick and Schütze, 2020b).", "Both approaches improve with more training data, but prompting remains better by a varying amount.", "Many tasks in SuperGLUE have relatively few data points, but we also see an advantage in large datasets like BoolQ and MNLI.", "To quantify how many data points the prompt is worth, we first isolate the y-axis band between the lowest and highest accuracy where the two curves match in accuracy. 3", "The horizontal line at these points represents the advantage of prompting.", "(3: We assume asymptotically the two curves would match, but are limited by data.)", "We then take the integral in this region, i.e., the area between the linearly-interpolated curves, 4 divided by the height of the band.", "(4: Areas where the head model is better, if any, are subtracted from the total.)",
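A sketch of the average data advantage computation described above, assuming each curve is a list of (training set size, accuracy) points whose accuracies are monotone (best-so-far reporting); interpolation and integration use numpy, the band is clipped to where both curves are defined, and the toy curves are hypothetical:

```python
import numpy as np

def average_data_advantage(head_curve, prompt_curve):
    """# of data points the prompt is worth: integrate the horizontal gap
    between the two accuracy curves over the shared accuracy band, then
    divide by the height of the band."""
    head_n, head_acc = map(np.asarray, zip(*head_curve))
    prompt_n, prompt_acc = map(np.asarray, zip(*prompt_curve))

    low = max(head_acc.min(), prompt_acc.min())   # band where both curves exist
    high = min(head_acc.max(), prompt_acc.max())
    band = np.linspace(low, high, 1000)

    # For each accuracy level, how much data does each approach need?
    data_head = np.interp(band, head_acc, head_n)
    data_prompt = np.interp(band, prompt_acc, prompt_n)

    area = np.trapz(data_head - data_prompt, band)  # negative where head wins
    return area / (high - low)

head = [(10, 0.52), (100, 0.60), (1000, 0.70), (10000, 0.78)]
prompt = [(10, 0.58), (100, 0.66), (1000, 0.74), (10000, 0.79)]
print(f"prompt worth ~{average_data_advantage(head, prompt):.0f} data points")
```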
"The area has the dimension of a quantity of data points times the metric unit, so dividing by the performance range yields a # of data points advantage.", "As low-data training is sensitive to noise, in addition to following best training practices, we run several different experiments for each x-point.", "We use a bootstrapping approach to estimate confidence over these runs.", "Specifically, we hold out one of the 4 head runs and 4 prompt runs (16 combinations total), and compute the standard deviation of those outcomes.", "We report these quantities for every task in Table 1 as Average advantage.", "For almost all the tasks, we see that prompting gives a substantial advantage in terms of data efficiency, adding the equivalent of hundreds of data points on average.", "Impact of Pattern vs Verbalizer The intuition of prompts is that they introduce a task description in natural language, even with few training points.", "To better understand the zero-shot versus adaptive nature of prompts, we consider a null verbalizer, a control with a verbalizer that cannot yield semantic information without training.", "For every task that requires filling in one word (which excludes the more free-form COPA and WSC), we replace the verbalizers, for example, \"yes\", \"no\", \"maybe\", \"right\" or \"wrong\", with random first names.", "Table 1 shows the advantage of the standard prompt over the null verbalizer to estimate this control.", "We see that for small-data tasks such as CB, the null verbalizer removes much of the benefit of prompting.", "However, with more training data, the model seems to adapt the verbalizer while still gaining the inductive-bias benefits of the pattern.", "Figure 2 showcases this dynamic on MNLI.", "This result further shows that prompting yields data efficiency even if it is not directly analogous to the generation process of training.", "Impact of Different Prompts If the prompt acts as a description of the task, one would expect different valid descriptions to vary in their benefits.", "In order to compare the different prompts we used on each task, we chart the median performance for each of them under different runs.", "In nearly every experiment, we find that the confidence intervals of those curves largely overlap, implying that prompt choice is not a dominant hyperparameter, i.e. the variance across random seeds usually outweighs the possible benefits of prompt choice.", "One exception is the low-data regime of BoolQ, where one of the prompts enjoys a significant few-shot advantage over the others.", "Figure 3: Median performance on MultiRC across runs for three prompts.", "Differences are inconsistent and eclipsed by the variance within one prompt's runs.", "We plot this curve for MultiRC in Figure 3 and the rest in Appendix C. 
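A sketch of the bootstrap-style confidence estimate described above: hold out one of the 4 head runs and one of the 4 prompt runs (16 combinations), recompute the advantage each time, and report the standard deviation; the aggregation by best-so-far accuracy and the helper names are our assumptions, with average_data_advantage from the earlier sketch passed in as advantage_fn:

```python
import numpy as np
from itertools import product

def aggregate(runs):
    """Combine runs into one curve: best accuracy achieved at each data amount."""
    sizes = [n for n, _ in runs[0]]
    best = np.max([[acc for _, acc in run] for run in runs], axis=0)
    return list(zip(sizes, best))

def advantage_confidence(head_runs, prompt_runs, advantage_fn):
    """Leave one head run and one prompt run out, recompute the advantage,
    and return the mean and standard deviation over all combinations."""
    outcomes = [
        advantage_fn(aggregate([r for k, r in enumerate(head_runs) if k != i]),
                     aggregate([r for k, r in enumerate(prompt_runs) if k != j]))
        for i, j in product(range(len(head_runs)), range(len(prompt_runs)))
    ]
    return float(np.mean(outcomes)), float(np.std(outcomes))
```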
Metric sensitivity We treat each metric linearly in calculating advantage; alternatively, we could re-parameterize the y axis for each task.", "This choice does not have a consistent effect for or against prompting.", "For example, emphasizing gains close to convergence increases prompting advantage on CB and MNLI but decreases it on COPA or BoolQ.", "We investigate prompting through a systematic study of its data advantage.", "Across tasks, prompting consistently yields a varying improvement throughout the training process.", "Analysis shows that prompting is mostly robust to pattern choice, and can even learn without an informative verbalizer.", "On large datasets, prompting is similarly helpful in terms of data points, although they are less beneficial in performance.", "In future work, we hope to study the mechanism and training dynamics of the prompting benefits.", "Significant compute resources were used to run this paper's experiments.", "A single experiment (defined as one model run, at one data level, on one task) was quite light-weight, taking usually a little under an hour on a single Nvidia V100.", "However, as we computed a little under two thousand runs, this adds up to about 1800 GPU hours, to which one must add around 400 GPU hours of prototyping and hyper-parameter searching.", "Those 2200 GPU hours would usually have necessitated the release of about 400kg of CO2, about 40% of a transatlantic flight for a single passenger, in the country where we ran the experiments, although we used a carbon-neutral cloud compute provider.", "The main benefit of prompting, rather than compute efficiency, is data efficiency.", "Although we ran all of our experiments on English, we hope that this property will be especially helpful in low-resource language applications.", "In a sense, a practitioner could then remedy the lack of task-specific data in their language by introducing information through a prompt.", "However, this comes with the inherent risk of introducing human biases into the model.", "Prompt completion also suffers from biases already present within the language model (Sheng et al., 2019).", "This could cause a prompted model to repeat those biases in classification, especially in the few-shot setting where prompting mostly relies on the pretrained model.", "We thank Steven Cao and Joe Davison for the discussions about prompting that initially spurred this paper.", "We further thank Timo Schick for making the code for PET available and for discussions about performance replication.", "We lastly thank Canwen Xu, Yacine Jernite, Victor Sanh, Dimitri Lozeve and Antoine Ogier for their help with the figures and writing of this draft." ]
[ "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "other", "other", "other", "objective", "abstain", "other", "objective", "other", "method", "other", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "other", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other" ]
[ "Representation learning is a critical ingredient for natural language processing systems.", "Recent Transformer language models like BERT learn powerful textual representations, but these models are targeted towards tokenand sentence-level training objectives and do not leverage information on inter-document relatedness, which limits their document-level representation power.", "For applications on scientific documents, such as classification and recommendation, the embeddings power strong performance on end tasks.", "We propose SPECTER , a new method to generate document-level embedding of scientific documents based on pretraining a Transformer language model on a powerful signal of document-level relatedness: the citation graph.", "Unlike existing pretrained language models, SPECTER can be easily applied to downstream applications without task-specific fine-tuning.", "Additionally, to encourage further research on document-level models, we introduce SCIDOCS , a new evaluation benchmark consisting of seven document-level tasks ranging from citation prediction, to document classification and recommendation.", "We show that SPECTER outperforms a variety of competitive baselines on the benchmark.", "1 1 Introduction As the pace of scientific publication continues to increase, Natural Language Processing (NLP) tools that help users to search, discover and understand the scientific literature have become critical.", "In recent years, substantial improvements in NLP tools have been brought about by pretrained neural language models (LMs) (Radford et al., 2018; Devlin et al., 2019; Yang et al., 2019).", "While such models are widely used for representing individual words Equal contribution 1 https://github.com/allenai/specter or sentences, extensions to whole-document embeddings are relatively underexplored.", "Likewise, methods that do use inter-document signals to produce whole-document embeddings (Tu et al., 2017; Chen et al., 2019) have yet to incorporate state-of-the-art pretrained LMs.", "Here, we study how to leverage the power of pretrained language models to learn embeddings for scientific documents.", "A paper's title and abstract provide rich semantic content about the paper, but, as we show in this work, simply passing these textual fields to an off-the-shelf pretrained language modeleven a state-of-the-art model tailored to scientific text like the recent SciBERT (Beltagy et al., 2019)does not result in accurate paper representations.", "The language modeling objectives used to pretrain the model do not lead it to output representations that are helpful for document-level tasks such as topic classification or recommendation.", "In this paper, we introduce a new method for learning general-purpose vector representations of scientific documents.", "Our system, SPECTER , 2 incorporates inter-document context into the Transformer (Vaswani et al., 2017) language models (e.g., SciBERT (Beltagy et al., 2019)) to learn document representations that are effective across a wide-variety of downstream tasks, without the need for any task-specific fine-tuning of the pretrained language model.", "We specifically use citations as a naturally occurring, inter-document incidental supervision signal indicating which documents are most related and formulate the signal into a triplet-loss pretraining objective.", "Unlike many prior works, at inference time, our model does not require any citation information.", "This is critical for embedding new papers that have not yet been cited.", "In 
"In experiments, we show that SPECTER's representations substantially outperform the state-of-the-art on a variety of document-level tasks, including topic classification, citation prediction, and recommendation.", "(2: SPECTER: Scientific Paper Embeddings using Citation-informed TransformERs.)", "As an additional contribution of this work, we introduce and release SCIDOCS, 3 a novel collection of data sets and an evaluation suite for document-level embeddings in the scientific domain.", "SCIDOCS covers seven tasks, and includes tens of thousands of examples of anonymized user signals of document relatedness.", "We also release our training set (hundreds of thousands of paper titles, abstracts and citations), along with our trained embedding model and its associated code base.", "Our goal is to learn task-independent representations of academic papers.", "Inspired by the recent success of pretrained Transformer language models across various NLP tasks, we use the Transformer model architecture as the basis for encoding the input paper.", "Existing LMs such as BERT, however, are primarily based on the masked language modeling objective, only considering intra-document context, and do not use any inter-document information.", "This limits their ability to learn optimal document representations.", "To learn high-quality document-level representations, we propose using citations as an inter-document relatedness signal and formulate it as a triplet-loss learning objective.", "We then pretrain the model on a large corpus of citations using this objective, encouraging it to output representations that are more similar for papers that share a citation link than for those that do not.", "We call our model SPECTER, which learns Scientific Paper Embeddings using Citation-informed TransformERs.",
"With respect to the terminology used by Devlin et al. (2019), unlike most existing LMs that are fine-tuning based, our approach results in embeddings that can be applied to downstream tasks in a feature-based fashion, meaning the learned paper embeddings can be easily used as features, with no need for further task-specific fine-tuning.", "In the following, as background information, we briefly describe how pretrained LMs can be applied for document representation, and then discuss the details of SPECTER.", "Recently, pretrained Transformer networks have demonstrated success on various NLP tasks (Radford et al., 2018; Devlin et al., 2019; Yang et al., 2019; Liu et al., 2019); we use these models as the foundation for SPECTER.", "Specifically, we use SciBERT (Beltagy et al., 2019), which is an adaptation of the original BERT (Devlin et al., 2019) architecture to the scientific domain.", "The BERT model architecture (Devlin et al., 2019) uses multiple layers of Transformers (Vaswani et al., 2017) to encode the tokens in a given input sequence.", "Each layer consists of a self-attention sublayer followed by a feedforward sublayer.", "The final hidden state associated with the special [CLS] token is usually called the pooled output, and is commonly used as an aggregate representation of the sequence.", "Document Representation Our goal is to represent a given paper P as a dense vector v that best represents the paper and can be used in downstream tasks.", "SPECTER builds embeddings from the title and abstract of a paper.", "Intuitively, we would expect these fields to be sufficient to produce accurate embeddings, since they are written to provide a succinct and comprehensive summary of the paper. 4", "As such, we encode the concatenated title and abstract using a Transformer LM (e.g., SciBERT) and take the final representation of the [CLS] token as the output representation of the paper: 5 v = Transformer(input)_[CLS], (1) where Transformer is the Transformer's forward function, and input is the concatenation of the [CLS] token and the WordPieces (Wu et al., 2016) of the title and abstract of a paper, separated by the [SEP] token.", "(4: We also experimented with additional fields such as venues and authors but did not find any empirical advantage in using those (see §6). See §7 for a discussion of using the full text of the paper as input.)", "(5: It is also possible to encode the title and abstract individually and then concatenate or combine them to get the final embedding. However, in our experiments this resulted in sub-optimal performance.)", "We use SciBERT as our model initialization as it is optimized for scientific text, though our formulation is general and any Transformer language model could be used instead of SciBERT.", "Using the above method with an off-the-shelf SciBERT does not take global inter-document information into account.", "This is because SciBERT, like other pretrained language models, is trained via language modeling objectives, which only predict words or sentences given their in-document, nearby textual context.", "In contrast, we propose to incorporate citations into the model as a signal of inter-document relatedness, while still leveraging the model's existing strength in modeling language.", "A citation from one document to another suggests that the documents are related.", "To encode this relatedness signal into our representations, we design a loss function that trains the Transformer model to learn closer representations for papers when one cites the other, and more distant representations otherwise.",
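A sketch of the document encoding in Equation 1, assuming the allenai/scibert_scivocab_uncased checkpoint from transformers (the checkpoint name is our assumption; the text only specifies SciBERT):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
model = AutoModel.from_pretrained("allenai/scibert_scivocab_uncased")

def embed_paper(title: str, abstract: str) -> torch.Tensor:
    """v = Transformer(input)_[CLS]: encode '[CLS] title [SEP] abstract' and
    return the final hidden state at the [CLS] position."""
    inputs = tokenizer(title, abstract, return_tensors="pt",
                       truncation=True, max_length=512)
    with torch.no_grad():  # drop this guard when training end-to-end
        outputs = model(**inputs)
    return outputs.last_hidden_state[0, 0]  # [CLS] representation

v = embed_paper("SPECTER: Document-level Representation Learning ...",
                "Representation learning is a critical ingredient ...")
print(v.shape)  # torch.Size([768])
```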
"The high-level overview of the model is shown in Figure 1. In particular, each training instance is a triplet of papers: a query paper P^Q, a positive paper P^+, and a negative paper P^-.", "The positive paper is a paper that the query paper cites, and the negative paper is a paper that is not cited by the query paper (but that may be cited by P^+).", "We then train the model using the following triplet margin loss function: L = max{ d(P^Q, P^+) − d(P^Q, P^-) + m, 0 }, (2) where d is a distance function and m is the loss margin hyperparameter (we empirically choose m = 1).", "Here, we use the L2 norm distance: d(P^A, P^B) = ‖v_A − v_B‖_2, where v_A is the vector corresponding to the pooled output of the Transformer run on paper A (Equation 1). 6", "Starting from the trained SciBERT model, we pretrain the Transformer parameters on the citation objective to learn paper representations that capture document relatedness.", "The choice of negative example papers P^- is important when training the model.", "We consider two sets of negative examples: the first set simply consists of randomly selected papers from the corpus.", "Given a query paper, intuitively we would expect the model to be able to distinguish between cited papers and uncited papers sampled randomly from the entire corpus.", "This inductive bias has also been found to be effective in content-based citation recommendation applications (Bhagavatula et al., 2018).", "But random negatives may be easy for the model to distinguish from the positives.", "To provide a more nuanced training signal, we augment the randomly drawn negatives with a more challenging second set of negative examples.",
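A sketch of the triplet objective in Equation 2, built on the embed_paper encoder above; PyTorch's built-in TripletMarginLoss implements exactly this max-margin form with an L2 distance:

```python
import torch

triplet_loss = torch.nn.TripletMarginLoss(margin=1.0, p=2)  # m = 1, L2 distance

def specter_loss(query, positive, negative):
    """L = max{ d(P_Q, P_+) - d(P_Q, P_-) + m, 0 } for one triplet, where each
    argument is a (title, abstract) pair. During training, the encoder must be
    run with gradients enabled (i.e., without the no_grad guard)."""
    v_q = embed_paper(*query).unsqueeze(0)
    v_pos = embed_paper(*positive).unsqueeze(0)
    v_neg = embed_paper(*negative).unsqueeze(0)
    return triplet_loss(v_q, v_pos, v_neg)
```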
"We denote as hard negatives the papers that are not cited by the query paper, but are cited by a paper cited by the query paper, i.e., if P1 cites P2 and P2 cites P3 but P1 does not cite P3, then P3 is a candidate hard negative example for P1.", "We expect the hard negatives to be somewhat related to the query paper, but typically less related than the cited papers.", "As we show in our experiments (§6), including hard negatives results in more accurate embeddings compared to using random negatives alone.", "At inference time, the model receives one paper, P, and outputs SPECTER's Transformer pooled output activation as the paper representation for P (Equation 1).", "We note that for inference, SPECTER requires only the title and abstract of the given input paper; the model does not need any citation information about the input paper.", "This means that SPECTER can produce embeddings even for new papers that have yet to be cited, which is critical for applications that target recent scientific papers.", "Previous evaluations of scientific document representations in the literature tend to focus on small datasets over a limited set of tasks, and extremely high (99%+) AUC scores are already possible on these data for English documents (Chen et al., 2019; Wang et al., 2019).", "New, larger and more diverse benchmark datasets are necessary.", "Here, we introduce a new comprehensive evaluation framework to measure the effectiveness of scientific paper embeddings, which we call SCIDOCS.", "The framework consists of diverse tasks, ranging from citation prediction, to prediction of user activity, to document classification and paper recommendation.", "Note that SPECTER will not be further fine-tuned on any of the tasks; we simply plug in the embeddings as features for each task.", "Below, we describe each of the tasks in detail, along with the evaluation data associated with it.", "An important test of a document-level embedding is whether it is predictive of the class of the document.", "Here, we consider two classification tasks in the scientific domain: MeSH Classification In this task, the goal is to classify scientific papers according to their Medical Subject Headings (MeSH) (Lipscomb, 2000). 7", "We construct a dataset consisting of 23K academic medical papers, where each paper is assigned one of 11 top-level disease classes such as cardiovascular diseases, diabetes, and digestive diseases, derived from the MeSH vocabulary.", "The most populated category is Neoplasms (cancer) with 5.4K instances (23.3% of the total dataset), while the category with the fewest samples is Hepatitis (1.7% of the total dataset).", "We follow the approach of Feldman et al. 
(2019) in mapping the MeSH vocabulary to the disease classes.", "Paper Topic Classification This task is predicting the topic associated with a paper using the pre-defined topic categories of the Microsoft Academic Graph (MAG) (Sinha et al., 2015) 8 .", "MAG provides a database of papers, each tagged with a list of topics.", "The topics are organized in a hierarchy of 5 levels, where level 1 is the most general and level 5 is the most specific.", "For our evaluation, we derive a document classification dataset from the level 1 topics, where a paper is labeled by its corresponding level 1 MAG topic.", "We construct a dataset of 25K papers, almost evenly split over the 19 different classes of level 1 categories in MAG.", "As argued above, citations are a key signal of relatedness between papers.", "We test how well different paper representations can reproduce this signal through citation prediction tasks.", "In particular, we focus on two sub-tasks: predicting direct citations , and predicting co-citations .", "We frame these as ranking tasks and evaluate performance using MAP and n DCG , standard ranking metrics.", "Direct Citations In this task, the model is asked to predict which papers are cited by a given query paper from a given set of candidate papers.", "The evaluation dataset includes approximately 30K total papers from a held-out pool of papers, consisting of 1K query papers and a candidate set of up to 5 cited papers and 25 (randomly selected) uncited papers.", "The task is to rank the cited papers higher than the uncited papers.", "For each embedding method, we require only comparing the L2 distance between the raw embeddings of the query and the candidates, without any additional trainable parameters.", "Co-Citations This task is similar to the direct citations but instead of predicting a cited paper, the goal is to predict a highly co-cited paper with a given paper.", "Intuitively, if papers A and B are cited frequently together by several papers, this shows that the papers are likely highly related and a good paper representation model should be able to identify these papers from a given candidate set.", "The dataset consists of 30K total papers and is constructed similar to the direct citations task.", "The embeddings for similar papers should be close to each other; we use user activity as a proxy for identifying similar papers and test the model's ability to recover this information.", "Multiple users consuming the same items as one another is a classic relatedness signal and forms the foundation for recommender systems and other applications (Schafer et al., 2007).", "In our case, we would expect that when users look for academic papers, the papers they view in a single browsing session tend to be related.", "Thus, accurate paper embeddings should, all else being equal, be relatively more similar for papers that are frequently viewed in the same session than for other papers.", "To build benchmark datasets to test embeddings on user activity, we obtained logs of user sessions from a major academic search engine.", "We define the following two tasks on which we build benchmark datasets to test embeddings: Co-Views Our co-views dataset consists of approximately 30K papers.", "To construct it, we take 1K random papers that are not in our train or development set and associate with each one up to 5 frequently co-viewed papers and 25 randomly selected papers (similar to the approach for citations).", "Then, we require the embedding model to rank the co-viewed papers higher 
than the random papers by comparing the L2 distances of raw embeddings.", "We evaluate performance using standard ranking metrics, nDCG and MAP.", "Co-Reads If the user clicks to access the PDF of a paper from the paper description page, this is a potentially stronger sign of interest in the paper.", "In such a case we assume the user will read at least parts of the paper, and refer to this as a read action.", "Accordingly, we define a co-reads task and dataset analogous to the co-views dataset described above.", "This dataset is also approximately 30K papers.", "In the recommendation task, we evaluate the ability of paper embeddings to boost performance in a production recommendation system.", "Our recommendation task aims to help users navigate the scientific literature by ranking a set of similar papers for a given paper.", "We use a dataset of user clickthrough data for this task, which consists of 22K clickthrough events from a public scholarly search engine.", "We partitioned the examples temporally into train (20K examples), validation (1K), and test (1K) sets.", "As is typical in clickthrough data on ranked lists, the clicks are biased toward the top of the original ranking presented to the user.", "To counteract this effect, we computed propensity scores using a swap experiment (Agarwal et al., 2019).", "The propensity scores give, for each position in the ranked list, the relative frequency that the position is over-represented in the data due to exposure bias.", "We can then compute de-biased evaluation metrics by dividing the score for each test example by the propensity score for the clicked position.", "We report propensity-adjusted versions of the standard ranking metrics Precision@1 (P@1) and Normalized Discounted Cumulative Gain (nDCG).", "We test different embeddings on the recommendation task by including cosine embedding distance 9 as a feature within an existing recommendation system that includes several other informative features (title/author similarity, reference and citation overlap, etc.).", "(9: Embeddings are L2 normalized, and in this case cosine distance is equivalent to L2 distance.)", "Thus, the recommendation experiments measure whether the embeddings can boost the performance of a strong baseline system on an end task.",
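A sketch of the propensity-adjusted evaluation described above; the per-example score is divided by the propensity of the position the click occurred at in the original ranking, and the argument names are our own assumptions about how the logged data is organized:

```python
import numpy as np

def propensity_adjusted_p_at_1(system_rank_of_click, original_position, propensity):
    """De-biased P@1: each test example scores 1 if the system being evaluated
    ranks the clicked paper first; that score is divided by the propensity of
    the clicked position in the originally displayed ranking, then averaged."""
    scores = [
        (1.0 if rank == 0 else 0.0) / propensity[pos]
        for rank, pos in zip(system_rank_of_click, original_position)
    ]
    return float(np.mean(scores))
```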
parameters on our training objective (Equation 2).", "We perform minimal tuning of our model's hyperparameters based on the performance on the validation set, while baselines are extensively tuned.", "Based on initial experiments, we use a margin m =1 for the triplet loss.", "For training, we use the Adam optimizer (Kingma and Ba, 2014) following the suggested hyperparameters in Devlin et al. (2019) (LR: 2e-5, Slanted Triangular LR scheduler 10 (Howard and Ruder, 2018) with number of train steps equal to training instances and cut fraction of 0.1).", "We train the model on a single Titan V GPU (12G memory) for 2 epochs, with batch size of 4 (the maximum that fit in our GPU memory) and use gradient accumulation for an effective batch size of 32.", "Each training epoch takes approximately 1-2 days to complete on the full dataset.", "We release our code and data to facilitate reproducibility.", "11 Task-Specific Model Details For the classification tasks, we used a linear SVM where embedding vectors were the only features.", "The C hyperparameter was tuned via a held-out validation set.", "For the recommendation tasks, we use a feedforward ranking neural network that takes as input ten features designed to capture the similarity between each query and candidate paper, including the cosine similarity between the query and candidate embeddings and manually-designed features computed from the papers' citations, titles, authors, and publication dates.", "Baseline Methods Our work falls into the intersection of textual representation, citation mining, and graph learning, and we evaluate against state-of-the-art baselines from each of these areas.", "We compare with several strong textual models: SIF (Arora et al., 2017), a method for learning document representations by removing the first principal component of aggregated word-level embeddings which we pretrain on scientific text; SciBERT (Beltagy et al., 2019) a state-of-the-art pretrained Transformer LM for scientific text; and Sent-BERT (Reimers and Gurevych, 2019), a model that uses negative sampling to tune BERT for producing optimal sentence embeddings.", "We also compare with Citeomatic (Bhagavatula et al., 2018), a closely related paper representation model for citation prediction which trains content-based representations with citation graph information via dynamically sampled triplets, and SGC (Wu et al., 2019a), a state-of-the-art graph-convolutional approach.", "For completeness, additional baselines are also included; due to space constraints we refer to Appendix A for detailed discussion of all baselines.", "We tune hyperparameters of baselines to maximize performance on a separate validation set.", "Table 1 presents the main results corresponding to our evaluation tasks (described in 3).", "Overall, we observe substantial improvements across all tasks with average performance of 80.0 across all metrics on all tasks which is a 3.1 point absolute improvement over the next-best baseline.", "We now discuss the results in detail.", "For document classification, we report macro F1, a standard classification metric.", "We observe that the classifier performance when trained on our representations is better than when trained on any other baseline.", "Particularly, on the MeSH (MAG) dataset, we obtain an 86.4 (82.0) F1 score which is about a = + 2 .", "3 ( +1 . 
"Table 1 presents the main results corresponding to our evaluation tasks (described in §3).", "Overall, we observe substantial improvements across all tasks, with an average performance of 80.0 across all metrics on all tasks, which is a 3.1-point absolute improvement over the next-best baseline.", "We now discuss the results in detail.", "For document classification, we report macro F1, a standard classification metric.", "We observe that the classifier performance when trained on our representations is better than when trained on any other baseline.", "Particularly, on the MeSH (MAG) dataset, we obtain an 86.4 (82.0) F1 score, which is about a +2.3 (+1.5) point absolute increase over the best baseline on each dataset respectively.", "Our evaluation of the learned representations on predicting user activity is shown in the User activity columns of Table 1. SPECTER achieves a MAP score of 83.8 on the co-view task, and 84.5 on co-read, improving over the best baseline (Citeomatic in this case) by 2.7 and 4.0 points, respectively.", "We observe similar trends for the citation and co-citation tasks, with our model outperforming virtually all other baselines except for SGC, which has access to the citation graph at training and test time. 12", "(12: For SGC, we remove development and test set citations and co-citations during training. We also remove incoming citations from development and test set queries, as these would not be available at test time in production.)", "Note that methods like SGC cannot be used in a real-world setting to embed new papers that are not cited yet.", "On the other hand, on co-citation data our method is able to achieve the best results, with an nDCG of 94.8, improving over SGC by 2.3 points.", "Citeomatic also performs well on the citation tasks, as expected given that its primary design goal was citation prediction.", "Nevertheless, our method slightly outperforms Citeomatic on the direct citation task, while substantially outperforming it on co-citations (+2.0 nDCG).", "Finally, for the recommendation task, we observe that SPECTER outperforms all other models on this task as well, with an nDCG of 53.9.", "On the recommendation task, as opposed to the previous experiments, the differences in method scores are generally smaller.", "This is because for this task the embeddings are used along with several other informative features in the ranking model (described under task-specific models in §4), meaning that embedding variants have less opportunity for impact on overall performance.", "We also performed an online study to evaluate whether SPECTER embeddings offer similar advantages in a live application.", "We performed an online A/B test comparing our SPECTER-based recommender to an existing production recommender system for similar papers that ranks papers by a textual similarity measure.", "In a dataset of 4,113 clicks, we found that the SPECTER ranker improved clickthrough rate over the baseline by 46.5%, demonstrating its superiority.", "We emphasize that our citation-based pretraining objective is critical for the performance of SPECTER; removing this and using a vanilla SciBERT results in decreased performance on all tasks.", "In this section, we analyze several design decisions in SPECTER, provide a visualization of its embedding space, and experimentally compare SPECTER's use of fixed embeddings against a fine-tuning approach.", "Ablation Study We start by analyzing how adding or removing metadata fields from the input to SPECTER alters performance.", "The results are shown in the top four rows of Table 2 (for brevity, here we only report the average of the metrics from each task).", "We observe that removing the abstract from the textual input and relying only on the title results in a substantial decrease in performance.", "More surprisingly, adding authors as an input (along with title and abstract) hurts performance. 13", "One possible explanation is that author names are sparse in the corpus, making it difficult for the model to infer document-level relatedness from them.", "As another possible reason for this behavior, tokenization using WordPieces might be suboptimal for author names.",
"Finally, we find that adding venues slightly decreases performance, 14 except on document classification (which makes sense, as we would expect venues to have high correlation with paper topics).", "13 We experimented with both concatenating authors with the title and abstract and also considering them as an additional field; neither was helpful.", "14 Venue information in our data came directly from publisher-provided metadata and thus was not normalized; venue normalization could help improve results.", "Table 2 (Ablations; numbers are averages of the metrics for each evaluation task; CLS: classification, USR: user activity, CITE: citation prediction, REC: recommendation, Avg.: average over all tasks & metrics): SPECTER: CLS 84.2, USR 88.4, CITE 91.5, REC 36.9, Avg. 80.0 | - abstract: 82.2, 72.2, 73.6, 34.5, 68.1 | + venue: 84.5, 88.0, 91.2, 36.7, 79.9 | + author: 82.7, 72.3, 71.0, 34.6, 67.3 | No hard negatives: 82.4, 85.8, 89.8, 36.8, 78.4 | Start w/ BERT-Large: 81.7, 85.9, 87.8, 36.1, 77.5.", "The fact that SPECTER does not require inputs like authors or venues makes it applicable in situations where this metadata is not available, such as matching reviewers with anonymized submissions, or performing recommendations of anonymized preprints (e.g., on OpenReview).", "One design decision in SPECTER is to use a set of hard negative distractors in the citation-based fine-tuning objective.", "The fifth row of Table 2 shows that this is important: using only easy negatives reduces performance on all tasks.", "While there could be other potential ways to include hard negatives in the model, our simple approach of including citations of citations is effective.", "The sixth row of the table shows that using a strong general-domain language model (BERT-Large) instead of SciBERT in SPECTER reduces performance considerably.", "This is reasonable because, unlike BERT-Large, SciBERT is pretrained on scientific text.", "Visualization: Figure 2 shows t-SNE (van der Maaten, 2014) projections of our embeddings (SPECTER) compared with the SciBERT baseline for a random set of papers.", "When comparing SPECTER embeddings with SciBERT, we observe that our embeddings are better at encoding topical information, as the clusters seem to be more compact.", "Further, we see some examples of cross-topic relatedness reflected in the embedding space (e.g., Engineering, Mathematics and Computer Science are close to each other, while Business and Economics are also close to each other).", "To quantify the comparison of visualized embeddings in Figure 2, we use the DBSCAN clustering algorithm (Ester et al., 1996) on this 2D projection.", "We use the completeness and homogeneity clustering quality measures introduced by Rosenberg and Hirschberg (2007).", "For the points corresponding to Figure 2, the homogeneity and completeness values for SPECTER are respectively 0.41 and 0.72, compared with SciBERT's 0.19 and 0.63, a clear improvement on separating topics using the projected embeddings.", "Comparison with Task-Specific Fine-Tuning: While the fact that SPECTER does not require fine-tuning makes its paper embeddings less costly to use, often the best performance from pretrained Transformers is obtained when the models are fine-tuned directly on each end task.", "We experiment with fine-tuning SciBERT on our tasks, and find this to be generally inferior to using our fixed representations from SPECTER.",
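A minimal sketch of the clustering-quality check described above, assuming document embeddings and gold topic labels are already available as arrays; the DBSCAN hyperparameters are illustrative, since the paper does not report them.

```python
# Sketch of the visualization-quality check: cluster the 2D t-SNE
# projection with DBSCAN and score it against gold topic labels.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import DBSCAN
from sklearn.metrics import homogeneity_score, completeness_score

def projection_quality(embeddings: np.ndarray, topics: np.ndarray):
    points_2d = TSNE(n_components=2, random_state=0).fit_transform(embeddings)
    # eps / min_samples are illustrative; the paper does not report them.
    predicted = DBSCAN(eps=1.5, min_samples=5).fit_predict(points_2d)
    return (homogeneity_score(topics, predicted),
            completeness_score(topics, predicted))

# Random vectors stand in for SPECTER / SciBERT embeddings here.
embeddings = np.random.randn(500, 768).astype(np.float32)
topics = np.random.randint(0, 10, size=500)
print(projection_quality(embeddings, topics))
```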
"Specifically, we fine-tune SciBERT directly on task-specific signals instead of citations.", "To fine-tune on task-specific data (e.g., user activity), we used a dataset of co-views with 65K query papers, co-reads with 14K query papers, and co-citations (instead of direct citations) with 83K query papers.", "As the end tasks are ranking tasks, for all datasets we construct up to 5 triplets and fine-tune the model using triplet ranking loss.", "The positive papers are sampled from Training signal CLS USR CITE REC All SPECTER 84.2 88.4 91.5 36.9 80.0 SciBERT fine-tune on co-view 83.0 84.2 84.1 36.4 76.0 SciBERT fine-tune on co-read 82.3 85.4 86.7 36.3 77.1 SciBERT fine-tune on co-citation 82.9 84.3 85.2 36.6 76.4 SciBERT fine-tune on multitask 83.3 86.1 88.2 36.0 78.0 Table 3: Comparison with task-specific fine-tuning.", "the most co-viewed (co-read, or co-cited) papers corresponding to the query paper.", "We also include both easy and hard distractors as when training SPECTER (for hard negatives we choose the least non-zero co-viewed (co-read, or co-cited) papers).", "We also consider training jointly on all task-specific training data sources in a multitask training process, where the model samples training triplets from a distribution over the sources.", "As illustrated in Table 3, without any additional final task-specific fine-tuning, SPECTER still outperforms a SciBERT model fine-tuned on the end tasks as well as their multitask combination, further demonstrating the effectiveness and versatility of SPECTER embeddings.", "15 7 Related Work Recent representation learning methods in NLP rely on training large neural language models on unsupervised data (Peters et al., 2018; Radford et al., 2018; Devlin et al., 2019; Beltagy et al., 2019; Liu et al., 2019).", "While successful at many sentence-and token-level tasks, our focus is on using the models for document-level representation learning, which has remained relatively under-explored.", "There have been other efforts in document representation learning such as extensions of word vectors to documents (Le and Mikolov, 2014; Ganesh et al., 2016; Liu et al., 2017; Wu et al., 2018; Gy-sel et al., 2017), convolution-based methods (Liu et al., 2018; Zamani et al., 2018), and variational autoencoders (Holmer and Marfurt, 2018; Wang et al., 2019).", "Relevant to document embedding, sentence embedding is a relatively well-studied area of research.", "Successful approaches include seq2seq models (Kiros et al., 2015), BiLSTM Siamese networks (Williams et al., 2018), leveraging supervised data from other corpora (Conneau et al., 2017), and using discourse relations (Nie et al., 2019), and BERT-based methods (Reimers and Gurevych, 2019).", "Unlike our proposed method, 15 We also experimented with further task-specific fine-tuning of our SPECTER on the end tasks but we did not observe additional improvements.", "the majority of these approaches do not consider any notion of inter-document relatedness when embedding documents.", "Other relevant work combines textual features with network structure (Tu et al., 2017; Zhang et al., 2018; Bhagavatula et al., 2018; Shen et al., 2018; Chen et al., 2019; Wang et al., 2019).", "These works typically do not leverage the recent pretrained contextual representations and with a few exceptions such as the recent work by Wang et al. 
"Context-based citation recommendation is another related application where models rely on citation contexts (Jeong et al., 2019) to make predictions.", "These works are orthogonal to ours, as the input to our model is just the paper title and abstract.", "Another related line of work is graph-based representation learning methods (Bruna et al., 2014; Kipf and Welling, 2017; Hamilton et al., 2017a,b; Wu et al., 2019a,b).", "Here, we compare to a graph representation learning model, SGC (Simple Graph Convolution) (Wu et al., 2019a), which is a state-of-the-art graph convolution approach for representation learning.", "SPECTER uses pretrained language models in combination with graph-based citation signals, which enables it to outperform the graph-based approaches in our experiments.", "SPECTER embeddings are based on only the title and abstract of the paper.", "Adding the full text of the paper would provide a more complete picture of the paper's content and could improve accuracy (Cohen et al., 2010; Lin, 2008; Schuemie et al., 2004).", "However, the full text of many academic papers is not freely available.", "Further, modern language models have strict memory limits on input size, which means new techniques would be required in order to leverage the entirety of the paper within the models.", "Exploring how to use the full paper text within SPECTER is an item of future work.", "Finally, one pain point in academic paper recommendation research has been a lack of publicly available datasets (Chen and Lee, 2018; Kanakia et al., 2019).", "To address this challenge, we release SCIDOCS, our evaluation benchmark, which includes an anonymized clickthrough dataset from an online recommendations system.", "We present SPECTER, a model for learning representations of scientific papers, based on a Transformer language model that is pretrained on citations.", "We achieve substantial improvements over the strongest of a wide variety of baselines, demonstrating the effectiveness of our model.", "We additionally introduce SCIDOCS, a new evaluation suite consisting of seven document-level tasks, and release the corresponding datasets to foster further research in this area.", "The landscape of Transformer language models is rapidly changing, and newer and larger models are frequently introduced.", "It would be interesting to initialize our model weights from more recent Transformer models to investigate if additional gains are possible.", "Another item of future work is to develop better multitask approaches to leverage multiple signals of relatedness information during training.", "We used citations to build triplets for our loss function; however, there are other metrics with good support from the bibliometrics literature (Klavans and Boyack, 2006) that warrant exploring as a way to create relatedness graphs.", "Including other information, such as outgoing citations, as additional input to the model would be yet another area to explore in future work.", "We thank Kyle Lo, Daniel King and Oren Etzioni for helpful research discussions, Russel Reas for setting up the public API, Field Cady for help in initial data collection, and the anonymous reviewers (especially Reviewer 1) for comments and suggestions.", "This work was supported in part by NSF Convergence Accelerator award 1936940, ONR grant N00014-18-1-2193, and the University of Washington WRF/Cable Professorship." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "objective", "result", "abstain", "abstain", "other", "abstain", "method", "result", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "result", "objective", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "other", "other", "other", "objective", "other", "method", "other", "other", "other", "abstain", "other", "method", "other", "other", "objective", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "result", "result", "abstain", "result", "result", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "other", "other" ]
[ "We consider the task of text attribute transfer: transforming a sentence to alter a specific attribute (e.g., sentiment) while preserving its attribute-independent content (e.g., changing screen is just the right size to screen is too small ).", "Our training data includes only sentences labeled with their attribute (e.g., positive or negative), but not pairs of sentences that differ only in their attributes, so we must learn to disentangle attributes from attribute-independent content in an unsupervised way.", "Previous work using adversarial methods has struggled to produce high-quality outputs.", "In this paper, we propose simpler methods motivated by the observation that text attributes are often marked by distinctive phrases (e.g., too small ).", "Our strongest method extracts content words by deleting phrases associated with the sentence's original attribute value, retrieves new phrases associated with the target attribute, and uses a neural model to fluently combine these into a final output.", "On human evaluation, our best method generates grammatical and appropriate responses on 22% more inputs than the best previous system, averaged over three attribute transfer datasets: altering sentiment of reviews on Yelp, altering sentiment of reviews on Amazon, and altering image captions to be more romantic or humorous.", "The success of natural language generation (NLG) systems depends on their ability to carefully control not only the topic of produced utterances, but also attributes such as sentiment and style.", "The desire for more sophisticated, controllable NLG has led to increased interest in text attribute transfer the task of editing a sentence to alter specific attributes, such as style, sentiment, and tense (Hu Work done while the author was a visiting researcher at Stanford University.", "et al., 2017; Shen et al., 2017; Fu et al., 2018).", "In each of these cases, the goal is to convert a sentence with one attribute (e.g., negative sentiment) to one with a different attribute (e.g., positive sen-timent), while preserving all attribute-independent content 1 (e.g., what properties of a restaurant are being discussed).", "Typically, aligned sentences with the same content but different attributes are not available; systems must learn to disentangle attributes and content given only unaligned sentences labeled with attributes.", "Previous work has attempted to use adversarial 1 Henceforth, we refer to attribute-independent content as simply content , for simplicity.", "networks (Shen et al., 2017; Fu et al., 2018) for this task, butas we demonstratetheir outputs tend to be low-quality, as judged by human raters.", "These models are also difficult to train (Salimans et al., 2016; Arjovsky and Bottou, 2017; Bous-malis et al., 2017).", "In this work, we propose a set of simpler, easier-to-train systems that leverage an important observation: attribute transfer can often be accomplished by changing a few attribute markers words or phrases in the sentence that are indicative of a particular attributewhile leaving the rest of the sentence largely unchanged.", "Figure 1 shows an example in which the sentiment of a sentence can be altered by changing a few sentiment-specific phrases but keeping other words fixed.", "With this intuition, we first propose a simple baseline that already outperforms prior adversarial approaches.", "Consider a sentiment transfer (nega-tive to positive) task.", "First, from unaligned corpora of positive and negative sentences, we identify attribute markers 
by finding phrases that occur much more often within sentences of one attribute than the other (e.g., 'worst' and 'very disappointed' are negative markers).", "Second, given a sentence, we delete any negative markers in it, and regard the remaining words as its content.", "Third, we retrieve a sentence with similar content from the positive corpus.", "We further improve upon this baseline by incorporating a neural generative model, as shown in Figure 1. Our neural system extracts content words in the same way as our baseline, then generates the final output with an RNN decoder that conditions on the extracted content and the target attribute.", "This approach has significant benefits at training time, compared to adversarial networks: having already separated content and attribute, we simply train our neural model to reconstruct sentences in the training data as an auto-encoder.", "We test our methods on three text attribute transfer datasets: altering sentiment of Yelp reviews, altering sentiment of Amazon reviews, and altering image captions to be more romantic or humorous.", "Averaged across these three datasets, our simple baseline generated grammatical sentences with appropriate content and attribute 23% of the time, according to human raters; in contrast, the best adversarial method achieved only 12%.", "Our best neural system in turn outperformed our baseline, achieving an average success rate of 34%.", "Our code and data, including newly collected human reference outputs for the Yelp and Amazon domains, can be found at https://github.com/lijuncen/Sentiment-and-Style-Transfer .", "We assume access to a corpus of labeled sentences D = {(x_1, v_1), ..., (x_m, v_m)}, where x_i is a sentence and v_i ∈ V, the set of possible attributes (e.g., for sentiment, V = {positive, negative}).", "We define D_v = {x : (x, v) ∈ D}, the set of sentences in the corpus with attribute v.", "Crucially, we do not assume access to a parallel corpus that pairs sentences with different attributes and the same content.", "Our goal is to learn a model that takes as input (x, v_tgt), where x is a sentence exhibiting the source (original) attribute v_src and v_tgt is the target attribute, and outputs a sentence y that retains the content of x while exhibiting v_tgt.", "As a motivating example, suppose we wanted to change the sentiment of 'The chicken was delicious.' from positive to negative.", "Here the word 'delicious' is the only sentiment-bearing word, so we just need to replace it with an appropriate negative sentiment word.", "More generally, we find that the attribute is often localized to a small fraction of the words, an inductive bias not captured by previous work.", "How do we know which negative sentiment word to insert?", "The key observation is that the remaining content words provide strong cues: given 'The chicken was ...', one can infer that a taste-related word like 'bland' fits, but a word like 'rude' does not, even though both have negative sentiment.", "In other words, while the deleted sentiment words do contain non-sentiment information too, this information can often be recovered using the other content words.", "In the rest of this section, we describe our four systems: two baselines (RETRIEVEONLY and TEMPLATEBASED) and two neural models (DELETEONLY and DELETEANDRETRIEVE).", "An overview of all four systems is shown in Figure 2. Formally, the main components of these systems are as follows:
1. Delete: All 4 systems use the same procedure to separate the words in x into a set of attribute markers a(x, v_src) and a sequence of content words c(x, v_src).", "2. Retrieve: 3 of the 4 systems look through the corpus and retrieve a sentence x_tgt that has the target attribute v_tgt and whose content is similar to that of x.", "3. Generate: Given the content c(x, v_src), the target attribute v_tgt, and (optionally) the retrieved sentence x_tgt, each system generates y, either in a rule-based fashion or with a neural sequence-to-sequence model.", "We describe each component in detail below.", "We propose a simple method to delete attribute markers (n-grams) that have the most discriminative power.", "Formally, for any v ∈ V, we define the salience of an n-gram u with respect to v by its (smoothed) relative frequency in D_v: s(u, v) = (count(u, D_v) + λ) / ((Σ_{v' ∈ V, v' ≠ v} count(u, D_{v'})) + λ)   (1), where count(u, D_v) denotes the number of times an n-gram u appears in D_v, and λ is the smoothing parameter.", "We declare u to be an attribute marker for v if s(u, v) is larger than a specified threshold γ.", "The attribute markers can be viewed as discriminative features for a Naive Bayes classifier.", "We define a(x, v_src) to be the set of all source attribute markers in x, and define c(x, v_src) as the sequence of words after deleting all markers in a(x, v_src) from x.", "For example, for 'The chicken was delicious,' we would delete 'delicious' and consider 'The chicken was ...' to be the content (Figure 2, Step 1).",
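A minimal sketch of Equation (1) and the resulting Delete step; the smoothing parameter lam, the threshold gamma, and the toy corpus are illustrative values, not those tuned in the paper.

```python
# Sketch of Equation (1): extract attribute markers by salience, then
# apply the Delete step. lam, gamma and the toy corpus are illustrative.
import re
from collections import Counter

def ngram_counts(sentences, n_max=4):
    counts = Counter()
    for sentence in sentences:
        words = sentence.split()
        for n in range(1, n_max + 1):
            for i in range(len(words) - n + 1):
                counts[" ".join(words[i:i + n])] += 1
    return counts

def attribute_markers(corpus_by_attr, attr, lam=1.0, gamma=15.0):
    counts = {v: ngram_counts(sents) for v, sents in corpus_by_attr.items()}
    markers = set()
    for u, count_v in counts[attr].items():
        count_other = sum(counts[v][u] for v in counts if v != attr)
        salience = (count_v + lam) / (count_other + lam)  # Equation (1)
        if salience > gamma:
            markers.add(u)
    return markers

def delete_markers(sentence, markers):
    # Remove longer markers first; the remaining words are the content.
    for marker in sorted(markers, key=len, reverse=True):
        sentence = re.sub(r"\b" + re.escape(marker) + r"\b", " ", sentence)
    return " ".join(sentence.split())

corpus = {"neg": ["the food was the worst", "i was very disappointed"],
          "pos": ["the food was delicious", "the staff was lovely"]}
neg_markers = attribute_markers(corpus, "neg", gamma=1.5)
print(delete_markers("i was very disappointed by the food", neg_markers))
```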
), and exchanging them is syntactically valid.", "However, this naive swapping of attribute markers can result in ungrammatical outputs.", "DELETEONLY first embeds the content c ( x, v src ) into a vector using an RNN.", "It then concatenates the final hidden state with a learned embedding for v tgt , and feeds this into an RNN decoder to generate y .", "The decoder attempts to produce words indicative of the source content and target attribute, while remaining fluent.", "DELETEANDRETRIEVE is similar to DELETEONLY , but uses the attribute markers of the retrieved sentence x tgt rather than the target attribute v tgt .", "Like DELETEONLY , it encodes c ( x, v src ) with an RNN.", "It then encodes the sequence of attribute markers a ( x tgt , v tgt ) with another RNN.", "The RNN decoder uses the concatenation of this vector and the content embedding to generate y .", "DELETEANDRETRIEVE combines the advantages of TEMPLATEBASED and DELETEONLY .", "Unlike TEMPLATEBASED , DELETEANDRETRIEVE can pick a better place to insert the given attribute markers, and can add or remove function words to ensure grammaticality.", "Compared to DELETEONLY , DELETEANDRETRIEVE has a stronger inductive bias towards using target attribute markers that are likely to fit in the current context.", "Guu et al. (2018) showed that retrieval strategies like ours can help neural generative models.", "Finally, DELETEANDRETRIEVE gives us finer control over the output; for example, we can control the degree of sentiment by deciding whether to add good or fantastic based on the retrieved sentence x tgt .", "We now describe how to train DELETEANDRETRIEVE and DELETEONLY .", "Recall that at training time, we do not have access to ground truth outputs that express the target attribute.", "Instead, we train DELETEONLY to reconstruct the sentences in the training corpus given their content and original attribute value by maximizing: L ( ) = X ( x,v src ) D log p ( x | c ( x, v src ) , v src ); ) .", "(3) For DELETEANDRETRIEVE , we could similarly learn an auto-encoder that reconstructs x from c ( x, v src ) and a ( x, v src ) .", "However, this results in a trivial solution: because a ( x, v src ) and c ( x, v src ) were known to come from the same sentence, the model merely learns to stitch the two sequences together without any smoothing.", "Such a model would fare poorly at test time, when we may need to alter some words to fluently combine a ( x tgt , v tgt ) with c ( x, v src ) .", "To address this train/test mismatch, we adopt a denoising method similar to the denoising auto-encoder (Vincent et al., 2008).", "During training, we apply some noise to a ( x, v src ) by randomly altering each attribute marker in it independently with probability 0 .", "1 .", "Specifically, we replace an attribute marker with another randomly selected attribute marker of the same attribute and word-level edit distance 1 if such a noising marker exists, e.g., was very rude to very rude , which produces a 0 ( x, v src ) .", "Therefore, the training objective for DELETEANDRETRIEVE is to maximize: L ( ) = X ( x,v src ) D log p ( x | c ( x, v src ) , a 0 ( x, v src ); ) .", "(4) 4 Experiments We evaluated our approach on three domains: flip-ping sentiment of Yelp reviews (YELP ) and Amazon reviews (AMAZON ), and changing image captions to be romantic or humorous (CAPTIONS ).", "We compared our four systems to human references and three previously published adversarial approaches.", "As judged by human raters, both of our two baselines outperform all 
"4 Experiments: We evaluated our approach on three domains: flipping the sentiment of Yelp reviews (YELP) and Amazon reviews (AMAZON), and changing image captions to be romantic or humorous (CAPTIONS).", "We compared our four systems to human references and three previously published adversarial approaches.", "As judged by human raters, both of our two baselines outperform all three adversarial methods.", "Moreover, DELETEANDRETRIEVE outperforms all other automatic approaches.", "First, we describe the three datasets we use, which are commonly used in prior work too.", "All datasets are randomly split into train, development, and test sets (Table 1).", "AMAZON: Similar to YELP, each example is a sentence from a product review on Amazon, and is labeled as having either positive or negative sentiment (He and McAuley, 2016).", "CAPTIONS: In the CAPTIONS dataset (Gan et al., 2017), each example is a sentence that describes an image, and is labeled as either factual, romantic, or humorous.", "We focus on the task of converting factual sentences into romantic and humorous ones.", "Unlike YELP and AMAZON, CAPTIONS is actually an aligned corpus: it contains captions for the same image in different styles.", "Our systems do not use these alignments, but we use them as gold references for evaluation.", "CAPTIONS is also unique in that we reconstruct romantic and humorous sentences during training, whereas at test time we are given factual captions.", "We assume these factual captions carry only content, and therefore do not look for and delete factual attribute markers; the model essentially only inserts romantic or humorous attribute markers as appropriate." ]
[ "method", "objective", "abstain", "objective", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "objective", "abstain", "method", "result", "result", "abstain", "objective", "result", "other", "method", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method" ]
[ "Translating from languages without productive grammatical gender like English into gender-marked languages is a well-known difficulty for machines.", "This difficulty is also due to the fact that the training data on which models are built typically reflect the asymmetries of natural languages, gender bias included.", "Exclusively fed with textual data , machine translation is intrinsically constrained by the fact that the input sentence does not always contain clues about the gender identity of the referred human entities.", "But what happens with speech translation, where the input is an audio signal?", "Can audio provide additional information to reduce gender bias?", "We present the first thorough investigation of gender bias in speech translation, contributing with:", "i) the release of a benchmark useful for future studies, and", "ii) the comparison of different technologies (cascade and end-to-end) on two language directions (English-Italian/French).", "With the exponential popularity of deep learning approaches for a great range of natural language processing (NLP) tasks being integrated in our daily life, the need to address the issues of gender fairness 1 and gender bias has become a growing interdisciplinary concern.", "Present-day studies on a variety of NLP-related tasks, such as sentiment analysis (Kiritchenko and Mohammad, 2018) coreference resolution (Rudinger et al., 2018; Webster et al., 2018; Zhao et al., 2018), visual semantic-role labeling (Zhao et al., 2017) or language modeling (Lu These authors contributed equally. The work by Beatrice Savoldi was carried out during an internship at Fondazione Bruno Kessler. 1 We acknowledge that gender is a multifaceted notion, not necessarily constrained within binary assumptions. However, since speech translation is hindered by the scarcity of available data, we rely on the female/male distinction of gender, as it is linguistically reflected in existing natural data. et al., 2019), attest the existence of a systemic bias that reproduces gender stereotypes discriminating women.", "In translation-related tasks, gender bias arises from the extent through which each language formally expresses the female or male gender of a referred human entity.", "Languages with a grammatical system of gender, such as Romance languages, rely on a copious set of morphological (inflection) and syntactic (gender agreement) devices applying to numerous parts of speech (Hockett, 1958).", "Differently, English is a natural gender language that only reflects distinction of sex via pronouns, inherently gendered words ( boy, girl ) and exceptionally with marked nouns ( actor, actress ).", "For all the other indistinct neutral words, the gender of the referred entity if available is inferred from contextual information present in the discourse, e.g. he/she is a friend .", "Nascent inquiries on machine translation (MT) pointed out that machines tend to reproduce the linguistic asymmetries present in the real-world data they are trained on.", "In the case of gender inequality, this is made apparent by the attribution of occupational roles from gender-neutral linguistic forms into marked ones, where MT often wrongly chooses male-denoting (pro)nouns, e.g. 
identifying 'scientist', 'engineer' or 'doctor' as men (Prates et al., 2018; Escudé Font and Costa-jussà, 2019).", "Failing to pick the appropriate feminine form is both a technical and an ethical matter: gender-related errors affect the accuracy of MT systems but, more significantly, a biased system can dangerously perpetuate the under-/misrepresentation of a demographic group (Crawford, 2017).", "Previous studies accounting for MT systems' strengths and weaknesses in the translation of gender shed light on the problem but, at the same time, have limitations.", "On one hand, the existing evaluations focused on gender bias were largely conducted on challenge datasets, which are controlled artificial benchmarks that provide a limited perspective on the extent of the phenomenon and may force unreliable conclusions (Prates et al., 2018; Cho et al., 2019; Escudé Font and Costa-jussà, 2019; Stanovsky et al., 2019).", "On the other hand, the natural corpora built on conversational language that were used in a few studies (Elaraby et al., 2018; Vanmassenhove et al., 2018) include only a restricted quantity of non-isolated gender-expressing forms, thus not permitting either extensive or targeted evaluations.", "Moreover, no attempt has yet been made to assess if and how speech translation (ST) systems are affected by this particular problem.", "As such, whether ST technologies that leverage audio inputs can retrieve useful clues for translating gender in addition to the contextual information present in the discourse, or compensate for its absence, remains a largely unexplored question.", "In light of the above, the contributions of this paper are: (1) We present the first systematic analysis aimed to assess ST performance on gender translation.", "To this aim, we compare the state-of-the-art cascaded approach with the emerging end-to-end paradigm, investigating their ability to properly handle different categories of gender phenomena.", "(2) We publicly release MuST-SHE, 2 a multilingual, natural benchmark allowing for a fine-grained analysis of gender bias in MT and ST. MuST-SHE is a subset of the TED-based MuST-C corpus (Di Gangi et al., 2019a) and is available for English-French and English-Italian. 3", "For each language pair, it comprises 1,000 (audio, transcript, translation) triplets annotated with qualitatively differentiated and balanced gender-related phenomena.", "(3) We implement a new evaluation method that acknowledges and adapts previous related works to go beyond them and make BLEU scores informative about gender.", "It removes unrelated factors that may affect the overall performance of a system, to soundly estimate gender bias.", "On the two language pairs addressed, our comparative evaluation of cascade vs.
end-to-end ST systems indicates that the latter are able to better exploit audio information to translate specific gender phenomena, for which the cascade systems require externally-injected information.", "2 MuST-SHE is released under a CC BY NC ND 4.0 International license, and is freely downloadable at ict.fbk.eu/must-she .", "Speech translation.", "The task of translating audio speech in one language into text in another language has been traditionally approached with cascade architectures combining automatic speech recognition (ASR) and MT components (Eck and Hori, 2005).", "The main advantage of this pipelined solution is that it can directly plug in state-of-the-art technology for both components and exploit the wealth of training data available for the two tasks.", "The approach, however, has some drawbacks.", "One is error propagation: sub-optimal transcriptions by the ASR component have a significant impact on the final output produced by the MT component.", "To cope with this issue, recent works focused on making MT models more robust to noisy input transcripts (Sperber et al., 2017, 2019; Di Gangi et al., 2019b).", "A second issue, particularly relevant to this research, is the information loss when passing from audio to text representations.", "Even with perfect transcripts, subtle aspects that cannot be grasped from the text only (e.g. the speaker's pitch as a clue to his/her gender) can only be reintroduced by injecting external knowledge to support the MT step (Elaraby et al., 2018).", "By avoiding intermediate text representations, direct end-to-end translation from audio to text (Berard et al., 2016) can potentially cope with these limitations.", "However, due to the dearth of training corpora, it still underperforms with respect to the cascaded approach.", "Recent evaluation campaigns (Niehues et al., 2018, 2019) have shown that, although the gap is gradually closing (less than 2 BLEU points), cascade models still represent the state of the art.", "In spite of the steady technological progress, little has so far been done to directly compare the two technologies on specific translation problems like the one addressed in this paper.", "Measuring gender bias.", "Previous attempts to test the production of gender-aware automatic translations solely focused on MT, where a widespread approach involves the creation of challenge datasets focused on specific linguistic phenomena.", "Prates et al. (2018) and Cho et al. (2019) construct template sentences using occupational or sentiment words associated with a gender-neutral pronoun, to be translated into an English gender-specified one ('[x] is a professor: he/she is a professor').", "Similarly, the Occupations Test (Escudé Font and Costa-jussà, 2019) and WinoMT (Stanovsky et al., 2019) cast human entities into proto- or anti-stereotypical gender associations via coreference linking (e.g. the English sentence 'The janitor does not like the baker because she/he always messes up the kitchen', where 'the baker' is to be translated into Spanish as 'la panadera' or 'el panadero' depending on the English pronoun).", "Although such simple constructions allow for targeted experiments, artificial data characterized by a qualitatively limited variety of phenomena generate constrained environments that may produce biased results.", "As far as studies on naturally occurring data are concerned, Vanmassenhove et al.
(2018) estimate MT systems' performance in the realization of the speaker's gender agreement on two male and female test sets containing first-person singular pronouns.", "This strategy increases the chances to isolate speaker-dependent gendered expressions but, still, the employed BLEU metric does not pointedly grasp the effect of gender translation on the output, as the overall performance is also impacted by other factors.", "Analogously, Elaraby et al. (2018) design a set of agreement rules to automatically recover 300 gender-affected sentences in their corpus, but the evaluation relies on global BLEU scores computed on a bigger set (1,300 sentences) and does not consider male-female related differences.", "Moryossef et al. (2019) use a parser to detect morphological realizations of speakers' gender on a single female-speaker corpus that does not permit inter-gender comparisons.", "In light of the above, an ideal test set should consist of naturally occurring data exhibiting a diversified assortment of gender phenomena, so as to avoid forced predictions with over-controlled procedures.", "Also, a consistent amount of equally distributed feminine and masculine gender realizations needs to be identified to disentangle the accuracy of gender translation from the overall model's performance.", "Accordingly, in Section 3 we present MuST-SHE, a multilingual test set designed for the investigation of gender bias in ST, which, as explained in Section 4, is used for a targeted gender-sensitive evaluation approach.", "We built MuST-SHE on naturally occurring data retrieved from MuST-C (Di Gangi et al., 2019a), the largest freely available multilingual corpus for ST, which comprises (audio, transcript, translation) triplets extracted from TED talks data.", "Besides being multilingual, MuST-C is characterized by high-quality speech and a variety of different speakers that adequately represent women, two aspects that determined its selection among other existing corpora (Post et al., 2013; Kocabiyikoglu et al., 2018; Sanabria et al., 2018).", "As such, MuST-SHE was compiled by targeting in the original dataset linguistic phenomena that entail a gender identification from English into Italian and French, two Romance languages that extensively express gender via feminine or masculine morphological markers on nouns, adjectives, verbs and other functional words (e.g.
articles and demonstratives).", "MuST-SHE is compiled with segments that require the translation of at least one English gender-neutral word into the corresponding masculine or feminine target word(s), where such a formal distinction semantically conveys and conflates with an actual distinction of sex (Corbett, 1991).", "For instance, the English utterance 'a good teacher' would either become in French 'un bon enseignant' or 'une bonne enseignante' for, respectively, a male or female referent.", "In spoken language data, the human entity that determines gender agreement is either the speaker him/herself ('I am a good teacher') or another person the speaker is referring to ('he/she is a good teacher').", "We classify our phenomena of interest in two categories based on where the necessary information to disambiguate gender can be recovered, namely (Category 1) from the audio signal, when gender agreement only depends on the speaker's gender, which can be captured from intrinsic properties of the audio ('I am a teacher' uttered by a man/woman); (Category 2) from the utterance content, where contextual hints such as gender-exclusive words ('mom'), pronouns ('she', 'his') and proper nouns ('Paul') inform about the gender of the referent.", "To gain a better insight into the MuST-C linguistic data and capture the features of gender, we initially conducted a qualitative cross-lingual analysis on 2,500 parallel sentences randomly sampled from the corpus.", "The analysis led to the design of an automatic approach aimed to quantitatively and qualitatively maximize the extraction of an assorted variety of gender-marked phenomena belonging to categories 1 and 2. Regular expressions were employed to transform gender-agreement rules into search patterns to be applied to MuST-C.", "Our queries were designed and adapted to the targeted language pairs, categories, and masculine/feminine forms.", "To specifically match a differentiated range of gender-marked lexical items, we also compiled two series of 50 human-referring adjectives in French and Italian, as well as a list with more than 1,000 English occupation nouns obtained from the US Department of Labor Statistics 4 (Prates et al., 2018).", "4 http://www.bls.gov/emp/tables/emp-by-detailed-occupation.htm", "For each language direction, the pool of sentence pairs retrieved from MuST-C was manually checked in order to:", "i) remove noise and keep only pairs containing at least one gender phenomenon,", "ii) include all En-It/En-Fr corresponding pairs to create a common multilingual subset, and", "iii) select the remaining pairs ensuring a balanced distribution of categories, feminine/masculine forms, and female/male speakers.", "Once the textual part of MuST-SHE was created, all the corresponding audio segments were manually checked in order to correct possible misalignments.", "The resulting dataset was then manually enriched with different types of information that allow for fine-grained evaluations.", "Annotations include: category, masculine/feminine form, speaker's gender, and all the gender-marked expressions in the reference translation.", "Finally, in order to perform a sound evaluation able to discriminate gender-related issues from other non-related factors that may affect systems' performance, for each correct reference translation (C-REF) we created an almost identical wrong alternative (W-REF) in which all the gender-marked words are swapped to their opposite form (details in Section 4).",
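As an illustration of the regex-based extraction described above, the sketch below matches a handful of Italian first-person forms whose gender depends on the speaker (Category 1); these patterns are invented for this example and are not the authors' released queries.

```python
# Illustrative sketch of the regex-based query design. These patterns
# are invented for this example (NOT the authors' released queries):
# they match a few Italian first-person forms whose gender agreement
# depends only on the speaker (Category 1).
import re

PATTERNS = {
    ("it", "cat1", "F"): re.compile(r"\bsono\s+(una|stata|andata|nata)\b", re.I),
    ("it", "cat1", "M"): re.compile(r"\bsono\s+(un|stato|andato|nato)\b", re.I),
}

def match_phenomena(target_sentence: str):
    """Return the (language, category, form) keys whose pattern fires."""
    return [key for key, pattern in PATTERNS.items()
            if pattern.search(target_sentence)]

print(match_phenomena("Sono stata una brava insegnante."))
# -> [('it', 'cat1', 'F')]
```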
"Some examples extracted from MuST-SHE are presented in Table 1. To ensure data quality, the whole dataset was created and annotated by an expert linguist with a background in translation studies, who produced strict and comprehensive guidelines based on the preliminary manual analysis of a sample of MuST-C data (2,500 segments).", "Then, a second linguist independently re-annotated each MuST-SHE segment with the corresponding category and produced an additional wrong reference.", "The annotation per category being a straightforward task, it resulted in no disagreement for Category 1 and around 0.03% for Category 2. These few cases were removed from the dataset, which thus contains only segments in complete agreement.", "Disagreements were more common in the wrong references, since the task requires producing subtle variations that can be hard to spot.", "These disagreements, amounting to around 11%, were all oversights and thus reconciled.", "MuST-SHE comprises 2,136 (audio, transcript, translation) triplets (1,062 for En-It and 1,074 for En-Fr) uttered by 273 different speakers.", "A common subset of 696 instances allows for comparative evaluations across the two language directions.", "As shown by the statistics in Table 2, the corpus presents a balanced distribution across", "i) masculine and feminine forms, and", "ii) gender phenomena per category.", "Female and male speakers (558/513 for En-It, 577/498 for En-Fr) are substantially balanced.", "The gender of the speaker and of the referred entity in the utterance is the same in Category 1 (where the speakers talk about themselves), while it differs in about 50% of the segments in Category 2 (where they refer to other entities).", "MuST-SHE differs from standard test sets, as it is precisely designed to:", "i) equally distribute gender references as well as speakers, and", "ii) allow for a sound and focused evaluation on the accuracy of gender translation.", "As such, it satisfies the parameters to be qualified as a GBET, Gender Bias Evaluation Testset (Sun et al., 2019), and represents the very first of its kind for ST and MT created on natural data.", "MT evaluation metrics like BLEU (Papineni et al., 2002) or TER (Snover et al., 2006) provide a global score about translation quality as a whole.", "Used as-is, their holistic nature hinders the precise evaluation of systems' performance on an individual phenomenon like gender translation, since variations of the BLEU score are only a coarse and indirect indicator of better/worse overall performance (Callison-Burch et al., 2006).", "This represents a limitation of recent related works, which over-rely on the results of a BLEU-based quantitative analysis.", "For instance, the BLEU gains obtained by prepending gender tags or other artificial antecedents to the input source, as in Vanmassenhove et al. (2018) and Moryossef et al. (2019), cannot be assuredly ascribed to a better control of gender features.", "To overcome this problem, Moryossef et al.
(2019) complement their discussion with a qualitative syntactic analysis, which implies the availability of a parser for the target language and a higher complexity of the whole evaluation protocol.", "Instead, our aim is to keep using BLEU 5 and make the resulting scores informative about systems' ability to produce the correct gender forms.", "5 Still the de facto standard in MT evaluation, in spite of constant research efforts towards metrics that better correlate with human judgements.", "To this aim, for each reference c in the corpus we create a wrong one that is identical to c, except for the morphological signals that convey gender agreement.", "In particular, for each gender-neutral English word in the source utterance (e.g. 'one', 'great' and 'innovators' in the 4th example of Table 1), the correct translation (containing the French words with masculine inflection 'un', 'grands' and 'innovateurs') is swapped into its opposite gender form (containing the feminine-marked words 'une', 'grandes' and 'innovatrices').", "The result is a new set of references that, compared to the correct ones, are wrong only with respect to the formal expression of gender.", "The underlying idea is that, as the two reference sets differ only in the swapped gendered forms, differences in results for the same set of hypotheses produced by a given system can measure its capability to handle gender phenomena.", "In particular, we argue that higher values on the wrong set can signal a potentially gender-biased behaviour.", "In all the cases where the required gender realization is feminine, significantly higher BLEU results computed on the wrong set would signal a bias towards producing masculine forms, and vice versa.", "Although this idea recalls the gender-swapping approach used in previous NLP studies on gender bias (Sun et al., 2019; Lu et al., 2019; Kiritchenko and Mohammad, 2018; Zhao et al., 2018; Cao and Daumé III, 2019), in such works it is only applied to pronouns; here we extend it to any gender-marked part of speech.", "In addition to the quantitative BLEU-based evaluation, 6 we also perform a fine-grained qualitative analysis of systems' accuracy in producing the target gender-marked words.", "6 We also computed TER scores, and the results are fully in line with the reported BLEU scores.", "We compute accuracy as the proportion of gender-marked words in the references that are correctly translated by the system.", "An upper bound of one match for each gender-marked word is applied in order not to reward over-generated terms.", "Besides global accuracy, we also compute scores on both the correct and the wrong reference sets, as well as per category.", "It is worth remarking that the BLEU-based and the accuracy-based evaluations are complementary: the former aims to shed light on a system's translation performance with respect to gender phenomena; the latter, which is more discriminative, aims to point to the actual words through which gender is realized.", "Compared to the standard BLEU-based evaluation with correct references only, we expect that the possible differences suggested by its extension with gender swapping will be reflected and amplified by sharper accuracy differences.",
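A minimal sketch of this two-sided evaluation, assuming sacrebleu is installed and that the gender-marked words of each reference are available from the MuST-SHE annotations; the toy hypotheses and references are illustrative.

```python
# Sketch of the two-sided evaluation: corpus BLEU against the correct
# (C-REF) and gender-swapped (W-REF) references, plus the capped
# word-level accuracy on annotated gender-marked terms.
from collections import Counter
import sacrebleu

def bleu_diff(hypotheses, correct_refs, wrong_refs):
    bleu_c = sacrebleu.corpus_bleu(hypotheses, [correct_refs]).score
    bleu_w = sacrebleu.corpus_bleu(hypotheses, [wrong_refs]).score
    # A positive Diff means the correct gender forms are preferred.
    return bleu_c, bleu_w, bleu_c - bleu_w

def gender_accuracy(hypothesis, marked_terms):
    hyp = Counter(hypothesis.lower().split())
    ref = Counter(term.lower() for term in marked_terms)
    # At most one match per reference term, so over-generated words
    # are not rewarded.
    hits = sum(min(hyp[term], count) for term, count in ref.items())
    return hits / max(sum(ref.values()), 1)

hyps = ["je suis une bonne enseignante"]
c_refs = ["je suis une bonne enseignante"]
w_refs = ["je suis un bon enseignant"]
print(bleu_diff(hyps, c_refs, w_refs))
print(gender_accuracy(hyps[0], ["une", "bonne", "enseignante"]))  # 1.0
```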
"In our experiments, we compare an End2End system with two cascade systems (Cascade and Cascade+Tag), whose architectures are described below.", "Our End2End system uses the S-transformer architecture, which has proved to work reasonably well for this task (Di Gangi et al., 2019c).", "It is an encoder-decoder architecture that modifies the Transformer architecture (Vaswani et al., 2017) in two aspects.", "First, the audio input, in the form of sequences of 40 MFCCs (Davis and Mermelstein, 1980), is processed by a stack of 2D CNNs (LeCun et al., 1998), each followed by batch normalization (Ioffe and Szegedy, 2015) and a ReLU nonlinearity.", "Second, the output of the CNNs is processed by 2D self-attention networks to provide a larger context to each element.", "The output of the 2D attention is then summed with the positional encoding and fed to transformer encoder layers.", "In the second part, a distance penalty is added to the non-normalized probabilities in the encoder self-attention networks in order to bias the computation towards the local context.", "To improve translation quality, the End2End systems are trained on the MuST-C and Librispeech (Kocabiyikoglu et al., 2018) corpora using SpecAugment (Park et al., 2019).", "Since Librispeech is a corpus for ASR, we augmented it by automatically translating the original English transcripts into both target languages.", "Translations are performed at character level, using the MT systems integrated in the cascade model.", "Our Cascade systems share the same core (ASR, MT) technology.", "The ASR component is based on the KALDI toolkit (Povey et al., 2011), featuring a time-delay neural network and lattice-free maximum mutual information discriminative sequence-training (Povey et al., 2016).", "The audio data for acoustic modeling include the clean portion of LibriSpeech (Panayotov et al., 2015) (~460h) and a variable subset of the MuST-C training set (~450h), from which 40 MFCCs per time frame were extracted; a MaxEnt language model (Alumae and Kurimo, 2010) is estimated from the corresponding transcripts (~7M words).", "The MT component is based on the Transformer architecture, with parameters similar to those used in the original paper.", "The training data are collected from the OPUS repository, 7 resulting in 70M pairs for En-It and 120M for En-Fr.", "7 http://opus.nlpl.eu", "For each language pair, the MT system is first trained on the OPUS data and then fine-tuned on MuST-C training data (~250K pairs) from which the MuST-SHE segments are removed.", "Byte pair encoding (BPE) (Sennrich et al., 2015) is applied to obtain 50K sub-word units.", "To mitigate error propagation and make the MT system more robust to ASR errors, similarly to Di Gangi et al. (2019b) we tune it on a dataset derived from MuST-C, which includes both human and automatic transcripts.", "The training set, consisting of (audio, transcript) pairs, is split in two equally-sized parts: the first one is used to adapt the ASR system to the TED talk language, while the second part is transcribed by the tuned ASR system.", "The human transcripts of the first half and the automatic transcripts of the second half are concatenated and used together with their reference translations to fine-tune the MT system.", "This process makes the MT system aware of possible ASR errors and results in more than a 2 BLEU point improvement on the MuST-C test set.",
"We also train an enhanced version of the Cascade system.", "Similarly to Vanmassenhove et al. (2018), it is informed about speaker's gender by pre-pending a gender token ( < toM > or < toF > ) to each source transcript.", "The gender token is obtained by manually assigning the correct gender label to each speaker in MuST-C.", "This externally-injected knowledge allows the Cascade+Tag system to mimic end-to-end technology by leveraging gender information during translation.", "To check the overall quality of our systems, we compared them with published results on MuST-C test data.", "Our End2End systems (En-It: 21.5, En-Fr: 31.0) outperform all the models proposed in Di Gangi et al. (2019c), which were trained only on MuST-C (En-It: end2end 16.8, cascade 18.9; En-Fr: end2end 26.9, cascade 27.9).", "Our Cascade (En-It: 27.4 En-Fr: 35.5) also outperforms the system described in Indurthi et al. (2019) (En-Fr: 33.7).", "Our results are in line with the findings of IWSLT 2019 (Niehues et al., 2019) showing that the cascade approach still outperforms the direct one, although with a gradually closing gap.", "BLEU .", "Table 3 presents translation results in terms of BLEU score on the MuST-SHE dataset.", "Looking at overall translation quality ( All/Correct col-umn), the results on both language pairs show that the highest performance is achieved by cascade architectures, which are better than End2End by 2.6 points for En-It and 4.3 for En-Fr.", "We do not observe a statistically significant difference between Cascade and Cascade+Tag , suggesting that the injection of gender information into Cascade+Tag does not have visible effects in terms of translation quality, even on a focused dataset like MuST-SHE where each segment contains at least one gender realization.", "Our results thus seem to be in contrast with previous works implementing the same injection approach (Van-massenhove et al., 2018; Elaraby et al., 2018).", "However, looking at the scores' gap between the Correct and the Wrong datasets ( All/Diff col-umn), it becomes evident that the standard evaluation based on BLEU calculated on a single correct reference hides specific relevant aspects in translation.", "In fact, despite the lower overall BLEU scores, for both language pairs End2End performs on par with Cascade as far as gender phenomena are concerned (1.8 on En-It and 2.1 on En-Fr).", "Also, the largest All/Diff value achieved by the enhanced Cascade+Tag supports the results obtained in previous studies (Vanmassenhove et al., 2018; Elaraby et al., 2018), confirming the importance of applying gender-swapping in BLEU-based evaluations focused on gender translation.", "The fact that the All/Diff values are always positive indicates that all the systems perform better on the Correct dataset (i.e. 
"However, examining the results at the level of masculine/feminine word forms, we notice that Diff values are higher on the Masculine subset (where the required gender realization is masculine) than on the Feminine one (where the required gender realization is feminine).", "As discussed in Section 4.1, this signals a bias of the systems towards producing masculine forms.", "The only exception is the En-Fr Cascade+Tag, where the Diff values remain stable across the two subsets (3.6 and 3.5).", "This absence of bias towards the masculine forms is in line with the All/Diff results indicating that this system is the best one in translating gender.", "Table 4: Accuracy scores for En-It and En-Fr on MuST-SHE.
Systems           | All: Correct Wrong Diff | Feminine: Correct Wrong Diff | Masculine: Correct Wrong Diff
En-It End2End     | 43.3 16.4 26.9 | 34.2 24.0 10.2 | 51.3  9.6 41.7
En-It Cascade     | 41.1 17.5 23.6 | 33.7 24.5  9.2 | 47.6 11.2 36.4
En-It Cascade+Tag | 48.0 10.4 37.6 | 44.7 14.0 30.7 | 51.0  7.2 43.8
En-Fr End2End     | 46.0 19.0 27.0 | 35.8 25.0 13.8 | 55.3 13.8 41.5
En-Fr Cascade     | 49.6 20.5 29.1 | 39.6 26.2 13.4 | 58.7 15.2 43.5
En-Fr Cascade+Tag | 57.2 11.3 45.9 | 53.8 11.8 42.0 | 60.3 10.7 49.6", "Although our gender-swapping methodology allows us to measure differences across systems that cannot be observed with standard BLEU evaluations, the results obtained so far may still conceal further interesting differences.", "This can depend on the fact that BLEU works at the corpus level, and the small proportion of gender-marked words in MuST-SHE (~2,000 out of 30,000 total words, avg. 1.8 per sentence) can have limited influence on global measurements.", "To dig into these aspects, our final analysis relies on accuracy, which is exclusively focused on gender-marked words.", "Accuracy.",
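A simplified sketch of such an accuracy computation; the exact matching of annotated gender-marked words to system output is not spelled out in this section, so the per-segment set matching below is an assumption.

```python
def gender_marked_accuracy(segments):
    """segments: iterable of (hyp_tokens, correct_forms, wrong_forms),
    where the two form lists are the annotated gender-marked words in
    their correct and gender-swapped versions. Returns the percentage
    of annotated words generated in the correct and in the wrong form."""
    total = correct = wrong = 0
    for hyp_tokens, correct_forms, wrong_forms in segments:
        hyp = set(hyp_tokens)
        for good, bad in zip(correct_forms, wrong_forms):
            total += 1
            correct += good in hyp
            wrong += bad in hyp
    return 100.0 * correct / total, 100.0 * wrong / total
```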
"The results shown in Table 4 are not only consistent with the BLEU ones, but also highlight differences that were previously indistinguishable.", "While the All/Diff BLEU results for End2End and Cascade were identical on both languages, the All/Diff accuracy scores show that, although End2End performs better than Cascade for En-It, it performs worse for En-Fr.", "Also, with regards to Cascade+Tag, we can see that the Diff value is higher on the Masculine subset, thus showing that this system is also affected by gender bias, although to a lesser extent.", "We now focus on systems' results on the two categories represented in MuST-SHE: Category 1, where the information necessary to disambiguate gender can be recovered from the audio (speaker talking about him/herself), and Category 2, where such information occurs in the utterance content (speaker talking about someone else).", "Results are shown in Table 5.", "As for Category 1, Diff values show that Cascade performance is the worst on both languages.", "This is due to the fact that its MT component cannot access the speaker's gender information necessary for a correct translation.", "This weakness becomes particularly evident in the Feminine class, where the higher values on the Wrong datasets (leading to negative values in columns Feminine/Diff) highlight a strong bias towards producing masculine forms.", "Although still negative for the Feminine class, the much better Diff values obtained by End2End show its ability to leverage audio features to correctly translate gender.", "However, the gap with respect to Cascade+Tag (by far the best system in Category 1) is still large.", "On one side, End2End might benefit from better audio representations.", "Indeed, as shown in Kabil et al. (2018), the MFCC features used by state-of-the-art models are not the most appropriate for gender recognition.", "On the other side, Cascade+Tag does not only take advantage of huge amounts of data to train its basic components, but it is also an oracle supported by the artificial injection of correct information about speakers' gender.", "In Category 2, where having direct access to the audio is not an advantage since gender information is present in the textual transcript, results show a different scenario.", "While scores on the Masculine class are not conclusive across languages, on the Feminine class End2End always shows the worst performance.", "This can be explained by the fact that, being trained on a small fraction of the data used by the cascade systems, End2End is intrinsically weaker and more prone to gender mistranslations.", "Also, it is noticeable that Cascade+Tag is slightly worse than Cascade, although the MT components are trained on the same amount of data.", "This is due to the dataset design choice (see Section 3.3) to include 50% of segments where the speaker's gender does not agree with the gender of the phenomenon to translate.", "This feature makes MuST-SHE particularly challenging for systems like End2End and Cascade+Tag since, in these specific cases, speaker's gender information (extracted from the source audio or artificially injected) is not relevant and can introduce noise.", "All in all, translating gender is still an issue in ST, and current technologies are affected by gender bias to a variable extent.", "Through the analysis made possible by MuST-SHE, we have been able to pinpoint their specific strengths and weaknesses and pave the way for more informed future studies.", "If, like human beings, machine learning is what it eats, the different diet of MT and ST models can help them to develop different skills.", "One is the proper treatment of gender, a problem when translating from languages without productive grammatical gender into gender-marked ones.", "With respect to this problem, by eating parallel texts during training, MT performance is bounded by the statistical patterns learned from written material.", "By eating (audio, text) pairs, ST has a potential advantage: the possibility to infer speakers' gender from input audio signals.", "We investigated for the first time the importance of this information in ST, analysing the behaviour of cascade (the state of the art in the field) and end-to-end ST technology (the emerging approach).", "To this aim, we created MuST-SHE, a benchmark annotated with different types of gender-related phenomena in two language directions.", "Our evaluation shows that, in spite of lower overall performance, the direct approach can actually exploit audio information to better handle speaker-dependent gender phenomena.", "These are out of reach for cascade solutions, unless the MT step is supplied with external (not always accessible) knowledge about the speaker.", "Back to our title: if, in ST, gender is still in danger, we encourage our community to start its rescue from MuST-SHE and the findings discussed in this paper.", "This work is part of the project End-to-end Spoken Language Translation in Rich Data Conditions, which is financially supported by an Amazon AWS ML Grant.", "We thank our colleague Marco Matassoni for providing the automatic transcripts used for our experiments with cascade systems." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "result", "abstain", "method", "other", "other" ]
[ "We are interested in a novel task, singing voice beautification (SVB).", "Given the singing voice of an amateur singer, SVB aims to improve the intonation and vocal tone of the voice, while keeping the content and vocal timbre.", "Current automatic pitch correction techniques are immature, and most of them are restricted to intonation but ignore the overall aesthetic quality.", "Hence, we introduce Neural Singing Voice Beautifier (NSVB), the first generative model to solve the SVB task, which adopts a conditional variational autoencoder as the backbone and learns the latent representations of vocal tone.", "In NSVB, we propose a novel time-warping approach for pitch correction: Shape-Aware Dynamic Time Warping (SADTW), which ameliorates the robustness of existing time-warping approaches, to synchronize the amateur recording with the template pitch curve.", "Furthermore, we propose a latent-mapping algorithm in the latent space to convert the amateur vocal tone to the professional one.", "To achieve this, we also propose a new dataset containing parallel singing recordings of both amateur and professional versions.", "Extensive experiments on both Chinese and English songs demonstrate the effectiveness of our methods in terms of both objective and subjective metrics.", "Audio samples are available at https://neuralsvb.", "github.io .", "Codes: https://github.", "com/MoonInTheRiver/NeuralSVB .", "The major successes of the artificial intelligent singing voice research are primarily in Singing Voice Synthesis (SVS) (Lee et al., 2019; Blaauw and Bonada, 2020; Ren et al., 2020; Lu et al., 2020; Liu et al., 2021a) and Singing Voice Conversion (SVC) (Sisman and Li, 2020; Li et al., 2021; Wang et al., 2021a).", "However, the Singing Voice Beautification (SVB) remains an important and challenging endeavor for researchers.", "SVB aims to improve the Corresponding Author intonation 1 and the vocal tone of the voice, while keeping the content and vocal timbre 2 .", "SVB is extensively required both in the professional recording studios and the entertainment industries in our daily life, since it is impractical to record flawless singing audio.", "Nowadays in real-life scenarios, SVB is usually performed by professional sound engineers with adequate domain knowledge, who manipulate commercial vocal correction tools such as Melodyne 3 and Autotune 4 (Yong and Nam, 2018).", "Most current automatic pitch correction works are shown to be an attractive alternative, but they may 1) show weak alignment accuracy (Luo et al., 2018) or pitch accuracy (Wager et al., 2020); 2) cause the tuned recording and the reference recording to be homogeneous in singing style (Yong and Nam, 2018).", "Besides, they typically focus on the intonation but ignore the overall aesthetic quality (audio quality and vocal tone) (Rosenzweig et al., 2021; Zhuang et al., 2021).", "To tackle these challenges, we introduce Neural Singing Voice Beautifier (NSVB), the first generative model to solve the SVB task, which adopts a Conditional Variational AutoEncoder (CVAE) (Kingma and Welling, 2014; Sohn et al., 2015) as the backbone to generate high-quality audio and learns the latent representation of vocal tone.", "In NSVB, we dichotomize the SVB task into pitch correction and vocal tone improvement: 1) To correct the intonation, a straightforward method is aligning the amateur recording with the template pitch curve, and then putting them together to resynthesize a new singing sample.", "Previous 1 Intonation refers to the accuracy of 
"(Footnote 2: The difference between vocal tone and vocal timbre is that the former represents one's singing skills, such as airflow control, muscle strength of the vocal folds, and vocal placement, while the latter represents the identifying, overall sound of one's voice.)", "Previous works (Wada et al., 2017; Luo et al., 2018) implemented this by figuring out the alignment through Dynamic Time Warping (DTW) (Müller, 2007) or Canonical Time Warping (CTW) (Zhou and Torre, 2009).", "We propose a novel Shape-Aware DTW algorithm, which improves on the robustness of existing time-warping approaches by considering the shape of the pitch curve rather than low-level features when calculating the optimal alignment path.", "2) To improve the vocal tone, we propose a latent-mapping algorithm in the latent space, which converts the latent variables of the amateur vocal tone to those of the professional one.", "This process is optimized by maximizing the log-likelihood of the converted latent variables.", "To retain the vocal timbre during the vocal tone mapping, we also propose a new dataset named PopBuTFy containing parallel singing recordings of both amateur and professional versions.", "Besides, thanks to the autoencoder structure, NSVB inherently supports semi-supervised learning, where additional unpaired, unlabeled (footnote 5) singing data could be leveraged to facilitate the learning of the latent representations.", "Extensive experiments on both Chinese and English songs show that NSVB outperforms previous methods by a notable margin, and each component in NSVB is effective, in terms of both objective and subjective metrics.", "The main contributions of this work are summarized as follows: We propose the first generative model NSVB to solve the SVB task.", "NSVB not only corrects the pitch of amateur recordings, but also generates audio with high quality and improved vocal tone, to which previous works typically pay little attention.", "We propose the Shape-Aware Dynamic Time Warping (SADTW) algorithm to synchronize the amateur recording with the template pitch curve, which improves on the robustness of previous time-warping algorithms.", "We propose a latent-mapping algorithm to convert the latent variable of the amateur vocal tone to the professional one's, and contribute a new dataset PopBuTFy to train the latent-mapping function.", "(Footnote 5: Unpaired, unlabeled means recordings sung by any people, in any vocal tone, without labels.)", "Singing Voice Conversion (SVC) is a sub-task of Voice Conversion (VC) (Berg-Kirkpatrick and Klein, 2015; Serrà et al., 2019; Popov et al., 2021; Liu et al., 2021b), which transforms the vocal timbre (or singer identity) of one singer to that of another singer, while preserving the linguistic content and pitch/melody information (Li et al., 2021).", "Mainstream SVC models can be grouped into three categories (Zhao et al., 2020): 1) parallel spectral feature mapping models, which learn the conversion function between source and target singers relying on parallel singing data (Villavicencio and Bonada, 2010; Kobayashi et al., 2015; Sisman et al., 2019); 2) Cycle-consistent Generative Adversarial Networks (CycleGAN) (Zhu et al., 2017; Kaneko et al., 2019), where an adversarial loss and a cycle-consistency loss are concurrently used to learn the forward and inverse mappings simultaneously (Sisman and Li, 2020); 3) encoder-decoder models, such as PPG-SVC (Li et al., 2021), which leverage a
singing voice synthesis (SVS) system for SVC (Zhang et al., 2020), and autoencoder-based SVC (Qian et al., 2019a; Wang et al., 2021b; Yuan et al., 2020; Wang et al., 2021a).", "The models of the latter two categories can be utilized with nonparallel data.", "In our work, we aim to convert the intonation and the vocal tone while keeping the content and the vocal timbre, which is quite different from the SVC task.", "Automatic Pitch Correction (APC) works attempt to minimize the manual effort in modifying the flawed singing voice (Yong and Nam, 2018).", "Luo et al. (2018) adopt Canonical Time Warping (CTW) (Zhou and Torre, 2009; Zhou and De la Torre, 2012), which aligns amateur singing recordings to professional ones according to the pitch curves only.", "Wager et al. (2020) propose a data-driven approach to predict pitch shifts depending on both the amateur recording and its accompaniment.", "Rosenzweig et al. (2021) propose a pitch shift method for a cappella recordings.", "Zhuang et al. (2021) propose a pitch-controllable SVS system to resynthesize the audio with correctly predicted pitch curves.", "[Figure 1: Overview of NSVB.]", "Figure 1: The training process consists of 2 stages, and the second stage shares the same pipeline with the inference stage.", "VAE Enc means the encoder of the CVAE; VAE Dec means the decoder of the CVAE; Mel means the mel-spectrogram; z means the latent variable of the vocal tone; the a/p subscript denotes the amateur/professional version.", "Besides modifying pitch, Yong and Nam (2018) propose to modify pitch and energy information to improve the singing expression of an amateur singing recording.", "However, this method heavily relies on a reference recording, causing the tuned recording and the reference recording to be homogeneous in singing style (Zhuang et al., 2021).", "Our work adopts a non-parametric, data-free pitch correction method like Luo et al.
(2018), but improves the accuracy of alignment.", "In this section, we give an overview of NSVB, which is shown in Figure 1.", "At Stage 1 in the figure, we reconstruct the input mel-spectrogram through the CVAE backbone (Section 3.1) based on the pitch, content and vocal timbre conditions extracted from the input by the pitch encoder, content encoder and timbre encoder, and optimize the CVAE by maximizing the evidence lower bound and by adversarial learning.", "At Stage 2/Inference in the figure, firstly we infer the latent variable $z_a$ based on the amateur conditions; secondly we prepare the amateur content vectors aligned with the professional pitch by the SADTW algorithm (Section 3.2); thirdly we map $z_a$ to $z_p$ by the latent-mapping algorithm (Section 3.3); finally, we mix the professional pitch, the aligned amateur content vectors, and the amateur vocal timbre to obtain a new condition, which is leveraged along with the mapped $z_p$ by the decoder of the CVAE to generate a new beautified mel-spectrogram.", "The training/inference details and the model structure of each component in NSVB are described in Section 3.4 and Section 3.5.", "As shown in Figure 2, to generate audio with high quality and learn the latent representations of vocal tone, we introduce a Conditional Variational AutoEncoder (CVAE) (Kingma and Welling, 2014; Sohn et al., 2015) as the mel-spectrogram generator, with the optimizing objective of maximizing the evidence lower bound (ELBO) of the intractable marginal log-likelihood of the mel-spectrogram, $\log p(x \mid c)$:", "$\log p(x \mid c) \geq \mathrm{ELBO}(\phi, \theta) = \mathbb{E}_{z \sim q_\phi(z \mid x, c)} \left[ \log p_\theta(x \mid z, c) - \log \frac{q_\phi(z \mid x, c)}{p(z)} \right]$", "where $x$, $c$, $z$ denote the input/output mel-spectrogram, the mix of content, vocal timbre and pitch conditions, and the latent variable representing the vocal tone, respectively; $\phi$ and $\theta$ denote the model parameters of the CVAE encoder and the CVAE decoder; $q_\phi(z \mid x, c)$ is the posterior distribution approximated by the CVAE encoder; $p_\theta(x \mid z, c)$ is the likelihood function that generates mel-spectrograms given latent variable $z$ and condition $c$; $p(z)$ is the prior distribution of the latent variables $z$, and we choose the standard normal distribution as $p(z)$ for simplification.", "Furthermore, to address the over-smoothing problem (Qian et al., 2019b) in CVAE, we utilize an adversarial discriminator $D$ (Mao et al., 2017) to refine the output mel-spectrogram: $L_{adv}(\phi, \theta) = \mathbb{E}[(D(\tilde{x}) - 1)^2]$, $L_{adv}(D) = \mathbb{E}[(D(x) - 1)^2] + \mathbb{E}[D(\tilde{x})^2]$ (1), where $x$ is the ground truth and $\tilde{x}$ is the output of the CVAE.", "The descriptions of the model structure of each component are in Section 3.5.",
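A minimal sketch of the Stage-1 objectives above, in PyTorch. The module interfaces, the L1 stand-in for the reconstruction log-likelihood, and the equal weighting of the terms are assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def stage1_losses(encoder, decoder, disc, x, cond):
    """Negative ELBO (standard-normal prior) plus the LSGAN terms of
    Eq. (1). encoder(x, cond) -> (mu, logvar); decoder(z, cond) -> x_tilde."""
    mu, logvar = encoder(x, cond)                        # q_phi(z | x, c)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
    x_tilde = decoder(z, cond)                           # p_theta(x | z, c)
    recon = F.l1_loss(x_tilde, x)                        # -log p_theta, simplified
    kl = -0.5 * torch.mean(1 + logvar - mu ** 2 - logvar.exp())
    adv_g = torch.mean((disc(x_tilde) - 1) ** 2)         # L_adv(phi, theta)
    adv_d = torch.mean((disc(x) - 1) ** 2) + \
            torch.mean(disc(x_tilde.detach()) ** 2)      # L_adv(D)
    return recon + kl + adv_g, adv_d                     # generator / discriminator
```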
"To implement the pitch correction, a straightforward method is aligning the amateur recording with the template pitch curve, and then concatenating them to resynthesize a new singing sample with improved intonation.", "Since the source pitch curve of amateur recordings and the template one show a high degree of natural correlation along the time axis, applying a proper time-warping algorithm to them is crucial.", "However, original DTW (Müller, 2007) could result in a poor alignment when certain parts of the axis move to higher frequencies and other parts to lower ones, or vice versa (Sundermann and Ney, 2003).", "Luo et al. (2018) adopt an advanced algorithm, CTW (Zhou and Torre, 2009), which combines canonical correlation analysis (CCA) and DTW to extract the feature sequences of two pitch curves, and then applies DTW on them.", "However, the alignment accuracy of CTW leaves much to be desired.", "We design a non-parametric and data-free algorithm, Shape-Aware DTW (SADTW), based on the prior knowledge that the source pitch curve and the template one have analogous local shape contours.", "Specifically, we replace the Euclidean distance in the original DTW distance matrix with the shape context descriptor distance.", "The shape context descriptor of a time point $f_i$ in one pitch curve is illustrated in Figure 3.", "Inspired by Mori et al. (2005), we divide the data points around $f_i$ into $m \times n$ bins by $m$ time windows and $n$ angles.", "We calculate the number of all points falling in the $k$-th bin.", "Then the descriptor for $f_i$ is defined as the histogram $h_i \in \mathbb{R}^{m \times n}$: $h_i(k) = |\{ f_j \neq f_i,\ f_j \in \mathrm{bin}(k) \}|$, where $|\cdot|$ denotes the cardinality of a set.", "This histogram represents the distribution over relative positions, which is a robust, compact and discriminative descriptor.", "Then, it is natural to use the $\chi^2$-test statistic on this distribution descriptor as the distance between two points $f_a$ and $f_p$: $C(a, p) = \frac{1}{2} \sum_{k=1}^{m \times n} \frac{[h_a(k) - h_p(k)]^2}{h_a(k) + h_p(k)}$, where $h_a$ and $h_p$ are the normalized histograms corresponding to the point $f_a$ from the amateur pitch curve and the point $f_p$ from the template pitch curve.", "$C(a, p)$ ranges from 0 to 1.", "Finally, we run DTW on the distance matrix $C$ to obtain the alignment with the least distance cost between the two curves.",
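A sketch of the SADTW alignment just described. The bin edges, the local radius, and the plain O(T^2) DTW recursion are assumptions made for brevity; the paper's exact binning may differ.

```python
import numpy as np

def shape_context(curve, i, m=8, n=8, radius=32):
    """m x n histogram of points around curve[i], over m time windows
    and n angle bins, normalized to sum to 1 (bin edges are assumptions)."""
    hist = np.zeros(m * n)
    for j in range(max(0, i - radius), min(len(curve), i + radius + 1)):
        if j == i:
            continue
        t_bin = min(m - 1, int(abs(j - i) * m / (radius + 1)))
        angle = np.arctan2(curve[j] - curve[i], j - i)   # in (-pi, pi]
        a_bin = min(n - 1, int((angle + np.pi) * n / (2 * np.pi)))
        hist[t_bin * n + a_bin] += 1
    s = hist.sum()
    return hist / s if s > 0 else hist

def sadtw_cost(amateur, template, **kw):
    """DTW accumulation over the chi-square distance between descriptors;
    backtracking from the last cell of the returned matrix yields the path."""
    A = [shape_context(amateur, i, **kw) for i in range(len(amateur))]
    B = [shape_context(template, j, **kw) for j in range(len(template))]
    C = np.zeros((len(A), len(B)))
    for i, ha in enumerate(A):
        for j, hb in enumerate(B):
            denom = ha + hb
            mask = denom > 0
            C[i, j] = 0.5 * np.sum((ha[mask] - hb[mask]) ** 2 / denom[mask])
    D = np.full((len(A) + 1, len(B) + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, len(A) + 1):
        for j in range(1, len(B) + 1):
            D[i, j] = C[i - 1, j - 1] + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[1:, 1:]
```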
"Define a pair of mel-spectrograms $(x_a, x_p)$: the contents of $x_a$ and $x_p$ are the same sentence of a song from the same singer, who sings these two recordings using the amateur tone and the professional tone, respectively.", "(Footnote 6: The singers all major in vocal music.)", "Given the CVAE model, we can infer the posterior distributions $q_\phi(z_a \mid x_a, c_a)$ and $q_\phi(z_p \mid x_p, c_p)$ corresponding to $x_a$ and $x_p$ through the encoder of the CVAE.", "To achieve the conversion of vocal tone, we introduce a mapping function $M$ to convert the latent variables from $q_\phi(z_a \mid x_a, c_a)$ to $q_\phi(z_p \mid x_p, c_p)$.", "Concretely, we sample a latent variable of amateur vocal tone $z_a$ from $q_\phi(z_a \mid x_a, c_a)$, and map $z_a$ to $M(z_a)$.", "Then, $M$ can be optimized by minimizing the negative log-likelihood of $M(z_a)$: $L_{map1}(M) = -\log q_\phi(M(z_a) \mid x_p, c_p)$.", "Define $c_p$ as the mix of 1) the content vectors from the amateur recording aligned by SADTW, 2) the vocal timbre embedding encoded by the timbre encoder, and 3) the template pitch embeddings encoded by the pitch encoder.", "(Footnote 7: During training, the template pitch is extracted from the waveform corresponding to $x_p$.)", "To make sure the converted latent variable could work well together with $c_p$ to generate a high-quality audio sample (with the correct pitch and improved vocal tone), we send $M(z_a)$ to the CVAE decoder to generate $\tilde{x}$, and propose an additional loss: $L_{map2}(M) = \lVert \tilde{x} - x_p \rVert_1 + \lambda (D(\tilde{x}) - 1)^2$, where $D$ has been optimized by Eq. (1).", "There are two training stages for NSVB: in the first training stage, we optimize the CVAE by minimizing the loss function $L(\phi, \theta) = -\mathrm{ELBO}(\phi, \theta) + L_{adv}(\phi, \theta)$,", "and optimize the discriminator $D$ by minimizing Eq. (1).", "Note that the first stage is the reconstruction process of mel-spectrograms, where any unpaired, unlabeled singing data beyond PopBuTFy could be leveraged to facilitate the learning of the latent representations.", "In the second training stage, we optimize $M$ on the parallel dataset PopBuTFy by minimizing the loss function $L(M) = L_{map1}(M) + L_{map2}(M)$.", "$\phi$, $\theta$, and $D$ are not optimized in this stage.", "In inference, the encoder of the CVAE encodes $x_a$ with the condition $c_a$ to predict $z_a$.", "Secondly, we map $z_a$ to $M(z_a)$, and run SADTW to align the amateur recording with the template pitch curve.", "The template pitch curve can be derived from a reference recording with good intonation, or from a pitch predictor taking music notes as input.", "Then, we obtain $c_p$ defined in Section 3.3 and send $M(z_a)$ together with $c_p$ into the decoder of the CVAE to generate $\tilde{x}$.", "Finally, by running a pre-trained vocoder conditioned on $\tilde{x}$, a new beautified recording is produced.", "3.5 Model Structure The encoder of the CVAE consists of a 1-D convolutional layer (stride 4), an 8-layer WaveNet structure (Oord et al., 2016; Rethage et al., 2018) and three 1-D convolutional layers (stride 2) with ReLU activation and batch normalization followed by a mean pooling, which outputs the mean and log-scale standard deviation parameters of the posterior distribution of $z$.", "The decoder of the CVAE consists of a 4-layer WaveNet structure and a 1-D convolutional layer, which outputs the mel-spectrogram with 80 channels.", "The discriminator adopts the same structure as Wu and Luan (2020), which consists of multiple random window discriminators.", "The latent-mapping function is composed of 2 linear layers to encode the vocal timbre as the mapping condition, and 3 linear layers to map $z_a$.", "The pitch encoder is composed of 3 convolutional layers.", "In addition, given a singing recording, 1) to obtain its content vectors, we train an Automatic Speech Recognition (ASR) model based on Conformer (Gulati et al., 2020) with both speech and singing data, and extract the hidden states of the ASR encoder output (viewed as the content encoder) as the linguistic content information, which are also called phonetic posteriorgrams (PPG); 2) to obtain the vocal timbre, we leverage the open-source API resemblyzer as the timbre encoder, which is a deep learning model designed for speaker verification (Wan et al., 2018), to extract the identity information of a singer.", "More details of the model structure can be found in Appendix A.",
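Putting Sections 3.3 and 3.4 together, a sketch of the second-stage objective. The log-density helper, the module interfaces, and the placement of the weighting lambda on the adversarial term are assumptions.

```python
import torch

def stage2_loss(M, decoder, disc, log_q_p, z_a, x_p, c_p, lam=0.1):
    """L(M) = L_map1 + L_map2, with phi, theta and D frozen. `log_q_p`
    evaluates log q_phi(. | x_p, c_p); `lam` weights the adversarial
    part of L_map2."""
    z_mapped = M(z_a)
    l_map1 = -log_q_p(z_mapped)                        # negative log-likelihood
    x_tilde = decoder(z_mapped, c_p)
    l_map2 = (x_tilde - x_p).abs().mean() + \
             lam * torch.mean((disc(x_tilde) - 1) ** 2)
    return l_map1 + l_map2
```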
"4 Experiments 4.1 Experimental Setup In this section, we first introduce PopBuTFy, the dataset for SVB, and then describe the implementation details in our work.", "Finally, we explain the evaluation method we adopt in this paper.", "Dataset. Since there is no publicly available high-quality, unaccompanied and parallel singing dataset for the SVB task, we collect and annotate a dataset containing both Chinese Mandarin and English pop songs: PopBuTFy.", "To collect PopBuTFy for SVB, qualified singers majoring in vocal music are asked to sing a song twice, once using the amateur vocal tone and once using the professional vocal tone.", "Note that some of the amateur recordings are sung off-key by one or more semitones for the pitch correction sub-task.", "The parallel setting ensures that the personal vocal timbre stays unchanged during the beautification process.", "In all, PopBuTFy consists of 99 Chinese pop songs (~10.4 hours in total) from 12 singers and 443 English pop songs (~40.4 hours in total) from 22 singers.", "All the audio files are recorded in a professional recording studio by qualified singers, male and female.", "Every song is sampled at 22050 Hz with 16-bit quantization.", "We randomly choose 6 songs in Chinese and 18 songs in English (from unseen speakers) for validation and test.", "For subjective evaluations, we choose 60 samples in the test set from different singers, half in Chinese and half in English.", "All testing samples are included for objective evaluations.", "Implementation Details. We train the Neural Singing Voice Beautifier on a single 32GB Nvidia V100 GPU with a batch size of 64 sentences, for 100k steps in each of Stage 1 and Stage 2.", "Besides PopBuTFy, we pre-train the ASR model (used for PPG extraction) leveraging extra speech datasets: AISHELL-3 (Shi et al., 2020) for Chinese and LibriTTS (Zen et al., 2019) for English.", "For the semi-supervised learning mentioned in Section 1 and Section 3.4, we leverage an internal Chinese singing dataset (~30 hours, without vocal tone labels) in the first training stage described in Section 3.4 for the Chinese experiments.", "The output mel-spectrograms of our model are transformed into audio samples using a HiFi-GAN vocoder (Kong et al., 2020) trained with singing data in advance.", "We set the $\lambda$ mentioned in Section 3.3 to 0.1.", "We transform the raw waveform with the sampling rate 22050 Hz into mel-spectrograms with the frame size 1024 and the hop size 128.", "We extract F0 (fundamental frequency) as pitch information from the raw waveform using Parselmouth (footnote 9: https://github.com/YannickJadoul/Parselmouth), following Wu and Luan (2020), Blaauw and Bonada (2020), and Ren et al. (2020).",
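The Parselmouth extraction above is straightforward to reproduce. A minimal sketch; only the time step (matched to the mel hop size just mentioned) is set here, and Praat's default pitch floor/ceiling are assumed.

```python
import numpy as np
import parselmouth

def extract_f0(wav_path, hop_size=128, sample_rate=22050):
    """F0 via Parselmouth/Praat, one value per mel frame."""
    snd = parselmouth.Sound(wav_path)
    pitch = snd.to_pitch(time_step=hop_size / sample_rate)
    f0 = pitch.selected_array['frequency']  # 0.0 in unvoiced frames
    return np.asarray(f0)
```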
"To obtain the ground-truth pitch alignment between the amateur recordings and the professional ones for evaluating the accuracy of the pitch alignment algorithm, we run the Montreal Forced Aligner tool (McAuliffe et al., 2017) on all the singing recordings to obtain their alignments to the lyrics.", "Then the ground-truth pitch alignment can be derived, since the lyrics are shared within a pair of recordings in PopBuTFy.", "Performance Evaluation. We employ both subjective metrics, Mean Opinion Score (MOS) and Comparison Mean Opinion Score (CMOS), and an objective metric, Mel Cepstral Distortion (MCD), to evaluate the audio quality on the test set.", "Besides, we use F0 Root Mean Square Error (F0 RMSE) and Pitch Alignment Accuracy (PAA) to estimate the pitch correction performance.", "For audio, we analyze the MOS and CMOS in two aspects: audio quality (naturalness, pronunciation and sound quality) and vocal tone quality.", "MOS-Q/CMOS-Q and MOS-V/CMOS-V correspond to the MOS/CMOS of audio quality and vocal tone quality, respectively.", "More details about subjective evaluations are placed in Appendix C. 4.2 Main Results In this section, we conduct extensive experiments to present our proposed model in regard to 1) the performance of pitch correction and 2) the audio quality and vocal tone quality.",
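Before turning to the numbers, the two objective pitch metrics above are simple to operationalize. A sketch; restricting F0 RMSE to mutually voiced frames and scoring PAA as the fraction of matched warping-path entries are assumptions, since the paper's exact definitions are not reproduced here.

```python
import numpy as np

def f0_rmse(f0_sys, f0_ref):
    """RMSE over frames that are voiced (F0 > 0) in both signals."""
    voiced = (f0_sys > 0) & (f0_ref > 0)
    return float(np.sqrt(np.mean((f0_sys[voiced] - f0_ref[voiced]) ** 2)))

def pitch_alignment_accuracy(pred_path, gold_path):
    """Share of predicted warping-path entries (i, j) that occur in the
    ground-truth alignment derived from the shared lyrics."""
    gold = set(map(tuple, gold_path))
    hits = sum(tuple(p) in gold for p in pred_path)
    return hits / len(pred_path)
```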
"Firstly, we provide the comparison among time-warping algorithms in terms of PAA in Table 1.", "Normed DTW means the two pitch curves are normalized before running DTW (Müller, 2007); CTW means Canonical Time Warping (Zhou and Torre, 2009), which is used for pitch correction in Luo et al. (2018).", "It can be seen that SADTW surpasses existing methods by a large margin.", "We also visualize an alignment example of DTW, CTW, and SADTW in Figure 4.", "Secondly, to check whether the amateur recordings are corrected to good intonation after being beautified by NSVB, we calculate the F0 RMSE metric of the amateur recordings and of the audio generated by NSVB, and list the results in Table 2.", "We can see that F0 RMSE has been improved significantly, which means NSVB successfully achieves pitch correction.", "To thoroughly evaluate our proposed model in audio quality and vocal tone quality, we compare the subjective metrics MOS-Q and MOS-V and the objective metric MCD of audio samples generated by NSVB with the following systems: 1) GT Mel, amateur (A) and professional (P) versions, where we first convert ground truth audio into mel-spectrograms, and then convert the mel-spectrograms back to audio using the HiFi-GAN introduced in Section 4.1; 2) Baseline: the baseline model for SVB based on WaveNet with a number of parameters similar to NSVB, which adopts the same pitch correction method (SADTW) as NSVB does, and takes in the condition $c_p$ defined in Section 3.3 to generate the mel-spectrogram, optimized by the $L_1$ distance to $x_p$.", "MCD is calculated using the audio samples of GT Mel P as references.", "The subjective and objective results on both Chinese and English datasets are shown in Table 3.", "We can see that 1) NSVB achieves promising results, with MOS-Q being lower than that of ground truth professional recordings by only 0.1 and 0.12 on the two datasets; 2) NSVB surpasses GT Mel A in terms of MOS-V by a large margin, which indicates that NSVB successfully accomplishes the vocal tone improvement.", "3) NSVB surpasses the baseline model on all the metrics distinctly, which proves the superiority of our proposed model; 4) GT Mel P, NSVB and Baseline all outperform GT Mel A in terms of MOS-V, which demonstrates that the proposed dataset PopBuTFy is reasonably labeled in respect of vocal tone.", "We conduct some ablation studies to demonstrate the effectiveness of our proposed methods and of some designs in our model, including the latent mapping, the additional loss $L_{map2}$ in the second training stage, and semi-supervised learning with extra unpaired, unlabeled data on Chinese songs.", "We compare audio samples from NSVB with and without latent mapping in terms of CMOS-V and MCD.", "From Table 4, we can see that the latent mapping brings CMOS-V and MCD gains, which demonstrates the improvements in vocal tone from latent mapping in our model.", "We visualize linear spectrograms of GT Mel A, GT Mel P, NSVB, and NSVB w/o mapping in Appendix B. The patterns of the high-frequency parts in NSVB samples are comparatively similar to those in GT Mel P samples, while NSVB w/o mapping samples resemble GT Mel A samples.", "As shown in Table 5, all the compared metrics show the effectiveness of $L_{map2}$, which means that the additional loss $L_{map2}$ is beneficial to optimizing the latent mapping function $M$, working as a complement to the basic loss $L_{map1}$.", "To illustrate the advantage of the CVAE architecture that allows semi-supervised training, we compare NSVB trained with and without extra unpaired, unlabeled data on Chinese songs.", "The corresponding results are shown in Table 6.", "The compared metrics indicate the advantage of semi-supervised learning, which facilitates the learning of the latent representations for better sample reconstruction (audio quality) and better latent conversion (vocal tone quality).", "In this work, we propose Neural Singing Voice Beautifier, the first generative model for the SVB task, which is based on a CVAE model allowing semi-supervised learning.", "For pitch correction, we propose a robust alignment algorithm: Shape-Aware Dynamic Time Warping (SADTW).", "For vocal tone improvement, we propose a latent-mapping algorithm.", "To retain the vocal timbre during the vocal tone mapping, we also propose a new specialized SVB dataset named PopBuTFy containing parallel singing recordings of both amateur and professional versions.", "The experiments conducted on the dataset of Chinese and English songs show that NSVB accomplishes the SVB task (pitch correction and vocal tone improvement), and extensive ablation studies demonstrate the effectiveness of the proposed methods mentioned above.", "This work was supported in part by the Zhejiang Natural Science Foundation under Grant LR19F020006, and the National Natural Science Foundation of China under Grants No. 61836002 and No. 62072397.", "We thank all the co-authors for their wonderful contributions, the enlightening opinions of the participants in the discussion, and the great efforts of the singers and data annotators." ]
[ "objective", "abstain", "abstain", "objective", "objective", "objective", "objective", "objective", "other", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "other", "objective", "objective", "abstain", "objective", "abstain", "abstain", "objective", "objective", "objective", "objective", "abstain", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "method", "other", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "objective", "objective", "objective", "abstain", "other", "other" ]
[ "In order to interpret the communicative intents of an utterance, it needs to be grounded in something that is outside of language ; that is, grounded in world modalities .", "In this paper we argue that dialogue clarification mechanisms make explicit the process of interpreting the communicative intents of the speaker's utterances by grounding them in the various modalities in which the dialogue is situated.", "This paper frames dialogue clarification mechanisms as an understudied research problem and a key missing piece in the giant jigsaw puzzle of natural language understanding.", "We discuss both the theoretical background and practical challenges posed by this problem, and propose a recipe for obtaining grounding annotations.", "We conclude by highlighting ethical issues that need to be addressed in future work.", "Clarifications are crucial to robust dialogues, and pragmatic factors notably those shaped by the world modalities situating the conversation have a key role to play.", "Referring expressions have in vision a modality in which to ground clarifications concerning objects in the world (de Vries et al., 2017); navigation instructions have in movement a modality in which to ground clarifications concerning collaborative wayfinding (Thomason et al., 2019).", "Clarifications grounded in situationally relevant modalities boost the redundancy required to learn to use language without explicit supervision, as they make explicit the process of negotiating the communicative intent .", "But despite its importance, work on clarification remains scattered.", "Humans switch between clarifications grounded in different modalities seamlessly but (we shall argue) systematically.", "Our discussion is based around a general recipe for detecting grounded clarifications ; we work towards this in Section 2 by first reviewing the distinction between perceptual and collaborative grounding, and then discussing clarification mechanisms, Clark (1996)'s action ladder of communication, and Ginzburg, Purver and colleagues (2012)'s classification of clarification phenomena.", "In Section 3 we draw these threads together and present the central idea: Given an utterance U, a subsequent turn is its clarification grounded in modality m if it cannot be preceded by positive evidence of understanding of U in m.", "This provides a unified way to frame clarification mechanisms and their interactions across various modalities; a graphical specification of the recipe it gives rise to can be found in Figure 2 of the supplementary material.", "It covers clarifications grounded in moving, grabbing and changing the physical world: these have traditionally been considered plain-old-questions (Purver et al., 2018), but we view them as useful clarification ingredients.", "1 In Sections 4 and A we test the practical implications of our recipe by identifying and characterizing (ac-cording to their modalities) the clarifications in a corpus of long dialogues in English.", "In Section 5 we turn to the claim that clarifications are rare in dialogue datasets (Ginzburg, 2012), and that current data-hungry algorithms cannot learn them.", "We argue that whether they are rare or not depends on pragmatic factors of the conversation and the modality of the grounded clarification, and discuss the impact of six such factors.", "After presenting potential objections and our responses in Section 6, we conclude in Section 7 by noting ethical issues raised by socioperceptive dialogue systems that will need to be addressed.", "2 1 We are 
"We begin by reviewing the theoretical background on grounding and clarification mechanisms.", "We then examine two schemes proposed to characterize clarifications according to their conversational function: one focuses on the problem of anchoring utterance parameters into the conversational history, the other emphasizes a multimodal ladder of actions co-temporal with dialogue turn-taking.", "We are interested in the potential contributions of both towards a recipe for annotating clarification mechanisms.", "Collaborative grounding is the process of seeking and providing incremental evidence of mutual understanding through dialogue.", "When the speaker believes that the dialogue is on track, positive evidence of understanding is provided in different forms (depending on the channel of communication), such as explicit acknowledgements, and via backchannels such as nods, eye contact, etc.", "Negative evidence of understanding signals that something needs negotiation before the dialogue partners can commit; clarification requests are the prototypical example of negative evidence.", "Collaborative grounding is distinct from perceptual (or symbol) grounding (Harnad, 1990; He et al., 2016; Tan and Bansal, 2019; Lu et al., 2020).", "The perceptual grounding literature deals with capabilities enabling symbols to be linked with perceptions, and is rooted in situationally relevant modalities such as vision.", "Collaborative grounding, on the other hand, deals with the dynamics of conversation (the ongoing exchange of speaker and hearer roles) and is rooted in situationally relevant aspects of socioperception.", "Alikhani and Stone (2020) note several basic mechanisms that contribute to collaborative grounding, including those for dealing with joint attention (Koller et al., 2012; Koleva et al., 2015; Tan et al., 2020), engagement (Bohus and Horvitz, 2014; Foster et al., 2017), turn taking and incremental interpretation (Schlangen and Skantze, 2009; Selfridge et al., 2012; DeVault and Traum, 2013; Eshghi et al., 2015), corrections and clarifications (Villalba et al., 2017; Ginzburg and Fernández, 2010), and dialogue management (DeVault and Stone, 2009; Selfridge et al., 2012).", "These mechanisms have been studied for different kinds of applications (Denis, 2010; Dzikovska et al., 2010, 2012).", "Both collaborative and perceptual grounding are important (all relevant modalities are potentially important), and in this paper we bring them together under an umbrella we call grounded clarification.", "Clarification requests (CRs) and their answers are the prototypical clarification mechanisms (CMs), pieces of dialogue that participants use to signal lack of understanding and to trigger negotiation.", "CMs are used in all kinds of dialogue and are influenced by the type of interaction, the dialogue participants, and the context in which the conversation occurs.", "Interest in CMs by the artificial intelligence community dates back to the start of the century, and has typically focused on mechanisms for human-computer dialogue systems (Gabsdil, 2003; Purver, 2004; Rodríguez and Schlangen, 2004; Rieser and Moore, 2005; Skantze, 2007).", "In sociolinguistics and discourse
analysis, on the other hand, the interest in CMs (or repairs, as they are usually called there) has focused on human-human conversation for over three decades now; see (Schegloff, 1987) for a representative example.", "How CMs can be learned from data remains understudied.", "Rao and Daumé III (2018) rank clarification requests of Stack Overflow articles according to their usefulness: a good clarification question is one whose expected answer will be useful, which means that the clarification highlighted important information missing from the initial request for help; we share this view, but differ from Rao and Daumé III in that we focus on CMs and their responses occurring in multi-turn dialogue.", "It may seem plausible to expect that clarification requests will be realized as questions; however, corpus studies indicate that their most frequent realization is in declarative form (Jurafsky, 2004).", "Indeed, the form of a clarification request (Rodríguez and Schlangen, 2004) is not a reliable indicator of the function that the clarification request is playing.", "Neither does form unambiguously indicate whether a dialogue contribution is a CR or not.", "The surface forms of explicit negotiations of meaning in dialogue are frequently non-sentential utterances (Fernández, 2006; Fernández et al., 2007).", "These include the prototypical positive and negative evidence of grounding (acknowledgements and clarification requests (Stoyanchev et al., 2013)), but also less-well-known forms such as self-corrections, rejections, and modifiers (Purver, 2004; Purver et al., 2018).", "These observations indicate that we face significant challenges if we want to train a system to seek or supply clarification effectively.", "Ginzburg, Purver and colleagues (henceforth G&P) proposed the first scheme to classify the functions of CRs; see (Purver et al., 2003; Purver, 2006; Ginzburg, 2012).", "The G&P classification uses the categories shown in Table 1.
The idea driving this work is that CRs are caused by problems arising during the anchoring of utterance parameters into the conversational history.", "In fact, G&P recognize this issue themselves, pointing out that CRs that do not repeat (part of) the content of the source utterance (that is, the utterance that is being clarified) can exhibit all three readings.", "However, G&P's classification is only ambiguous if only the past, but not the future, conversational history is taken into account.", "It is crucial to analyze the CR response in order to disambiguate the CR category.", "Sometimes the immediate linguistic context gives the clue necessary for disambiguation: whereas a repetition reading permits the responder to the CR to repeat her utterance verbatim, a clausal confirmation usually receives a yes/no answer, and an intended content reading requires the responder to reformulate in some way.", "Hence, the turn of the responder (and the subsequent reaction of the participant originally making the CR) can disambiguate among readings.", "Consider the following example from (Purver, 2004).", "The example shows a case where George's initial clausal interpretation is incorrect (the initiator is not satisfied), and a constituent reading is required (Anon cannot find a value for Spunyarn).", "George: you always had er er say every foot he had with a piece of spunyarn in the wire
Anon: Spunyarn?", "George: Spunyarn, yes
Anon: What's spunyarn?", "George: Well that's like er tarred rope", "In other situations, the immediate linguistic context will not be enough (for instance, a reformulation can be a good response to all three types of CRs), and then the whole conversational history might need to be analyzed in order to disambiguate.", "This makes G&P's classification difficult to use in annotation studies where the annotators only get shallow, partial, localized views of the dialogues.", "The second classification we shall examine puts the conversational action modality in the central role; it has been used in formal approaches to handling clarifications in dialogue systems (Gabsdil, 2003; Rodríguez and Schlangen, 2004; Rieser and Moore, 2005).", "This classification is based on the four-level model of conversational action independently developed by Allwood (1995) and Clark (1996).", "Here, we use Clark's terminology; his model is reproduced in Table 2.

Table 2: Ladder of actions involved in communication.
Level | Speaker A's actions        | Addressee B's actions
4     | Propose project w to B     | Uptake A's proposal w
3     | Intend that B does i       | Recognize i from A
2     | Present signal s to B      | Perceive signal s from A
1     | Execute behavior t for B   | Attend to behavior t from A", "Clark proposed this model in order to move from Austin's controversial classification (footnote 3) of speech acts (Austin, 1962) to a ladder of actions which characterizes not only the actions that are performed in language use (as Austin's does) but also their inter-relationships.", "Clark (1996) defines a ladder of actions as a set of co-temporal actions which provide upward causality and downward evidence.", "Let us discuss these using Table 2; we will call the speaker Anna and the addressee Barny.", "Suppose that Anna tells Barny to sit down.", "We might say that Anna is performing just one action: asking Barny to sit down. (Footnote 3: For discussion of the controversies around Austin's classification of speech acts see Clark (1996).)", "But it is easy to argue that she is performing four distinct, though co-temporal, actions: actions beginning and ending simultaneously.", "These actions are in a causal relation going up the ladder (from level 1 up to level 4): Anna must get Barny to attend to her behavior t (level 1) in order to get him to hear the words she is presenting in her signals (level 2).", "Anna must succeed at that in order to get Barny to recognize what she means (level 3), and she must succeed at that in order to get Barny to uptake the project she is proposing (level 4).", "In short, causality (do something in order to get some result) climbs up the ladder; this property Clark calls upward causality.", "The different levels are related to different human modalities.", "We say that level 1 is grounded in socioperception, an ability that humans developed for collaboration and that is crucial for achieving joint attention (Tomasello et al., 2005).", "Level 2 is grounded in hearing if we use speech as our communication channel.", "Level 3 is grounded in vision when it involves recognizing referents in the real world.", "Level 4 is grounded in the kinesthetic modality when it involves moving and acting in the real world.", "The classification, along with obstacles that the addressee may face in the various modalities during the interpretation of a conversational action, is shown in Table 3. In the rest of the paper we will refer to these modalities using the level number.", "Humans systematically use the evidence provided by this ladder.", "Observing Barny sitting down is good evidence that he did not refuse to uptake (level 4) but also recognized what Anna intended and identified the chair (level 3).", "That is also evidence that she got Barny to hear her words (level 2), and evidence that she got him to attend to her (level 1).", "That is, evidence trickles down the ladder; Clark calls this the downward evidence property.", "If Barny repeats verbatim what Anna said (e.g.
suppose she spoke in Spanish and he repeats the word siéntate), then Anna has good evidence that he heard what she said (level 2).", "However, that is not necessarily evidence that he has recognized her intention; there might be an obstacle in level 3 (for instance, Barny might not know Spanish).", "If there is such an obstacle, she would have completed levels 1 and 2 while failing to complete not only level 3 but also level 4 (it is rather unlikely that Barny would sit down right after hearing Anna, and even if he did, this would not be because he was uptaking Anna's project).", "A high-level action in the ladder can only be completed by executing all the actions in the lower levels.", "This property Clark calls upward completion.", "If you tell somebody something, you expect a reaction from him.", "If he doesn't answer, you might think that he didn't hear you, that he doesn't want to answer, or that he thinks you are talking to somebody else.", "None of these situations is very agreeable; humans don't like wasting effort, or being ignored.", "In order not to annoy the speaker, the addressee has two options: either he shows evidence in level 4 (and then, by downward evidence, the speaker knows that all the levels succeeded), or he indicates the obstacle in executing the action (in any level).", "Clarifications are the tools that addressees can use to make the obstacle explicit.", "In this section we draw these threads together under the heading grounded clarification.", "First, what is a clarification?", "Our starting proposal, which we will modify, is the following: given an utterance U, a subsequent turn is its clarification if it cannot be preceded by positive evidence of U.", "Note that this proposal implicitly embodies a procedure for annotating clarifications, one which could be crowdsourced: Is this a clarification?", "Check whether it can be preceded by positive evidence!", "Our starting proposal is a modified version of Gabsdil (2003)'s test for CRs.", "Gabsdil says that CRs (as opposed to other kinds of dialogue contributions) cannot be preceded by explicit acknowledgments.", "For example:
Lara: There's only two people in the class.
a) Matthew: Two people?
b) (*) Matthew: Ok, Two people?
(BNC, taken from Purver et al. (2003))", "Gabsdil argues that (a) in the example above is a CR because (b) is odd (we mark odd turns with (*) in examples).", "In (b), Matthew first acknowledges Lara's turn and only then indicates that her turn contains information that he finds controversial. (Footnote 4: This could be a felicitous response, but it would require marked intonation to induce a backtracking effect.)", "On the other hand, (b) in the example below is fine and hence (a) is not a CR: the lieutenant acknowledges the sergeant's turn and then moves on to address what has become the most pressing topic in the conversation:
Sergeant: There was an accident sir
a) Lieutenant: Who is hurt?
b) Lieutenant: Ok. Who is hurt?
Adapted from (Traum, 2003, p.391)", "However, Gabsdil's original test incorrectly discards cases that we view as CRs.", "Consider the following example:
G: I want you to go up the left hand side of it towards the green bay and make it a slightly diagonal line, towards, sloping to the right.
F: Ok. So you want me to go above the carpenter?
Adapted from (Gabsdil, 2003, p.30)", "The problem is that the level of positive evidence contributed by F's acknowledgment is ambiguous.", "For instance, the Ok could (conceivably) mean:
Ok, so you want to talk to me (level 1).
Ok, I heard you (level 2).
Ok,
I saw what you are referring to (level 3).
Ok, I did it (level 4, the highest level).", "Thus we modify Gabsdil's test to make it level-sensitive.", "In order to signal that all the levels have been successful and that no CR related to any of them is expected, the simple acknowledgment needs to be replaced by positive evidence at the highest level.", "This works for Gabsdil's example:
G: I want you to go up the left hand side of it towards the green bay and make it a slightly diagonal line, towards, sloping to the right.
(*) F: Ok, I did it. So you want me to go above the carpenter?", "Here So you want me to go above the carpenter? is either weird or far more likely to be interpreted as a question about an action that comes after F has successfully followed G's instruction.", "That is: it could be interpreted as F taking the initiative and proposing the next move, rather than as clarifying G's instruction.", "Whether this is plausible would be determined by the following turns.", "More generally, if the addressee wants to uptake the speaker's proposal then he or she has two options: either to give positive evidence at the highest modality (and then, by downward closure, the speaker knows that all lower levels succeeded), or to explicitly indicate the problem using a clarification (at any level).", "Table 3 illustrates, for each level and modality, possible CRs.", "We are not exhaustive about all the modalities that could happen in reality.", "We list four of them here, but there could be more depending on the task.", "This approach to CR identification and classification is useful not only for instructions but also for other types of utterances.", "The following is an extension of Grice's classic implicature example (physical actions are between square brackets):
A: I am out of petrol.
B: There is a garage around the corner.
[A goes to the garage and then meets B again]
(*) A: Ok, I got petrol at the garage. Do you think the garage was open?
Adapted from (Grice, 1975, p.311)", "After acknowledging a contribution at level 4 (which A's Ok, I got petrol at the garage clearly does) it is really hard to go on and ask a CR about that contribution (A's Do you think the garage was open?
is a bizarre follow-up; it could perhaps be interpreted as sarcastic).", "Thus our modified proposal for identifying clarifications is the following: given an utterance U, a subsequent turn is its clarification grounded in modality m if it cannot be preceded by positive evidence of understanding of U in m.", "Like the earlier version, this implicitly embodies an annotation procedure.", "Let's see how it works.", "In this section we evaluate our recipe and the modality-based classification it gives rise to.", "We do so by using it to annotate a small dataset, the SCARE corpus (Stoia, 2007).", "Before delving into the details of the classification, we describe the pragmatic influences that the dialogue participants are under in this dataset.", "The SCARE corpus consists of fifteen English spontaneous dialogues situated in an instruction-giving task.", "The dialogues vary in length. (Footnote 5: For a detailed graphical specification of our recipe, see Figure 2 in the supplementary material.", "Notice that the utterances are stored in a stack in Figure 2 because the clarification does not need to be immediately after its source.", "While an utterance is at the top of the stack it can be clarified, no matter how many turns in between have happened.", "That way an utterance can be clarified many times.)", "(Footnote 6: The corpus is available at http://slate.cse.ohio-state.edu/quake-corpora/scare/.)", "They have a minimum of 400 turns and a maximum of 1500; hence, the dialogues are much longer than in other datasets grounded in vision and action, where dialogues typically have fewer than 10 turns on average (de Vries et al., 2017; Thomason et al., 2019).", "The dialogues were collected using the QUAKE environment, a first-person virtual reality game (so there is immediate world validation).", "The task consists of a direction giver (DG) instructing a direction follower (DF) on how to complete several tasks in a simulated game world.", "The corpus contains the collected audio and video, as well as word-aligned transcriptions.", "The DF had no prior knowledge of the world map or tasks and relied on his partner, the DG, to guide him on completing the tasks (so the DPs have asymmetric knowledge of the task).", "The DG had a map of the world and a list of tasks to complete.", "The partners spoke to each other through headset microphones.", "As the participants collaborated on the tasks, the DG had instant feedback about the DF's location in the simulated world, because the game engine displayed the DF's first-person view of the world on both the DG's and DF's computer monitors (so the DPs share a view of the task).", "Finally, the DPs were punished (they were told they would receive less money for performing the experiment) if they pressed the wrong buttons or put things in the wrong cabinets.", "We present a sample interaction from the SCARE corpus.", "During this dialogue fragment, the dialogue participants were performing one of the tasks specified in the SCARE experiment: hide the rebreather in cabinet 9.", "The presentation of this dialogue is divided over the two following subsections; the first gives the warm-up necessary for the second.", "Subsection 4.1 illustrates how positive evidence of understanding is provided; no examples of CRs are presented there.", "Subsection 4.2's goal, on the other hand, is to illustrate CRs in different modalities, so here we focus on negative evidence.", "At the beginning of this dialogue, the DG is instructing the DF to find the rebreather.", "As part of this task, they have to 
press a button in order to open a door as shown in Figure 1. The figure shows a dialogue fragment and a screenshot of the shared view when the fragment starts.", "The turns which provide positive evidence at levels 3 and 4 are shown in boldface: DG(1): see that button straight ahead of you?", "DF(2): mhm DG(3): hit that one DF(4): ok (Figure 1: Example of the view shared by the dialogue participants and a fragment from the SCARE corpus.)", "If evidence for a proposal is followed by a turn that is not evidence of uptake (of the proposal) then we say that the turn is a CR.", "The dialogue fragment reproduced below starts when the DG is trying to get the DF to press the button that is straight ahead in their current view; this button opens the cabinet where the rebreather is located.", "As part of this project, the DG first makes sure that the DF identifies this button using the sub-dialogue constituted by (1) and (2).", "Once the button is identified, the short instruction in (3) suffices to convey the goal of the joint project, namely hitting this button; this is acknowledged at level 4 in turn (4) when the DF presses the button.", "Now we turn to an extended example, extracted from the SCARE corpus, of clarification requests at different levels.", "Between square brackets we indicate forms of non-linguistic communication.", "The DG utters an instruction in (1).", "In turn (2) the DF makes explicit an obstacle at level 3 that must be solved before putting the rebreather in the cabinet, namely identifying cabinet 9; in doing so he proposes this task.", "In turn (3) the DG proposes to identify cabinet 9 by first identifying its location.", "Turn (4) is evidence of uptake of turn (3) (the DG answers his own question), but it is also evidence of a further proposal: get back to the starting room.", "DG(1): we have to put it in cabinet nine [pause] DF(2): yeah they're not numbered [laughs] DG(3): [laughs] where is cabinet nine DG(4): it's kinda like back where you started so DF(5): ok so I have to go back through here?", "DG(6): yeah DF(7): and around the corner?", "DG(8): right DF(9): and then do I have to go back up the steps?", "DG(10): yeah DF(11): alright this is where we started DG(12): ok so your left ca-[pause] the left one DF(13): so how do I open it?", "DF(14): one of the buttons?", "DG(15): yeah, it's the left one DF(16): makes sense DF(17): alright so we put it in cabinet nine Of the 17 turns, 9 were uttered by the DF and 8 by the DG.", "Of the 9 turns by the DF, 5 are CRs at level 4 and one is at level 3. Turn (2) is a CR of instruction (1).", "Turns (5), (7) and (9) are CRs of instruction (4).", "Utterance (11) shows positive evidence at level 4 of instruction (4), so this instruction cannot be further clarified following the recipe we defined in Section 3. 
Turns (13) and (14) are CRs of utterance (12).", "The positive evidence at level 4 of instruction (12) is completed by a physical action of the DF in the game world: opening the cabinet by pressing the left button while uttering (16).", "Finally, turn (17) together with the corresponding physical action is positive evidence at level 4 of instruction (1).", "In this section, we identify and discuss a number of pressures that interact in order to determine the number and type of CRs that occur in dialogue; we also explain why it makes sense (although it may seem counter-intuitive at first sight) that too much uncertainty will tend to lower the number of CRs.", "The distribution and types of CRs found in a corpus depend on the characteristics of the task that the dialogues in the corpus are addressing.", "Previous clarification corpus studies (Purver, 2004; Rieser and Moore, 2005; Rodríguez and Schlangen, 2004) have required expensive and detailed annotations by linguists, who also evaluated the quality of the datasets.", "Purver (2004) annotates more than 10K turns of the BNC corpus, which contains English dialogue transcriptions of topics of general interest in multiparty dialogue such as meetings.", "These annotations were used to build a dialogue system that could make and understand relevant clarifications related to different modalities (Purver, 2006).", "(Rieser and Moore, 2005) and (Rodríguez and Schlangen, 2004) did similar annotations on task-oriented dialogue corpora.", "(Rieser and Moore, 2005) looked for CRs in a corpus of English task-oriented human-human dialogue called Communicator.", "The corpus consists of travel reservation dialogues between a client and a travel agent.", "The interactions occur by phone; the participants do not have a shared view of the task.", "The corpus comprises 31 dialogues of 67 turns each (on average), of which 4.6% of the turns are CRs.", "12% of the CRs found were classified as level 4 CRs, such as the following: Client: You know what the conference might be downtown Seattle so I may have to call you back on that.", "Agent: Okay.", "Did you want me to wait for the hotel then?", "In this corpus the world validation is informational, not physical as in the Bielefeld data that we turn to now.", "(Rodríguez and Schlangen, 2004) looked for CRs in a corpus of German task-oriented human-human dialogue called Bielefeld.", "The dialogues occur in an instruction-giving task for building a model plane.", "The interactions occur face to face; the participants have a shared view of the task.", "The corpus consists of 22 dialogues, with 180 turns each (on average), of which 5.8% of the turns are CRs.", "22% of the CRs found were classified as level 4 CRs, such as the following: DG: Turn it on.", "DF: By pushing the red button?", "We analyzed the SCARE corpus while watching the associated videos and we classified the clarification requests according to the levels of communication using the decision procedure explained in Section 3. 
We found that 6.5% of the turns are CRs.", "Of these, 65% belong to level 4 of Table 2, and 31% belong to level 3 (most of them related to reference resolution).", "Only 2% of the CRs were acoustic (level 2) since the channel used was very reliable, and another 2% had to do with establishing contact (level 1).", "The SCARE corpus presents slightly more CRs (at 6.5%) than the corpora analyzed in previous work (which reported that 4%-6% of the dialogue turns were CRs).", "(Footnote 7: We will release our annotations to the research community upon request.)", "Furthermore, in contrast to the BNC corpus study (Purver, 2004), most CRs in the SCARE corpus occurred at level 4. What task characteristics might have caused the observed differences?", "We hypothesize that the following six characteristics account for the larger proportion of CRs at level 4 that we find in the SCARE corpus.", "Task-oriented dialogues (unlike general-interest dialogues) are constrained by the task; thus the hearer may have a better hypothesis of what the problem is with the source utterance.", "He also has a clear motivation for asking for clarifications when the utterance does not fit his model of the task.", "Dialogues situated in an instruction-giving task show an asymmetry between the knowledge that the dialogue participants (DPs) have about the task.", "The Direction Giver (DG) knows how the task has to be done and the Direction Follower (DF) doesn't.", "Hence, it is to be expected that the DF will have doubts about the task which (both DPs know) can only be answered by the DG.", "In symmetric dialogues, it might not be clear who has what information, and then the DPs might not know who can answer the CRs.", "Immediate world validation seems to play a role as well.", "Dialogues that interleave linguistic actions and informational or physical actions exhibit immediate world validation of the interpretations.", "If an instruction fails in the world, the DF will ask for clarification.", "When the DPs have a shared view of the task, the DP that is acting on the world knows that the other participant is observing him and verifying his actions, and will therefore try to be sure of what he has to do before doing it.", "If he is not sure he will ask.", "Long dialogues (more than 100 turns) tend to increase the percentage of clarifications because DPs prefer to ask questions when they have a good hypothesis to offer.", "The longer the interaction, the more background is shared by the DPs and the easier it will be to come up with a good hypothesis.", "Finally, if there are actions in some modality that are irreversible, then the DPs will clarify more until they are sure of what they have to do.", "Humans switch between clarifications grounded in different modalities seamlessly, and we have argued they do so systematically; in effect they do so by following a recipe for grounding classifications.", "We obtained this recipe by granting a role to both perceptual and collaborative grounding in clarification requests.", "This we did by examining Clark's (1996) action ladder of communication and Ginzburg, Purver and colleagues' (2012) classification of clarification phenomena, and combining the concept of level taken from the ladder of communication with Gabsdil's (2003) test for clarification requests.", "We reframed Clark's downward evidence and upward completion properties for multimodal interactions.", "This gave us the following: given an utterance, a subsequent turn is its clarification grounded in modality m if it cannot be preceded by 
positive evidence of understanding in m.", "This provides a unified way to frame clarification mechanisms and their interactions across modalities, something we view as useful in its own right given the scattered literature on clarification mechanisms.", "However, we also suggested that this recipe was suitable for learning from data collected by crowdsourcing.", "We supported this by examining the claim that clarifications are rare in dialogue datasets (Ginzburg, 2012), and that current data-hungry algorithms cannot learn them.", "We argued that whether they are rare or not depends on pragmatic factors of the conversation and the modality of the grounded clarification.", "Moreover, along the way we noted a number of practical issues (work with large dialogues; don't just provide annotators with dialogue fragments; take future conversational history into account when annotating) that we think could have an important impact on learnability.", "Below we list some possible objections to our proposal.", "We also include our responses in the hope that this will motivate further debate on these issues in the community.", "Objection: I still don't have a feel for how much we will gain from this when it comes to a practical, realistic use case; in particular, for an end-to-end system rather than an NLP pipeline.", "Response: Being able to identify and annotate a turn as a clarification request can help an end-to-end system learn to apply the mechanisms of collaborative grounding to subdialogs, which have rules that differ from modality to modality.", "Objection: The biggest problem I see is that the distinction of the different levels (which the correct annotation relies on) might not be clear-cut (in particular when considering that crowdsourced annotations usually come from non-experts).", "I have no idea what quality we get, nor what inter-annotator agreement figures we can expect.", "Response: Our recipe builds on previous methodologies for which inter-annotator agreement has been reported in certain corpora: e.g., .70 for the Bielefeld corpus (Rodríguez and Schlangen, 2004), and .75 for the BNC corpus (Purver, 2004).", "Our methodology refines Clark's (1996) 4-level classification by grounding each level (previously only described by means of examples) in 4 different modalities relevant for situated dialog: socioperception, hearing, vision and movement.", "This new grounded characterization should improve on previous inter-annotator agreement.", "Using our extended methodology we report a kappa of .84 for the SCARE corpus.", "Objection: The corpora that are being investigated are all very domain-specific and relatively small in terms of numbers of dialogues (but with a large average number of turns).", "This means that even if we were to obtain annotation quality figures, it would still raise the question of what general conclusions we can draw from this.", "Response: We share this concern; our goal with this paper is to motivate more work in this area.", "We believe that this objection actually lends support to our insistence on the importance of a more fine-grained analysis of grounding mechanisms.", "Our methodology generalizes to domains that ground the communicative intent in the modalities of socioperception, hearing, vision and movement.", "Examples are robots and virtual assistants, where the dialog partners share a perceivable environment.", "Our argument is that better conceptualizations of clarification subdialogs are needed so that models are able to identify them, distinguish the different types ruled by the different modalities, 
and learn the structures that govern them.", "This paper urges the community to address a research gap: how clarification mechanisms can be learned from data.", "We believe that novel research methodologies which highlight the importance of the role of clarification mechanisms in communicative intent are needed for this.", "So we presented an annotation methodology, based on a theoretical analysis of clarification requests, which unified a number of previous accounts.", "But to conclude, a different note.", "As dialogue systems get better at negotiating meaning with clarifications, future work will need to seriously consider how people relate to conversationally-gifted artificial agents.", "Studies of how users feel when interacting with dialogue systems (Brave et al., 2005; Portela and Granell-Canut, 2017) found that systems can have a psychological impact on users; thus it will become increasingly important to consider the risks of users developing social or emotional bonds with more sophisticated systems (thereby affecting their well-being in unforeseen ways) and of users being emotionally manipulated by them.", "Socioperceptive dialogue systems could turn out to have very sharp teeth indeed.", "We thank the anonymous reviewers for their detailed reviews and insightful comments.", "In this paper we have not trained machine learning models, so we have used negligible computing power.", "We have not collected a new dataset, so we have not used crowdsourcing.", "The annotation of the SCARE corpus was done by one of the authors and a friend who was interested in the work and was not economically rewarded.", "As we noted in the paper's conclusion, there are important ethical issues that future work in this area will need to consider.", "But there are also more immediate ethical considerations to discuss, and we turn to these now.", "First, the datasets that we use in this paper are described in (Purver, 2004; Rodríguez and Schlangen, 2004; Rieser and Moore, 2005; Stoia, 2007).", "The dataset in (Purver, 2004) contains spoken British English dialogues collected during meetings.", "The dataset used in (Rieser and Moore, 2005) is a fragment of the Carnegie Mellon Communicator Corpus (Bennett and Rudnicky, 2002), and is in American English.", "In these dialogues, an experienced travel agent is making reservations for trips that people in the Carnegie Mellon Speech Group were taking in the upcoming months.", "There is no information as to whether the dialogue participants were rewarded or notified about the dataset collection.", "The dataset in (Rodríguez and Schlangen, 2004) includes dialogues in which one participant gives instructions in German to the other to build a model plane.", "Finally, the SCARE corpus (Stoia, 2007) is an American English corpus collected using students at Ohio State University; they were paid to participate in the experiment.", "Future work in this area will need to collect new datasets that reflect the interactions between different types of clarifications in different modalities.", "Usually such collections are crowdsourced, which raises ethical concerns about fair wages and the number of hits per day.", "We would like to encourage the community to value datasets in languages other than English in order to model different strategies for indicating the source of the clarification (prosody, syntactic construction, etc.).", "Last but not least, computing power and carbon footprint should be considered.", "Machine learning models trained on long multimodal dialogue histories may get very big very fast.", 
"We need models that learn to summarize dialogue histories for the sake of the environment and the budget of low-income researchers." ]
[ "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "abstain", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "method", "other", "abstain", "other", "other", "other", "other", "other", "abstain", "method", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "method", "abstain", "abstain", "method", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain" ]
[ "This study explores the necessity of performing cross-corpora evaluation for grammatical error correction (GEC) models.", "GEC models have been previously evaluated based on a single commonly applied corpus: the CoNLL-2014 benchmark.", "However, the evaluation remains incomplete because the task difficulty varies depending on the test corpus and conditions such as the proficiency levels of the writers and essay topics.", "To overcome this limitation, we evaluate the performance of several GEC models, including NMT-based (LSTM, CNN, and transformer) and an SMT-based model, against various learner corpora (CoNLL-2013, CoNLL-2014, FCE, JFLEG, ICNALE, and KJ).", "Evaluation results reveal that the models' rankings considerably vary depending on the corpus, indicating that single-corpus evaluation is insufficient for GEC models.", "Grammatical error correction (GEC) is the task of correcting various grammatical errors in a given text, which is typically written by non-native speakers.", "Previous studies focused on typical errors such as those in the use of articles (Han et al., 2006), prepositions (Felice and Pulman, 2008), and noun numbers (Nagata et al., 2006).", "Machine translation approaches are being presently applied for GEC (Junczys-Dowmunt et al., 2018; Chollampatt and Ng, 2018; Ge et al., 2018; Junczys-Dowmunt and Grundkiewicz, 2016).", "In these approaches, GEC is treated as a translation problem from the erroneous text to the correct text (Mizumoto et al., 2012; Felice et al., 2014; Junczys-Dowmunt and Grundkiewicz, 2014).", "However, the evaluation of GEC performance is unfortunately not complete because researchers tend to evaluate their models on a single corpus.", "2014) has been recently used for such evaluation.", "Single-corpus evaluation may be insufficient in cases wherein a GEC model generally aims to robustly correct grammatical errors in any written text partly because the task difficulty varies depending on proficiency levels and essay topics.", "Although a model outperforms a baseline in one corpus, the model in another corpus may perform better, leading to different conclusions from what we know.", "This study explores the necessity of performing cross-corpora evaluation for GEC models.", "The performance of four recent models, namely three neural machine translation (NMT)-based models (LSTM, CNN, and transformer) and a statistical machine translation (SMT)-based model is evaluated against six learner corpora (CoNLL-2014, CoNLL-2013 (Ng et al., 2013), FCE (Yannakoudakis et al., 2011), JFLEG ( Napoles et al., 2017), KJ (Nagata et al., 2011), and ICNLAE (Ishikawa, 2013)).", "Evaluation results show that the models' rankings considerably vary depending on the corpus.", "Empirical results reveal that models must be evaluated using multiple corpora from different perspectives.", "(cid:15) We first explore the necessity of performing cross-corpora evaluation for GEC models.", "(cid:15)", "We empirically show that the single-corpus evaluation may be unreliable.", "(cid:15)", "Our source code is published for cross-corpora evaluation so that researchers in the community can adequately and easily evaluate their models based on multiple corpora.", "1 2 Related Work We are motivated by the issue of robustness in the parsing community.", "This field pre-1 https://github.com/tomo-wb/GEC_CCE 1310 viously focused on improving parsing accuracy on Penn Treebank (Marcus et al., 1993).", "However, robustness was largely improved by evaluation using multiple corpora including 
OntoNotes (Hovy et al., 2006) and Google Web Treebank (Petrov and McDonald, 2012).", "A situation similar to this might also occur in GEC.", "In other words, evaluation in GEC has relied heavily on the CoNLL-2014 benchmark, which implies that the field may be overfitting to this dataset.", "Other corpora are used for evaluation, such as KJ (Mizumoto et al., 2012) and JFLEG (Sakaguchi et al., 2017; Junczys-Dowmunt et al., 2018; Chollampatt and Ng, 2018; Ge et al., 2018; Xie et al., 2018).", "However, these evaluations still depend on one or at most two corpora.", "Cross-corpora evaluation is discussed herein using six corpora, namely CoNLL-2014, CoNLL-2013, FCE, JFLEG, KJ, and ICNALE.", "The following conditions were considered when selecting corpora: • The corpus must be used at least once in the GEC community.", "•", "Based on the hypothesis that writers' proficiency affects the error distribution of any given text, we add a corpus with relatively low proficiency (KJ) compared to CoNLL-2014.", "CoNLL-2014 (Ng et al., 2014), the official dataset of the CoNLL-2014 shared task, is a collection of essays written by students at the National University of Singapore and is commonly used as test data for the CoNLL-2014 benchmark.", "This dataset contains only two essay topics.", "CoNLL-2013 (Ng et al., 2013), the official dataset of the CoNLL-2013 shared task, is commonly used as the development data for the CoNLL-2014 benchmark and contains only two essay topics.", "Cambridge ESOL First Certificate in English (FCE) (Yannakoudakis et al., 2011) is a dataset containing 1,244 examination scripts of the Cambridge FCE examination.", "Topics and first languages (L1s) in the dataset are diverse because it contains essays for 10 topics written by non-native speakers from various countries.", "JHU FLuency-Extended GUG Corpus (JFLEG) (Napoles et al., 2017) contains approximately 1,500 sentences from an English proficiency test.", "It contains sentences written by learners of the English language with various L1s and proficiency levels.", "Konan-JIEM Learner Corpus (KJ) (Nagata et al., 2011) contains 233 essays written on 10 topics by students of a Japanese college, which are manually error-tagged and shallow-parsed.", "International Corpus Network of Asian Learners of English, Written Essays (ICNALE) (Ishikawa, 2013) contains essays written by college and graduate students from ten Asian countries/regions (China, Hong Kong, Indonesia, Japan, Korea, Pakistan, the Philippines, Singapore, Taiwan, and Thailand).", "The original ICNALE is not error annotated.", "Therefore, we sampled a total of 1,736 sentences, which are manually annotated with grammatical errors based on KJ's annotation scheme.", "Table 1 summarizes the properties of these corpora.", "Let N and M denote the total number of source words and sentences in a corpus, respectively.", "Word error rate (WER) is defined as follows: WER = (Σ_{m=1}^{M} d(X_m, Y_m)) / (Σ_{m=1}^{M} N_m), where X_m denotes each source sentence, Y_m denotes each corrected sentence, N_m denotes the number of source words in X_m, and d(X_m, Y_m) denotes the edit distance between X_m and Y_m computed using dynamic programming.", "The following conclusions are derived: (1) CoNLL-2014 has narrow coverage of topics, proficiency and L1s compared with other corpora such as JFLEG and FCE.", "(2) Several learner corpora are available for the evaluation of GEC models.", "These corpora can help investigate the performance of GEC models under different conditions.", "The following factors are considered while selecting our models.", 
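To make the WER definition above concrete, the following is a minimal Python sketch. The whitespace tokenization and the toy sentences are illustrative assumptions; d(X_m, Y_m) is the standard word-level Levenshtein distance computed with dynamic programming.

```python
def edit_distance(src, tgt):
    """Word-level Levenshtein distance via dynamic programming."""
    m, n = len(src), len(tgt)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = dp[i - 1][j - 1] + (src[i - 1] != tgt[j - 1])
            dp[i][j] = min(dp[i - 1][j] + 1, dp[i][j - 1] + 1, sub)
    return dp[m][n]

def corpus_wer(sources, corrections):
    """WER = sum_m d(X_m, Y_m) / sum_m N_m over a corpus."""
    total_edits = sum(edit_distance(x.split(), y.split())
                      for x, y in zip(sources, corrections))
    total_words = sum(len(x.split()) for x in sources)
    return total_edits / total_words

# Toy example (hypothetical learner sentence and its correction):
print(corpus_wer(["I has a apple ."], ["I have an apple ."]))  # 2/5 = 0.4
```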
"the aforementioned factors: LSTM : We use a bi-directional LSTM in the encoder and an LSTM with an attention mechanism in the decoder.", "Both the encoder and the decoder comprise two layers.", "The LSTM hidden state and word embedding sizes are set to be 500.", "CNN : We follow the previous study ( Chollampatt and Ng, 2018), namely a fully convolutional encoderdecoder architecture with seven convolutional layers.", "The hyperparam-eters used in a previous study are used herein (Chollampatt and Ng, 2018).", "Transformer : Transformer is the self-attention-based model proposed by Vaswani et al. (2017).", "Six layers are used for both the encoder and decoder along with eight attention heads.", "The word embedding size is set to 1024 dimensions, and the size of position-wise feed-forward networks is set to 4096 dimensions at each inner layer.", "SMT : We essentially follow the idea used in a previous study ( Junczys-Dowmunt and Grundkiewicz, 2016), with some key differences.", "Specifically, we only use English Wikipedia for language model training and only the NUS Corpus of Learner English (NUCLE) and the Lang-8 Learner Corpora (Lang-8) for translation model training to make the experimental settings equal in all models.", "We use two public datasets, namely Lang-8 (Mizumoto et al., 2011) and NUCLE (Dahlmeier et al., 2013), for training.", "Our pre-processing and experimental setup is similar to that reported previously (Chollampatt and Ng, 2018).", "In particular, a subset of NUCLE (5.4K) is utilized as the development data for selecting the model; the remaining subset (1.3M) is utilized as the training data.", "All the models are trained, tuned, and tested in the same way.", "The models are tested on each test data shown in Table", "1. As an evaluation metric, we use F 0 : 5 score computed by applying the MaxMatch scorer (Dahlmeier and Ng, 2012) and GLEU (Napoles et al., 2015).", "We determine the average F 0 : 5 and average GLEU scores of the four models, which are trained with different random initializations, following a previously reported approach ( Chollampatt and Ng, 2018).", "Figure 1 shows the performance of each model sorted from best to worst based on their F 0 : 5 score, revealing that the performance substantially varies depending on the corpus.", "For example, the performance of the transformer ranges from the score of F 0 : 5 , which is as low as 36.20 on CoNLL-2013, to as high as 60.06 on JFLEG.", "Notably, their rankings also considerably vary.", "Transformer performs best on CoNLL-2014.", "However, it exhibits third-best performance among FCE, KJ, and ICNALE; LSTM outperforms the other models by a large margin of up to 5.3 F 0 : 5 points.", "Some examples of the model outputs are presented in Table 2 and Table", "3. Some situations are successfully corrected using transformer (Table 2), whereas it failed to perform in other situations (Table 3).", "The reason for difference in the model rankings cannot be generally stated because it is influenced by various factors such as the learner's proficiency, essay topic, and L1.", "The experimental results show, however, that discussions based on the performance on CoNLL-2014 may only hold under certain conditions.", "rankings on FCE show different trends in Figure 1 and Figure", "2. 
"This is partly because F0.5 and GLEU evaluate different perspectives of the models.", "Furthermore, the evaluation data and metrics must be set appropriately depending on which aspects of the model need to be evaluated.", "Experimental results indicate that the benchmark single-corpus evaluation is not robust; however, it remains undetermined whether a single, more diverse corpus would suffice.", "Both JFLEG and FCE can be considered diverse corpora because they contain examination scripts written by language learners from all over the world.", "JFLEG in particular is designed as a more diverse corpus for developing and evaluating GEC models (Napoles et al., 2017).", "If a diverse single-corpus evaluation sufficed, the rankings of the models would remain the same.", "However, experimental results have shown that the model rankings on JFLEG and FCE are different (Figure 1).", "Thus, single-corpus evaluation is deemed weak regardless of its diversity.", "This study discusses the importance of evaluating GEC models from various perspectives using multiple corpora.", "Multi-perspective evaluation does not necessarily mean using multiple corpora.", "Many aspects of a corpus can be used for analysis, such as the proficiency of the writers, essay topics, and the writer's native language.", "As a case study, we evaluate and analyze the models with regard to essay WER.", "Table 4 shows the performance (in precision, recall, and F0.5) of all the models when WER is the lowest (7.64% for ICNALE) and the highest (20.86% for JFLEG).", "Transformer and LSTM outperform all the other models in the highest and the lowest error-rated corpora, respectively.", "Experimental results show that LSTM and transformer may be more precision-oriented and recall-oriented, respectively.", "Further, precision-oriented models have an advantage over recall-oriented models when a given text contains few errors, and vice versa.", "This knowledge makes it possible to choose a model based on the task at hand.", "This study explored the necessity of performing cross-corpora evaluation for GEC models, for which the performance of several GEC models was investigated against various learner corpora.", "Empirical evaluation results revealed that the model performance and rankings considerably vary depending on the corpus, suggesting that a single-corpus evaluation can be unreliable.", "Therefore, cross-corpora evaluation should be applied to GEC models.", "We also published our source code for the cross-corpora evaluation framework so that researchers in the community can adequately and easily evaluate their models based on multiple corpora.", "Our future study will further examine the robustness of several existing evaluation metrics and explore new metrics appropriate for cross-corpora and/or cross-domain evaluation.", "We are grateful to the members of the Tohoku University Natural Language Processing Laboratory as well as the anonymous reviewers for their insightful comments and suggestions." ]
[ "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "other", "abstain", "abstain", "objective", "objective", "result", "objective", "method", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "objective", "other" ]
[ "The timings of spoken response offsets in human dialogue have been shown to vary based on contextual elements of the dialogue.", "We propose neural models that simulate the distributions of these response offsets, taking into account the response turn as well as the preceding turn.", "The models are designed to be integrated into the pipeline of an incremental spoken dialogue system (SDS).", "We evaluate our models using offline experiments as well as human listening tests.", "We show that human listeners consider certain response timings to be more natural based on the dialogue context.", "The introduction of these models into SDS pipelines could increase the perceived naturalness of interactions.", "1 1 Introduction The components needed for the design of spoken dialogue systems (SDSs) that can communicate in a realistic human fashion have seen rapid advancements in recent years (e.g. Li et al. (2016); Zhou et al. (2018); Skerry-Ryan et al. (2018)).", "However, an element of natural spoken conversation that is often overlooked in SDS design is the timing of system responses.", "Many turn-taking components for SDSs are designed with the objective of avoiding interrupting the user while keeping the lengths of gaps and overlaps as low as possible e.g. Raux and Eskenazi (2009).", "This approach does not emulate naturalistic response offsets, since in human-human conversation the distributions of response timing offsets have been shown to differ based on the context of the first speaker's turn and the context of the addressee's response (Sacks et al., 1974; Levinson and Torreira, 2015; Heeman and Lunsford, 2017).", "It has also been shown that listeners have different anticipations about upcoming 1 Our code is available at https://github.com/ mattroddy/RTNets .", "responses based on the length of a silence before a response (Bogels et al., 2019).", "If we wish to realistically generate offsets distributions in SDSs, we need to design response timing models that take into account the context of the user's speech and the upcoming system response.", "For example, offsets where the first speaker's turn is a backchannel occur in overlap more frequently (Levinson and Torreira, 2015).", "It has also been observed that dispreferred responses (responses that are not in line with the suggested action in the prior turn) are associated with longer delays (Kendrick and Torreira, 2015; Bogels et al., 2019).", "Overview We propose a neural model for generating these response timings in SDSs (shown in Fig. 1).", "The response timing network (RTNet) operates using both acoustic and linguistic features extracted from user and system turns.", "The two main components are an encoder, which encodes the system response h z , and an inference network, which takes a concatenation of user features ( x n ) and h z .", "RTNet operates within an incremental SDS framework (Schlangen and Skantze, 2011) where information about upcoming system responses may be available before the user has finished speaking.", "RTNet also functions independently of higher-level turn-taking decisions that are traditionally made by the dialogue manager (DM) component.", "Typically, the DM decides when the system should take a turn and also supplies the natural language generation (NLG) component with a semantic representation of the system response (e.g. intents, dialogue acts, or an equivalent neural representation).", "Any of the system response representations that are downstream from the DM's output representation (e.g. 
lexical or acoustic features) can potentially be used to generate the response encoding.", "Therefore, we assume that the decision for the system to take a turn has already been made by the DM, and our objective is to predict (on a frame-by-frame basis) the appropriate time to trigger the system turn.", "It may be impractical in an incremental framework to generate a full system response and then re-encode it using the response encoder of RTNet.", "To address this issue, we propose an extension of RTNet that uses a variational autoencoder (VAE) (Kingma and Welling, 2014) to train an interpretable latent space which can be used to bypass the encoding process at inference time.", "This extension (RTNet-VAE) provides the benefit of a data-driven neural representation of response encodings that can be manipulated without the overhead of the encoding process.", "This representation can be manipulated using vector algebra in a flexible manner by the DM to generate appropriate timings for a given response.", "Our model's architecture is similar to VAEs with recurrent encoders and decoders proposed in Bowman et al. (2016); Ha and Eck (2018); Roberts et al. (2018).", "Our use of a VAE to cluster dialogue acts is similar to the approach used in Zhao et al. (2017).", "Our vector-based representation of dialogue acts takes inspiration from the 'attribute vectors' used in Roberts et al. (2018) for learning musical structure representations.", "Our model is also related to continuous turn-taking systems (Skantze, 2017) in that our model is trained to predict future speech behavior on a frame-by-frame basis.", "The encoder uses a multiscale RNN architecture similar to the one proposed in Roddy et al. (2018) to fuse information across modalities.", "Models that intentionally generate responsive overlap have been proposed in DeVault et al. (2011); Dethlefs et al. (2012).", "Other models have also been proposed that generate appropriate response timings for fillers (Nakanishi et al., 2018; Lala et al., 2019) and backchannels (Morency et al., 2010; Meena et al., 2014; Lala et al., 2017).", "This paper is structured as follows: First, we present how our dataset is structured and our training objective.", "Then, in sections 2.1 and 2.2 we present details of our two models, RTNet and RTNet-VAE.", "Section 2.3 presents our input feature representations.", "In section 2.4 we discuss our training and testing procedures.", "In sections 3.1 and 3.2 we analyze the performance of RTNet and RTNet-VAE.", "Finally, in section 4 we present the results of a human listener test.", "Dataset: Our dataset is extracted from the Switchboard-1 Release 2 corpus (Godfrey and Holliman, 1997).", "Switchboard has 2438 dyadic telephone conversations with a total length of approximately 260 hours.", "The dataset consists of pairs of adjacent turns by different speakers which we refer to as turn pairs (shown in Fig. 2).", 
2).", "Turn pairs are automatically extracted from orthographic annotations using the following procedure: We extract frame-based speech-activity labels for each speaker using a frame step-size of 50ms.", "The frame-based representation is used to partition each person's speech signal into interpausal units (IPUs).", "We define IPUs as segments of speech by a person that are separated by pauses of 200ms or greater.", "IPUs are then used to automatically extract turns , which we define as consecutive IPUs by a speaker in which there is no speech by the other speaker in the silence between the IPUs.", "A turn pair is then defined as being any two adjacent turns by different speakers.", "The earlier of the two turns in a pair is considered to be the user turn and the second is considered to be the system turn .", "of the ground truth start time.", "The target labels in each turn pair are derived from the ground truth speech activity labels as shown in Fig. 2.", "Each 50 ms frame has a label y { 0 , 1 } , which consists of the ground truth voice activity shifted to the left by one frame.", "As shown in the figure, we only include frames in the span R in our training loss.", "We define the span R as the frames from the beginning of the last IPU in the user turn to the frame immediately prior to the start of the system turn.", "We do not predict at earlier frames since we assume that at these mid-turn-pauses the DM has not decided to take a turn yet, either because it expects the user to continue, or it has not formulated one yet.", "As mentioned previously in section 1, we design RTNet to be abstracted from the turn-taking decisions themselves.", "If we were to include pauses prior to the turn-final silence, our response generation system would be additionally burdened with making turn-taking decisions, namely, classifying between mid-turn-pauses and end-of-turn silences.", "We therefore make the modelling assumption that the system's response is formulated at some point during the user's turn-final IPU.", "To simulate this assumption we sample an index RSTART from the span of R using a uniform distribution.", "We then use the reduced set of frames from RSTART to REND in the calculation of our loss.", "Encoder The encoder of RTNet (shown in Fig. 
"Encoder: The encoder of RTNet (shown in Fig. 3) fuses the acoustic and linguistic modalities from a system response using three bi-directional LSTMs.", "Each modality is processed at an independent timescale and then fused in a master Bi-LSTM which operates at the linguistic temporal rate.", "The output of the master Bi-LSTM is a sequence of encodings h_0, h_1, ..., h_I, where each encoding is a concatenation of the forward and backward hidden states of the master Bi-LSTM at each word index.", "The linguistic Bi-LSTM takes as input the sequence of 300-dimensional embeddings of the tokenized system response.", "We use three special tokens: SIL, WAIT, and NONE.", "The SIL token is used whenever there is a gap between words that is greater than the frame size (50 ms).", "The WAIT and NONE tokens are inserted as the first and last tokens of the system response sequence, respectively.", "The concatenation [h_0; h_1; h_I] is passed as input to a RELU layer (we refer to this layer as the reduction layer) which outputs the h_z encoding.", "The h_z encoding is used (along with user features) in the concatenated input to the inference network.", "Since the WAIT embedding corresponds to the h_0 output of the master Bi-LSTM and the NONE embedding corresponds to h_I, the two embeddings serve as triggering symbols that allow the linguistic and master Bi-LSTMs to output relevant information accumulated in their cell states.", "The acoustic Bi-LSTM takes as input the sequence of acoustic features and outputs a sequence of hidden states at every 50 ms frame.", "As shown in Fig. 3, we select the acoustic hidden states that correspond to the starting frame of each linguistic token and concatenate them with the linguistic hidden states.", "Since there are no acoustic features available for the WAIT and NONE tokens, we train two embeddings to replace these acoustic LSTM states (shown in purple in Fig. 3).", "The use of acoustic embeddings results in there being no connection between the WAIT acoustic embedding and the first acoustic hidden state.", "For this reason we include h_1 in the [h_0; h_1; h_I] concatenation, in order to make it easier for information captured by the acoustic Bi-LSTM to be passed through to the final concatenation.", 
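A PyTorch sketch of the fusion step just described: the acoustic Bi-LSTM states are gathered at each word's start frame and concatenated with the linguistic Bi-LSTM states before the master Bi-LSTM. The dimensions follow the figures given in the paper (300-d embeddings; 256/256/512 hidden sizes; 40 log-mels plus 17 GeMAPS features), but the class and argument names are assumptions, and the WAIT/NONE acoustic embeddings and the reduction layer are omitted for brevity.

```python
import torch
import torch.nn as nn

class ResponseEncoder(nn.Module):
    def __init__(self, d_ac=57, d_word=300, h_ac=256, h_ling=256, h_master=512):
        super().__init__()
        self.acoustic = nn.LSTM(d_ac, h_ac, bidirectional=True, batch_first=True)
        self.linguistic = nn.LSTM(d_word, h_ling, bidirectional=True, batch_first=True)
        self.master = nn.LSTM(2 * (h_ac + h_ling), h_master,
                              bidirectional=True, batch_first=True)

    def forward(self, acoustic_feats, word_embs, word_start_frames):
        # acoustic_feats: (B, T_frames, d_ac) at 50 ms frames
        # word_embs:      (B, T_words, d_word) for the tokenized response
        # word_start_frames: (B, T_words) long tensor, each word's first frame
        a, _ = self.acoustic(acoustic_feats)      # (B, T_frames, 2*h_ac)
        l, _ = self.linguistic(word_embs)         # (B, T_words, 2*h_ling)
        idx = word_start_frames.unsqueeze(-1).expand(-1, -1, a.size(-1))
        a_at_words = a.gather(1, idx)             # acoustic state per word
        fused, _ = self.master(torch.cat([a_at_words, l], dim=-1))
        return fused                              # h_0 ... h_I, one per token
```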
"Inference Network: The aim of our inference network is to predict a sequence of output probabilities Y = [y_RSTART, y_RSTART+1, ..., y_N] using (Figure 3: The encoder is three stacked Bi-LSTMs: acoustic, linguistic, and master; example input: <WAIT> Not right now but I have done uh Red <SIL> Cross work <NONE>.)", "a response encoding h_z, and a sequence of user features X = [x_0, x_1, ..., x_N].", "We use a single-layer LSTM (shown in Fig. 2) which is followed by a sigmoid layer to produce the output probabilities: [h_n; c_n] = LSTM_inf([x_n; h_z], [h_(n-1); c_(n-1)]), y_n = σ(W_h h_n + b_h). Since there are only two possible output values in a generated sequence {0, 1}, and the sequence ends once we predict 1, the inference network can be considered an autoregressive model where 0 is passed implicitly to the subsequent time-step.", "To generate an output sequence, we can sample from the distribution p(y_n = 1 | y_RSTART = 0, y_RSTART+1 = 0, ..., y_(n-1) = 0, X_0:n, h_z) using a Bernoulli random trial at each time-step.", "For frames prior to RSTART the output probability is fixed to 0, since RSTART is the point where the DM has formulated the response.", "During training we minimize the binary cross-entropy loss (L_BCE) between our ground truth objective and our output predictions Y.", "Motivation: A limitation of RTNet is that it may be impractical to encode system turns before triggering a response.", "For example, if we wish to apply RTNet using generated system responses, at run-time the RTNet component would have to wait for the full response to be generated by the NLG, which would result in a computational bottleneck.", "If the NLG system is incremental, it may also be desirable for the system to start speaking before the entirety of the system response has been generated.", "VAE: To address this, we bypass the encoding stage by directly using the semantic representation output from the DM to control the response timing encodings.", "We do this by replacing the reduction layer with a VAE (Fig. 4).", "To train the VAE, we use the same concatenation of encoder hidden states as in the RTNet reduction layer ([h_0; h_1; h_I]).", "We use a dimensionality-reduction RELU layer to calculate h_reduce, which is then split into μ and λ components via two more RELU layers.", "λ is passed through an exponential function to produce σ, a non-negative standard deviation parameter.", "We sample the latent variable z with the standard VAE method using μ, σ, and a random vector ε from the standard normal distribution N(0, I).", "A dimensionality-expansion RELU layer is used to transform z into the response encoding h_z, which has the same dimensionality as the output of the encoder: h_reduce = RELU(W_reduce [h_0; h_1; h_I] + b_reduce), μ = RELU(W_μ h_reduce + b_μ), λ = RELU(W_λ h_reduce + b_λ), σ = exp(λ/2), z = μ + σ ⊙ ε with ε ~ N(0, I), h_z = RELU(W_expand z + b_expand). We impose a Gaussian prior over the latent space using a Kullback-Leibler (KL) divergence loss term: L_KL = -1/(2 N_z) Σ (1 + λ - μ² - exp(λ)). The L_KL loss measures the distance of the generated distribution from a Gaussian with zero mean and unit variance.", "As we increase the value of w_KL we increasingly enforce the Gaussian prior on the latent space.", "In doing so our aim is to learn a smooth latent space in which similar types of responses are organized in similar areas of the space.", "Latent Space: During inference we can skip the encoding stage of RTNet-VAE and sample z directly from the latent space on the basis of the input semantic representation from the dialogue manager.", "Our sampling approach is to approximate the distribution of latent variables for a given response type using Gaussians.", "For example, if we have a collection of labelled backchannel responses (and their corresponding z encodings) we can approximate the distribution p(z | label = backchannel) using an isotropic Gaussian by simply calculating μ_backchannel and σ_backchannel, the maximum likelihood mean and standard deviations of each of the z dimensions.", 
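A NumPy sketch of the reparameterization, the KL term, and the isotropic-Gaussian approximation just described. The weights of the RELU layers are abstracted away; z_samples is a stand-in for encodings collected from labelled responses, and the 4-dimensional latent size matches the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    # z = mu + sigma * eps, with sigma = exp(log_var / 2), eps ~ N(0, I)
    return mu + np.exp(log_var / 2.0) * rng.standard_normal(mu.shape)

def kl_loss(mu, log_var):
    # L_KL = -1/(2 N_z) * sum(1 + log_var - mu^2 - exp(log_var))
    n_z = mu.shape[-1]
    return -np.sum(1.0 + log_var - mu**2 - np.exp(log_var)) / (2.0 * n_z)

def fit_isotropic_gaussian(z_samples):
    """Maximum-likelihood mean and per-dimension standard deviation,
    e.g. mu_backchannel and sigma_backchannel for p(z | backchannel)."""
    return z_samples.mean(axis=0), z_samples.std(axis=0)

# At inference time the encoder can be bypassed: draw z for one
# dialogue-act type, then expand it to h_z with the trained RELU layer.
z_backchannel = rng.normal(loc=1.0, scale=0.5, size=(200, 4))  # stand-in
mu, sigma = fit_isotropic_gaussian(z_backchannel)
z = mu + sigma * rng.standard_normal(4)
```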
"These vectors can also be used to calculate directions in the latent space with different semantic characteristics and then interpolate between them.", "Linguistic Features: We use the word annotations from the ms-state transcriptions as linguistic features.", "These annotations give us the timings for the starts and ends of all words in the corpus.", "As our feature representation, we use 300-dimensional word embeddings that are initialized with GloVe vectors (Pennington et al., 2014) and then jointly optimized with the rest of the network.", "In total there are 30080 unique words in the annotations.", "We reduced the number of embeddings down to 10000 by merging embeddings that had low word counts with the closest neighbouring embedding (calculated using cosine distance).", "We also introduce four additional tokens that are specific to our task: SIL, WAIT, NONE, and UNSPEC.", "SIL is used whenever there is a silence.", "WAIT and NONE are used at the start and end of all the system encodings, respectively.", "The use of UNSPEC (unspecified) is shown in Fig. 5.", "UNSPEC was introduced to represent temporal information in the linguistic embeddings.", "We approximate the processing delay in ASR by delaying the annotation until 100 ms after the ground truth frame where the user's word ended.", "This 100 ms delay was proposed in Skantze (2017) as a necessary assumption for modelling linguistic features in offline continuous systems.", "However, since voice activity detection (VAD) can supply an estimate of when a word has started, we propose that we can use this information to supply the network with the UNSPEC embedding 100 ms after the word has started.", "Acoustic Features: We combine 40 log-mel filterbanks and 17 features from the GeMAPS feature set (Eyben et al., 2016).", "The GeMAPS features are the complete set excluding the MFCCs (e.g. pitch, intensity, spectral flux, jitter, etc.).", 
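A small sketch of the timing convention described above for the linguistic features: a word's embedding becomes available two frames (100 ms) after the word ends, while the UNSPEC placeholder is emitted two frames after the word starts. The exact span covered by UNSPEC and the data layout are assumptions for illustration.

```python
FRAME = 0.05  # seconds per 50 ms frame

def linguistic_frame_inputs(words, n_frames, unspec="UNSPEC", sil="SIL"):
    """words: list of (token, start_sec, end_sec) from the annotations.
    Returns one token per frame, applying the 100 ms (2-frame) delays."""
    frames = [sil] * n_frames
    for token, start, end in words:
        onset = int(start / FRAME) + 2   # UNSPEC placeholder: start + 100 ms
        avail = int(end / FRAME) + 2     # word identity: end + 100 ms
        for t in range(onset, min(avail, n_frames)):
            frames[t] = unspec
        if avail < n_frames:
            frames[avail] = token
    return frames

print(linguistic_frame_inputs([("yeah", 0.10, 0.30)], 12))
# ['SIL', 'SIL', 'SIL', 'SIL', 'UNSPEC', 'UNSPEC', 'UNSPEC', 'UNSPEC',
#  'yeah', 'SIL', 'SIL', 'SIL']
```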
"Acoustic features were extracted using a 50 ms frame step.", "Training and Testing Procedures: The training, validation, and test sets consist of 1646, 150, and 642 conversations, respectively, with 151595, 13910, and 58783 turn pairs.", "The test set includes all of the conversations from the NXT-format annotations (Calhoun et al., 2010), which include references to the Switchboard Dialog Act Corpus (SWDA) (Stolcke et al., 2000) annotations.", "We include the entirety of the NXT annotations in our test set so that we have enough labelled dialogue act samples to analyse the distributions.", "We used the following hyperparameter settings in our experiments: The inference, acoustic, linguistic, and master LSTMs each had hidden sizes of 1024, 256, 256, and 512 (respectively).", "We used a latent variable size of 4, a batch size of 128, and L2 regularization of 1e-05.", "We used the Adam optimizer with an initial learning rate of 5e-04.", "We trained each model for 15000 iterations, with learning rate reductions by a factor of 0.1 after 9000, 11000, 13000, and 14000 iterations.", "While we found that randomizing RSTART during training was important for the reasons given in Section 2, it presented issues for the stability and reproducibility of our evaluation and test results for L_BCE and L_KL.", "We therefore randomize during training and sampling, but when calculating the test losses (reported in Table 1) we fix RSTART to be the first frame of the user's turn-final IPU.", "We also calculate the mean absolute error (MAE), given in seconds, from the ground truth response offsets to the generated output offsets.", "When sampling for the calculation of MAE, it is necessary to increase the length of the turn pair since the response time may be triggered by the", "sampling process after the ground truth time.", "We therefore pad the user's features with 80 extra frames in which we simulate silence artificially using acoustic features.", "During sampling, we use the same RSTART randomization process that was used during training, rather than fixing it to the start of the user's turn-final IPU.", "For each model we perform the sampling procedure on the test set three times and report the mean error in Table 1.", "Best Fixed Probability: To the best of our knowledge, there aren't any other published models that we can directly compare ours to.", "However, we can calculate the best performance that can be achieved using a fixed value for y.", "The best possible fixed y for a given turn pair is: y*_tp = 1 / ((REND - RSTART) / FrameLength).", "The best fixed y for a set of turn pairs is given by the expected value of y*_tp in that set: y*_fixed = E[y*_tp].", "This represents the best performance that we could achieve if we did not have access to any user or system features.", "We can use the fixed probability model to put the performance of the rest of our models into context.", 
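The two procedures just described can be sketched as follows; the probability sequence is a stand-in for the inference network's outputs, and the 80-frame padding mirrors the artificial-silence extension used when sampling for MAE.

```python
import random

def sample_offset_frame(probs, r_start):
    """Generate a response time: the first frame n >= RSTART whose
    Bernoulli trial on y_n succeeds (probabilities before RSTART are 0)."""
    for n, p in enumerate(probs):
        if n >= r_start and random.random() < p:
            return n
    return None  # no trigger, even within the padded frames

def best_fixed_probability(spans):
    """y*_tp = 1 / (number of frames in R); y*_fixed = E[y*_tp]."""
    return sum(1.0 / (r_end - r_start) for r_start, r_end in spans) / len(spans)

probs = [0.02] * 60 + [0.02] * 80  # turn pair plus 80 padded silence frames
print(sample_offset_frame(probs, r_start=20))
print(best_fixed_probability([(10, 40), (5, 25)]))  # (1/30 + 1/20)/2 = 0.0417
```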
6a.", "This # Model LBCELKLMAE Details 1 FullModel 0.1094 0.4539 NoVAE 2 FixedProbability 0.1295 1.4546 FixedProbability 3 NoEncoder 0.1183 0.4934 EncoderAblation 4 OnlyAcoustic 0.1114 0.4627 5 OnlyLinguistic 0.1144 0.4817 6 OnlyAcoustic 0.1112 0.5053 InferenceAblation 7 OnlyLinguistic 0.1167 0.4923 8 w KL = 0 .", "baseline RTNet model is better able to replicate many of the features of the true distribution in comparison with predicted offsets using the best possible fixed probability shown in Fig. 6b.", "The differences between the baseline and the fixed probability distributions are reflected in the results of rows 1 and 2 in Table 1.", "In Fig. 6a, the model has the most trouble reproducing the distribution of offsets between -500 ms and 0 ms. This part of the distribution is the most demanding because it requires that the model anticipate the user's turn-ending.", "From the plots it is clear that our model is able to do that to a large degree.", "We observe that after the user has stopped speaking (from 0 seconds onward) the generated distribution follows the true distribution closely.", "To look in more detail at how the system models the offset distribution we can investigate the generated distributions of labelled response dialogue acts in our test set.", "Fig. 7 shows plots of backchannels vs. statements (Fig. 7a), and yes vs. no (Fig.7b) responses.", "In the second rows, we can see that the full model is able to accurately capture the differences in the contours of the true distributions.", "For example, in the no dialogue acts, the full model accurately generates a mode that is delayed (relative to yes dialogue acts).", "Encoder Ablation The performance of the response encoder was analysed in an ablation study, with results in rows 3 through 5 of Table 1.", "Without the response encoder, there is a large decrease in performance, relative to the full model.", "From looking at the encoders with only acoustic and linguistic modalities, we can see that the results benefit more from the acoustic modality than the linguistic modality.", "If we consider the impact of the encoder in more detail, we would expect that the network would not be able to model distributional differences between different types of DA responses without an encoder.", "This is confirmed in the fourth rows of Fig. 7, where we show the generated distributions without the encoder.", "We can see that without the encoder, the distributions of the all of the dialogue act offsets are almost exactly the same.", "Inference Network Ablation In rows 6 and 7 of Table 1 we present an ablation of the inference network.", "We can see that removing either the acoustic or linguistic features from the user's features is detrimental to the results.", "An interesting irregular-1.5 1.0 0.5 0.0 0.5 1.0 1.5 Offset (Seconds) TruePredicted", "acts using two different w KL settings.", "ity is observed in the results for the model that uses only acoustic features (row 6): the MAE is unusually high, relative to the LBCE .", "In all other rows, lower LBCE corresponds to lower MAE.", "However, row 6 has the second lowest LBCE , while also having the second highest MAE.", "In order to examine this irregularity in more detail, we look at the generated distributions from the inference ablation, shown in Fig. 
8.", "We observe that the linguistic features are better for predicting the mode of the distribution whereas the acoustic features are better at modelling the -100 ms to +150 ms region directly preceding the mode.", "Since word embeddings are triggered 100 ms after the end of the word, the linguistic features can be used to generate modal offsets in the 150 ms to 200 ms bin.", "We propose that, in the absence of linguistic features, there is more uncertainty about when the user's turn-end has occurred.", "Since the majority of all ground-truth offsets occur after the user has finished speaking, the unusually high MAE in row 6 could be attributed to this uncertainty in whether the user has finished speaking.", "RTNet-VAE Performance In rows 8 through 12 of Table 1 we show the results of our experiments with RTNet-VAE with different settings of w KL .", "As w KL is increased, the LBCE loss increases while the LKL loss decreases.", "Examining some example distributions of dialogue acts generated by RTNet-VAE using w KL = 10 4 (shown in the fifth rows of Fig. 7) we can see that RTNet-VAE is capa-1.00 0.75 0.50 0.25 0.00 0.25 0.50 0.75 1.00 Offset (Seconds) 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 Interpolated Distributions agree-accept reject interpolated Figure 10: Interpolated distributions ble of generating distributions that are of a similar quality to those generated by RTNet (shown in the second row).", "We also observe that RTNet-VAE using w KL = 10 4 produces competitive results, in comparison to the full model.", "These observations suggest that the inclusion of the VAE in pipeline does not severely impact the overall performance.", "In Fig. 9 we show the latent variable z generated using RTNet-VAE and plotted using t-SNE (van der Maaten and Hinton, 2008).", "To show the benefits of imposing the Gaussian prior, we show plots for with w KL = 0 .", "0 and w KL = 10 3 .", "The plots show the two-dimensional projection of four different types of dialogue act responses: statements (sd), no (nn), yes (ny), and backchannels", "(b).", "We can observe that for both settings, the latent space is able to organize the responses by dialogue act type, even though it is never explicitly trained on dialogue act labels.", "For example, in both cases, statements (shown in blue) are clustered at the opposite side of the distribution from backchannels (shown in red).", "However, in the case of w KL = 0 .", "0 there are holes in the latent space.", "For practical applications such as interpolation of vector representations of dialogue acts (discussed in the next paragraph), we would like a space that does not contain any of these holes since they are less likely to have semantically meaningful interpretations.", "When the Gaussian prior is enforced (Fig. 9b) we can see that the space is smooth and the distinctions between dialogue acts is still maintained.", "Latent Space Applications As mentioned in Section 2.2, part of the appeal in using the VAE in our model is that it enables us to discard the response encoding stage.", "We can exploit the smoothness of the latent space to skip the encoding stage by sampling directly from the trained latent space.", "We can approximate the distribution of latent variables for individual dialogue act response types using isotropic Gaussians.", "This enables us to effi-ciently represent the dialogue acts using mean and standard-deviation vectors, a pair for each dialogue act.", "Fig. 
"Fig. 7 shows examples of distributions generated using Gaussian approximations of the latent space distributions in the final rows.", "We can see that the generated outputs have similar properties to the true distributions.", "We can use the same parameterized vector representations to interpolate between different dialogue act parameters to achieve intermediate distributions.", "This dimensional approach is flexible in that we give the dialogue manager (DM) more control over the details of the distribution.", "For example, if the objective of the SDS was to generate an agree dialogue act, we could control the degree of agreement by interpolating between disagree and agree vectors.", "Figure 10 shows an example of a generated interpolated distribution.", "We can see that the properties of the interpolated distribution (e.g. mode, kurtosis) are perceptually in between the reject and accept distributions.", "It has been shown that response timings vary based on the semantic content of dialogue responses and the preceding turn (Levinson and Torreira, 2015), and that listeners are sensitive to these fluctuations in timing (Bogels and Levinson, 2017).", "However, the question of whether certain response timings within different contexts are considered more realistic than others has not been fully investigated.", "We design an online listening test to answer two questions: (1) Given a preceding turn and a response, are some response timings considered by listeners to be more realistic than others?", "(2) In cases where listeners are sensitive to the response timing, is our model more likely to generate responses that are considered realistic than a system that generates a modal response time?", "Participants were asked to make A/B choices between two versions of a turn pair, where each version had a different response offset.", "Participants were asked: Which response timing sounds like it was produced in the real conversation?", "The turn pairs were drawn from our dataset and were limited to pairs where the response was either dispreferred or a backchannel.", "We limited the chosen pairs to those with ground truth offsets that were either classified as early or late.", "[Figure 11a: distribution of all offsets, with Mode = +157 ms and the early/late cutoff points.]", "We classified offsets as early, modal, or late by segmenting the distribution of all of the offsets in our dataset into three partitions, as shown in Fig. 11a.", "The cutoff points for the early and late offsets were estimated using a heuristic where we split the offsets in our dataset into two groups at the mode of the distribution (157 ms) and then used the median values of the upper (+367 ms) and lower (-72 ms) groups as the cutoff points.", "We selected eight examples of each dialogue act (four early and four late).", "We generated three different versions of each turn pair: true, modal, and opposite.", "If the true offset was late, the opposite offset was the mean of the early offsets (-316 ms).", "If the true offset was early, the opposite offset was the mean of the late offsets (+760 ms).", "We had 25 participants (15 female, 10 male) who all wore headphones.", "We performed binomial tests for the significance of a given choice in each question.", "For the questions in the first half of the test, in which we compared true vs. opposite offsets, 10 of the 16 comparisons were found to be statistically significant (p < 0.05).",
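The cutoff heuristic above can be reproduced roughly as follows; the 50 ms histogram bin width is our assumption, since the text only reports the resulting mode (+157 ms) and cutoffs (-72 ms, +367 ms).

    import numpy as np

    def offset_cutoffs(offsets, bin_width=0.05):
        # Find the mode of the offset distribution, split the offsets into a
        # lower and an upper group at the mode, and use the medians of the
        # two groups as the early and late cutoff points.
        bin_edges = np.arange(min(offsets), max(offsets) + bin_width, bin_width)
        hist, edges = np.histogram(offsets, bins=bin_edges)
        mode = edges[np.argmax(hist)] + bin_width / 2
        lower_cut = np.median([o for o in offsets if o < mode])
        upper_cut = np.median([o for o in offsets if o >= mode])
        return lower_cut, mode, upper_cut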
05 ).", "In all of the significant cases the true offset was was considered more realistic than the opposite .", "In reference to our first research question, this result supports the conclusion that some responses are indeed considered to be more realistic than others.", "For the questions in the second half of the test, in which we compared true vs. modal offsets, six out of the 16 comparisons were found to be statistically significant.", "Of the six significant preferences, three were a preference for the true offset, and three were a preference for the modal offset.", "To investigate our second research question, we looked at the offset distributions generated by our model for each of the six significant preferences, shown in Fig. 11b.", "For the turn pairs where listeners preferred nonmodal offsets (top row), the distributions generated by our system deviate from the mode into the preferred area (highlighted in yellow).", "In pairs where listeners preferred modal offsets (bottom row) the generated distributions tend to have a mode near the overall dataset mode (shown in the green line).", "We can conclude, in reference to our second question, that in instances where listeners are sensitive to response timings it is likely that our system will generate response timings that are more realistic than a system that simply generates the mode of the dataset.", "In this paper, we have presented models that can be used to generate the turn switch offset distributions of SDS system responses.", "It has been shown in prior studies (e.g. (Bogels et al., 2019)) that humans are sensitive to these timings and that they can impact how responses are perceived by a listener.", "We would argue that they are an important element of producing naturalistic interactions that is often overlooked.", "With the advent of commercial SDS systems that attempt to engage users over extended multi-turn interactions (e.g. (Zhou et al., 2018)) generating realistic response behaviors is a potentially desirable addition to the overall experience.", "The ADAPT Centre for Digital Content Technology is funded under the SFI Research Centres Programme (Grant 13/RC/2106) and is co-funded under the European Regional Development Fund." ]
[ "abstain", "objective", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "objective", "method", "method", "abstain", "method", "result", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "method", "result", "result", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "method", "method", "method", "method", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "other" ]
[ "Conventional approaches to relation extraction usually require a fixed set of pre-defined relations.", "Such requirement is hard to meet in many real applications, especially when new data and relations are emerging incessantly and it is computationally expensive to store all data and re-train the whole model every time new data and relations come in.", "We formulate such a challenging problem as lifelong relation extraction and investigate memory-efficient incremental learning methods without catastrophically forgetting knowledge learned from previous tasks.", "We first investigate a modified version of the stochastic gradient methods with a replay memory, which surprisingly outperforms recent state-of-the-art lifelong learning methods.", "We further propose to improve this approach to alleviate the forgetting problem by anchoring the sentence embedding space.", "Specifically, we utilize an explicit alignment model to mitigate the sentence embedding distortion of the learned model when training on new data and new relations.", "Experiment results on multiple benchmarks show that our proposed method significantly outperforms the state-of-the-art lifelong learning approaches.", "The task of relation detection/extraction aims to recognize entity pairs' relationship from given contexts.", "As an essential component for structured information extraction, it has been widely used in downstream tasks such as automatic knowledge-based completion (Riedel et al., 2013) and question answering (Yih et al., 2015; Yu et al., 2017).", "Existing relation detection methods always assume a closed set of relations and perform onceCo-mentoringCodeanddataset can be found in this repository: https://github.com/hongwang600/Lifelong_Relation_Detection and-for-all training on a fixed dataset.", "While making the evaluation straightforward, this setting clearly limits the usage of these methods in realistic applications, where new relations keep emerging over time.", "To build an evolving system which automatically keeps up with the dynamic data, we consider a more practical lifelong learning setting (also called continual learning ) (Ring, 1994; Thrun, 1998; Thrun and Pratt, 2012), where a learning agent learns from a sequence of tasks, where each of them includes a different set of relations.", "In such scenarios, it is often infeasible to combine the new data with all previous data and re-train the model using the combined dataset, especially when the training set for each task is huge.", "To enable efficient learning in such scenarios, recent lifelong learning research (Kirkpatrick et al., 2016; Lopez-Paz and Ranzato, 2017) propose to learn the tasks incrementally, while at the same time preventing catastrophic forgetting (Mc-Closkey and Cohen, 1989; Ratcliff, 1990; McClelland et al., 1995; French, 1999), i.e., the model abruptly forgets knowledge learned on previous tasks when learning on the new task.", "Current lifelong learning approaches address such challenge by either preserving the training loss on previously learned tasks (GEM) (Lopez-Paz and Ran-zato, 2017), or selectively dimming the updates on important model parameters (EWC) (Kirk-patrick et al., 2016).", "These methods usually involve adding additional constraints on the model's parameters or the updates of parameters by utilizing stored samples.", "Despite the effectiveness of these methods on simple image classification tasks, there is little research validating the practical usage of these methods in realistic NLP tasks.", "In fact, when 
applying these methods to our relation extraction task, we observe that they underperform a simple baseline that updates the model parameters (i.e., learning by SGD) with a mix of stored samples from previous tasks and new samples from the incoming task.", "We further test this simple baseline on commonly used continual learning benchmarks and get similar observations.", "In this work, we thoroughly investigate two existing continual learning algorithms on the proposed lifelong relation extraction task.", "We observe that recent lifelong learning methods only operate on the models' parameter space or gradient space, and do not explicitly constrain the feature or embedding space of neural models.", "As we train the model on the new task, the embedding space might be distorted a lot, and become infeasible for previous tasks.", "We argue that the embedding space should not be distorted much in order to let the model work consistently on previous tasks.", "To achieve this, we propose an alignment model that explicitly anchors the sentence embeddings derived by the neural model.", "Specifically, the alignment model treats the saved data from previous tasks as anchor points and minimizes the distortion of the anchor points in the embedding space during lifelong relation extraction.", "The aligned embedding space is then utilized for relation extraction.", "Experiment results show that our method outperforms the state-of-the-art significantly in accuracy while remaining efficient.", "The main contributions of this work include: We introduce the lifelong relation detection problem and construct lifelong relation detection benchmarks from two datasets with large relation vocabularies: SimpleQuestions (Bordes et al., 2015) and FewRel (Han et al., 2018).", "We propose a simple memory replay approach and find that current popular methods such as EWC and GEM underperform this method.", "We propose an alignment model which aims to alleviate the catastrophic forgetting problem by slowing down the fast changes in the embedding space for lifelong learning.", "Generic definition of lifelong learning problems In lifelong learning, there is a sequence of K tasks {T^(1), T^(2), ..., T^(K)}.",
, T ( K ) } .", "Each task T ( k ) is a conventional supervised task, with its own label set L ( k ) and training/validation/testing data ( T ( k ) train , T ( k ) valid , T ( k ) test ), each of which is a set of labeled instances { ( x ( k ) , y ( k ) ) } .", "Note that x ( k ) is the input data of the context and candidate relations, and y ( k ) is the ground-truth label.", "The goal of lifelong learning is to learn a classification model f .", "At each step k , f observes the task T ( k ) , and optimizes the loss function on its training data with a loss function (cid:96) ( f ( x ) , y ) .", "At the same time, we require the model f learned after step k could still perform well on the previous k 1 tasks.", "That is, we evaluate the model by using the average accuracy of k tasks at each step as 1 k (cid:80) kj =1 acc f,j .", "To make f perform well on the previous tasks, during the lifelong learning process, we usually allow the learner to maintain and observe a memory M of samples from the previous tasks.", "Practically, with the growth of the number of tasks, it is difficult to store all the task data 1 .", "Therefore, in lifelong learning research, the learner is usually constrained on the memory size, denoted as a constant B .", "Thus at each step k , the learner is allowed to keep training samples from {T ( j ) | j = 1 , . . . , k 1 } with size less or equal to B .", "Lifelong relation detection In this paper we introduce a new problem, lifelong relation detection .", "Relation detection is an important task that aims to detect whether a relation exists between a pair of entities in a paragraph.", "In many real-world scenarios, relation detection naturally forms a lifelong learning problem because new relation types emerge as new knowledge is constantly being discovered in various domains.", "For example, in the Wikidata (Vrandecic and Krotzsch, 2014) knowledge graph, the numbers of new items and properties are constantly increasing 2 .", "So we need to keep collecting data and updating the model over time in order to handle newly added relations.", "The problem of lifelong relation detection has the same definition as above with only one difference: during prediction time, we hope to know whether an input paragraph contains any relation observed before.", "Therefore at time k , given an input x from task j (cid:48) <k , instead of predicting an y L ( j (cid:48) ) , we predict y ( k ) (cid:83) kj =1 L ( j ) .", "That says, the candidate label set is expanding as the learner observes more tasks, and the difficulty of each previous task is increasing over time as well.", "1 Even the data can be stored, it is unrealistic to make full usage of the stored data.", "For example, random sampling from all previous task data (e.g., for the methods in Section 4) will become statistically inefficient.", "https://www.wikidata.org/wiki/ Wikidata:News", "Lifelong MNIST MNIST is a dataset of handwriting ten digits (LeCun, 1998), where the input for each sample is an image, and the label is the digit the image represents.", "Two variants of the MNIST dataset were proposed for lifelong learning evaluation.", "One is MNIST Permutations (Kirkpatrick et al., 2016), where a task is created by rearranging pixels according to a fixed permutation.", "K different permutations are used to generate K tasks.", "Another variant is MNIST Rotations (Lopez-Paz and Ranzato, 2017), where each task is created by rotating digits by a fixed angle.", "K angles are chosen for creating K tasks.", "In our experiments, we 
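The average-accuracy metric defined earlier in this section is a one-liner; `accuracy_fn` is a stand-in for whatever per-task evaluation the classifier exposes, and is our assumption rather than part of the paper.

    def average_accuracy(accuracy_fn, model, test_sets, k):
        # ACC at step k: (1/k) * sum_{j=1..k} acc_{f,j}, i.e. the mean
        # accuracy of the current model f over all tasks seen so far.
        return sum(accuracy_fn(model, test_sets[j]) for j in range(k)) / k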
"Lifelong CIFAR CIFAR (Krizhevsky and Hinton, 2009) is a dataset used for object recognition, where the input is an image, and the label is the object the image contains.", "Lifelong CIFAR100 (Rebuffi et al., 2017a) is a variant of CIFAR-100 (CIFAR with 100 classes) created by dividing the 100 classes into K disjoint subsets.", "Each task contains samples from the 100/K classes in one subset.", "Following (Lopez-Paz and Ranzato, 2017), we have K = 20 tasks, where each of them has 5 labels.", "Lifelong FewRel FewRel (Han et al., 2018) is a recently proposed dataset for few-shot relation detection.", "There are 80 relations in this dataset.", "We choose to create a lifelong benchmark based on FewRel because there is a sufficient number of relation labels.", "We extract the sentence-relation pairs from FewRel and build our lifelong FewRel benchmark as follows.", "Each sample contains a sentence with the ground-truth relation it refers to, and a set of 10 randomly chosen false relations from the whole relation set.", "The model is required to distinguish the right relation from the candidates.", "We apply K-Means over the averaged word embeddings of the relation names and divide the 80 relations into 10 disjoint clusters.", "This results in 10 tasks in this benchmark, and each task contains relations from one cluster.", "Candidate relations will be masked if they do not appear in the history tasks.", "Lifelong SimpleQuestions SimpleQuestions is a KB-QA dataset containing single-relation questions (Bordes et al., 2015).", "(Yu et al., 2017) created a relation detection dataset from SimpleQuestions that contains samples of question-relation pairs.", "For each sample, a candidate set of relations is also provided.", "Similar to lifelong FewRel, we divide the relations into 20 disjoint clusters by using K-Means.", "This results in 20 tasks, and each task contains relations from one cluster.",
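The benchmark construction above can be sketched as follows; the underscore tokenization of relation names and the word-vector lookup are our assumptions, with scikit-learn's KMeans standing in for the clustering step.

    import numpy as np
    from sklearn.cluster import KMeans

    def split_relations_into_tasks(relation_names, word_vectors, n_tasks, seed=0):
        # Embed each relation as the average of its tokens' word vectors,
        # then partition the relations into n_tasks disjoint clusters
        # (10 for FewRel, 20 for SimpleQuestions); one cluster = one task.
        feats = np.stack([np.mean([word_vectors[t] for t in name.split('_')],
                                  axis=0)
                          for name in relation_names])
        cluster = KMeans(n_clusters=n_tasks, random_state=seed).fit_predict(feats)
        return [[r for r, c in zip(relation_names, cluster) if c == t]
                for t in range(n_tasks)]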
"Catastrophic forgetting is one of the biggest obstacles in lifelong learning.", "The problem is particularly severe in neural network models, because the learned knowledge of previous tasks is stored as network weights, while a slight change of weights when learning on the new task could have an unexpected effect on the behavior of the models on the previous tasks (French, 1999).", "Currently, the memory-based lifelong learning approaches, which maintain a working memory of training examples from previous tasks, have proved to be one of the best solutions to the catastrophic forgetting problem.", "In this section, we first propose a memory-based lifelong learning approach, namely Episodic Memory Replay (EMR), which uses the working memory by sampling stored samples to replay in each iteration of the new task learning.", "Surprisingly, such a straightforward approach with a clear motivation was never used in previous research.", "We first compare EMR with the state-of-the-art memory-based algorithm Gradient Episodic Memory (GEM).", "We also show that EMR outperforms GEM on many benchmarks, suggesting that it is likely to be among the top-performing lifelong learning algorithms, and it should never be ignored for comparison when developing new lifelong learning algorithms.", "EMR is a modification of stochastic gradient descent algorithms.", "It replays randomly sampled data from memory while training on a new task, so the knowledge of previous tasks could be retained in the model.", "After training on each task k, EMR selects several training examples to store in the memory M, denoted as M ∩ T_train^(k) 3 .", "3 (Rebuffi et al., 2017b) propose to dynamically change the size of the memory set for each task during training; followup work and this paper all use fixed sets, and we will investigate the usage of dynamic sets in future work.", "To handle the scalability, EMR stochastically replays the memory.", "Specifically, when training on task k with each mini-batch D_train^(k) ⊆ T_train^(k), EMR samples from the memory M to form a second mini-batch D_replay^(k) ⊆ M.", "Then two gradient steps are taken on the two mini-batches D_train^(k) and D_replay^(k).", "Note that EMR could work with any stochastic gradient optimization algorithm, such as SGD, Adagrad, AdaDelta, and Adam, to optimize the model f with the mixed mini-batches.", "We try two variations of D_replay^(k) sampling: first, task-level sampling, which samples from one previous task j each time, i.e., D_replay^(k) ⊆ M ∩ T_train^(j).", "Second, sample-level sampling, which samples over the whole memory, i.e., D_replay^(k) ⊆ M.", "The two approaches differ in the task instance sampling probability.", "The task-level approach assumes a uniform distribution over tasks, while the sample-level approach has a marginal distribution on tasks that is proportional to the number of their training data in M 4 .", "4 The two approaches hence favor different evaluation metrics: the former fits macro averaging better and the latter fits micro averaging better.", "When tasks are balanced like MNIST and CIFAR, or when the stored data in the memory for different tasks are balanced, the two approaches become equivalent.", "However, the sample-level strategy could sometimes make the code implementation more difficult: for some lifelong learning benchmarks such as MNIST Rotation, MNIST Permutation, and CIFAR-100 used in (Lopez-Paz and Ranzato, 2017), the tasks could differ from each other in the input or output distribution, leading to different computation graphs for different training examples.", "From our preliminary study, the task-level approach could always give results as good as those of the sample-level approach on our lifelong relation detection benchmarks (see Table 1), so in our experiments in Section 6 we always use the task-level approach.",
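A task-level EMR update can be sketched as below; the memory layout (a dict from task id to stored examples) and the loss interface are our assumptions, and any stochastic optimizer can stand in for `optimizer`.

    import random

    def emr_step(model, optimizer, loss_fn, batch_new, memory, m):
        # One EMR iteration: a gradient step on the new-task mini-batch,
        # then a second step on a mini-batch replayed from one previous task.
        batches = [batch_new]
        if memory:
            task = random.choice(list(memory))        # task-level sampling
            stored = memory[task]
            batches.append(random.sample(stored, min(m, len(stored))))
        for batch in batches:
            optimizer.zero_grad()
            loss_fn(model, batch).backward()
            optimizer.step()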
( x ( j ) i (cid:48) , y ( j ) i (cid:48) ) is a training instance in T ( j ) that was stored in memory M .", "Then the model f is updated along the gradient g that solves the following problem: min g || g g ( k ) train || 2 s.t. (cid:104) g, g ( j ) task (cid:105) 0 , j = 1 , . . . , k 1 .", "g is the closest gradient to the gradient on the current training mini-batch, g ( k ) train , without decreasing performance on previous tasks much since the an-gle between g and g ( j ) task is smaller than 90 .", "Time Complexity One difference between EMR and GEM is that EMR deals with unconstrained optimization and does not require the gradient projection, i.e., solving g .", "But since the model f is deep networks, empirically the time complexity is mainly dominated by the computation of forward and backward passes.", "We analyze the time complexity as below: In task k , suppose the mini-batch size is | D | and the memory replay size is m , our EMR takes | D | + m forward/backward passes in each training batch.", "Note that m is a fixed number and set to be equal to the number of instances stored for each previous task in our experiments.", "While for GEM, it needs to compute the gradient of all the data stored in the memory M , thus | D | + |M| for-ward/backward passes are taken.", "Its complexity is largely dominated by the size |M| (upper bounded by the budget B ).", "When the budget B is large, with the number of previous tasks increases, M grows linearly, and GEM will become infeasible.", "Superior Empirical Results of EMR The EMR algorithm is much simpler compared to the GEM.", "However, one interesting finding of this paper is that the state-of-the-art GEM is unnecessarily more complex and more inefficient, because EMR, a simple stochastic gradient method with memory replay, outperforms it on several benchmarks.", "The results are shown in Table 1. The numbers are the average accuracy, i.e. 
"Time Complexity One difference between EMR and GEM is that EMR deals with unconstrained optimization and does not require the gradient projection, i.e., solving for g.", "But since the model f is a deep network, empirically the time complexity is mainly dominated by the computation of forward and backward passes.", "We analyze the time complexity as below: in task k, suppose the mini-batch size is |D| and the memory replay size is m; our EMR takes |D| + m forward/backward passes in each training batch.", "Note that m is a fixed number, set to be equal to the number of instances stored for each previous task in our experiments.", "GEM, in contrast, needs to compute the gradient of all the data stored in the memory M, thus |D| + |M| forward/backward passes are taken.", "Its complexity is largely dominated by the size |M| (upper bounded by the budget B).", "When the budget B is large, as the number of previous tasks increases, M grows linearly, and GEM will become infeasible.", "Superior Empirical Results of EMR The EMR algorithm is much simpler compared to GEM.", "However, one interesting finding of this paper is that the state-of-the-art GEM is unnecessarily more complex and more inefficient, because EMR, a simple stochastic gradient method with memory replay, outperforms it on several benchmarks.", "The results are shown in Table 1. The numbers are the average accuracy, i.e. (1/k) Σ_{j=1}^{k} acc_{f,j}, at the last time step.", "For both algorithms, the training data is randomly sampled to store in the memory, following (Lopez-Paz and Ranzato, 2017).", "On lifelong relation detection, EMR outperforms GEM on both of our created benchmarks.", "To further show its generalizability, we apply EMR to the previous lifelong MNIST and CIFAR benchmarks and compare to the results in (Lopez-Paz and Ranzato, 2017) with all the hyperparameters set the same.", "Still, EMR performs similarly to GEM except for the MNIST Rotation benchmark 5 .", "From the above results, we learned the lesson that previous lifelong learning approaches actually fail to show improvement compared to doing memory replay in a stochastic manner.", "We hypothesise that GEM performs worse when there is positive transfer among tasks, making the gradient projection an inefficient way to use gradients computed from memory data.", "Therefore, in the next section, we start with the basic EMR and focus on more efficient usage of the historical data.", "5 Even on MNIST Rotation, it has achieved a competitive result, since the conventional training on shuffled data from all the tasks in this benchmark gives 0.83 according to (Lopez-Paz and Ranzato, 2017).", "Based on our basic EMR, this section proposes our solution to lifelong relation detection.", "We improve the basic EMR with two motivations: (1) previous lifelong learning approaches work on the parameter space.", "However, the number of parameters in a deep network is usually huge.", "Also, deep networks are highly non-linear models, and the parameter dimensions have complex interactions, making the Euclidean space of parameters not a proper delegate of model behavior (French, 1999).", "That is, a slight change in parameter space could affect the model prediction unexpectedly.", "The above two reasons make it hard to maintain deep network behaviors on previous tasks with constraints or Fisher information.", "Therefore, we propose to alleviate catastrophic forgetting in the hidden space (i.e., the sentence embedding space).", "(2) For each task, we want to select the most informative samples to store in the memory, instead of random sampling as in (Lopez-Paz and Ranzato, 2017).", "In this way, the budget of memory can be better utilized.", "This section introduces our approach, which performs lifelong learning in the embedding space, i.e., the Embedding Aligned EMR (EA-EMR).", "In EA-EMR, for each task k, besides storing the original training data (x^(k), y^(k)) in the memory M, we also store the embeddings of x^(k).", "In the future, after a new task is trained, the model parameters are changed, and thus the embeddings for the same (x^(k), y^(k)) would be different.", "Intuitively, a lifelong learning algorithm should allow such parameter changes but ensure the changes do not distort the previous embedding spaces too much.", "Our EA-EMR alleviates the distortion of the embedding space with the following idea: if the embedding spaces at different steps are not distorted much, there should exist a simple enough transformation a (e.g., a linear transformation in our case) that could transform the newly learned embeddings to the original embedding space, without much performance degeneration on the stored instances.", "So we propose to add a transformation a on top of the original embedding, and learn the basic model f and the transformation a automatically.", "Specifically, at the k-th task, we start with the model f^(k−1) and the transformation a^(k−1) that were trained on the previous k−1 tasks.",
"We want to learn the basic model f and the transformation a such that the performance on the new task and on the stored instances is optimized without distorting the previous embedding spaces much: min_{f(·),a(·)} Σ_{(x,y)∈D_train^(k)} ℓ(a(f(x)), y) + Σ_{(x,y)∈D_replay^(k)} ( ℓ(a(f(x)), y) + ‖a(f(x)) − a^(k−1)(f^(k−1)(x))‖² ).", "We propose to minimize the above objective through two steps.", "In the first step, we optimize the basic model f by: min_{f(·)} Σ_{(x,y)∈D_train^(k) ∪ D_replay^(k)} ℓ(a^(k−1)(f(x)), y).", "This step mainly focuses on learning the new task without a performance drop on the stored samples.", "In the second step, we optimize a to keep the embedding space of the current task and restore the previous embedding space of all stored samples: min_{a(·)} Σ_{(x,y)∈D_train^(k)} ‖a(f(x)) − a^(k−1)(f(x))‖² + Σ_{(x,y)∈D_replay^(k)} ‖a(f(x)) − a^(k−1)(f^(k−1)(x))‖².", "Embedding Alignment on Relation Detection Model We introduce how to add embedding alignment to relation detection models.", "The basic model we use is a ranking model that is similar to HR-BiLSTM (Yu et al., 2017).", "Two BiLSTMs (Hochreiter and Schmidhuber, 1997) are used to encode the sentence and the relation respectively, given their GloVe word embeddings (Pennington et al., 2014).", "The cosine similarity between the sentence and relation embeddings is computed as the score.", "The relation with the maximum score is predicted by the model for the sentence.", "A ranking loss is used to train the model 6 .", "This base model is our model f, which is trained on a new task k at each step and results in an updated model f^(k).", "Our proposed approach (Figure 1) inserts an alignment model a to explicitly align the embedding space for stored instances and maintain the embedding space of the current task.", "Note that the label y (the relation here) also has an embedding, so it needs to pass through the alignment model a as well.", "6 Though the basic model is simple, it achieves reasonable results on the two datasets when training with all the data, i.e., 0.837 on FewRel and 0.927 on SimpleQuestions.",
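The second optimization step can be sketched in PyTorch as below; only the linear alignment map `a` (e.g. torch.nn.Linear(d, d)) receives gradients, while the step-1 encoder `f` and the frozen previous-task copies `a_prev` and `f_prev` supply constant targets. The reduction (sum over dimensions, mean over the batch) is our assumption.

    import torch

    def alignment_step_loss(a, a_prev, f, f_prev, x_new, x_replay):
        with torch.no_grad():
            h_new, h_old = f(x_new), f(x_replay)
            tgt_new = a_prev(h_new)              # keep the current task's space
            tgt_old = a_prev(f_prev(x_replay))   # restore the anchors' old space
        keep = ((a(h_new) - tgt_new) ** 2).sum(-1).mean()
        restore = ((a(h_old) - tgt_old) ** 2).sum(-1).mean()
        return keep + restore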
"When the budget of memory is relatively small, how to select previous samples will greatly affect the performance.", "Ideally, in order to make the memory best represent a previous task, we hope to choose diverse samples that best approximate the distribution of the task data.", "However, distribution approximation itself is a hard problem and will be inefficient due to its combinatorial optimization nature.", "Therefore, many recent works such as GEM ignore this step and randomly select samples from each task to store in the memory.", "Rebuffi et al. (2017b) proposed to select exemplars that best approximate the mean of the distribution.", "This simplest form of distribution approximation does not give an improvement in our experiments because of the huge information loss.", "Therefore, we propose a better approach of sample selection by clustering over the embedding space from the model, and choosing one representative from each cluster to store in the memory.", "More specifically, the embedding after the alignment model is used to represent the input, because the model makes predictions based on it.", "Then we apply K-Means (the number of clusters equals the budget given to the specific task) to cluster all the samples of the task.", "For each cluster, we select the sample closest to the centroid to store in the memory.", "We leave more advanced approaches of representative sample selection and their empirical comparison to future work.",
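The K-Means selection can be sketched as follows, with scikit-learn standing in for the clustering; `embeddings` are assumed to be the post-alignment sentence representations of all task samples.

    import numpy as np
    from sklearn.cluster import KMeans

    def select_memory(samples, embeddings, budget, seed=0):
        # One cluster per memory slot; store the sample nearest to each
        # centroid so the memory covers diverse regions of the space.
        km = KMeans(n_clusters=budget, random_state=seed).fit(embeddings)
        chosen = []
        for c in range(budget):
            members = np.where(km.labels_ == c)[0]
            dists = np.linalg.norm(embeddings[members] - km.cluster_centers_[c],
                                   axis=1)
            chosen.append(samples[members[np.argmin(dists)]])
        return chosen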
"We conduct experiments on our lifelong benchmarks, lifelong SimpleQuestions (Bordes et al., 2015) and lifelong FewRel (Han et al., 2018), to compare our proposed methods EA-EMR, EA-EMR without Selection (EA-EMR NoSel), EA-EMR without Alignment (EA-EMR NoAlign), and EMR with the following baselines.", "EWC (Kirkpatrick et al., 2016), which slows down updates on important parameters by adding L2 regularization of parameter changes to the loss.", "GEM (Lopez-Paz and Ranzato, 2017), which projects the gradient to benefit all the tasks so far by keeping a constraint for each previous task.", "On both FewRel and SimpleQuestions, the number of epochs to train on each task is set to 3.", "The learning rate for the basic model is set to 0.001.", "The hidden size of the LSTM is set to 200.", "The batch size is set to 50.", "For each sample in the memory, 10 candidate relations are randomly chosen from all observed relations to alleviate the problem that new relations are emerging incessantly.", "Parameters for our model and baselines are set as follows.", "For EA-EMR and EA-EMR NoSel, when training the alignment model, the learning rate is set to 0.0001, and the number of training epochs is set to 20 and 10 for FewRel and SimpleQuestions, respectively.", "For AGEM, 100 samples are randomly chosen from all the previous tasks to form a constraint.", "For EWC, we set the balancing parameter (λ) to 100.", "For GEM and the EMR-related methods, the memory size for each task is set to 50.", "Table 2: Accuracy on the whole testing data (Whole) and average accuracy on all observed tasks (Avg) after the last time step. Method | FewRel Whole / Avg | SimpleQuestions Whole / Avg — Origin: 0.189 / 0.208 | 0.632 / 0.569; Baselines — GEM: 0.492 / 0.598 | 0.841 / 0.796; AGEM: 0.361 / 0.425 | 0.776 / 0.722; EWC: 0.271 / 0.302 | 0.672 / 0.590; Ours — Full EA-EMR: 0.566 / 0.673 | 0.878 / 0.824; w/o Selection: 0.564 / 0.674 | 0.857 / 0.812; w/o Alignment: 0.526 / 0.632 | 0.869 / 0.820; w/o Alignment but keep the architecture: 0.545 / 0.655 | 0.871 / 0.813; EMR Only: 0.510 / 0.620 | 0.852 / 0.808.", "6.2 Lifelong Relation Detection Results Evaluation Metrics We use two metrics to evaluate the performance of the model: the average performance on all seen tasks after time step k, which highlights the catastrophic forgetting problem, ACC_avg = (1/k) Σ_{i=1}^{k} acc_{f,i}, and the accuracy on the whole testing data of all tasks, ACC_whole = acc_{f, D_test}.", "Results on FewRel and SimpleQuestions We run each experiment 5 times independently by shuffling the sequence of tasks, and the average performance is reported.", "The average accuracy over all observed tasks during the whole lifelong learning process is presented in Figure 2, and the accuracy on the whole testing data during the process is shown in Appendix A.1.", "We also list the result at the last step in Table 2. From the results, we can see that EWC and GEM are better than the Origin baseline on both datasets, which indicates that they are able to reduce the catastrophic forgetting problem.", "However, our EA-EMR performs significantly better than these previous state-of-the-art methods.", "The proposed EMR method itself achieves better results than all baselines on both datasets.", "The ablation study shows that both the selection and the alignment modules help on both tasks.", "The Effect of Embedding Alignment To investigate the effect of our embedding alignment approach, we conduct two ablation studies as below: First, we remove both the alignment loss in equation 5.1 and the alignment module a, which results in a significant drop in most of the cases (the line w/o Alignment in Table 2).", "Second, to make sure that our good results do not come from introducing a deeper model with the module a, we propose to only remove the embedding alignment loss, but keep everything else unchanged.", "That is, we still keep the module a and the training steps, with the only change being to replace the loss in step 2 with the one in step 1 (the line w/o Alignment but keep the architecture in Table 2).", "We can see that this decreases the performance a lot.", "The above results indicate that by explicitly doing embedding alignment, the performance of the model can be improved by alleviating the distortion of the previous embedding space.", "Comparison of Different Sample Selection Strategies Here we compare different selection methods on lifelong FewRel and SimpleQuestions.", "EMR Only randomly chooses samples.", "Rebuffi et al. (2017b) propose to choose samples that can best approximate the mean of the distribution.", "We compare their sampling strategy (denoted as iCaRL) with our proposed method (K-Means), which encourages choosing diverse samples by taking the central sample of each cluster in the embedding space.", "From the results in Table 3, we can see that our method outperforms iCaRL and the random baseline, while iCaRL is not significantly different from the random baseline.", "Lifelong Learning without Catastrophic Forgetting Recent lifelong learning research mainly focuses on overcoming the catastrophic forgetting phenomenon (French, 1999; McCloskey and Cohen, 1989; McClelland et al., 1995; Ratcliff, 1990), i.e., knowledge of previous tasks is abruptly forgotten when learning on a new task.", "Existing research mainly follows two directions: the first is the memory-based approach (Lopez-Paz and Ranzato, 2017; Anonymous, 2019), which saves some previous samples and optimizes a new task with a forgetting cost defined on the saved samples.", "These methods have shown strength in alleviating catastrophic forgetting, but the computational cost grows rapidly with the number of previous tasks.", "The second direction is to consolidate parameters that are important to previous tasks (Kirkpatrick et al., 2016; Liu et al., 2018; Ritter et al., 2018; Zenke et al., 2017).", "For example, Elastic Weight Consolidation (EWC) (Kirkpatrick et al., 2016) slows down learning on weights that are important to previous tasks.", "These methods usually do not need to save any previous data and only train on each task once.", "But their abilities to overcome catastrophic forgetting are limited.",
"Lifelong Learning with Dynamic Model Architecture There is another related direction on dynamically changing the model structure (i.e., adding new modules) in order to learn the new task without interfering with the learned knowledge of previous tasks, such as (Xiao et al., 2014; Rusu et al., 2016; Fernando et al., 2017).", "These approaches could successfully prevent forgetting.", "However, they do not suit many lifelong settings in NLP.", "First, they cannot benefit from the positive transfer between tasks.", "Second, the size of the model grows dramatically with the number of observed tasks, which makes it infeasible for real-world problems where there are a lot of tasks.", "Remark It is worth noting that the term lifelong learning is also widely used in (Chen et al., 2015; Chen, 2015; Shu et al., 2016, 2017), which mainly focus on how to represent, reserve, and extract knowledge of previous tasks.", "These works belong to a research direction different from lifelong learning without catastrophic forgetting.", "In this paper, we introduce lifelong learning into relation detection, and find that two state-of-the-art lifelong learning algorithms, GEM and EWC, are outperformed by a simple memory replay method, EMR, on many benchmarks.", "Based on EMR, we further propose to use embedding alignment to alleviate the problem of embedding space distortion, which we think is one reason that causes catastrophic forgetting.", "Also, we propose to choose diverse samples to store in the memory by conducting K-Means in the model embedding space.", "Experiments verify that our proposed methods significantly outperform other baselines.", "This research was supported in part by a UCSB Chancellor's Fellowship, an IBM Faculty Award, and a DARPA Grant D18AP00044 funded under the DARPA YFA program.", "The authors are solely responsible for the contents of the paper, and the opinions expressed in this publication do not reflect those of the funding agencies." ]
[ "abstain", "abstain", "objective", "objective", "objective", "objective", "objective", "abstain", "abstain", "other", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "objective", "method", "objective", "result", "objective", "method", "objective", "abstain", "abstain", "result", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "result", "objective", "objective", "objective", "other", "other" ]
[ "Conditioned dialogue generation suffers from the scarcity of labeled responses.", "In this work, we exploit labeled non-dialogue text data related to the condition, which are much easier to collect.", "We propose a multi-task learning approach to leverage both labeled dialogue and text data.", "The 3 tasks jointly optimize the same pre-trained Transformer conditioned dialogue generation task on the labeled dialogue data, conditioned language encoding task and conditioned language generation task on the labeled text data.", "Experimental results show that our approach outperforms the state-of-the-art models by leveraging the labeled texts, and it also obtains larger improvement in performance comparing to the previous methods to leverage text data.", "General conversational models pre-trained on large text data (Radford et al., 2018; Devlin et al., 2018) or human-to-human conversation data (Zhang et al., 2019; Bao et al., 2019) have shown excellent performance in generating fluent and diverse responses.", "In addition to general conversation, we are more and more faced with the problem of conditioned conversation that tunes the dialogue toward a specific style or domain.", "For example, we might specify a condition as the vocabulary frequently used by a person and ask the system to mimic the speaking style of the person, or a topic-related vocabulary and ask the chatbot to discuss the given topic.", "Conditioned response generation has been extensively explored using RNN-based sequence-to-sequence models, under different conditions, e.g. persona (Li et al., 2016b), topic (Xing et al., 2017), emotion (Zhou et al., 2018), situations (Sato et al., 2017), and so on.", "However, only a few existing studies considered using pre-training based models (Zheng et al., 2019; Lin et al., 2019).", "The basic idea in these previous works is to utilize a parametric vector to represent a condition and then use it in the decoder for conditioned generation.", "However, the key issue in conditioned dialogue generation is the availability of labeled responses (Zhou and Wang, 2018), and pre-training on unlabeled text or dialogue data does not help much.", "Therefore, the motivation of this work is to leverage labeled text (non-dialogue) data that are much easier to collect than labeled dialogue data as supplement.", "These data can be, for example, texts written by the same person (for a persona condition), within the same topic domain (for a topic condition), etc.", "The idea is inspired by response style transfer (Luan et al., 2017; Niu and Bansal, 2018), which uses a text corpus to learn a style and then transfer the style to dialogue.", "Based on their success, we assume that the labeled text data can contribute to create better representations of conditions and better utilization of conditions in natural language generation.", "In this work, we propose a multi-task learning approach to leverage both labeled dialogue and text data.", "We use 3 tasks to jointly optimize the same pre-trained Transformer conditioned dialogue generation task on the labeled dialogue data, conditioned language encoding task and conditioned language generation task on the labeled text data.", "Our assumption is that the two other tasks can help in our final goal of conditioned dialogue generation: conditioned language generation is the base of conditioned response generation, and conditioned language encoding using bi-directional attention can efficiently encode condition-related expressions and lead to better condition 
representations.", "We apply different input representations, self-attention masks, and random mask strategies to differentiate the 3 tasks.", "Regardless of these differences, the training objectives of these tasks are essentially the same, i.e. masked language modeling, and thus we can mix up 2 types of data / 3 tasks in one training batch, which prevents us from having the catastrophic forgetting problem (Phang et al., 2018).", "To efficiently leverage labeled data, first, our approach incorporates all types of data within the same framework, avoiding introducing ad hoc model components which are usually needed in some response style transfer methods in order to leverage extra texts.", "Second, we propose TF-IDF based masking which selects more condition-related tokens to mask, so that the model can exploit the labeled text data more for condition-related expressions rather than the general language features already captured by the pre-trained models.", "Third, for conditioned generation, we propose a non-parametric attention-based gating mechanism , which chooses between generating a general word (necessary for general function words) or a condition-related word at each position.", "We expect it to be more efficient than a parametric gating.", "Experimental results show that these approaches all bring improvements.", "Our approach is generalizable.", "In spite of many different labels, a condition essentially specifies some preferences on words, phrases, and sentence structures in the generated responses.", "Thus, a general approach can be instantiated to a specific case as long as the corresponding labeled dialogue data are available.", "We will run experiments with two instantiated models for personaand topic-related dialogue.", "Additionally, we will empirically show that our approach is robust and can even work with condition labels predicted by a classification model, e.g. 
"The contributions in this work are as follows 1 : We propose a simple and efficient multi-task learning approach based on a pre-trained Transformer that leverages different labeled data, i.e., dialogue and text, for conditioned response generation.", "The experiments under two different conditions, persona- and topic-based dialogue, show that our approach outperforms the state-of-the-art models by leveraging labeled texts, even when the labels are predicted by a model.", "Our approach obtains a larger improvement in performance compared to the existing methods of leveraging text data, which are based on an extra auto-encoder or sequential fine-tuning.", "1 The code is available at https://github.com/zengyan-97/MultiT-C-Dialog .", "2.1 Conditioned Dialogue Generation We categorize the related existing works into 3 categories.", "(1) Response generation conditioned on latent variables, where no extra annotations of dialogues are required (Serban et al., 2017; Shen et al., 2018; Gu et al., 2018; Chen et al., 2019; Gao et al., 2019; Bao et al., 2020).", "(2) Loosely-conditioned response generation, where a label designating the type of the response is required.", "For example, persona labels (Li et al., 2016b) designate the speaking styles of the responses, and topic labels (Xing et al., 2017; Dziri et al., 2019) or emotion labels (Li et al., 2017; Zhou et al., 2018; Rashkin et al., 2019) specify topic-related or emotion-related vocabularies.", "These studies usually utilize a parametric vector to encode a label, which is then used in the decoder to guide the generation.", "(3) Strictly-conditioned response generation, where extra knowledge is required to determine the content of the response, such as a persona profile (Zhang et al., 2018; Urbanek et al., 2019), a situation description (Rashkin et al., 2018; Urbanek et al., 2019), or a Wikipedia paragraph (Galley et al., 2019; Dinan et al., 2018), which are used to ground the response.", "The ability to perform strictly-conditioned generation is important, but these dialogues only account for a small fraction of open-domain conversation (Zheng et al., 2019).", "In many other cases, we are in the situation of loosely-conditioned dialogue.", "Furthermore, the state-of-the-art strictly-conditioned method (Wolf et al., 2019) can be easily added to other models as well (Shuster et al., 2019; Madotto et al., 2020), as it simply concatenates the extra knowledge with the dialogue history as the model input.", "In this work, we focus on loosely-conditioned response generation 2 .", "We will show that our approach is robust and can work with different types of labels, including those predicted by a classification model, e.g. LDA for topic labels.", "Therefore, our method is compatible with generation conditioned on latent variables by borrowing the power of a classification model.", "In this work, we do not touch on strictly-conditioned generation.", "However, this ability can be easily equipped as mentioned.", "Style transfer in dialogue aims to learn the style of a text corpus and then incorporate the style in dialogue generation.", "The transfer is usually between two styles, e.g. rude and polite, or adding a style to general dialogues.", "To leverage the text corpus, Luan et al.
"To leverage the text corpus, Luan et al. (2017) jointly train a seq2seq response generator and an extra auto-encoder, and Niu and Bansal (2018) first train an extra style classifier to guide the response generator using reinforcement learning.", "These works show that text data contain rich information about how to generate a specific type of text, which inspires us to exploit labeled text data in conditioned dialogue generation to alleviate the data scarcity issue.", "Style transfer is usually between two given styles.", "In contrast, conditioned dialogue generation could work with hundreds of condition labels simultaneously.", "As we will show in our experiments, the style transfer methods that utilize additional models, e.g. an auto-encoder, to leverage a text corpus are unscalable and inefficient for conditioned dialogue.", "In contrast, our approach, which leverages labeled text data without using ad hoc models and makes a tighter integration of labeled text data with labeled dialogue data, can more directly impact conditioned dialogue generation.", "We assume that we have two types of training data: a labeled dialogue corpus containing (dialogue history, condition, response) samples, and a labeled text corpus consisting of (condition, text) samples.", "Notice that the condition is any categorical label that indicates a type of responses or texts.", "Our goal is to generate a response y that exhibits the desired characteristics of the type of responses, given a dialogue history x and a condition c: $y = \arg\max_{y} P(y \mid x, c)$ (1).", "The Transformer in our work uses bi-directional attention on the source side to encode the dialogue history, and left-to-right attention on the target side to generate the response.", "Such a Transformer can be initialized from BERT (Devlin et al., 2018), RoBERTa (Liu et al., 2019), UniLM (Dong et al., 2019), or models pre-trained on large-scale unlabeled dialogue data, e.g. PLATO (Bao et al., 2019) and Blender (Roller et al., 2020).", "In this work, we focus on efficiently leveraging labeled data, i.e. dialogue and text.", "Figure 1 (Left) shows the overview of our approach.", "In this subsection, we introduce the basic components of the Transformer.", "Masked multi-head attention is also applied in our condition-aware transformer block.", "The input representation $H^0 \in \mathbb{R}^{n \times d_h}$, where n is the input length and $d_h = 768$ is the hidden dimension, is the sum of the token embedding, position embedding, and type embedding at each position.", "We apply type embeddings to introduce a separation between the source side and the target side, as shown in Figure 1 (Left), in order to warrant different treatments in the model.", "Then, $H^0$ is encoded into the hidden representations of the i-th layer, $H^i = [h^i_1, \ldots, h^i_n]$, using multi-layer transformer blocks: $H^i = \mathrm{Trans}_i(H^{i-1}),\ i \in [1, L]$.",
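To make the input construction concrete, below is a minimal PyTorch sketch of the summed embedding described above; the class and argument names are our own illustrative choices, not the paper's code.

```python
import torch
import torch.nn as nn

class InputRepresentation(nn.Module):
    """Sums token, position, and type embeddings into H^0 (one row per position)."""
    def __init__(self, vocab_size, max_len, d_h=768, n_types=2):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_h)   # token embedding
        self.pos = nn.Embedding(max_len, d_h)      # absolute position embedding
        self.typ = nn.Embedding(n_types, d_h)      # 0 = source side, 1 = target side

    def forward(self, token_ids, type_ids):
        n = token_ids.size(1)
        positions = torch.arange(n, device=token_ids.device).unsqueeze(0)
        return self.tok(token_ids) + self.pos(positions) + self.typ(type_ids)
```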
"The core component of a transformer block is the masked multi-head attention, whose outputs, the contextualized representations $C^i = [c^i_1, \ldots, c^i_n]$, are computed via $C^i = \mathrm{Concat}(\mathrm{head}_1, \ldots, \mathrm{head}_h)$.", "Specifically, $\mathrm{head}_j = \mathrm{softmax}\left(\frac{Q_j K_j^T}{\sqrt{d_k}} + M\right) V_j$ (2), where $Q_j, K_j, V_j \in \mathbb{R}^{n \times d_k}$ are obtained by transforming $H^{i-1} \in \mathbb{R}^{n \times d_h}$ using $W^i_{Q_j}, W^i_{K_j}, W^i_{V_j} \in \mathbb{R}^{d_h \times d_k}$, respectively.", "The self-attention mask matrix $M \in \mathbb{R}^{n \times n}$ (with $M_{ij} \in \{0, -\infty\}$) determines whether a position can attend to other positions: $M_{ij} = 0$ allows the i-th position to attend to the j-th position, and $M_{ij} = -\infty$ prevents it.", "Our approach jointly optimizes three tasks that apply different self-attention masks, as shown in Figure 1 (Left).", "For the conditioned dialogue generation task, the self-attention mask allows bi-directional attention on the source side to fully encode the dialogue history and left-to-right attention on the target side to generate conditioned responses.", "For the labeled text data, we randomly choose between the conditioned language encoding and the conditioned language generation task.", "The two tasks use bi-directional attention and left-to-right attention, respectively.", "The language encoding objective, i.e. Masked Language Modeling (MLM), is used in BERT and has shown stronger ability than the auto-regressive objective used in GPT (Devlin et al., 2018).", "Therefore, we expect conditioned language encoding to be more helpful for learning condition-related expressions (especially with the TF-IDF masking strategy that we will introduce) than the two generation tasks that employ the auto-regressive objective.", "In this subsection, we introduce the position-wise condition bias, which aims to determine how much condition information should be utilized to bias the word generation probability at each position.", "The core component used to calculate the bias is a non-parametric attention-based gating mechanism, as shown in Figure 1 (Right).", "Other gating mechanisms usually employ parametric linear layers to calculate weights.", "We assume a non-parametric attention-based method could be more training-efficient, which is important since labeled data are usually limited.", "We will empirically confirm its effectiveness compared to other gating methods.", "Specifically, given a training sample (x, c, y) or (c, text), the condition label c is encoded using two sets of parameters: one parametric vector works as the key $k_c \in \mathbb{R}^{d_h}$ and another works as the value $v_c \in \mathbb{R}^{d_h}$.", "Additionally, there is a general condition label g with a parametric vector $k_g$ as its key and a zero vector $v_g$ as its value.", "The former corresponds to conditioned generation, while the latter corresponds to general dialogue that generates words based only on the dialogue history.", "At each position, the model determines an attention weight for each choice.", "More attention to c means that the position is more tuned to the condition.", "More specifically, for each condition-aware transformer block, as shown in Figure 1 (Right), given $C^i = [c^i_1, \ldots, c^i_n]$ as queries, the condition biases $B^i = [b^i_1, \ldots, b^i_n]$ are calculated by: $B^i = \mathrm{softmax}\left(\frac{C^i K_b^T}{\sqrt{d_k}} + M_b\right) V_b$ (3), where $K_b = [k_c, k_g]$ and $V_b = [v_c, v_g]$.", "The calculation is non-parametric.", "We use the matrix $M_b \in \mathbb{R}^{n \times 2}$ to prevent adding condition bias to positions on the source side, because the condition only influences the target side (the labeled response or text).",
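The following is a minimal sketch of the attention-based gating in Eq. (3); tensor names follow the notation above, and the exact handling of source positions (forcing the general, zero-bias choice) is our interpretation of the masking described in the text.

```python
import torch
import torch.nn.functional as F

def condition_bias(C, k_c, v_c, k_g, source_len, d_k=64):
    """Sketch of Eq. (3): each position softly chooses between the condition
    value v_c and a zero 'general' value v_g; the only parameters involved
    are the condition key/value vectors and the general key."""
    n, d_h = C.shape
    K_b = torch.stack([k_c, k_g])                    # (2, d_h) keys
    V_b = torch.stack([v_c, torch.zeros_like(v_c)])  # (2, d_h); v_g is zero
    scores = (C @ K_b.T) / (d_k ** 0.5)              # (n, 2) attention logits
    M_b = torch.zeros(n, 2)
    M_b[:source_len, 0] = float("-inf")              # source side: force the
    # general (zero) choice, so no condition bias is added on the source
    B = F.softmax(scores + M_b, dim=-1) @ V_b        # (n, d_h) position-wise bias
    return B
```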
"We jointly optimize three tasks: conditioned dialogue generation on the labeled dialogue data, and conditioned language encoding and conditioned language generation on the labeled text data.", "As discussed in Section 3.1, conditioned language encoding is expected to be very helpful for learning condition-related expressions.", "A specific self-attention mask is required for each task, while the objectives of the three tasks are essentially the same: some tokens of the target side (the labeled response or text) are randomly masked, and the final hidden vectors $H^L$ corresponding to the masked tokens are fed into an output softmax over the vocabulary to predict the expected tokens.", "Therefore, we can mix up the two types of data (three different tasks) in one training batch, and the loss is averaged over a batch.", "This prevents the catastrophic forgetting problem (Phang et al., 2018).", "This problem is usually observed with a sequential fine-tuning process, i.e. first fine-tuning on labeled texts and then on conditioned dialogue data, which erases the effect of the previous steps of training.", "When using labeled dialogue data, we want the model to learn to generate conditioned but, more importantly, coherent responses.", "Thus, we uniformly sample the tokens on the target side to mask.", "In contrast, when exploiting labeled text data, we only want the model to learn to generate condition-related expressions.", "Therefore, we introduce TF-IDF based masking for the labeled text data to speed up the learning process: we sample tokens to mask according to their TF-IDF values computed on the entire corpus.", "We will empirically show its effectiveness.",
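A minimal sketch of such TF-IDF weighted mask sampling follows; the authors' exact weighting and sampling scheme is not specified here, so this is one plausible instantiation with illustrative names.

```python
import random
from collections import Counter
from math import log

def tfidf_mask_indices(tokens, doc_freq, n_docs, mask_ratio=0.25):
    """Sample positions to mask, biased toward high TF-IDF (condition-related)
    tokens, rather than uniformly as for the dialogue data."""
    tf = Counter(tokens)
    weights = [
        max(tf[t] / len(tokens) * log(n_docs / (1 + doc_freq.get(t, 0))), 1e-6)
        for t in tokens            # small floor keeps every token eligible
    ]
    k = max(1, int(mask_ratio * len(tokens)))
    chosen = set()
    while len(chosen) < k:         # sample without replacement, TF-IDF weighted
        chosen.add(random.choices(range(len(tokens)), weights=weights, k=1)[0])
    return sorted(chosen)
```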
"We use two labeled dialogue datasets, and we created two smaller training sets (500K labeled texts and 250K labeled dialogues), which are summarized in Table 1.", "We anticipate that when labeled dialogue data are limited, the benefit of leveraging labeled text data will be larger.", "Persona Reddit: We filtered the Reddit data from 2015 to 2019 provided by a third party (https://files.pushshift.io/reddit/).", "Reddit data is a natural source of dialogue with multiple users: a post may have multiple comments by different users.", "Following Li et al. (2016b), we consider each user as a distinct persona.", "We extract (post, user, comment) tuples, where user is the label of the user who makes the comment.", "We further filtered the data based on sentence length and users: sentences with more than 30 words or fewer than 4 words are removed, and we only keep comments from the 2000 most active users so that we can collect enough data for each user.", "As a result, each user has 1291 samples (comments) on average.", "To build the labeled text corpus, we collect extra posts or comments on Reddit from the same user that have no overlap with the dialogue data; these extra texts are intended to reflect the general writing style of the user.", "Topic-related Dialogue: Dziri et al. (2019) provide a high-quality 3-turn conversational dataset for topic-aware response generation (https://github.com/nouhadziri/THRED).", "Along with each (history, target) pair, there is a topic label and dozens of topic words that are predicted by an LDA model.", "The dataset contains 9.2M samples, from which we sample 3M (history, topic, target) tuples as the labeled dialogue corpus.", "To construct the labeled text data, we sample another 3M tuples and only keep their (topic, target) parts.", "Note that the topic labels are generated by LDA, and thus it is difficult to obtain the labeled text data from other sources.", "We choose two strong baselines specifically designed for personalized response generation and two others for topic-aware generation.", "Additionally, we choose some state-of-the-art pre-trained Transformers.", "Speaker Model (Li et al., 2016b): a seq2seq model using four LSTM layers.", "Given a user label, the decoder transforms it into a user embedding and uses it to generate a personalized response.", "MT-Speaker: an approach that jointly trains a Speaker Model and a conditioned auto-encoder with shared decoder parameters, adapted from a style transfer approach (Luan et al., 2017).", "This approach also leverages the labeled text data.", "TA-Seq2Seq (Xing et al., 2017) and THRED (Dziri et al., 2019): these models utilize topic words instead of topic labels predicted by the LDA model.", "TA-Seq2Seq leverages the topic information via a joint attention mechanism and a biased generation probability.", "THRED is built on HRED and incorporates topic words via a hierarchical joint attention mechanism.", "C-Trans-ED (Zheng et al., 2019): an encoder-decoder transformer framework initialized with GPT parameters.", "The decoder dynamically merges features from the dialogue history and the condition.", "This model is based on the code of the ConvAI2 champion (Dinan et al., 2019).",
(2019).", "We add a condition embedding to the input representation to enable conditioned generation.", "BERT fine-tuning the pre-trained model (Devlin et al., 2018) on the dialogue datasets.", "The encoder and decoder share the parameters.", "When encoding, the model uses bi-directional attention.", "When decoding, it uses left-to-right attention.", "We implement the speaker model and MT-Speaker model based on OpenNMT 5 .", "Other models are directly taken from the available open-source code.", "Hyper-parameters are set following the original papers.", "Since our baselines utilize GPT or BERT, we use BERT (base, uncased) to initialize our model for fair comparison.", "It is however possible to build our model upon more powerful pre-trained models such as Roberta(Liu et al., 2019).", "We do hyper-parameter search based on perplexity on the validation set for: the number of condition-aware transformer blocks in { 2 , 6 , 12 } , the mix-up rate of labeled dialogues and texts in { 3:1, 1:1 } , and whether using conditioned language encoding task.", "We report experimental results with 2, 3:1, and using conditioned language encoding respectively.", "The warm-up proportion is set to 0.1.", "25% tokens of the target side are randomly masked.", "During decoding the beam size is 10, and we prevent duplicated bigrams.", "We fine-tune all the parameters end-to-end for four epochs on two P100 GPUs.", "With in total 6M training samples, each epoch takes twelve hours.", "The fine-tuning model only has (2 C + 1) d h additional parameters, where C is the number of different condition labels.", "Other details are given in Appendix A. 4.4 Evaluation Automatic Metrics We choose some widely used metrics in the literature 6 : BLEU (Papineni et al., 2002) with n=1,2,3; ROUGE-L longest common subsequence based statistics; CIDEr (Vedantam et al., 2015) utilizing TF-IDF weighting for each n-gram; and Distinct (Li et al., 2016a) indicating the proportion of unique n-grams (n=1,2) in the entire set of generated responses to evaluate response diversity.", "Two-sided t-test is used for statistical significance test.", "Response Appropriateness Furthermore, we conduct manual evaluation on the best models according to the automatic metrics.", "We only manually evaluate the model performance on large-scale datasets 7 .", "We ask human evaluators to rate a response in { 0 , 1 , 2 } .", "A score of 0 means that the response might have flaw in fluency and logic or be incoherent.", "Special cases are for example completely coping from the dialogue history as the output, and a bland response such as I don't know what you mean.", "A score of 1 represents a coherent but generic response.", "2 represents a coherent and informative response.", "We also do a pair-wise evaluation to compare two models and indicate which one is better.", "The evaluation is based on 200 random samples.", "Each generated response is rated by three annotators.", "The inter-rater annotation agreement in Cohen's kappa (Cohen, 1960) is 0 .", "441 on average, which indicates moderate agreement.", "Condition Consistency We observe that automatic metrics fail to evaluate condition consistency since BERT that does not consider conditions outperforms C-Trans-ED and C-Trans-Dec.", "Thus, we perform manual evaluation on the condition consistency.", "A generated response is rated in { 0 , 1 , 2 } .", "The scores 0, 1 and 2 mean respectively that the response is inconsistent to the condition, somehow related, and consistent.", "However, if the response has flaw 
"Response Appropriateness: Furthermore, we conduct manual evaluation of the best models according to the automatic metrics.", "We only manually evaluate the model performance on the large-scale datasets (we did not manually evaluate the results with the small datasets due to the high cost).", "We ask human evaluators to rate a response in {0, 1, 2}.", "A score of 0 means that the response has flaws in fluency or logic, or is incoherent.", "Special cases are, for example, completely copying from the dialogue history as the output, or a bland response such as I don't know what you mean.", "A score of 1 represents a coherent but generic response.", "A score of 2 represents a coherent and informative response.", "We also do a pair-wise evaluation to compare two models and indicate which one is better.", "The evaluation is based on 200 random samples.", "Each generated response is rated by three annotators.", "The inter-rater annotation agreement in Cohen's kappa (Cohen, 1960) is 0.441 on average, which indicates moderate agreement.", "Condition Consistency: We observe that the automatic metrics fail to evaluate condition consistency, since BERT, which does not consider conditions, outperforms C-Trans-ED and C-Trans-Dec.", "Thus, we perform manual evaluation of the condition consistency.", "A generated response is rated in {0, 1, 2}.", "The scores 0, 1 and 2 mean, respectively, that the response is inconsistent with the condition, somewhat related, and consistent.", "However, if the response has a flaw in fluency or logic, it will get a score of 0.", "For Topic Dialogue, it is easy to measure whether a generated response is on topic.", "However, for persona consistency, it is difficult for a human evaluator to know the speaking style of each user.", "Thus, before evaluation we first automatically determine the words frequently used by a user in responses and show them to the annotators to help their evaluations.", "Tables 2 and 3 give the automatic evaluation results, and Table 4 gives the human evaluation results.", "Appendix B shows some generated responses.", "The results can be summarized as follows:", "BERT vs. Trans-ED & Trans-Dec: C-Trans-Dec has a clear advantage over C-Trans-ED in almost all automatic metrics, which can also be observed in their generated responses.", "Fine-tuning BERT without considering conditions outperforms C-Trans-Dec on most similarity metrics such as BLEU.", "We explain this by the fact that bi-directional attention could enable a model to better encode the dialogue history, and thus to generate responses more similar to the ground truth.", "The ablation model w/o ctext fine-tunes C-BERT (with our condition-aware transformer blocks) on labeled dialogue data only.", "The performance of w/o ctext is similar to C-Trans-Dec's, with a slight advantage in condition consistency and a small disadvantage in response appropriateness.", "These results show that our approach is built upon a strong base model.", "As mentioned, other pre-trained models can also be used.", "With Condition: When large Persona Dialogue data are available, w/o ctext (i.e. C-BERT) outperforms BERT in almost all automatic metrics.", "However, we observe that when only small-scale labeled dialogue data are available, all three conditioned models perform worse than BERT.", "This shows that the model cannot learn the condition-related features well from the limited labeled dialogue data.", "Thus, it is important to leverage the labeled texts, which are easier to collect, and the results on small-scale Persona Reddit show that our multi-task learning approach significantly outperforms BERT on similarity metrics such as BLEU and CIDEr.", "For Topic Dialogue, the labels are given by the LDA model.", "LDA is an unsupervised method and the predicted condition labels can be very noisy.", "Table 4: Human evaluation of generated responses on appropriateness and condition consistency (Score, with pair-wise win rates in parentheses). Persona Appropriateness: C-Trans-Dec 0.96 (28%, 39%), BERT 0.77 (11%, 40%), Ours 1.15, w/o ctext 0.91 (26%, 39%). Persona Consistency: C-Trans-Dec 0.85 (20%, 39%), BERT 0.78 (22%, 43%), Ours 1.24, w/o ctext 0.90 (23%, 38%). Topic Appropriateness: C-Trans-Dec 0.77 (26%, 34%), BERT 0.55 (17%, 40%), Ours 0.83, w/o ctext 0.73 (27%, 35%). Topic Consistency: C-Trans-Dec 0.71 (21%, 31%), BERT 0.46 (16%, 40%), Ours 0.80, w/o ctext 0.72 (23%, 30%).", "Nevertheless, similarly, with large data C-BERT outperforms BERT in all metrics, but when only small-scale labeled dialogue data are available, C-BERT performs worse than BERT in terms of BLEU.", "The result again shows the importance of exploiting labeled texts, and our approach is the best on small-scale Topic Dialogue.", "Leveraging Labeled Texts: In general, our approach significantly outperforms all the baselines and w/o ctext, which do not exploit labeled text data, with either large-scale or small-scale data.", "With small-scale data, our approach outperforms BERT while w/o ctext itself cannot achieve this, which shows that conditioned dialogue generation can be helped by extra labeled text data.",
"On Topic Dialogue, with such noisy labels, our model leveraging the labeled texts still produces the best performance, which confirms the robustness of our multi-task learning approach in working with different types of labels.", "The human evaluation on appropriateness and condition consistency further confirms the effectiveness of our approach.", "Not all methods utilizing extra labeled text can obtain such a performance improvement as ours does.", "MT-Speaker, which employs an extra auto-encoder, does not gain much improvement over the Speaker Model.", "This result shows that using additional model components to leverage labeled texts is inefficient for conditioned dialogue generation.", "Furthermore, Two-Step FT, which first fine-tunes on labeled texts and then on labeled dialogue data, does not always produce good performance.", "It achieves comparable performance to our approach on the large-scale datasets, but on the small-scale datasets it can even perform worse than w/o ctext (Table 2).", "This result shows that with a small-scale dataset it is better to avoid sequential fine-tuning, because first fine-tuning on labeled texts will erase the effect of the previous step of pre-training.", "Furthermore, we investigate how the ratio of the size of the labeled text data to the size of the dialogue data influences model performance.", "As shown in Figure 2, given 1M labeled text data, when the ratio is less than 6.7, our approach performs better than Two-Step FT.", "However, when the labeled text corpus is much larger than the dialogue corpus, sequential fine-tuning is better.", "We assume that with a large labeled text corpus the pre-trained language model can be tuned to conditioned language generation.", "Besides, the final task in sequential fine-tuning is purely conditioned dialogue generation, which is expected to achieve better performance on dialogue than a multi-task learning approach.", "However, in real application situations, one cannot always expect that a large labeled text corpus is available as a supplement to the dialogue data.", "TF-IDF Masking and Attention Gating: We assumed that the general language features have already been captured by the pre-trained models.", "Thus, to better utilize the labeled text data, we mask more condition-related words using TF-IDF based masking.", "Our ablation study confirms that TF-IDF masking brings improvement in almost all automatic metrics, although the improvement might not always be statistically significant.", "Our attention gating is a non-parametric gating mechanism to fuse the condition into the decoder.", "We expected it to be efficient, which is particularly important when labeled data are limited.", "Here, we compare it with two common parametric gating mechanisms: 1) setting a single gate on $C^i$ to get a weight; 2) setting gates on both $C^i$ and $v_c$ to get two weights.", "Then, we combine the weighted $C^i$ and $v_c$ to get $C'^i$ as in our attention gating.", "Table 5 (gating comparison, partially recovered): Model / BLEU-1 / BLEU-2 / Dist-2: Single Gate 13.880 (*), 4.853 (/), 0.090 (**); Double Gates 13.988 (*), 4.889 (/), 0.094 (*); Attn. Gating: remaining values not recovered.", "Experimental results in Table 5 confirm that our method is more efficient.", "When only small-scale labeled data are available, the model with attention gating generates responses that are significantly more similar to the ground truth.", "In this paper, we examined the data scarcity issue of conditioned dialogue generation.", "Pre-training on unlabeled text or dialogue data is not helpful to conditioned generation.", "Thus, we exploited labeled text data, which are easier to collect than labeled dialogues.",
"We expected these data to contribute to better representations of conditions and to a better use of the conditions in natural language generation, complementing what is lacking in the pre-trained models.", "To leverage these two types of data, we proposed a simple and efficient multi-task learning approach.", "Three tasks are considered: a conditioned dialogue generation task on the labeled dialogue data, and conditioned language encoding and conditioned language generation tasks on the labeled text data.", "We conducted experiments under persona and topic conditions.", "Experimental results show that our approach outperforms the state-of-the-art models by leveraging labeled texts, and it also obtains a larger performance improvement compared to the previous methods that leverage text data." ]
[ "abstain", "method", "objective", "abstain", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "objective", "method", "objective", "method", "abstain", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "result", "result", "other", "objective", "other", "other", "other", "other", "other", "other", "abstain", "other", "method", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "abstain", "other", "other", "abstain", "objective", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "other", "abstain", "result", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "objective", "objective", "abstain", "method", "result" ]
[ "In recent years, we have seen a colossal effort in pre-training multilingual text encoders using large-scale corpora in many languages to facilitate cross-lingual transfer learning.", "However, due to typological differences across languages, the cross-lingual transfer is challenging.", "Nevertheless, language syntax, e.g., syntactic dependencies, can bridge the typological gap.", "Previous works have shown that pre-trained multilingual encoders, such as mBERT (Devlin et al., 2019), capture language syntax, helping cross-lingual transfer.", "This work shows that explicitly providing language syntax and training mBERT using an auxiliary objective to encode the universal dependency tree structure helps cross-lingual transfer.", "We perform rigorous experiments on four NLP tasks, including text classification, question answering, named entity recognition, and task-oriented semantic parsing.", "The experiment results show that syntax-augmented mBERT improves cross-lingual transfer on popular benchmarks, such as PAWS-X and MLQA, by 1.4 and 1.6 points on average across all languages.", "In the generalized transfer setting, the performance boosted significantly, with 3.9 and 3.1 points on average in PAWS-X and MLQA.", "Cross-lingual transfer reduces the requirement of labeled data to perform natural language processing (NLP) in a target language, and thus has the ability to avail NLP applications in low-resource languages.", "However, transferring across languages is challenging because of linguistic differences at levels of morphology, syntax, and semantics.", "For example, word order difference is one of the crucial factors that impact cross-lingual transfer (Ah-mad et al., 2019).", "The two sentences in English and Hindi, as shown in Figure 1 have the same Work done during internship at Facebook AI.", "meaning but a different word order (while English has an SVO ( Subject-Verb-Object ) order, Hindi follows SOV).", "However, the sentences have a similar dependency structure, and the constituent words have similar part-of-speech tags.", "Presumably, language syntax can help to bridge the typological differences across languages.", "In recent years, we have seen a colossal effort to pre-train Transformer encoder (Vaswani et al., 2017) on large-scale unlabeled text data in one or many languages.", "Multilingual encoders, such as mBERT (Devlin et al., 2019) or XLM-R (Con-neau et al., 2020) map text sequences into a shared multilingual space by jointly pre-training in many languages.", "This allows us to transfer the multilingual encoders across languages and have found effective for many NLP applications, including text classification (Bowman et al., 2015; Conneau Q English How many members of the Senate are elected?", "et al., 2018), question answering (Rajpurkar et al., 2016; Lewis et al., 2020), named entity recognition (Pires et al., 2019; Wu and Dredze, 2019), and more.", "Since the introduction of mBERT, several works (Wu and Dredze, 2019; Pires et al., 2019; K et al., 2020) attempted to reason their success in cross-lingual transfer.", "In particular, Wu and Dredze (2019) showed that mBERT captures language syntax that makes it effective for cross-lingual transfer.", "A few recent works (Hewitt and Manning, 2019; Jawahar et al., 2019; Chi et al., 2020) suggest that BERT learns compositional features; mimicking a tree-like structure that agrees with the Universal Dependencies taxonomy.", "However, fine-tuning for the downstream task in a source language may not require mBERT to retain 
"We argue that encouraging mBERT to learn the correlation between syntax structure and target labels can benefit cross-lingual transfer.", "To support our argument, we show an example of question answering (QA) in Figure 2. In the example, mBERT predicts incorrect answers given the Spanish language context, which can be corrected by exploiting syntactic clues.", "Utilizing syntax structure can also benefit generalized cross-lingual transfer (Lewis et al., 2020), where the input text sequences belong to different languages.", "For example, answering an English question based on a Spanish passage, or predicting text similarity given the two sentences shown in Figure 1. In such a setting, syntactic clues may help to align sentences.", "In this work, we propose to augment mBERT with universal language syntax while fine-tuning on downstream tasks.", "We use a graph attention network (GAT) (Velickovic et al., 2018) to learn structured representations of the input sequences, which are incorporated into the self-attention mechanism.", "We adopt an auxiliary objective to train GAT such that it embeds the dependency structure of the input sequence accurately.", "We perform an evaluation on zero-shot cross-lingual transfer for text classification, question answering, named entity recognition, and task-oriented semantic parsing.", "Experimental results show that augmenting mBERT with syntax improves cross-lingual transfer, e.g., on PAWS-X and MLQA, by 1.4 and 1.6 points on average across all the target languages.", "Syntax-augmented mBERT achieves remarkable gains in generalized cross-lingual transfer; on PAWS-X and MLQA, performance is boosted by 3.9 and 3.1 points on average across all language pairs.", "Furthermore, we discuss challenges and limitations in modeling universal language syntax.", "We release the code to facilitate future work (https://github.com/wasiahmad/Syntax-MBERT).", "2 Syntax-augmented Multilingual BERT: Multilingual BERT (mBERT) (Devlin et al., 2019) enables cross-lingual learning as it embeds text sequences into a shared multilingual space.", "mBERT is fine-tuned on downstream tasks, e.g., text classification, using monolingual data and then directly employed to perform on the target languages.", "This refers to zero-shot cross-lingual transfer.", "Our main idea is to augment mBERT with language syntax for zero-shot cross-lingual transfer.", "We employ a graph attention network (GAT) (Velickovic et al., 2018) to learn syntax representations and fuse them into the self-attention mechanism of mBERT.", "In this section, we first briefly review the Transformer encoder that underlies mBERT (§2.1), and then describe the graph attention network (GAT) that learns syntax representations from the dependency structure of text sequences (§2.2).", "Finally, we describe how language syntax is explicitly incorporated into the Transformer encoder (§2.3).", "The Transformer encoder (Vaswani et al., 2017) is composed of an embedding layer and stacked encoder layers.", "Each encoder layer consists of two sublayers, a multi-head attention layer followed by a fully connected feed-forward layer.", "We detail the process of encoding an input token sequence $(w_1, \ldots, w_n)$ into a sequence of vector representations $H = [h_1, \ldots, h_n]$ as follows.",
"Embedding Layer: parameterized by two embedding matrices, the token embedding matrix $W_e \in \mathbb{R}^{U \times d_{model}}$ and the position embedding matrix $W_p \in \mathbb{R}^{U \times d_{model}}$ (where U is the vocabulary size and $d_{model}$ is the encoder output dimension).", "An input text sequence enters the model as two sequences: the token sequence $(w_1, \ldots, w_n)$ and the corresponding absolute position sequence $(p_1, \ldots, p_n)$.", "The output of the embedding layer is a sequence of vectors $\{x_i\}_{i=1}^{n}$ where $x_i = w_i W_e + p_i W_p$.", "The vectors are packed into a matrix $H^0 = [x_1, \ldots, x_n] \in \mathbb{R}^{n \times d_{model}}$ and fed to an L-layer encoder.", "Multi-head Attention allows the model to jointly attend to information from different representation subspaces, known as attention heads.", "The multi-head attention layer is composed of h attention heads with the same parameterization structure.", "At each attention head, the output from the previous layer $H^{l-1}$ is first linearly projected into queries, keys, and values: $Q = H^{l-1} W_l^Q$, $K = H^{l-1} W_l^K$, $V = H^{l-1} W_l^V$, where the parameters $W_l^Q, W_l^K \in \mathbb{R}^{d_{model} \times d_k}$ and $W_l^V \in \mathbb{R}^{d_{model} \times d_v}$ are unique per attention head.", "Then scaled dot-product attention is performed to compute the output vectors $\{o_i\}_{i=1}^{n} \in \mathbb{R}^{n \times d_v}$: $\mathrm{Attention}(Q, K, V, M, d_k) = \mathrm{softmax}\left(\frac{QK^T}{\sqrt{d_k}} + M\right) V$ (1), where $M \in \mathbb{R}^{n \times n}$ is the masking matrix that determines whether a pair of input positions can attend to each other.", "Figure 3: A simplified illustration of the multi-head self-attention in the graph attention network, wherein at each head attention is allowed between words within distance $\delta$ of each other in the dependency graph.", "In classic multi-head attention, M is a zero matrix (all positions can attend to each other).", "The output vectors from all the attention heads are concatenated and projected into $d_{model}$ dimensions using the parameter matrix $W_o \in \mathbb{R}^{h d_v \times d_{model}}$.", "Finally, the vectors are passed through a feed-forward network to output $H^l \in \mathbb{R}^{n \times d_{model}}$.",
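For concreteness, here is a small sketch of Eq. (1) for a single masked attention head; this is standard scaled dot-product attention, with names of our own choosing.

```python
import torch
import torch.nn.functional as F

def masked_attention_head(H, W_q, W_k, W_v, M):
    """One attention head of Eq. (1): positions with M[i, j] = -inf cannot
    attend to each other; M = 0 recovers classic full attention."""
    Q, K, V = H @ W_q, H @ W_k, H @ W_v       # (n, d_k), (n, d_k), (n, d_v)
    d_k = Q.size(-1)
    scores = Q @ K.T / d_k ** 0.5 + M         # (n, n) masked attention logits
    return F.softmax(scores, dim=-1) @ V      # (n, d_v) output vectors
```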
"We embed the syntax structure of the input token sequences using their universal dependency parse.", "A dependency parse is a directed graph where the nodes represent words and the edges represent dependencies (the dependency relation between the head and dependent words).", "We use a graph attention network (GAT) (Velickovic et al., 2018) to embed the dependency tree structure of the input sequence.", "We illustrate GAT in Figure 3. Given the input sequence, the words ($w_i$) and their part-of-speech tags ($pos_i$) are embedded into vectors using two parameter matrices: the token embedding matrix $W_e$ and the part-of-speech tag embedding matrix $W_{pos}$.", "The input sequence is then encoded into an input matrix $G^0 = [g_1, \ldots, g_n]$, where $g_i = w_i W_e + pos_i W_{pos} \in \mathbb{R}^{d_{model}}$.", "Note that the token embedding matrix $W_e$ is shared between GAT and the Transformer encoder.", "Then $G^0$ is fed into an $L_G$-layer GAT where each layer generates word representations by attending to adjacent words, using a distance-based attention mask: $M_{ij} = 0$ if $D_{ij} \leq \delta$, and $-\infty$ otherwise (3), where D is the distance matrix and $D_{ij}$ indicates the shortest path distance between word i and word j in the dependency graph structure.", "Typically in GAT, $\delta$ is set to 1, allowing attention between adjacent words only.", "However, in our study, we find setting $\delta$ to [2, 4] helpful for the downstream tasks.", "Finally, the vector representations from all the attention heads (as in Eq. (2)) are concatenated to form the output representations $G^l \in \mathbb{R}^{n \times k d_g}$, where k is the number of attention heads employed.", "The goal of the GAT encoder is to encode the dependency structure into vector representations.", "Therefore, we design GAT to be light-weight, consisting of far fewer parameters than the Transformer encoder.", "Note that GAT does not employ positional representations and only consists of multi-head attention; there are no feed-forward sublayers or residual connections.", "Dependency Tree over Wordpieces and Special Symbols: mBERT tokenizes the input sequence into subword units, also known as wordpieces.", "Therefore, we modify the dependency structure of linguistic tokens to accommodate wordpieces.", "We introduce additional dependencies between the first subword (head) and the rest of the subwords (dependents) of a linguistic token.", "More specifically, we introduce new edges from the head subword to the dependent subwords.", "The inputs to mBERT use special symbols: [CLS] and [SEP].", "We add an edge from the [CLS] token to the root of the dependency tree and to the [SEP] tokens.", "We want the Transformer encoder to consider syntax structure while performing the self-attention between input sequence elements.", "We use the syntax representations produced by GAT (outputs from the last layer, denoted as G) to bias the self-attention: $O = \mathrm{Attention}(Q + G G_l^Q, K + G G_l^K, V, M, d_k)$, where $G_l^Q, G_l^K \in \mathbb{R}^{k d_g \times d_k}$ are new parameters that learn representations to bias the self-attention.", "We consider the addition terms ($G G_l^Q$, $G G_l^K$) as a syntax-bias that provides syntactic clues to guide the self-attention.", "The high-level intuition behind the syntax bias is to attend to tokens with a specific part-of-speech tag sequence or dependencies.", "(In the example shown in Figure 2, the token dependencies [en: root-has, has-members, members-315] and [es: root-formada, hay-senadores, senadores-315], or the corresponding part-of-speech tag sequence [VERB VERB NOUN NUM], may help mBERT to predict the correct answer.)",
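Below is a minimal sketch combining the two syntax components above: the shortest-path-based GAT mask of Eq. (3) and the syntax-biased self-attention. The helper names are ours, and the distance matrix D is assumed precomputed from the dependency parse.

```python
import torch
import torch.nn.functional as F

def gat_mask(D, delta=2):
    """Eq. (3): allow attention only between words within delta hops of each
    other in the dependency graph (D holds shortest-path distances as floats)."""
    return torch.where(D <= delta,
                       torch.zeros_like(D),
                       torch.full_like(D, float("-inf")))

def syntax_biased_attention(H, G, W_q, W_k, W_v, G_q, G_k, M):
    """Self-attention whose queries and keys are shifted by the GAT syntax
    representations G (the additive 'syntax-bias' terms)."""
    Q = H @ W_q + G @ G_q                      # biased queries
    K = H @ W_k + G @ G_k                      # biased keys
    V = H @ W_v
    d_k = Q.size(-1)
    scores = Q @ K.T / d_k ** 0.5 + M
    return F.softmax(scores, dim=-1) @ V
```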
"Syntax-heads: mBERT employs h (=12) attention heads, and the syntax representations can be infused into one or more of these heads, which we refer to as syntax-heads.", "In our experiments, we observed that instilling structural information into many attention heads degrades the performance.", "For the downstream tasks, we consider one or two syntax-heads, which gives the best performance.", "(This aligns with the findings of Hewitt and Manning (2019), as they showed that 64 or 128 dimensions of the contextual representations are sufficient to capture the syntax structure.)", "Syntax-layers refers to the encoder layers that are infused with syntax representations from GAT.", "mBERT has a 12-layer encoder, and our study finds that considering all of the layers as syntax-layers is beneficial for cross-lingual transfer.", "We jointly fine-tune mBERT and GAT on downstream tasks in the source language (English in this work) following the standard procedure.", "However, the task-specific training may not guide GAT to encode the tree structure.", "Therefore, we adopt an auxiliary objective that supervises GAT to learn representations that can be used to decode the tree structure.", "More specifically, we use GAT's output representations $G = [g_1, \ldots, g_n]$ to predict the tree distance between all pairs of words $(g_i, g_j)$ and the tree depth $\|g_i\|$ of each word $w_i$ in the input sequence.", "Following Hewitt and Manning (2019), we apply a linear transformation $\theta_1 \in \mathbb{R}^{m \times k d_g}$ to compute squared distances: $d_{\theta_1}(g_i, g_j)^2 = (\theta_1(g_i - g_j))^T (\theta_1(g_i - g_j))$.", "The parameter matrix $\theta_1$ is learnt by minimizing $\min_{\theta_1} \sum_{s} \frac{1}{n^2} \sum_{i,j} \left| \mathrm{dist}(w_i, w_j)^2 - d_{\theta_1}(g_i, g_j)^2 \right|$, where s ranges over all the text sequences in the training corpus.", "Similarly, we train another parameter matrix $\theta_2$ to compute squared vector norms, $d_{\theta_2}(g_i) = (\theta_2 g_i)^T (\theta_2 g_i)$, which characterize the tree depth of the words.", "We train GAT's parameters and $\theta_1, \theta_2$ by minimizing the loss $\mathcal{L} = \mathcal{L}_{task} + \lambda (\mathcal{L}_{dist} + \mathcal{L}_{depth})$, where $\lambda$ is the weight for the tree structure prediction loss.", "Pre-training GAT: Unlike mBERT's parameters, GAT's parameters are trained from scratch during task-specific fine-tuning.", "For low-resource tasks, GAT may not learn to encode the syntax structure accurately.", "Therefore, we utilize universal dependency parses (Nivre et al., 2019) to pre-train GAT on the source and target languages.", "Note that the pre-training objective for GAT is to predict the tree distances and depths as described above.",
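The distance part of this auxiliary objective can be sketched as follows, in the style of Hewitt and Manning's structural probe; gold_dist is assumed to hold the pairwise tree distances from the dependency parse, and the symbol names mirror the (reconstructed) notation above.

```python
import torch

def distance_probe_loss(G, theta1, gold_dist):
    """Auxiliary loss: squared probe distances between all word pairs should
    match squared gold tree distances (L1 penalty, averaged over n^2 pairs)."""
    proj = G @ theta1.T                           # (n, m) probed representations
    diff = proj.unsqueeze(1) - proj.unsqueeze(0)  # (n, n, m) pairwise differences
    pred_sq = (diff ** 2).sum(-1)                 # squared probe distances
    n = G.size(0)
    return (pred_sq - gold_dist ** 2).abs().sum() / (n * n)
```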
"To study syntax-augmented mBERT's performance in a broader context, we perform an evaluation on four NLP applications: text classification, named entity recognition, question answering, and task-oriented semantic parsing.", "Our evaluation focuses on assessing the usefulness of utilizing universal syntax in zero-shot cross-lingual transfer.", "Text Classification: We conduct experiments on two widely used cross-lingual text classification tasks: (i) natural language inference and (ii) paraphrase detection.", "We use the XNLI (Conneau et al., 2018) and PAWS-X (Yang et al., 2019) datasets for the tasks, respectively.", "In both tasks, a pair of sentences is given as input to mBERT.", "We combine the dependency tree structures of the two sentences by adding two edges from the [CLS] token to the roots of the dependency trees.", "Named Entity Recognition is a structure prediction task that requires identifying the named entities mentioned in the input sentence.", "We use the Wikiann dataset (Pan et al., 2017) and a subset of two tasks from CoNLL-2002 (Tjong Kim Sang, 2002) and CoNLL-2003 NER (Tjong Kim Sang and De Meulder, 2003).", "We collect the CoNLL datasets from XGLUE (Liang et al., 2020).", "In both datasets, there are 4 types of named entities: Person, Location, Organization, and Miscellaneous (the Miscellaneous entity type covers named entities that do not belong to the other three types).", "Question Answering: We evaluate on two cross-lingual question answering benchmarks, MLQA (Lewis et al., 2020) and XQuAD (Artetxe et al., 2020).", "We use the SQuAD dataset (Rajpurkar et al., 2016) for training and validation.", "In the QA task, the inputs are a question and a context passage that consists of many sentences.", "We formulate QA as a multi-sentence reading comprehension task, jointly training the models to predict the answer sentence and extract the answer span from it.", "We concatenate the question and each sentence from the context passage and use the [CLS] token representation to score the candidate sentences.", "We adopt the confidence method from Clark and Gardner (2018) and pick the highest-scoring sentence to extract the answer span during inference.", "We provide more details of the QA models in the Appendix.", "Task-oriented Semantic Parsing: The fourth evaluation task is cross-lingual task-oriented semantic parsing.", "In this task, the input is a user utterance, and the goal is to predict the intent of the utterance and fill the corresponding slots.", "We conduct experiments on two recently proposed benchmarks: (i) mTOP (Li et al., 2021) and (ii) mATIS++ (Xu et al., 2020).", "We jointly train the BERT models as suggested in Chen et al. (2019).", "We summarize the evaluation benchmark datasets and evaluation metrics in Table 1.", "Table 2 (excerpt, partially recovered; columns: en ar bg de el es fr hi ru tr ur vi zh ko ja nl pt, AVG): Classification, XNLI (Conneau et al., 2018) [1]: 80.8 64.3 68.0 70.0 65.3 73.5 73.4 58.9 67.8 60.9 57.2 69.3 67.8 (AVG 67.5); mBERT: 81.8 63.8 68.0 70.7 65.4 73.8 72.4 59.3 68.4 60.7 56.7 68.6 67.8 (AVG 67.5); + Syn.: remaining values not recovered. We report the F1 score for the question answering (QA) datasets (for other datasets, see Table 1).", "We train and evaluate mBERT on the same pre-processed datasets and consider its performance as the baseline (denoted by the mBERT rows in the table) for syntax-augmented mBERT (denoted by the + Syn. rows in the table).", "Bold-faced values indicate that syntax-augmented mBERT is statistically significantly better (by paired bootstrap test, p < 0.05) than the baseline.", "We include results from published works ([1]: Hu et al. (2020), [2]: Liang et al. (2020), and [3]: Lewis et al. (2020)) as a reference.", "Except for the QA datasets, all our results are averaged over three different seeds.", "We collect the universal part-of-speech tags and the dependency parses of sentences by pre-processing the datasets using UDPipe (https://ufal.mff.cuni.cz/udpipe/2).", "We fine-tune mBERT on the pre-processed datasets and consider it as the baseline for our proposed syntax-augmented mBERT.", "We extend the XTREME framework (Hu et al., 2020), which is developed based on the transformers API (Wolf et al., 2020).", "We use the same hyper-parameter settings for the mBERT models, as suggested in XTREME.", "For the graph attention network (GAT), we set $L_G = 4$, $k = 4$, and $d_g = 64$ (resulting in 0.5 million parameters).", "We tune $\delta$ (shown in Eq. (3)) and $\lambda$ (the weight of the tree structure prediction loss) in the ranges [1, 2, 4, 8] and [0.5-1.0], respectively.", "We detail the hyper-parameters in the Appendix.", "(We observed that the value of $\delta$ depends on the downstream task and the source language.", "For example, a larger value is beneficial for tasks taking a pair of text sequences as input, while a smaller value results in better performance for tasks taking a single text input.", "Experiments on PAWS-X using each target language as the source language indicate that $\delta$ should be set to a larger value for source languages with longer text sequences (e.g., Arabic) and vice versa.)", "We aim to address the following questions.", "1. Does augmenting mBERT with syntax improve (generalized) cross-lingual transfer?", "2. Does incorporating syntax benefit specific languages or language families?", "3. Which NLP tasks or types of tasks benefit more from utilizing syntax?",
"Experiment results comparing mBERT and syntax-augmented mBERT are presented in Table 2. Overall, the incorporation of language syntax in mBERT improves cross-lingual transfer for the downstream tasks, in many languages by a significant margin (p < 0.05, t-test).", "The average performances across all languages on the XNLI, PAWS-X, MLQA, and mTOP benchmarks improve significantly (by at least 1 point).", "On the other benchmarks, Wikiann, CoNLL, XQuAD, and mATIS++, the average performance improvements are 0.5, 0.2, 0.8, and 0.7 points, respectively.", "Note that the performance gain in the source language (English) for all the datasets except Wikiann is at most 0.3 points.", "This indicates that the cross-lingual transfer gains are not due to improving the downstream tasks themselves; instead, language syntax helps to transfer across languages.", "In the generalized cross-lingual transfer setting (Lewis et al., 2020), the input text sequences for the downstream tasks (e.g., text classification, QA) may come from different languages.", "As shown in Figure 2, given a context passage in English, a multilingual QA model should answer a question written in Spanish.", "Due to the parallel nature of the existing benchmark datasets XNLI, PAWS-X, MLQA, and XQuAD, we evaluate mBERT and its syntax-augmented variant in the generalized cross-lingual transfer setting.", "The results for PAWS-X and MLQA are presented in Table 3 (results for the other datasets are provided in the Appendix).", "In both the text classification and QA benchmarks, we observe significant improvements for most language pairs.", "In the PAWS-X text classification task, language pairs with different typologies (e.g., en-ja, en-zh) have the largest gains.", "When Chinese (zh) or Japanese (ja) is in the language pair, the performance is boosted by at least 4.5%.", "The dataset characteristics explain this: the task requires modeling structure, context, and word order information.", "On the other hand, in the XNLI task, the performance gain pattern is scattered; this is perhaps because syntax plays a less significant role in the XNLI task.", "The largest improvements result when the languages of the premise and hypothesis sentences belong to {Bulgarian, Chinese} and {French, Arabic}.", "In both QA datasets, syntax-augmented mBERT boosts performance when the question and context languages are typologically different, except for the Hindi language.", "Surprisingly, we observe a large performance gain when questions in Spanish and German are answered based on the English context.", "Based on our manual analysis of MLQA, we suspect that although questions in Spanish and German are translated from English questions (by humans), the context passages are from Wikipedia and often are not exact translations of the corresponding English passages.", "Take the context passages in Figure 2 as an example.", "We anticipate that syntactic clues help a QA model in identifying the correct answer span when there is more than one semantically equivalent and plausible answer choice.", "Impact on Languages: We study whether fine-tuning syntax-augmented mBERT on English (the source language) impacts specific target languages or families of languages.", "We show the performance gains on the target languages, grouped by their families, in four downstream tasks in Figure 4.",
"There is no observable trend in the overall performance improvements across tasks.", "However, the XNLI curve weakly indicates that when target languages are typologically different from the source language, there is an increase in the transfer performance (comparing the left half of the curve to the right half).", "Impact of Pre-training GAT: Before fine-tuning syntax-augmented mBERT, we pre-train GAT on the 17 target languages (discussed in §2.4).", "In our experiments, we observe that such pre-training boosts semantic parsing performance, while there is only a small gain on the classification and QA tasks.", "We also observe that pre-training GAT diminishes the gain of fine-tuning with the auxiliary objective (predicting the tree structure).", "We hypothesize that pre-training or fine-tuning GAT using the auxiliary objective helps when there is limited training data.", "For example, the semantic parsing benchmarks have a small number of training examples, while XNLI has many.", "As a result, the improvement due to pre-training or fine-tuning GAT in the semantic parsing tasks is significant, while in the XNLI task it is marginal.", "A natural question is, instead of using GAT, why do we not modify the attention heads in mBERT to embed the dependency structure (as shown in Eq. (3))?", "We observed a consistent performance drop across all the tasks if we intervene in the self-attention (blocking pair-wise attention).", "We anticipate that fusing GAT-encoded syntax representations helps as it adds a bias to the self-attention.", "For future work, we suggest exploring other ways of adding a structure bias, e.g., scaling the attention weights based on the dependency structure (Bugliarello and Okazaki, 2020).", "Among the evaluation datasets, Wikiann consists of sentence fragments, and the semantic parsing benchmarks consist of user utterances that are typically short in length.", "Sorting and analyzing the performance improvements based on sequence lengths suggests that the utilization of dependency structure has limited scope for shorter text sequences.", "However, part-of-speech tags help to identify span boundaries, improving the slot filling tasks.", "In this work, we assume we have access to an off-the-shelf universal parser, e.g., UDPipe (Straka and Strakova, 2017) or Stanza (Qi et al., 2020), to collect part-of-speech tags and the dependency structure of the input sequences.", "Relying on such a parser has the limitation that it may not support all the languages available in the benchmark datasets; e.g., we do not consider the Thai and Swahili languages in the benchmark datasets.", "There are a couple of challenges in utilizing the universal parsers.", "First, universal parsers tokenize the input sequence into words and provide part-of-speech tags and dependencies for them.", "The tokenized words may not be a part of the input (for example, in the German sentence Wir gehen zum kino (we are going to the cinema), the token zum is decomposed into the words zu and dem).", "As a result, tasks requiring the extraction of text spans (e.g., QA) need an additional mapping from input tokens to words.", "Second, the parser's output word sequence is tokenized into wordpieces, which often results in inconsistencies with the wordpieces of the original input sequence (this happens for languages such as Arabic, as parsers normalize the input, which leads to inconsistent characters between the input text and the output tokenized text).", "Encoding Syntax for Language Transfer: Universal language syntax, e.g., part-of-speech (POS) tags, dependency parse structures, and relations, has been shown to be helpful for cross-lingual transfer (Kozhevnikov and Titov, 2013; Prazak and Konopik, 2017; Wu et al., 2017; Subburathinam et al., 2019; Liu et al., 2019; Zhang et al., 2019; Xie et al., 2020; Ahmad et al., 2021).",
"Many of these prior works utilized graph neural networks (GNN) to encode the dependency graph structure of the input sequences.", "In this work, we utilize graph attention networks (GAT) (Velickovic et al., 2018), a variant of GNN that employs the multi-head attention mechanism.", "Syntax-aware Multi-head Attention: A large body of prior work investigated the advantages of incorporating language syntax to enhance the self-attention mechanism (Vaswani et al., 2017).", "Existing techniques can be broadly divided into two types.", "The first type of approach relies on an external parser (or human annotation) to obtain a sentence's dependency structure during inference.", "These approaches embed the dependency structure into contextual representations (Wu et al., 2017; Chen et al., 2017; Wang et al., 2019a,b; Zhang et al., 2019, 2020; Bugliarello and Okazaki, 2020; Sachan et al., 2021; Ahmad et al., 2021).", "Our proposed method falls under this category; however, unlike prior works, our study investigates whether fusing the universal dependency structure into the self-attention of existing multilingual encoders helps cross-lingual transfer.", "Graph attention networks (GATs) that use multi-head attention, which have also been adopted for NLP tasks (Huang and Carley, 2019), fall into this category as well.", "The second category of approaches does not require the syntax structure of the input text during inference.", "These approaches are trained to predict the dependency parse via supervised learning (Strubell et al., 2018; Deguchi et al., 2019).", "In this work, we propose incorporating universal language syntax into multilingual BERT (mBERT) by infusing structured representations into its multi-head attention mechanism.", "We employ a modified graph attention network to encode the syntax structure of the input sequences.", "The results endorse the effectiveness of our proposed approach in cross-lingual transfer.", "We discuss limitations and challenges to drive future work.", "We thank Yuqing Tang for his insightful comments on our paper and the anonymous reviewers for their helpful feedback.", "We also thank the UCLA-NLP group for helpful discussions and comments.", "In today's world, the number of speakers of some languages is in the billions, while it is only a few thousand for many languages.", "As a result, a few languages offer large-scale annotated resources, while for many languages there are limited or no labeled data.", "Due to this disparity, natural language processing (NLP) is extremely challenging in low-resourced languages.", "In recent years, cross-lingual transfer learning has achieved significant improvements, enabling us to make NLP applications available to a wide range of languages that people use across the world.", "However, one of the challenges in cross-lingual transfer is to learn the linguistic similarities and differences between languages and their correlation with the target NLP applications.", "Modern transferable models are pre-trained on humongous unlabeled corpora such that they can learn language syntax and semantics and encode them into universal representations.", "Such pre-trained models can benefit from the explicit incorporation of universal language syntax during fine-tuning for different downstream applications.", "This work presents a thorough study to analyze the pros and cons of utilizing the Universal Dependencies (UD) framework, which consists of grammar annotations across many human languages.",
"Our work can broadly impact the development of cross-lingual transfer solutions, making them accessible to people across the globe.", "In this work, we discuss the limitations and challenges in utilizing universal parsers to benefit the pre-trained models.", "Among the negative aspects of our work is the lack of an explanation of why some languages benefit more than others from the incorporation of universal syntax knowledge." ]
[ "result", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "objective", "method", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "other", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "other", "method", "other", "other", "other", "other", "objective", "other", "other", "other", "objective", "abstain", "abstain", "method", "objective", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain" ]
[ "When evaluating an answer choice for Reading Comprehension task, other answer choices available for the question and the answers of related questions about the same paragraph often provide valuable information.", "In this paper, we propose a method to leverage the natural language relations between the answer choices, such as entailment and contradiction, to improve the performance of machine comprehension.", "We use a stand-alone question answering (QA) system to perform QA task and a Natural Language Inference (NLI) system to identify the relations between the choice pairs.", "Then we perform inference using an Integer Linear Programming (ILP)-based relational framework to re-evaluate the decisions made by the standalone QA system in light of the relations identified by the NLI system.", "We also propose a multitask learning model that learns both the tasks jointly.", "Given an input text and a set of related questions with multiple answer choices, the reading comprehension (RC) task evaluates the correctness of each answer choice.", "Current approaches to the RC task quantify the relationship between each question and answer choice independently and pick the highest scoring option.", "In this paper, we follow the observation that when humans approach such RC tasks, they tend to take a holistic view ensuring that their answers are consistent across the given questions and answer choices.", "In this work we attempt to model these pragmatic inferences, by leveraging the entailment and contradiction relations between the answer choices to improve machine comprehension.", "To help clarify these concepts, consider the following examples: How can the military benefit from the existence of the CIA?", "c 1 : They can use them c 2 : These agencies are keenly attentive to the military's strategic and tactical requirements ( (cid:55) ) c 3 : The CIA knows what intelligence the military requires and has the resources to obtain that intelligence ( (cid:51) ) The above example contains multiple correct answer choices, some are easier to capture than others.", "For example, identifying that c 3 is true might be easier than c 2 based on its alignment with the input text.", "However, capturing that c 3 entails c 2 allows us to predict c 2 correctly as well.", "Classification of the answer in red (marked (cid:55) ) could be corrected using the blue (marked (cid:51) ) answer choice.", "Q1: When were the eggs added to the pan to make the omelette?", "c 11 : When they turned on the stove c 12 : When the pan was the right temperature ( (cid:51) ) Q2: Why did they use stove to cook omelette?", "c 21 : They didn't use the stove but a microwave c 22 : Because they needed to heat up the pan ( (cid:55) ) Similarly, answering Q1 correctly helps in answering Q2.", "Our goal is to leverage such inferences for machine comprehension.", "Our approach contains three steps.", "First, we use a stand-alone QA system to classify the answer choices as true/false.", "Then, we classify the relation between each pair of choices for a given question as entailment , contradiction or neutral .", "Finally, we re-evaluate the labels assigned to choices using an Integer Linear Programming based inference procedure.", "We discuss different training protocols and representation choices for the combined decision problem.", "An overview is in figure", "1. 
"We empirically evaluate on two recent datasets, MultiRC (Khashabi et al., 2018) and SemEval-2018 task 11 (Ostermann et al., 2018), and show that it improves machine comprehension in both.", "Yatskar (2018) showed that a high performance on these datasets could be achieved without necessarily achieving the capability of making commonsense inferences.", "Trischler et al. (2016b), Kumar et al. (2016), Liu and Perez (2017), Min et al. (2018) and Xiong et al. (2016) proposed successful models on those datasets.", "To address this issue, new QA datasets which require commonsense reasoning have been proposed (Khashabi et al., 2018; Ostermann et al., 2018; Mihaylov et al., 2018).", "Using common sense inferences in Machine Comprehension is a far from solved problem.", "There have been several attempts in the literature to use inferences to answer questions.", "Most of the previous works either attempt to infer the answer from the given text (Sachan and Xing, 2016; Sun et al., 2018) or from an external commonsense knowledge base (Das et al., 2017; Mihaylov and Frank, 2018; Bauer et al., 2018; Weissenborn et al., 2017).", "While neural models can capture some dependencies between choices through shared representations, to the best of our knowledge, inferences capturing the dependencies between answer choices or different questions have not been explicitly modeled.", "Formally, the task of machine comprehension can be defined as: given text P and a set of n related questions Q = {q_1, q_2, ..., q_n}, each having m choices C = {c_i1, c_i2, ..., c_im} for each q_i in Q, the task is to assign a true/false value to each choice c_ij.", "Our model consists of three separate systems, one for each step, namely, the stand-alone question answering (QA) system, the Natural Language Inference (NLI) system and the inference framework connecting the two.", "First, we assign a true/false label to each question-choice pair using the stand-alone QA system along with an associated confidence score s_1.", "Consequently, we identify the natural language relation (entailment, contradiction or neutral) between each ordered pair of choices for a given question, along with an associated confidence score s_2.", "Then, we use a relational framework to perform inference using the information obtained from the stand-alone QA and the NLI systems.", "Each of the components is described in detail in the following sub-sections.", "We further propose a joint model whose parameters are trained jointly on both the tasks.", "The joint model uses the answer choice representation generated by the stand-alone QA system as input to the NLI detection system.", "The architecture of our joint model is shown in figure 2.", "3.1.1 Stand-alone QA system: We use the TriAN-single model proposed by Wang et al. (2018) for SemEval-2018 task 11 as our stand-alone QA system.", "We use the implementation provided by Wang et al. (2018) (https://github.com/intfloat/commonsense-rc) for our experiments.", "The system is a tri-attention model that takes a passage-question-choice triplet as input and produces the probability of the choice being true as its output.", "Our NLI system is inspired by the decomposable-attention model proposed by Parikh et al. (2016).", "We modified the architecture proposed in Parikh et al. 
(2016) to accommodate the question-choice pairs as opposed to sentence pairs in the original model.", "We added an additional sequence-attention layer for the question-choice pairs to allow for the representation of both the answer choice and the question.", "Figure 2: Architecture of the Joint Model.", "Sequence-attention is defined in Wang et al. (2018) as: Att_seq(u, {v_i}_{i=1}^{n}) = \sum_{i=1}^{n} \alpha_i v_i, where \alpha_i = softmax_i(f(W_1 u)^T f(W_1 v_i)) (1), u and v_i are word embeddings, W_1 is the associated weight parameter, and f is a non-linearity.", "Self-attention is Att_seq of a vector onto itself.", "The embedding of each word in the answer choice is attended to by the sequence of question word embeddings.", "We use pre-trained GloVe (Pennington et al., 2014) embeddings to represent the words.", "The question-attended choices are then passed through the decomposable-attention layer proposed in Parikh et al. (2016).", "We use the Deep Relational Learning (DRaiL) framework proposed by Zhang et al. (2016) to perform the final inference.", "The framework allows for the declaration of predicate logic rules to perform relational inference.", "The rules are scored by the confidence scores obtained from the stand-alone QA and the NLI systems.", "DRaiL uses an Integer Linear Programming (ILP) based inference procedure to output a binary prediction for each of the choices.", "We use the following constraints for our inference:", "On the MultiRC dataset, we use the dependencies between the answer choices for a given question.", "On the SemEval dataset, we use the dependencies between different questions about the same paragraph.", "The design of our joint model is motivated by two objectives: 1) to obtain a better representation for the question-choice pair for NLI detection and 2) to leverage the benefit of multitask learning.", "Hence, in the joint model, the choice representation from the stand-alone QA system is input to the decomposable-attention layer of the NLI system.", "The joint model takes two triplets (p, q_i, c_i) and (p, q_j, c_j) as input.", "It outputs a true/false for each choice and an NLI relation (entailment, contradiction or neutral) between the choices.", "The representations for passage, question and choice are obtained using Bi-LSTMs.", "The hidden states of the Bi-LSTM are concatenated to generate the representation.", "This part of the model is similar to the TriAN model proposed in Wang et al. (2018).", "The choice representations of c_i and c_j are passed as input to the decomposable attention layer proposed in Parikh et al. (2016).", "The architecture of the joint model is shown in figure 2.",
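Equation (1) above translates almost directly into code. A minimal sketch, assuming ReLU as the non-linearity f and a square weight matrix, since neither is pinned down in the passage:

```python
# Sketch of the sequence-attention in equation (1).
import torch
import torch.nn.functional as F

def att_seq(u: torch.Tensor, V: torch.Tensor, W1: torch.Tensor) -> torch.Tensor:
    # u: (d,) query embedding; V: (n, d) sequence of embeddings v_i;
    # W1: (d, d) weight matrix (assumed square for simplicity)
    f = torch.relu                       # assumed choice for f
    scores = f(V @ W1.T) @ f(W1 @ u)     # f(W1 v_i)^T f(W1 u), shape (n,)
    alpha = F.softmax(scores, dim=0)     # attention weights over the v_i
    return alpha @ V                     # weighted sum: sum_i alpha_i v_i
```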
"3.4 Training: We train the stand-alone QA system using the MultiRC and SemEval datasets for the respective experiments.", "We experiment with two different training settings for the NLI system.", "In the first setting, we use the SNLI dataset (Bowman et al., 2015) to train the NLI system.", "The sequence-attention layer is left untrained during this phase.", "Hence, we only use the answer choice and do not consider the question for NLI detection.", "Self-Training: Subsequently, to help the system adapt to our settings, we devise a self-training protocol over the RC datasets to train the NLI system.", "Self-training examples for the NLI system were obtained using the following procedure: if the SNLI-trained NLI model predicted entailment and the gold labels of the ordered choice pair were (true, true), then the choice pair is labeled as entailment.", "Similarly, if the SNLI-trained NLI model predicted contradiction and the gold labels of the ordered choice pair were (true, false), then the choice pair is labeled as contradiction.", "This is noisy labelling, as the labels do not directly indicate the presence of NLI relations between the choices.", "The NLI model was additionally trained using this data.", "To train the joint model we use ordered choice pairs, labeled as entailment if the gold labels are (true, true) and labeled as contradiction if the gold labels are (true, false).", "This data was also used to test the effectiveness of the self-training procedure.", "The results on the development set of the MultiRC dataset are in table 1.", "The NLI model trained on the SNLI dataset achieves 55.11% accuracy.", "Training the NLI model on the data from MultiRC increases the overall accuracy to 66.31%.", "Further discussion about self-training is provided in section 5.", "We perform experiments in four phases.", "In the first phase, we evaluate the stand-alone QA system.", "In the second phase, we train the NLI system on SNLI data and evaluate the approach shown in figure 1.", "In the third phase, we train the NLI system using the self-training data.", "In the fourth phase, we evaluate the proposed joint model.", "We evaluate all models on the MultiRC dataset.", "The results are shown in table 2.", "We evaluate the joint model on the SemEval dataset, shown in table 3.",
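The self-training labelling rule above can be sketched directly from the description; nli_predict stands in for the SNLI-trained model and is an assumed interface, not part of the paper's code.

```python
# Sketch of the self-training labelling procedure: noisy NLI labels are
# derived from the SNLI-trained model's predictions plus gold true/false labels.
def make_self_training_pairs(choices, gold, nli_predict):
    # choices: list of answer-choice strings for one question
    # gold:    parallel list of True/False gold labels
    pairs = []
    for i, c_i in enumerate(choices):
        for j, c_j in enumerate(choices):
            if i == j:
                continue
            pred = nli_predict(c_i, c_j)  # "entailment"/"contradiction"/"neutral"
            if pred == "entailment" and gold[i] and gold[j]:
                pairs.append((c_i, c_j, "entailment"))      # (true, true)
            elif pred == "contradiction" and gold[i] and not gold[j]:
                pairs.append((c_i, c_j, "contradiction"))   # (true, false)
    return pairs
```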
"4.1 Datasets: We use two datasets for our experiments, the MultiRC dataset and the SemEval 2018 task 11 dataset.", "The MultiRC dataset consisted of a training and development set with a hidden test set.", "We split the given training set into training and development sets and use the given development set as the test set.", "Each question in the MultiRC dataset has approximately 5 choices on average.", "Multiple of them may be true for a given question.", "The training split of MultiRC consisted of 433 paragraphs and 4,853 questions with 25,818 answer choices.", "The development split has 23 paragraphs and 275 questions with 1,410 answer choices.", "The test set has 83 paragraphs and 953 questions with 4,848 answer choices.", "The SemEval dataset has 2 choices for each question, exactly one of them true.", "The training set consists of 1,470 paragraphs with 9,731 questions.", "The development set has 219 paragraphs with 1,411 questions.", "And the test set has 430 paragraphs with 2,797 questions.", "For the MultiRC dataset, we use two metrics for evaluating our approach, namely EM0 and EM1.", "EM0 refers to the percentage of questions for which all the choices have been correctly classified.", "EM1 is the percentage of questions for which at most one choice is wrongly classified.", "For the SemEval dataset, we use the accuracy metric.", "Results of our experiments are summarized in tables 2 & 3.", "EM0 on the MC task improves from 18.15% to 19.41% when we use the NLI model trained over SNLI data, and it further improves to 21.62% when we use MultiRC self-training data.", "The joint model achieves 20.36% on EM0 but achieves the highest EM1 of 57.08%.", "Human EM0 is 56.56%.", "Results of the SemEval experiments are summarized in table 3.", "TriAN-single results are as reported in (Wang et al., 2018).", "The results we obtained using their implementation are the stand-alone QA results.", "With the same setting, the joint model got 85.4% on the dev set and 82.1% on the test set.", "The difference in performance of the models in tables 2 and 3 is statistically significant according to McNemar's chi-squared test.", "We have shown that capturing the relationship between various answer choices or subsequent questions helps in answering questions better.", "Our experimental results, shown in tables 2 & 3, are only a first step towards leveraging this relationship to help construct better machine reading systems.", "We suggest two possible extensions to our model that would help realize the potential of these relations.", "1. Improving the performance of entailment and contradiction detection.", "2. Using the information given in the text to identify the relations between choices better.", "As shown in table 1, identification of entailment/contradiction is far from perfect.", "Entailment detection is particularly worse because often the system returns entailment when there is a high lexical overlap.", "Moreover, the presence of a strong negation word (not) causes the NLI system to predict contradiction even for entailment and neutral cases.", "This issue impedes the performance of our model on the SemEval'18 dataset as roughly 40% of the questions have yes/no answers.",
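The EM0 and EM1 metrics defined above amount to a few lines; a minimal sketch:

```python
# Sketch of the EM0 / EM1 metrics: EM0 = % of questions with all choices
# correct; EM1 = % of questions with at most one choice wrong.
def em_metrics(questions):
    # questions: list of (predicted_labels, gold_labels) pairs per question
    em0 = em1 = 0
    for pred, gold in questions:
        wrong = sum(p != g for p, g in zip(pred, gold))
        em0 += wrong == 0
        em1 += wrong <= 1
    n = len(questions)
    return 100 * em0 / n, 100 * em1 / n

print(em_metrics([([1, 0, 1], [1, 0, 1]), ([1, 1, 0], [1, 0, 0])]))  # (50.0, 100.0)
```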
"Naik et al. (2018) show that this is a common issue with state-of-the-art NLI detection models.", "Self-training (table 1) results suggest that there are other types of relationships present among answer choice pairs that do not come under the strict definitions of entailment or contradiction.", "Upon investigating, we found that although some answer hypotheses do not directly have an inference relation between them, they might be related in the context of the given text.", "For example, consider the sentence 'I snack when I shop' and the answer choices c1: 'She went shopping this extended weekend' and c2: 'She ate a lot of junk food recently'.", "Although the sentences don't have an explicit relationship when considered in isolation, the text suggests that c1 might entail c2.", "Capturing these kinds of relationships could potentially improve MC further.", "In this paper we take a first step towards modeling an accumulative knowledge state for machine comprehension, ensuring consistency between the model's answers.", "We show that by adapting NLI to the MC task using self-training, performance over multiple tasks improves.", "In the future, we intend to generalize our model to other relationships beyond strict entailment and contradiction relations.", "We would like to thank the reviewers for their insightful comments.", "This work was partially supported by the NSF through grant NSF-1814105." ]
[ "abstain", "objective", "method", "method", "objective", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "method", "abstain", "objective", "objective", "objective", "objective", "abstain", "objective", "abstain", "method", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "other", "other" ]
[ "While Transformer-based text classifiers pre-trained on large volumes of text have yielded significant improvements on a wide range of computational linguistics tasks, their implementations have been unsuitable for live incremental processing thus far, operating only on the level of complete sentence inputs.", "We address the challenge of introducing methods for word-by-word left-to-right incremental processing to Transformers such as BERT, models without an intrinsic sense of linear order.", "We modify the training method and live decoding of non-incremental models to detect speech disfluencies with minimum latency and without pre-segmentation of dialogue acts.", "We experiment with several decoding methods to predict the rightward context of the word currently being processed using a GPT-2 language model and apply a BERT-based disfluency detector to sequences, including predicted words.", "We show our method of incrementalising Transformers maintains most of their high non-incremental performance while operating strictly incrementally.", "We also evaluate our models' incremental performance to establish the trade-off between incremental performance and final performance, using different prediction strategies.", "We apply our system to incremental speech recognition results as they arrive into a live system and achieve state-of-the-art results in this setting.", "Conversational systems provide a significant addition to the present approaches in mental health care delivery.", "Interactions with these conversational agents have been shown to contain observable indicators of cognitive states, such as the rate of filled pauses and different temporal and turn-related features (Gratch et al., 2014).", "Alzheimer's Disease (AD) patients, for example, have trouble performing tasks that leverage semantic information; they have difficulties with verbal fluency and object recognition.", "AD patients speak more slowly with long pauses and spend extra time looking for the correct word, which leads to speech disfluency (Lopez-de Ipina et al., 2013; Nasreen et al., 2021).", "Disfluency markers can be key features for identifying certain cognitive disorders for application in conversational agents (Rohanian et al., 2020).", "Such conversational systems are primarily used for content processing, which is then analyzed offline.", "There is much work on detecting disfluencies for offline analysis of transcripts.", "However, given that these disfluency detection models do not work for live systems and depend on rich transcription data, including pre-segmentation of dialogue acts, to facilitate more cost-effective analysis of other data, we need systems capable of performing directly and incrementally off the speech signal, or at least from the results of automatic speech recognition (ASR) as they arrive in the system.", "As it receives word-by-word data, an incremental model must operate with minimum latency and do so without changing its initial assumptions and delivering its best decisions as early as possible following the principles outlined in (Hough and Purver, 2014).", "Here we design and evaluating models that work with online, incremental speech recognition output to detect disfluencies with varying levels of granularity.", "The best neural language encoders currently used in computational linguistics consider word sequences as a whole, and their implementations have been unsuitable for live incremental processing.", "Transformers (Vaswani et al., 2017), for instance, operate on representations 
that do not naturally have an organizing principle of linear word order.", "We analyze how these models work under incremental frameworks, where it is essential to present partial output relying on partial input provided up to a certain time step, as may occur in interactive healthcare systems.", "We explore whether we can adjust such models to function incrementally and how useful they are in terms of overall accuracy and incremental metrics.", "To further enhance the models' incremental performance, we use two general strategies to adjust the training regime and the real-time procedure: incremental training (chunk-based training and addM training) and incremental decoding (constant latency and prophecies).", "We employ three prominent decoding methods to predict the rightward context of the word currently being processed: beam search, top-k sampling, and top-p sampling.", "We also measure our models' incremental performance to set the trade-off between incremental performance and final performance.", "Although considerable work has been done on detecting disfluencies, much of this work uses transcripts as texts rather than live speech inputs, with the goal of 'cleaning' the disfluent content for post-processing purposes.", "They are almost exclusively conducted on pre-segmented utterances of the Switchboard corpus of telephone conversations (Godfrey et al., 1992).", "Several disfluency detection efforts involve sentence-based parsing and language models (Johnson and Charniak, 2004; Zwarts et al., 2010).", "Sequence labeling models with begin-inside-outside (BIO) style tags have been used in recent neural sequence approaches to disfluency detection based on bi-directional Long Short Term Memory (BiLSTM) networks and Transformers, in which the sequences are available in full (Zayats et al., 2016; Lou and Johnson, 2020; Wang et al., 2020).", "Such offline methods are insufficient if we intend to infer meaning from repairs and edit words for disfluency detection in real-time, which is beneficial in a healthcare domain dialogue system that seeks to get a consistent and clear understanding of user statements and the user's cognitive state.", "Methods based on strictly incremental operation have been rare.", "Hough and Purver (2014) used a line of classifiers and language model features in a strongly incremental operating system without looking ahead.", "Incremental dependency parsing combined with the removal of disfluency was also studied (Rasooli and Tetreault, 2015).", "Some studies have used recurrent neural networks for live disfluency identification.", "Using a basic Elman Recurrent Neural Network (RNN), Hough and Schlangen (2015) investigated incremental processing, with an objective coupling detection accuracy with low latency.", "Language models have been used as an additional task for the identification of disfluencies, relying on the intuition that disfluencies can be detected by divergences from clean language models, with Johnson and Charniak (2004)'s noisy channel model beginning this effort.", "Shalyminov et al. (2018) made language modelling an auxiliary task to disfluency detection in a deep multi-task learning (MTL) set-up, gaining accuracy over a vanilla RNN tagger.",
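A minimal sketch of the kind of MTL set-up just described, in which language modelling is an auxiliary task to disfluency tagging. The architecture and the loss weighting lam are illustrative assumptions, not Shalyminov et al. (2018)'s exact model.

```python
# Sketch of multi-task learning: sequence tagging plus next-word prediction
# sharing one recurrent encoder, combined with a weighted loss.
import torch
import torch.nn as nn

class MTLTagger(nn.Module):
    def __init__(self, vocab_size: int, n_tags: int, dim: int = 128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.rnn = nn.LSTM(dim, dim, batch_first=True)
        self.tag_head = nn.Linear(dim, n_tags)      # disfluency tags
        self.lm_head = nn.Linear(dim, vocab_size)   # auxiliary LM task

    def forward(self, tokens):
        h, _ = self.rnn(self.emb(tokens))           # (B, T, dim)
        return self.tag_head(h), self.lm_head(h)

def mtl_loss(model, tokens, tags, lam=0.5):
    tag_logits, lm_logits = model(tokens)
    ce = nn.CrossEntropyLoss()
    tagging = ce(tag_logits.flatten(0, 1), tags.flatten())
    # LM objective: predict token t+1 from the state at token t.
    lm = ce(lm_logits[:, :-1].flatten(0, 1), tokens[:, 1:].flatten())
    return tagging + lam * lm
```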
"POS tags have also been used as an input for detecting disfluencies, showing slight increases in disfluency detection over using word values alone (Purver et al., 2018).", "While the work above operates only on transcripts pre-segmented into utterances, recent research has been performed on combining disfluency detection with utterance segmentation.", "This was done with a joint tagset of disfluency and utterance segmentation tags by Hough and Schlangen (2017), showing an improvement over the performance of the individual tasks, and Rohanian and Hough (2020) show an improvement in both tasks when framed as a multi-task learning (MTL) set-up with a Long Short-term Memory network (LSTM), also simultaneously doing POS-tagging and language modelling.", "The recent live incremental systems fall short of the accuracies achievable on pre-segmented transcripts, so there is a natural interest in using the best non-incremental sequence models and adapting them for incrementality.", "Madureira and Schlangen (2020) take up this effort in several other sequence tagging and classification tasks, showing how bidirectional encoders and Transformers can be modified to work incrementally.", "To reduce the impact of the partiality of the input, the models predict future content and wait for more rightward context.", "Dalvi et al. (2018) also use truncated inputs during the training phase of live machine translation to address the partial input sentence decoding problem that bidirectional encoders face.", "Here, we seek to add to this growing effort to investigate the trade-off of incremental performance against the final output quality of deep neural network-based language processing, applied to incremental disfluency detection.", "Disfluencies are generally assumed to have a reparandum-interregnum-repair structure in their fullest form as speech repairs (Shriberg, 1994; Meteer et al., 1995).", "A reparandum is a stretch of speech later corrected by the speaker; the corrected expression is a repair, the beginning of which is referred to as the repair onset.", "An interregnum word is a filler or a reference expression between the repair and reparandum, usually an interruption and hesitation step when the speaker expresses a repair, giving the structure as in (1).", "In the absence of reparandum and repair, the disfluency is reduced to an isolated edit term.", "A marked, lexicalised edit term such as a filled pause (uh or um) or more phrasal terms such as I mean and you know may occur.", "The identification of these elements and their structure is then the task of disfluency detection.", "The task of detecting incremental disfluencies adds to the difficulty of doing this in real-time, word-by-word, from left to right.", "Disfluency recognition is then treated as the same problem that a human processor faces with a disfluent expression: only when an interregnum is detected, or maybe even when a repair is initiated, does it become clear that the earlier content is now to be regarded as 'to be repaired',", "i.e., to be classified as a reparandum.", "Therefore, the task cannot be defined as a simple sequence labeling task in which the tags for the reparandum, interregnum, and repair phases are assigned left-to-right over words as seen in the above example; in this case, it will require the assumption that likes would be repaired, at a time when there is 
no data to make it available.", "We use a tag set that encodes the start of the reparandum only at a time when it can be inferred, primarily when the repair starts: the disfluency detection task is to tag words, as in the top line of tags in Fig. 1, as either fluent (f), an edit term (e), a repair onset word (rpS-N for a reparandum starting N words back), or a repair end word of the type repeat (rpnRep), substitution (rpnSub) or delete (rpnDel).", "To incrementalise a Transformer-based model for word-by-word disfluency detection, we devise a model built on top of a pre-trained BERT architecture (Devlin et al., 2019) with a Conditional Random Field (CRF) output architecture to tag sequences with tags such as those in the top line of Fig. 1.", "We use a BERT-based encoder and try different strategies to incrementalise the system's operation and output, using language models to predict future word sequences as described in Section 5, while maintaining BERT's non-incremental quality.", "Utterance segmentation: Our models are designed to work not only with pre-segmented data but also on raw transcripts and ASR results, where utterance segmentation is required to leverage the use of sentence-based linguistic knowledge in BERT.", "Utterance segmentation has a clear interdependence with and influence on the detection of disfluency, as disfluent restarts and repairs may be incorrectly predicted at fluent utterance boundaries without segmentation.", "In this paper, rather than performing utterance segmentation in tandem with disfluency detection, we perform it on words as they arrive in the system, as a live segmentation task, before sending the current prefix of the utterance to the disfluency detection system.", "We use the word-by-word segmentation system from Rohanian and Hough (2020), where four output tags define ranges of transcribed words or word hypotheses using a BIES tag scheme (Beginning, Inside, End, and Single) to allow for the prediction of an utterance ending.", "The tagset allows information to be captured from the context of the word to decide whether this word continues a current utterance (the '-' prefix) or starts anew (the '.' prefix), and also allows live prediction of whether the next word will continue the current utterance (the '-' suffix) or whether the current word finishes the utterance (the '.' suffix).", "An example of the scheme is shown in the second line of Fig. 1.", "CRF: We use a CRF output architecture to predict a tag for every token.", "Although this model generates predictions for the whole sequence, the labels are outputted individually.", "There are important dependencies between adjacent labels in disfluency detection, and explicit modeling of these relationships can help.", "The addition of the CRF enables the model to test for the most optimal path across all available label sequences.", "Part-of-speech tags: POS tags may enhance the identification of disfluencies in various settings.", "POS tagging helps detect disfluency structure such as the parallelism between the reparandum and repair in substitutions, as shown in the repeated IN NNP sequences in Fig. 1.",
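The tag scheme above can be decoded back into reparandum spans once a repair onset is emitted. A sketch follows; the exact rpS-N tag syntax and the example tagging are assumed for illustration, not taken from the paper's code.

```python
# Sketch: recover reparandum words from an incremental tag sequence once a
# repair onset tag "rpS-N" (reparandum starts N words back) is produced.
def reparandum_words(words, tags, t):
    assert tags[t].startswith("rpS")
    n = int(tags[t].split("-")[1])
    # Words from the reparandum start up to the onset, skipping edit terms.
    return [words[i] for i in range(t - n, t) if tags[i] != "e"]

words = ["John", "likes", "uh", "loves", "Mary"]
tags  = ["f", "f", "e", "rpS-2", "f"]   # "loves" repairs "likes"
print(reparandum_words(words, tags, 3))  # ['likes']
```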
"Word timings: We also experiment with the duration from the ending of the previous word to the ending of the current word as it enters the system, either from ground truth word transcriptions or from ASR results.", "Here we describe the different strategies we used to modify the training and live decoding methods of non-incremental models to detect speech disfluencies word-by-word incrementally.", "The general principle is to leverage high accuracy full sequence classification using BERT but deploying it on sequences including future predictions for words up to the hypothesised end of the current utterance.", "Training is performed on full sentences/utterances, but the decoder produces outputs based on partial input data at test time.", "This disparity between training and decoding can potentially affect our models' performance.", "Based on Dalvi et al. (2018), we present two methods to address this issue: chunk-based training and addM training.", "Chunk-based training: In chunk-based training, we change the training scheme by removing the ends of each sentence in the training set, simply breaking each training sentence into chunks of N tokens.", "Here we use 2 and 3 for N.", "AddM training: In addM training, we begin with the first N words of each training sentence.", "The next training instances are then generated with N+M, N+2M, N+3M, ... words until the end of the sentence is reached.", "In our experiments, we found setting N=1 and M=1 worked best.", "Constant latency: The technique of constant latency requires allowing certain 'future' words to be seen before a label for previous words is given.", "It is a form of look-ahead based on Baumann et al. (2011), in which before making the first decision with respect to previous time steps, the processor is required to wait for some correct context.", "We explore the one- or two-word contexts of our input.", "This suggests that the model generates the first label for word t after the word t+1 is seen, or the model observes words t+1 and t+2 before tagging word t.", "This has an inherent limit on the latency achievable, and we use this as a baseline incremental decoding system.", "Prophecy-based decoding: For our other decoding strategies, we use a 'prophecy'-based approach to predicting future word sequences, following the task of open-ended language generation, which, given an input text passage as context, is to produce text that constitutes a cohesive continuation (Holtzman et al., 2019).", "Inspired by Madureira and Schlangen (2020), using the GPT-2 language model (Radford et al., 2019), we first give each word as a left context and create a continuation until the end of an utterance, to create a hypothetical complete context that satisfies the requirements of the models' non-incremental structure.", "Formally, with m tokens x_1 ... x_m as our context, the task is to create the next n continuation tokens to achieve the completed sequence x_1 ... x_{m+n}.", "It is assumed that the models compute P(x_{1:m+n}) using a standard left-to-right decomposition of the text probability as in (2).", "This process is used to build the utterance continuation token-by-token using a specific decoding technique.", "Three of the most common decoding methods are used in this paper: beam search, top-k sampling, and top-p sampling.", "Example word sequence prophecies from these decoding methods are shown in Fig. 2.",
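The sampling methods named above are detailed in the passage that follows; as a companion, here is a sketch of the top-k and top-p filtering steps applied to a logit vector. The paper applies them as separate strategies; they are chained in one function here only for compactness, and k < vocabulary size is assumed.

```python
# Sketch of top-k and top-p (nucleus) filtering for sampling a next token.
import torch

def sample_filtered(logits: torch.Tensor, k: int = 50, p: float = 0.95):
    # Keep only the k most probable tokens.
    topk = torch.topk(logits, k).values
    logits = logits.masked_fill(logits < topk[-1], float("-inf"))
    # Within those, keep the smallest set whose total probability >= p.
    sorted_logits, idx = torch.sort(logits, descending=True)
    cum = torch.softmax(sorted_logits, dim=-1).cumsum(dim=-1)
    remove = cum > p
    remove[1:] = remove[:-1].clone()   # shift right: keep the token crossing p
    remove[0] = False
    logits[idx[remove]] = float("-inf")
    # Sample from the renormalized distribution over the surviving tokens.
    return torch.distributions.Categorical(logits=logits).sample()
```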
"The right-most block shows the prediction of the continuation of the word sequences as each new word in the sequence 'John likes uh loves Mary' is fed into the language model.", "Beam search: Assuming that the model gives a greater likelihood to better quality text, we are looking for a sequence with the highest probability.", "During the search, a group of stacks is used to hold hypotheses.", "Beam size N is used to manage the search space by expanding the top N hypotheses in the existing stack.", "We used beam size 10 for all the models.", "Top-k sampling: We define sampling as randomly choosing the next word based on its conditional probability distribution as in (3).", "In top-k sampling, the most probable next k words are extracted and the probability mass is redistributed among only those k words (Fan et al., 2018).", "Given a distribution P(x | x_{1:i-1}), we extract its top-k vocabulary V^(k) ⊆ V as the set of size k which maximizes \sum_{x ∈ V^(k)} P(x | x_{1:i-1}).", "After an initial investigation, we set k to 50 in all experiments.", "Top-p sampling: Rather than selecting only the most probable k words, in top-p sampling we select the smallest possible set of words whose total likelihood exceeds the probability p (Holtzman et al., 2019).", "The probability mass is then redistributed among this set of words.", "With this method, the size of the word set will dynamically adjust based on the probability distribution of the next word.", "With the distribution P(x | x_{1:i-1}), we consider its top-p vocabulary V^(p) ⊆ V as the smallest set with \sum_{x ∈ V^(p)} P(x | x_{1:i-1}) ≥ p.", "We set p = 0.95.", "We train on transcripts and test on both transcripts and ASR hypotheses.", "All models in testing have strictly word-by-word left-to-right input.", "In addition to using the latest word hypothesis as input, we train and evaluate the presented models with two kinds of additional inputs: time elapsed from the end of the previous word (hypothesis) to the current one, and the POS tag of the current word.", "Results on the development set were used to find the best model to be evaluated on the test set.", "We used the data from Hough and Schlangen (2017) for ASR hypotheses; this was generated by a free trial version of IBM's Watson Speech-To-Text service for incremental ASR.", "The service offers good quality ASR on noisy data: on our selected held-out data on Switchboard, the average WER is 26.5%.", "The Watson service, crucially for our task, does not filter out hesitation markers or disfluencies (Baumann et al., 2017).", "The service delivers results incrementally, so silence-based end-pointing is not used.", "It also outputs word timings, which are close enough to the source timings to use as features in the live version of our system.", "The word embeddings for the LSTM were initialised with 50-dimensional embeddings trained on Google News (Mikolov et al., 2013).", "The model has been implemented using Tensorflow 2.1.", "We train all models for a maximum of 50 epochs; otherwise, we stop training if there is no improvement on the best score on the validation set after 7 epochs.", "A large version of the pre-trained BERT is used with 340M parameters (24-layer blocks, 16 self-attention heads, and 1024 hidden-size) for the model.", "Table 1: Final disfluency detection accuracy results on Switchboard data. Cells are F_rm, F_rpS, F_e per setting; '-' marks values not reported:
| Input | Model | Pre-segmented transcripts (per word) | Transcripts (per word) | ASR (per 10-second window) |
| Words | STIR (HS'15/PHH'18) | 0.741/0.749, -/0.827, 0.880/- | - | - |
| Words | RNN (HS'15) | 0.689, -, 0.873 | - | - |
| Words | LSTM | 0.686, 0.771, 0.928 | 0.59, 0.678, 0.904 | -, 0.548, 0.726 |
| Words | LSTM-MTL (RH'20) | 0.737, 0.799, 0.938 | 0.629, 0.743, 0.917 | -, 0.573, 0.757 |
| Words | BERT | 0.758, 0.851, 0.960 | 0.659, 0.782, 0.947 | 0.524, 0.603, 0.812 |
| Word + Timings | LSTM | 0.681, 0.777, 0.921 | 0.623, 0.718, 0.908 | -, 0.555, 0.721 |
| Word + Timings | LSTM-MTL (RH'20) | 0.741, 0.812, 0.929 | 0.629, 0.741, 0.922 | -, 0.559, 0.751 |
| Word + Timings | BERT | 0.752, 0.842, 0.958 | 0.678, 0.791, 0.939 | 0.502, 0.594, 0.793 |
| Word + POS | STIR (HP'14/PHH'18) | 0.779/0.768, -/0.833, 0.937/- | - | - |
| Word + POS | RNN (HS'15/PHH'18) | 0.711/0.668, -/0.790, 0.902/- | - | - |
| Word + POS | LSTM joint tagset (HS'17) | - | 0.599, 0.686, 0.907 | -, 0.557, 0.726 |
| Word + POS | LSTM-MTL (SEL'18) | 0.753, 0.816, 0.919 | - | -, 0.548, - |
| Words + Timings + POS | LSTM joint tagset (HS'17) | - | 0.601, 0.719, 0.918 | -, 0.555, 0.727 |
| Words + Timings + POS | LSTM | 0.692, 0.778, 0.931 | 0.601, 0.720, 0.910 | -, 0.557, 0.727 |
| Words + Timings + POS | LSTM-MTL (RH'20) | 0.743, 0.811, 0.932 | 0.633, 0.743, 0.931 | -, 0.571, 0.757 |
| Words + Timings + POS | BERT | 0.757, 0.853, 0.958 | 0.676, 0.802, 0.944 | 0.522, 0.605, 0.809 |", "In our analysis, when fine-tuning BERT, we followed the hyper-parameters of Devlin et al. (2019).", "Since the datasets we use are tokenized, and each token has a matching tag, we adopt the directions provided by Devlin et al. (2019) to deal with the sub-tokenization of BERT: to determine its label, the scores of the first sub-token are used, and further sub-token scores are discarded.", "Data: We use standard Switchboard training data (all conversation numbers starting sw2*, sw3* in the Penn Treebank III release: 100k utterances, 650k words) and use standard held-out data (PTB III files sw4[5-9]*: 6.4k utterances, 49k words) as our validation set.", "We test on the standard test data (PTB III files sw4[0-1]*) with partial words and punctuation stripped away from all files.", "We only choose a subset of the held-out and test data for the ASR results in the assessment, whereby both channels achieve below 40 percent WER to ensure good separation; this left us with 18 dialogues in the validation data and 17 dialogues for the test data.", "We calculate F1 accuracy for repair onset detection (F_rpS), for edit term words (F_e), which includes interregna, and for reparandum detection (F_rm).", "Performing the task live, on hypotheses of speech recognition that may not be quite equivalent to the annotated gold-standard transcription, involves the use of time-based local accuracy metrics in a time window (i.e., within this time frame, has a disfluency been detected, even if not on the identical words?); we therefore measure the F1 score over 10-second windows of each speaker's channel.", "For incremental performance, we measure latency and output stability over time.", "We use the first time to detection (FTD) metric of Zwarts et al. (2010) for latency: the average latency (in number of words) before the first detection of a gold standard repair onset or edit term word.", "For stability, we evaluate the edit overhead (EO) of output labels (Baumann et al., 2011), the proportion of unnecessary edits (insertions and deletions) required to achieve the final labels produced by the model, with perfect performance being 0%.", "We compare our incrementalised BERT model against a number of existing baselines, largely from existing incremental disfluency detection systems trained and tested on the same data:", "STIR (HP'14/HS'15/PHH'18): Hough and Purver (2014)'s STrongly Incremental Repair detection (STIR) non-deep model using n-gram language model features in a pipeline of Random Forest classifiers.", "The reparandum is detected by a backward search, showing robustness for longer lengths of repair compared to deep sequence tagging models (Purver et al., 2018).",
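The edit overhead metric defined above can be sketched as follows. Counting additions and substitutions between successive label prefixes against the minimum necessary edits is one reasonable reading of the Baumann et al. (2011) definition, assumed here.

```python
# Sketch of edit overhead (EO): the share of label edits over time that were
# not strictly needed to build the final label sequence (perfect = 0%).
def edit_overhead(hypotheses):
    # hypotheses: one label prefix per time step, e.g. the model's output
    # after each incoming word.
    total = 0
    prev = []
    for cur in hypotheses:
        total += abs(len(cur) - len(prev))              # insertions/deletions
        total += sum(a != b for a, b in zip(prev, cur)) # substitutions
        prev = cur
    necessary = len(hypotheses[-1])  # each final label must be written once
    return 100 * (total - necessary) / total

# One revision ("f" -> "rpS-1" at step 3) out of four edits: EO = 25%.
print(edit_overhead([["f"], ["f", "f"], ["f", "rpS-1"], ["f", "rpS-1", "f"]]))
```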
"A state-of-the-art incremental model on pre-segmented transcripts.", "RNN (HS'15): Hough and Schlangen (2015)'s incremental disfluency detection model using the same tagset as in our model.", "Results from Purver et al. (2018) are used, which reproduced the model with some degradation in the results.", "LSTM: An LSTM version of Hough and Schlangen (2015) on pre-segmented transcripts.", "LSTM joint tagset (HS'17): Hough and Schlangen (2017)'s model, which simultaneously predicts utterance segmentation using a joint tag set of utterance segmentation tags and disfluency tags, the latter of which is the same as our own.", "This is the only other work to use word timing information and to be testable on ASR results.", "LSTM-MTL (SEL'18): Shalyminov et al. (2018)'s multi-task learning model, which tags according to our tag set but simultaneously does language modelling by predicting the probability of the current word given the history.", "Also adds ground-truth POS tags to the input.", "LSTM-MTL (RH'20): Rohanian and Hough (2020)'s multi-task learning model, which simultaneously predicts utterance segmentation, POS tags and language model probabilities, exhibiting state-of-the-art results for a strictly incremental deep model.", "The model is used as described by the authors and also here with the addition of timing information and gold standard POS information (as opposed to simultaneously predicted POS tags).", "It is also applied to ASR results as it is a suitable model to do so.", "This same model provides the automatic live utterance segmentation in our own model.", "The results in terms of the final output of our best performing incremental BERT system in the three testing regimes versus its competitors are shown in Table 1.", "We found our best model was the addM-trained model, and the best decoding strategy was using top-p sampling for predicting future words.", "Disfluency detection on transcripts: For repair detection, our system's best F_rpS score for detecting repair onsets on pre-segmented transcripts, at 0.853, beats state-of-the-art incremental systems.", "This performance degrades using automatic segmentation to 0.802, a state-of-the-art result for this setting.", "Its F_rm accuracy of 0.757 on reparandum words on pre-segmented transcripts is only beaten by the HP'14/PHH'18 model using word and POS input, making it a state-of-the-art strictly incremental deep model.", "This performance degrades to 0.678 on raw transcripts but is a state-of-the-art result for this setting.", "In terms of edit term detection, state-of-the-art detection results of 0.960 and 0.944 are achieved on the pre-segmented and unsegmented settings, improving over the existing benchmarks of HP'14 and RH'20.", "These results suggest we have achieved the aim of a strictly incremental model achieving high final accuracies.", "Disfluency detection on ASR results: Using the ASR results from HS'17 for comparison, a significant improvement can be seen over the previously reported results on F_rpS and F_e per 10-second window, improving from 0.557 to 0.605 and from 0.727 to 0.809 respectively.", "Given the previously reported best system gave strong correlations in terms of real repair rates, this is encouraging evidence that our system could be very useful in a live setting.", "The purpose of this paper was to adapt a high-performing, non-incremental model for incremental operation.", "As can be seen in Table 2 and in Fig. 
3, while our BERT model with top-p sample utterance prediction outperforms the multi-task model and vanilla LSTM model in terms of final output accuracy, its incremental output stability is slightly below its competitors, with the best edit overhead of 63% unnecessary edits versus 25% (LSTM joint tagset (HS'17)) and 42% (LSTM-MTL (RH'20)) on ASR results, meaning the output is slightly, though not severely, more jittery.", "(Experiments are reproducible from https://github.com/mortezaro/tr-disfluency.)", "Of the prophecy-based approaches, we found the top-p sampling method gave the most stable results (EO=61% with chunk training, EO=60% with addM training) and beam search gave the least stable.", "As shown in Fig. 3, while the constant latency approaches offer large advantages in EO over prophecy-based models on transcripts, that advantage disappears on ASR results, where the prophecy models generally outperform them.", "As can be seen in Table 2, there is a slight improvement in stability across all systems using the addM training regime, for both final output and incremental performance.", "In terms of latency, results are even more encouraging, with the best FTD for rpS of 0.31 words (versus 0.03 and 0.07) on transcripts, which shows a relatively short latency of detecting the repair for the first time; this suggests a responsive, sensitive system.", "We conduct an error analysis in terms of performance on different repair types and in terms of repairs with different lengths.", "Table 3 shows the performance in terms of F_rpS score on detecting repairs of the three different types: verbatim repeats, substitutions, and deletes (restarts).", "Our BERT model performs best, either jointly or uniquely, across all three types, with a gain of 0.06 over its nearest competitors for substitutions and deletes.", "Through large-scale training, the enhanced linguistic knowledge equips it to recognize the syntactic and lexical parallelism in more complex repairs while retaining high accuracy on repeats.", "Table 4: F1 of models on repairs with reparanda of different length (columns are reparandum lengths 1-6; left block: all disfluencies, right block: nested disfluencies):
| Training | Model | Reparandum length 1/2/3/4/5/6 | Nested disfluencies 1/2/3/4/5/6 |
| Standard | LSTM | .843, .675, .405, .311, .134, .131 | .747, .586, .382, .320, .110, .104 |
| Standard | MTL | .856, .683, .431, .335, .134, .131 | .763, .586, .405, .291, .110, .104 |
| Standard | BERT | .892, .716, .469, .379, .310, .187 | .818, .623, .405, .320, .130, .140 |
| Add-M | LSTM | .843, .675, .434, .334, .134, .131 | .741, .586, .382, .320, .110, .104 |
| Add-M | MTL | .851, .709, .468, .335, .134, .131 | .779, .586, .405, .291, .130, .104 |
| Add-M | BERT | .892, .719, .472, .379, .310, .187 | .833, .645, .405, .320, .130, .140 |", "Table 4 shows the degradation in performance in detecting repairs of different lengths.", "With Add-M training, the BERT model degrades less and performs (joint) best on all lengths and nested disfluencies.", "While the performance on length five repairs is considerably better than the other deep models, the 0.187 accuracy on length six repairs is what gives it a slight disadvantage compared to the HP'14 explicit backtracking system (reported as high as 0.500 in PHH'18), which likely accounts for the lower F_rm score despite the superior F_rpS score of our system.", "Our incremental GPT-2 and BERT-driven system performs well at detecting repair disfluencies on pre-segmented and unsegmented transcripts, achieving state-of-the-art results for strictly incremental repair onset detection.", "Our system is competitive at reparandum word detection and achieves state-of-the-art results in edit term detection.", "The results on ASR 
transcripts are also state-of-the-art.", "The high sequence-final performance comes at the expense of marginally increased jitter in the word-by-word output, but with sensitive and fast repair detection, on average first detecting the repair under a third of a second after the end of the repair onset word.", "These results suggest it is beginning to enjoy the best of both worlds in leveraging the rightward context which BERT uses for its high performance, while the continuation predictions from the GPT-2 model are good enough to allow good incremental performance before the true rightward context is available.", "The linguistic knowledge in the BERT model allows it to recognize parallelism in reparandum and repair phases, and the absence thereof, to increase performance on detecting substitution and delete repairs.", "This improvement to existing deep disfluency detection models, and, with appropriate use of open-ended language generation techniques with a GPT-2 language model, its good incremental performance, is consistent with a growing body of work (Heeman and Allen, 1999; Johnson and Charniak, 2004; Zwarts et al., 2010; Hough and Purver, 2014; Shalyminov et al., 2018; Rohanian and Hough, 2020) showing good language modelling can lead to good disfluency detection, as they are inherently part of the same process.", "Our system still fails to detect longer repairs compared to an explicit backtracking mechanism like Hough and Purver (2014).", "While the vanishing gradient problem is partly overcome here, the strictly left-to-right constraint on decoding puts memory limitations on any repair detection system.", "In future, we will explore efficient ways to navigate this space whilst not filtering out rarer repair forms.", "The results on ASR output show our disfluency detection system is ready for use in a live setting with a good degree of accuracy, and work is currently underway to use it to help detect a variety of different cognitive conditions, including Alzheimer's Disease, in a live diagnostic system.", "We thank the anonymous ACL-IJCNLP reviewers for their helpful comments and Matthew Purver for his continuous support and supervision on the wider project." ]
[ "abstain", "abstain", "method", "method", "result", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "abstain", "method", "objective", "method", "method", "method", "objective", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "other", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "objective", "result", "other" ]
[ "Training a referring expression comprehension (ReC) model for a new visual domain requires collecting referring expressions, and potentially corresponding bounding boxes, for images in the domain.", "While large-scale pre-trained models are useful for image classification across domains, it remains unclear if they can be applied in a zero-shot manner to more complex tasks like ReC.", "We present ReCLIP, a simple but strong zero-shot baseline that repurposes CLIP, a state-of-the-art large-scale model, for ReC.", "Motivated by the close connection between ReC and CLIP's contrastive pre-training objective, the first component of ReCLIP is a region-scoring method that isolates object proposals via cropping and blurring, and passes them to CLIP.", "However, through controlled experiments on a synthetic dataset, we find that CLIP is largely incapable of performing spatial reasoning off-the-shelf.", "Thus, the second component of ReCLIP is a spatial relation resolver that handles several types of spatial relations.", "We reduce the gap between zero-shot baselines from prior work and supervised models by as much as 29% on RefCOCOg, and on RefGTA (video game imagery), ReCLIP's relative improvement over supervised ReC models trained on real images is 8%.", "Visual referring expression comprehension (ReC) the task of localizing an object in an image given a textual referring expressionhas applications in a broad range of visual domains.", "For example, ReC is useful for guiding a robot in the real world (Shridhar et al., 2020) and also for creating natural language interfaces for software applications with visuals (Wichers et al., 2018).", "Though the task is the same across domains, the domain shift is problematic for supervised referring expression models, as shown in Figure 1: the same simple This work was done while Sanjay, Will, and Matt were affiliated with AI2.", "Collecting task-specific data in each domain of interest is expensive.", "Weakly supervised ReC (Rohrbach et al., 2016) partially addresses this issue, since it does not require the ground-truth box for each referring expression, but it still assumes the availability of referring expressions paired with images and trains on these.", "Given a large-scale pretrained vision and language model and a method for doing ReC zero-shoti.e. without any addi-5198 Figure 2: Overview of ReCLIP.", "tional trainingpractitioners could save a great deal of time and effort.", "Moreover, as pre-trained models have become more accurate via scaling (Ka-plan et al., 2020), fine-tuning the best models has become prohibitively expensiveand sometimes infeasible because the model is offered only via API, e.g. GPT-3 (Brown et al., 2020).", "Pre-trained vision and language models like CLIP (Radford et al., 2021) achieve strong zero-shot performance in image classification across visual domains (Jia et al., 2021) and in object detection (Gu et al., 2021), but the same success has not yet been achieved in tasks requiring reasoning over vision and language.", "For example, Shen et al. (2021) show that a straightforward zero-shot approach for VQA using CLIP performs poorly.", "Specific to ReC, Yao et al. 
(2021) introduce a zero-shot approach via Colorful Prompt Tuning (CPT), which colors object proposals and references the color in the text prompt to score proposals, but this has low accuracy.", "In both of these cases, the proposed zero-shot method is not aligned closely enough with the model's pre-training task of matching naturally occurring images and captions.", "In this work, we propose ReCLIP, a simple but strong new baseline for zero-shot ReC.", "ReCLIP, illustrated in Figure 2, has two key components: a method for scoring object proposals using CLIP and a method for handling spatial relations between objects.", "Our method for scoring region proposals, Isolated Proposal Scoring (IPS), effectively reduces ReC to the contrastive pre-training task used by CLIP and other models.", "Specifically, we propose to isolate individual proposals via cropping and blurring the images and to score these isolated proposals with the given expression using CLIP.", "To handle relations between objects, we first consider whether CLIP encodes the spatial information necessary to resolve these relations.", "We show through a controlled experiment on CLEVR images (Johnson et al., 2017) that CLIP and another pre-trained model, ALBEF (Li et al., 2021), are unable to perform their pre-training tasks on examples that require spatial reasoning.", "Thus, any method that solely relies on these models is unlikely to resolve spatial relations accurately.", "Consequently, we propose spatial heuristics for handling spatial relations in which an expression is decomposed into subqueries, CLIP is used to compute proposal probabilities for each subquery, and the outputs for all subqueries are combined with simple rules.", "On the standard RefCOCO/g/+ datasets (Mao et al., 2016; Yu et al., 2016), we find that ReCLIP outperforms CPT (Yao et al., 2021) by about 20%.", "Compared to a stronger GradCAM (Selvaraju et al., 2017) baseline, ReCLIP obtains better accuracy on average and has less variance across object types.", "Finally, in order to illustrate the practical value of zero-shot grounding, we also demonstrate that our zero-shot method surpasses the out-of-domain performance of state-of-the-art supervised ReC models.", "We evaluate on the RefGTA dataset (Tanaka et al., 2019), which contains images from a video game (out of domain for models trained only on real photos).", "Using ReCLIP and an object detector trained outside the target domain, we outperform UNITER-Large (Chen et al., 2020) (using the same proposals) and MDETR (Kamath et al., 2021) by an absolute 4.5% (relative improvement of 8%).", "zero-shot spatial reasoning performance, and (3) a comparison of our zero-shot ReC performance with the out-of-domain performance of state-of-the-art fully supervised ReC systems.", "2 Background: In this section, we first describe the task at hand (2.1) and introduce CLIP, the pre-trained model we primarily use (2.2).", "We then describe two existing methods for scoring region proposals using a pre-trained vision and language model: colorful prompt tuning (2.3) and GradCAM (2.4).", "In referring expression comprehension (ReC), the model is given an image and a textual referring expression describing an entity in the image.", "The goal of the task is to select the object (bounding box) that best matches the expression.", "As in much of the prior work on ReC, we assume access to a set of object proposals b_1, b_2, ..., b_n, each of which is a bounding box in the image.", "Task accuracy is measured as the percentage of 
"In this paper, we focus on the zero-shot setting in which we apply a pre-trained model to ReC without using any training data for the task.", "The zero-shot approaches that we consider are general in that the only requirement for the pre-trained model is that, when given a query consisting of an image and text, it computes a score for the similarity between the image and text.", "In this paper, we primarily use CLIP (Radford et al., 2021).", "We focus on CLIP because it was pre-trained on 400M image-caption pairs collected from the web and therefore achieves impressive zero-shot image classification performance on a variety of visual domains.", "CLIP has an image-only encoder, which is either a ResNet-based architecture (He et al., 2016) or a visual transformer (Dosovitskiy et al., 2021), and a text-only transformer.", "We mainly use the RN50x16 and ViT-B/32 versions of CLIP.", "The image encoder takes the raw image and produces an image representation $x \in \mathbb{R}^d$, and the text transformer takes the sequence of text tokens and produces a text representation $y \in \mathbb{R}^d$. (Our code is available at https://www.github.)", "In CLIP's contrastive pre-training task, given a batch of $N$ images and matching captions, each image must be matched with the corresponding text.", "The model's probability of matching image $i$ with caption $j$ is given by $\exp(x_i^\top y_j / \tau) \, / \, \sum_{k=1}^{N} \exp(x_i^\top y_k / \tau)$, where $\tau$ is a temperature hyperparameter. ($x_i$ and $y_i$ are normalized before the dot product.)",
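As a concrete illustration of the contrastive matching probability above, here is a minimal NumPy sketch. It assumes the image and text embeddings are already computed and L2-normalized; the function name and the default temperature value are ours, not part of CLIP's API.

```python
import numpy as np

def matching_probabilities(X, Y, tau=0.01):
    """Row i gives the probability of matching image i with each caption j.

    X: (N, d) image embeddings; Y: (N, d) text embeddings, both L2-normalized
    (as noted above, x_i and y_i are normalized before the dot product);
    tau is the temperature hyperparameter (0.01 is an illustrative value).
    """
    logits = (X @ Y.T) / tau                      # x_i^T y_j / tau for all pairs
    logits -= logits.max(axis=1, keepdims=True)   # for numerical stability
    exp = np.exp(logits)
    return exp / exp.sum(axis=1, keepdims=True)   # softmax over captions
```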
"We now describe two techniques from prior work for selecting a proposal using a pre-trained model.", "The first baseline from prior work that we consider is colorful prompt tuning (CPT), proposed by Yao et al. (2021): they shade proposals with different colors and use a masked language prompt in which the referring expression is followed by 'in [MASK] color'.", "The color with the highest probability from a pre-trained masked language model (MLM), VinVL (Zhang et al., 2021), is then chosen.", "In order to apply this method to models like CLIP, which provide image-text scores but do not offer an MLM, we create a version of the input image for each proposal, where the proposal is transparently shaded in red.", "Our template for the input text is '[referring expression] is in red color.'", "Since we have adapted CPT for non-MLM models, we refer to this method as CPT-adapted in the experiments.", "The second baseline from prior work that we consider is based on gradient-based visualizations, a popular family of techniques for understanding which part(s) of an input image are most important to a model's prediction on a range of computer vision tasks.", "We focus on the most popular technique in this family, GradCAM (Selvaraju et al., 2017).", "Our usage of GradCAM follows Li et al. (2021), in which GradCAM is used to perform weakly supervised referring expression comprehension using the ALBEF model.", "In our setting, for a given layer in a visual transformer, we take the layer's class-token (CLS) attention matrix $M \in \mathbb{R}^{h \times w}$.", "The spatial dimensions $h$ and $w$ are dependent on the model's architecture and are generally smaller than the input dimensions of the image.", "Then the GradCAM is computed as $G = M \odot \nabla_M L$, where $L$ is the model's output logit (the similarity score for the image-text pair) and $\odot$ denotes elementwise multiplication.", "The procedure for applying GradCAM when the visual encoder is a convolutional network is similar; in place of the attention matrix, we use the activations of the final convolutional layer.", "Next, we perform a bicubic interpolation on $G$ so that it has the same dimensions as the input image.", "Finally, we compute for each proposal $b = (x_1, y_1, x_2, y_2)$ the score $\frac{1}{A^{\alpha}} \sum_{i=x_1}^{x_2} \sum_{j=y_1}^{y_2} G[i, j]$, where $A$ is the area of the proposal and $\alpha$ is a hyperparameter, and we choose the proposal with the highest score.", "ReCLIP consists of two main components: (1) a region-scoring method that is different from CPT and GradCAM and (2) a rule-based relation resolver.", "In this section, we first describe our region-scoring method (§3.1).", "However, using controlled experiments on a synthetic dataset, we find that CLIP has poor zero-shot spatial reasoning performance (§3.2).", "Therefore, we propose a system that uses heuristics to resolve spatial relations (§3.3).", "Our proposed method, which we call isolated proposal scoring, is based on the observation that ReC is similar to the contrastive learning task with which models like CLIP are pre-trained, except that rather than selecting one out of several images to match with a given text, we must select one out of several image regions.", "Therefore, for each proposal, we create a new image in which that proposal is isolated.", "We consider two methods of isolation: cropping the image to contain only the proposal, and blurring everything in the image except for the proposal region.", "For blurring, we apply a Gaussian filter with standard deviation $\sigma$ to the image RGB values.", "Appendix A.2 provides an example of isolation by blurring.", "The score for an isolated proposal is obtained by passing it and the expression through the pre-trained model.", "To use cropping and blurring in tandem, we obtain a score $s_{crop}$ and $s_{blur}$ for each proposal and use $s_{crop} + s_{blur}$ as the final score.", "This can be viewed as an ensemble of visual prompts, analogous to Radford et al. (2021)'s ensembling of text prompts.",
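A minimal sketch of isolated proposal scoring as just described, using PIL for the cropping and blurring. `clip_score` is an assumed callable that returns the pre-trained model's image-text similarity; in Pillow's GaussianBlur, the radius parameter plays the role of the standard deviation sigma.

```python
from PIL import Image, ImageFilter

def isolate_crop(image, box):
    """Crop the image down to the proposal box (x1, y1, x2, y2)."""
    return image.crop(box)

def isolate_blur(image, box, sigma=100):
    """Blur everything except the proposal region with a Gaussian filter."""
    blurred = image.filter(ImageFilter.GaussianBlur(radius=sigma))
    blurred.paste(image.crop(box), box)  # restore the sharp proposal region
    return blurred

def ips_scores(image, boxes, expression, clip_score):
    """Final score s_crop + s_blur per proposal; clip_score(img, text) -> float."""
    return [clip_score(isolate_crop(image, b), expression)
            + clip_score(isolate_blur(image, b), expression)
            for b in boxes]
```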
"A key limitation of Isolated Proposal Scoring is that relations between objects in different proposals are not taken into account.", "For example, in Figure 2, the information about the spatial relationships among the cats is lost when the proposals are isolated.", "In order to use CLIP to decide which object has a specified relation to another object, the model's output must encode the spatial relation in question.", "Therefore, we design an experiment to determine whether a pre-trained model, such as CLIP, can understand spatial relations within the context of its pre-training task.", "We generate synthetic images using the process described for the CLEVR dataset (Johnson et al., 2017).", "These scenes include three shapes (spheres, cubes, and cylinders) and eight colors (gray, blue, green, cyan, yellow, purple, brown, red).", "In the text-pair version of our tasks, using the object attribute and position information associated with each image, we randomly select one of the pairwise relationships between objects (left, right, front, or behind) and construct a sentence fragment based on it.", "For example: 'A blue sphere to the left of a red cylinder.'", "We also write a distractor fragment that replaces the relation with its opposite.", "In this case, the distractor would be 'A blue sphere to the right of a red cylinder.'", "The task, similar to the contrastive and image-text matching tasks used to pre-train these models, is to choose the correct sentence given the image.", "As a reference point, we also evaluate on a control (non-spatial) task in which the correct text is a list of the scene's objects and the distractor text is identical except that one object is swapped with a random object not in the scene.", "For example, if the correct text is 'A blue sphere and a red cylinder', then the distractor text could be 'A blue sphere and a blue cylinder.'", "In the image-pair version of our tasks, we provide a single sentence fragment constructed as described above for the spatial and control (non-spatial) tasks and two images such that only one matches the text.", "Appendix B shows examples of these tasks.",
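The text-pair construction described above is easy to make concrete. The sketch below builds a correct fragment and its opposite-relation distractor; the templates and dictionary-based object encoding are our own simplification of the CLEVR-style generation, not the authors' code.

```python
PHRASE = {"left": "to the left of", "right": "to the right of",
          "front": "in front of", "behind": "behind"}
OPPOSITE = {"left": "right", "right": "left", "front": "behind", "behind": "front"}

def spatial_text_pair(obj_a, obj_b, relation):
    """Return (correct, distractor), e.g.
    ('A blue sphere to the left of a red cylinder.',
     'A blue sphere to the right of a red cylinder.')"""
    def fragment(rel):
        return "A {} {} {} a {} {}.".format(obj_a["color"], obj_a["shape"],
                                            PHRASE[rel],
                                            obj_b["color"], obj_b["shape"])
    return fragment(relation), fragment(OPPOSITE[relation])
```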
"CLIP's performance on these tasks is shown in Table 1. Similar results for the pre-trained model ALBEF (Li et al., 2021) are shown in Appendix D.1.", "While performance on the control task is quite good, accuracy on the spatial task is not so different from random chance (50%).", "This indicates that the model's scores for image-text pairs largely do not take spatial relations into account.", "Since CLIP lacks sensitivity to spatial relations, we propose to decompose complex expressions into simpler primitives.", "The basic primitive is a predicate applied to an object, which we evaluate using CLIP.", "The second primitive is a spatial relation between objects, for which we use heuristic rules.", "Predicates: A predicate is a textual property that the referent must satisfy.", "For example, 'the cat' and 'blue airplane' are predicates.", "We write $P(i)$ to say that object $i$ satisfies the predicate $P$.", "We model $P$ as a categorical distribution over objects, and estimate $p(i) = \Pr[P(i)]$ with the pre-trained model using isolated proposal scoring (§3.1).", "Relations: We have already discussed the importance of binary spatial relations like 'the cat to the left of the dog' for the ReC task.", "We consider seven spatial relations: left, right, above, below, bigger, smaller, and inside.", "We write $R(i, j)$ to mean that the relation $R$ holds between objects $i$ and $j$, and we use heuristics to determine the probability $r(i, j) = \Pr[R(i, j)]$.", "For example, for left, we set $r(i, j) = 1$ if the center point of box $i$ is to the left of the center point of box $j$, and $r(i, j) = 0$ otherwise.", "§C.1 describes all relation semantics.", "Superlative Relations: We also consider superlatives, which refer to an object that has some relation to all other objects satisfying the same predicate, e.g. 'leftmost dog'.", "We handle superlatives as a special case of relations where the empty second argument is filled by copying the predicate specifying the first argument.", "Thus, 'leftmost dog' effectively finds the dog that is most likely to the left of other dog(s).", "Our set of superlative relation types is the same as our set of relation types, excluding inside.", "We extract the semantic structure of an expression procedurally.", "We first use spaCy (Honnibal and Johnson, 2015) to build a dependency parse for the expression.", "As illustrated in Figure 3, we extract a semantic tree from the dependency parse, where each noun chunk becomes a node, and dependency paths between the heads of noun chunks become relations between entities based on the keywords they contain.", "See §C.2 for extraction details.", "In cases where none of our relation/superlative keywords occur in the text, we simply revert to the plain isolated proposal scoring method using the full text.", "In the tree, each node $N$ contains a predicate $P_N$ and has a set of children; an edge $(N, N')$ between $N$ and its child $N'$ corresponds to a relation $R_{N,N'}$.", "For example, as shown in Figure 3, 'a cat to the left of a dog' would be parsed as a node containing the predicate 'a cat' connected by the relation left to its child corresponding to 'a dog'.", "We define $\pi_N(i)$ as the probability that node $N$ refers to object $i$, and compute it recursively.", "For each node $N$, we first set $\pi_N(i) = p_N(i)$ and then iterate through each child $N'$ and update $\pi_N(i)$ as follows (superlatives of a node are processed after all its relations): $\pi_N(i) \leftarrow \pi_N(i) \cdot \sum_j \Pr[R_{N,N'}(i, j) \wedge P_{N'}(j)] \approx \pi_N(i) \cdot \sum_j r_{N,N'}(i, j)\, \pi_{N'}(j)$.", "The last line makes the simplifying assumption that all predicates and relations are independent.", "To compute our final output, we ensemble the distribution $\pi_{root}$ for the root node with the output of plain isolated proposal scoring (with the whole input expression) by multiplying the proposal probabilities elementwise.", "This method gives us a principled way to combine predicates ($P_N$) with spatial relational constraints ($R_{N,N'}$) for each node $N$.",
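To make the recursive computation of pi_N(i) concrete, here is a minimal NumPy sketch of the tree resolver and the final ensembling step. The Node class, the matrix encoding of the heuristic probabilities r(i, j), and the renormalization are our own framing of the update under the paper's independence assumption; superlatives are omitted for brevity.

```python
import numpy as np

class Node:
    """A node of the semantic tree: a predicate plus relations to children."""
    def __init__(self, predicate_probs, children=()):
        self.p = np.asarray(predicate_probs, dtype=float)  # p_N(i) from IPS
        self.children = children  # iterable of (relation_matrix, Node) pairs

def resolve(node):
    """pi_N(i): probability that node N refers to proposal i, computed recursively."""
    pi = node.p.copy()
    for r, child in node.children:          # r[i, j] ~ Pr[R(i, j)] from heuristics
        pi *= np.asarray(r) @ resolve(child)  # pi_N(i) *= sum_j r(i, j) * pi_N'(j)
    return pi / pi.sum()                    # treat pi_N as a categorical distribution

def final_scores(root, ips_probs):
    """Ensemble the root distribution with plain IPS by elementwise product."""
    combined = resolve(root) * np.asarray(ips_probs, dtype=float)
    return combined / combined.sum()
```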
"We compare ReCLIP to other zero-shot methods on RefCOCOg (Mao et al., 2016) and RefCOCO and RefCOCO+ (Yu et al., 2016).", "These datasets use images from MS COCO (Lin et al., 2014).", "RefCOCO and RefCOCO+ were created in a two-player game, and RefCOCO+ is designed to avoid spatial relations.", "RefCOCOg includes spatial relations and has longer expressions on average.", "For comparing zero-shot methods with the out-of-domain performance of models trained on COCO, we use RefGTA (Tanaka et al., 2019), which contains images from the Grand Theft Auto video game.", "All referring expressions in RefGTA correspond to people, and the objects (i.e. people) tend to be much smaller on average than those in RefCOCO/g/+.", "We use an ensemble of the CLIP RN50x16 and ViT-B/32 models (results for individual models are shown in Appendix G).", "We ensemble model outputs by adding together the logits from the two models elementwise before taking the softmax.", "GradCAM's hyperparameter $\alpha$ controls the effect of the proposal's area on its score.", "We select $\alpha = 0.5$ for all models based on tuning on the RefCOCOg validation set.", "We emphasize that the optimal value of $\alpha$ for a dataset depends on the size distribution of ground-truth objects.", "ReCLIP also has a hyperparameter, namely the standard deviation $\sigma$ of the Gaussian blur.", "We try a few values on the RefCOCOg validation set and choose $\sigma = 100$; as we show in Appendix E.4, isolated proposal scoring has little sensitivity to $\sigma$.", "As discussed by Perez et al. (2021), zero-shot experiments often use labeled data for model selection.", "Over the course of this work, we primarily experimented with the RefCOCOg validation set and to a lesser extent with the RefCOCO+ validation set.", "For isolated proposal scoring, the main variants explored are documented in our ablation study (§4.6).", "Other techniques that we tried, including for relation-handling, and further implementation details are given in Appendix E.",
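The two-model ensembling described above (elementwise addition of logits before the softmax) amounts to a few lines; a sketch with hypothetical argument names follows.

```python
import numpy as np

def ensemble_proposal_probs(logits_rn50x16, logits_vitb32):
    """Add the per-proposal logits of the two CLIP models, then softmax."""
    logits = np.asarray(logits_rn50x16) + np.asarray(logits_vitb32)
    logits -= logits.max()              # for numerical stability
    exp = np.exp(logits)
    return exp / exp.sum()
```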
"4.3 Results on RefCOCO/g/+: Table 2 shows results on RefCOCO, RefCOCO+, and RefCOCOg.", "ReCLIP is better than the other zero-shot methods on RefCOCOg and RefCOCO and on par with GradCAM on RefCOCO+.", "However, GradCAM has a much higher variance in its accuracy between the TestA and TestB splits of RefCOCO+ and RefCOCO.", "We note that GradCAM's hyperparameter $\alpha$, controlling the effect of proposal size, was tuned on the RefCOCOg validation set, and RefCOCOg was designed such that boxes of referents are at least 5% of the image area (Mao et al., 2016).", "In the bottom portion of Table 2, we show that when this 5% threshold, a prior on object size for this domain, is used to filter proposals for both GradCAM and ReCLIP, ReCLIP performs on par with or better than GradCAM on TestA.", "ReCLIP's spatial relation resolver helps on RefCOCOg and RefCOCO but not on RefCOCO+, which is designed to avoid spatial relations.", "Next, we evaluate on RefGTA to compare our method's performance to the out-of-domain accuracy of two state-of-the-art fully supervised ReC models: UNITER-Large (Chen et al., 2020) and MDETR (Kamath et al., 2021).", "Like ReCLIP, UNITER takes proposals as input.", "We show results using ground-truth proposals and detections from UniDet (Zhou et al., 2021), which is trained on the COCO, Objects365 (Shao et al., 2019), OpenImages (Kuznetsova et al., 2020), and Mapillary (Neuhold et al., 2017) datasets.", "Following the suggestion of the UniDet authors, we use a confidence threshold of 0.5.", "MDETR does not take proposals as input.", "Table 3 shows our results.", "For methods that take proposals (all methods except MDETR), we consider two evaluation settings using UniDet: DT-P, in which the detected proposals are filtered to only those whose predicted class label is 'person', and DT, in which all detected proposals are considered.", "ReCLIP's accuracy is more than 15% higher than the accuracy of UNITER-Large and roughly 5% more than that of MDETR.", "ReCLIP also outperforms GradCAM by about 20%, and the gap is larger when all UniDet proposals are considered.", "ReCLIP w/o relations is 1-2% better than ReCLIP in the settings with ground-truth proposals and filtered UniDet proposals.", "One possible reason for this gap is that the objects of relations in the expressions could be non-people entities.", "(UNITER requires features from the bottom-up top-down attention model (Anderson et al., 2017).", "We use https://github.com/airsplay/py-bottom-up-attention to compute the features for RefGTA.", "We trained UNITER models on RefCOCO+ and RefCOCOg using features computed from this repository.", "On the RefCOCO+ validation set, the resulting model has an accuracy roughly 0.4% less than that of a model trained and evaluated using the original features, when using ground-truth proposals.)", "When considering all UniDet proposals, the relation resolver in ReCLIP does not hurt accuracy much but also does not improve accuracy significantly; an additional challenge in this setting is that the number of proposals is dramatically higher.", "Appendix F shows qualitative examples of predictions on RefGTA.", "In order to determine how isolated proposal scoring (IPS) compares to GradCAM and CPT on other pre-trained models, we present results using ALBEF (Li et al., 2021).", "ALBEF offers two methods for scoring image-text pairs: the output used for its image-text contrastive (ITC) loss and the output used for its image-text matching (ITM) loss.", "The architecture providing the ITC output
is very similar to CLIP's: it has only a shallow interaction between the image and text modalities.", "The ITM output is given by an encoder that has deeper interactions between image and text and operates on top of the ITC encoders' output.", "Appendix D provides more details.", "The results, shown in Table 4, indicate that with the ITC output, IPS performs better than GradCAM, but with the ITM output, GradCAM performs better.", "This suggests that IPS works well across models like CLIP and ALBEF ITC (i.e. contrastively pre-trained with shallow modality interactions) but that GradCAM may be better for models with deeper interactions.", "[Figure 4(a): ReCLIP is correct, while GradCAM is incorrect.]", "IPS achieves the highest accuracy for contrastively pre-trained models like CLIP.", "Figure 4a gives intuition for this: aside from an object's attributes, many referring expressions describe the local context around an object, and IPS focuses on this local context (as well as object attributes).", "Table 5 shows that using both cropping and blurring obtains greater accuracy than either alone.", "Error Analysis and Limitations: Although ReCLIP outperforms the baselines that we consider, there is a considerable gap between it and supervised methods.", "The principal challenge in improving the system is making relation-handling more flexible.", "[Table 5: Ablation study of isolation types used to score proposals on Val splits of RefCOCOg/RefCOCO+, using detections from MAttNet (Yu et al., 2018). RefCOCOg / RefCOCO+: Crop 54.43 / 41.28; Blur 55.96 / 47.23; max(Crop, Blur) 55.76 / 44.55; Crop+Blur 57.70 / 47.43.]", "There are several object relation types that our spatial relation resolver cannot handle; for instance, those that involve counting: 'the second dog from the right'.", "Another challenge is in determining which relations require looking at multiple proposals.", "For instance, ReCLIP selects a proposal corresponding to the incorrect noun chunk in Figure 4b because the relation resolver has no rule for splitting an expression on the relation 'with'.", "Depending on the context, relations like 'with' may or may not require looking at multiple proposals, so handling them is challenging for a rule-based system.", "In the RefCOCO+ validation set, when using detected proposals, there are 75 instances for which ReCLIP answers incorrectly but ReCLIP w/o relations answers correctly.", "We categorize these instances based on their likely sources of error: 4 instances are ambiguous (multiple valid proposals); in 7 instances the parser misses the head noun chunk; in 14 instances our processing of the parse leads to omissions of text when doing isolated proposal scoring (e.g. in 'girl sitting in back', the only noun chunk is 'girl', so this is the only text used during isolated proposal scoring); and in 52 instances there is an error in the execution of the heuristic (e.g. our spatial definition of a relation does not match the relation in the instance).", "(There are 2 instances for which we mark 2 categories.)", "The final category (execution) includes several kinds of errors, some examples of which are shown in Appendix F.
5 Related Work: Referring expression comprehension. Datasets for ReC span several visual domains, including photos of everyday scenes (Mao et al., 2016; Kazemzadeh et al., 2014), video games (Tanaka et al., 2019), objects in robotic context (Shridhar et al., 2020; Wang et al., 2021), and webpages (Wichers et al., 2018).", "Spatial heuristics have been used in previous work (Moratz and Tenbrink, 2006).", "Our work is also related to Krishnamurthy and Kollar (2013), which similarly decomposes the reasoning process into a parsing step and visual execution steps, but the visual execution is driven by learned binary classifiers for each predicate type.", "In the supervised setting, prior work shows that using an external parser, as we do, leads to lower accuracy than training a language module jointly with the remainder of the model (Hu et al., 2017).", "There is a long line of work in weakly supervised ReC, where at training time, pairs of referring expressions and images are available but the ground-truth bounding boxes for each expression are not (Rohrbach et al., 2016; Liu et al., 2019; Zhang et al., 2018, 2020; Sun et al., 2021).", "Our setting differs from the weakly supervised setting in that the model is not trained at all on the ReC task.", "Sadhu et al. (2019) discuss a zero-shot setting different from ours in which novel objects are seen at test time, but the visual domain stays the same.", "Pre-trained vision and language models: Early pre-trained vision and language models (Tan and Bansal, 2019; Lu et al., 2019; Chen et al., 2020) used a cross-modal transformer (Vaswani et al., 2017) and pre-training tasks like masked language modeling, image-text matching, and image feature regression.", "By contrast, CLIP and similar models (Radford et al., 2021; Jia et al., 2021) use separate image and text transformers and a contrastive pre-training objective.", "Recent hybrid approaches augment CLIP's architecture with a multi-modal transformer (Li et al., 2021; Zellers et al., 2021).", "Zero-shot application of pre-trained models: Models pre-trained with the contrastive objective have exhibited strong zero-shot performance in image classification tasks (Radford et al., 2021; Jia et al., 2021).", "Gu et al. (2021) use CLIP to classify objects by computing scores for class labels with cropped proposals.", "Our IPS is different in that it isolates proposals by both cropping and blurring.", "Shen et al. (2021) show that a simple zero-shot application of CLIP to visual question answering performs almost on par with random chance.", "Yao et al.
(2021) describe a zero-shot method for ReC based on a pre-trained masked language model (MLM); we show that their zero-shot results and a version of their method adapted for models pre-trained to compute image-text scores (rather than MLM) are substantially worse than isolated proposal scoring and GradCAM.", "We present ReCLIP, a zero-shot method for referring expression comprehension (ReC) that decomposes an expression into subqueries, uses CLIP to score isolated proposals against these subqueries, and combines the outputs with spatial heuristics.", "ReCLIP outperforms zero-shot ReC approaches from prior work and also performs well across visual domains: ReCLIP outperforms state-of-the-art supervised ReC models, trained on natural images, when evaluated on RefGTA.", "We also find that CLIP has low zero-shot spatial reasoning performance, suggesting the need for pre-training methods that account more for spatial reasoning.", "Recent work has shown that pre-trained vision and language models suffer from biases such as gender bias (Ross et al., 2021; Srinivasan and Bisk, 2021).", "Agarwal et al. (2021) provide evidence that CLIP has racial and other biases, which makes sense since CLIP was trained on data collected from the web and not necessarily curated carefully.", "Therefore, we do not advise deploying our system directly in the real world immediately.", "Instead, practitioners interested in this system should first perform analysis to measure its biases based on previous work and attempt to mitigate them.", "We also note that our work relies heavily on a pre-trained model whose pre-training required a great deal of energy, which likely had negative environmental effects.", "That being said, our zero-shot method does not require training a new model and in that sense could be more environmentally friendly than supervised ReC models (depending on the difference in the cost of inference).", "We thank the Berkeley NLP group and Medhini Narasimhan for helpful comments.", "We thank Michael Schmitz for help with AI2 infrastructure.", "This work was supported in part by DoD, including DARPA's LwLL and/or SemaFor programs, and Berkeley Artificial Intelligence Research (BAIR) industrial alliance programs.", "Sameer Singh was supported in part by the National Science Foundation grant #IIS-1817183 and in part by the DARPA MCS program under Contract No.", "N660011924033 with the United States Office Of Naval Research." ]
[ "abstain", "abstain", "method", "abstain", "result", "abstain", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "objective", "objective", "result", "abstain", "objective", "result", "abstain", "objective", "method", "result", "abstain", "objective", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "other", "other", "other", "other" ]
[ "The success of neural networks on a diverse set of NLP tasks has led researchers to question how much these networks actually know about natural language.", "Probes are a natural way of assessing this.", "When probing, a researcher chooses a linguistic task and trains a supervised model to predict annotations in that linguistic task from the network's learned representations.", "If the probe does well, the researcher may conclude that the representations encode knowledge related to the task.", "A commonly held belief is that using simpler models as probes is better; the logic is that simpler models will identify linguistic structure , but not learn the task itself .", "We propose an information-theoretic operationalization of probing as estimating mutual information that contradicts this received wisdom: one should always select the highest performing probe one can, even if it is more complex, since it will result in a tighter estimate, and thus reveal more of the linguistic information inherent in the representation.", "The experimental portion of our paper focuses on empirically estimating the mutual information between a linguistic property and BERT, comparing these estimates to several baselines.", "We evaluate on a set of ten typologically diverse languages often underrepresented in NLP researchplus English totalling eleven languages.", "Our implementation is available in https://github.com/ rycolab/info-theoretic-probing .", "Neural networks are the backbone of modern state-of-the-art natural language processing (NLP) systems.", "One inherent by-product of training a neural network is the production of real-valued representations.", "Many speculate that these representations encode a continuous analogue of discrete linguistic properties, e.g., part-of-speech tags, due to the networks' impressive performance on many NLP tasks (Belinkov et al., 2017).", "As a result of this speculation, one common thread of research focuses on the construction of probes , i.e., supervised models that are trained to extract the linguistic properties directly (Belinkov et al., 2017; Con-neau et al., 2018; Peters et al., 2018b; Zhang and Bowman, 2018; Naik et al., 2018; Tenney et al., 2019).", "A syntactic probe, then, is a model for extracting syntactic properties, such as part of speech, from the representations (Hewitt and Liang, 2019).", "In this work, we question what the goal of probing for linguistic properties ought to be.", "Informally, probing is often described as an attempt to discern how much information representations encode about a specific linguistic property.", "We make this statement more formal: We assert that the natural operationalization of probing is estimating the mutual information (Cover and Thomas, 2012) between a representation-valued random variable and a linguistic propertyvalued random variable.", "This operationalization gives probing a clean, information-theoretic foundation, and allows us to consider what probing actually means.", "Our analysis also provides insight into how to choose a probe family: We show that choosing the highest-performing probe, independent of its complexity, is optimal for achieving the best estimate of mutual information (MI).", "This contradicts the received wisdom that one should always select simple probes over more complex ones (Alain and Ben-gio, 2017; Liu et al., 2019; Hewitt and Manning, 2019).", "In this context, we also discuss the recent work of Hewitt and Liang (2019) who proposes selectivity as a criterion for choosing families of 
probes.", "Hewitt and Liang (2019) defines selectivity as the performance difference between a probe on the target task and a control task, writing [t]he selectivity of a probe puts linguistic task accuracy in context with the probe's capacity to memorize from word types.", "They further ponder: when a probe achieves high accuracy on a linguistic task using a representation, can we conclude that the representation encodes linguistic structure, or has the probe just learned the task?", "Information-theoretically, there is no difference between learning the task and probing for linguistic structure, as we will show; thus, it follows that one should always employ the best possible probe for the task without resorting to artificial constraints.", "In the experimental portion of the paper, we empirically analyze word-level part-of-speech labeling, a common syntactic probing task (Hewitt and Liang, 2019; Sahin et al., 2019), within our MI operationalization.", "Working on a typologically diverse set of languages (Basque, Czech, English, Finnish, Indonesian, Korean, Marathi, Tamil, Telugu, Turkish and Urdu), we show that only in five of these eleven languages do we recover higher estimates of mutual information between part-of-speech tags and BERT (Devlin et al., 2019), a common contextualized embedder, than from a control.", "These modest improvements suggest that most of the information needed to tag part-of-speech well is encoded at the lexical level, and does not require sentential context.", "Put more simply, words are not very ambiguous with respect to part of speech, a result known to practitioners of NLP (Garrette et al., 2013).", "We interpret this to mean that part-of-speech labeling is not a very informative probing task.", "We further investigate how BERT fares in dependency labeling, as analysed by Tenney et al. 
(2019).", "In this task, estimates based on BERT return more information than a type-level embedding in all analysed languages.", "However, our MI estimates still only show that BERT contains at most 12% more information than the control.", "We also remark that operationalizing probing information-theoretically gives us a simple, but stunning result: contextual word embeddings, e.g., BERT (Devlin et al., 2019) and ELMo (Peters et al., 2018a), contain the same amount of information about the linguistic property of interest as the original sentence.", "This follows from the data-processing inequality under a very mild assumption.", "What this suggests is that, in a certain sense, probing for linguistic properties in representations may not be a well grounded enterprise at all.", "It also highlights the need to more formally define ease of extraction .", "Following Hewitt and Liang (2019), we consider probes that examine syntactic knowledge in contextualized embeddings.", "These probes only consider a token's embedding in isolation, and try to perform the task using only that information.", "Specifically, in this work, we consider part-of-speech (POS) and dependency labeling: determining a word's part of speech in a given sentence and the dependency relation for a pair of tokens joined by a dependency arc.", "Say we wish to determine whether the word love is a NOUN or a VERB .", "This task requires the sentential context for success.", "As an example, consider the utterance love is blind where, only with the context, is it clear that love is a NOUN .", "Thus, to do well on this task, the contextualized embeddings need to encode enough about the surrounding context to correctly guess the POS.", "Analogously, we need the whole sentence to know that love is the NOMINAL SUBJECT .", "Whereas in the sentence greed can blind love, love is the DIRECT OBJECT .", "Let S be a random variable ranging over all possible sequences of words.", "For the sake of this paper, we assume the vocabulary V is finite and, thus, the values S can take are in V .", "We write s S as s = s 1 s | s | for a specific sentence, where each s i V is a specific token in the sentence at the position i Z + .", "We also define the random variable W that ranges over the vocabulary V .", "We define both a sentence-level random variable S and a word type-level random variable W since each will be useful in different contexts during our exposition.", "Next, let T be a random variable whose possible values are the analyses t that we want to consider for token s i in its sentential context, s = s 1 s i s | s | .", "In the discussion, we focus on predicting the part-of-speech tag of the i th word s i , but the same results apply to the dependency label of an edge between two words.", "We denote the set of values T can take as the set T .", "Finally, let R be a representation-valued random variable for a token s i derived from the entire sentence s .", "We write r R d for a value of R .", "While any given value r is a continuous vector, there are only a countable number of values R can take.", "1 To see this, note there are only a countable number of sentences in V .", "Next, we assume there exists a true distribution p ( t, s , i ) over analyses t (elements of T ), sentences s (elements of V ), and positions i (elements of Z + ).", "Note that the conditional distribution p ( t | s , i ) gives us the true distribution over analyses t 1 In this work, we ignore the fact that the floating points have precision constraints in practice.", "for 
the $i$-th word token in the sentence $s$.", "We will augment this distribution such that $p$ is additionally a distribution over $r$, i.e., $p(r, t, s, i) = \delta(r \mid s, i)\, p(t, s, i)$ (1), where we define the augmentation as $\delta(r \mid s, i) = \mathbb{1}\{r = \mathrm{BERT}(s)_i\}$ (2).", "Since contextual embeddings are a deterministic function of a sentence $s$, the augmented distribution in eq. (1) has no more randomness than the original; its entropy is the same.", "We assume the values of the random variables defined above are distributed according to this (unknown) $p$.", "While we do not have access to $p$, we assume the data in our corpus were drawn according to it.", "Note that $W$, the random variable over possible word types, is distributed according to $p(w) = \sum_{s \in V^*} \sum_{i=1}^{|s|} \delta(w \mid s, i)\, p(s, i)$ (3), where we define the deterministic distribution $\delta(w \mid s, i) = \mathbb{1}\{s_i = w\}$ (4).", "2.2 Probing as Mutual Information: The task of supervised probing is an attempt to ascertain how much information a specific representation $r$ tells us about the value of $t$.", "This is naturally operationalized as the mutual information, a quantity from information theory: $\mathrm{I}(T; R) = \mathrm{H}(T) - \mathrm{H}(T \mid R)$ (5),", "where we define the entropy, which is constant with respect to the representations, as $\mathrm{H}(T) = -\sum_{t \in \mathcal{T}} p(t) \log p(t)$ (6),", "and we define the conditional entropy as $\mathrm{H}(T \mid R) = \int p(r)\, \mathrm{H}(T \mid R = r)\, \mathrm{d}r = \sum_{s \in V^*} \sum_{i=1}^{|s|} p(s, i)\, \mathrm{H}(T \mid R = \mathrm{BERT}(s)_i)$ (7), where the point-wise conditional entropy inside the sum is defined as $\mathrm{H}(T \mid R = r) = -\sum_{t \in \mathcal{T}} p(t \mid r) \log p(t \mid r)$ (8).", "Again, we will not know any of the distributions required to compute these quantities; the distributions in the formulae are marginals and conditionals of the true distribution discussed in eq. (1).", "The desired conditional entropy, $\mathrm{H}(T \mid R)$, is not readily available, but with a model $q(t \mid r)$ in hand, we can upper-bound it by measuring their empirical cross-entropy: $\mathrm{H}_q(T \mid R) = \mathrm{H}(T \mid R) + \mathbb{E}_{r \sim p(\cdot)}\, \mathrm{KL}(p(\cdot \mid r)\, \|\, q(\cdot \mid r))$ (9),", "where $\mathrm{H}_q(T \mid R)$ is the cross-entropy we obtain by using $q$ to get this estimate.", "Since the KL divergence is always positive, we may lower-bound the desired mutual information: $\mathrm{I}(T; R) := \mathrm{H}(T) - \mathrm{H}(T \mid R) \geq \mathrm{H}(T) - \mathrm{H}_q(T \mid R)$ (10).", "This bound gets tighter the more similar, in the sense of the KL divergence, $q(\cdot \mid r)$ is to the true distribution $p(\cdot \mid r)$.", "Bigger Probes are Better.", "If we accept mutual information as a natural operationalization for how much representations encode a target linguistic task (§2.2), the best estimate of that mutual information is the one where the probe $q(t \mid r)$ is best at the target task.", "In other words, we want the best probe $q(t \mid r)$ such that we get the tightest bound to the actual distribution $p(t \mid r)$.", "This paints the question posed in Hewitt and Liang (2019), who write 'when a probe achieves high accuracy on a linguistic task using a representation, can we conclude that the representation encodes linguistic structure, or has the probe just learned the task?', as a false dichotomy.",
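Equation (10) above translates directly into a plug-in estimator once a probe q is trained. The sketch below assumes q_probs[i] is the probe's predicted distribution over tags for token i and labels[i] is the gold tag index; the empirical average it uses for the cross-entropy anticipates the estimator formalized as eq. (16) below, and estimating H(T) from empirical tag counts is our own plug-in simplification.

```python
import numpy as np

def empirical_cross_entropy(q_probs, labels):
    """H_q(T | R) ~= -(1/N) sum_i log q(t_i | r_i), in nats."""
    return -np.mean([np.log(q_probs[i][t]) for i, t in enumerate(labels)])

def mi_lower_bound(q_probs, labels, num_tags):
    """I(T; R) >= H(T) - H_q(T | R), cf. eq. (10)."""
    counts = np.bincount(labels, minlength=num_tags)
    p = counts / counts.sum()
    h_t = -np.sum(p[p > 0] * np.log(p[p > 0]))   # plug-in estimate of H(T)
    return h_t - empirical_cross_entropy(q_probs, labels)
```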
"From an information-theoretic view, we will always prefer the probe that does better at the target task, since there is no difference between learning a task and the representations encoding the linguistic structure.", "To place the performance of a probe in perspective, Hewitt and Liang (2019) develops the notion of a control task.", "Inspired by this, we develop an analogue we term control functions, which are functions of the representation-valued random variable $R$.", "Similar to Hewitt and Liang (2019)'s control tasks, the goal of a control function $c(\cdot)$ is to place the mutual information $\mathrm{I}(T; R)$ in the context of a baseline that the control function encodes.", "Control functions have their root in the data-processing inequality (Cover and Thomas, 2012), which states that, for any function $c(\cdot)$, we have $\mathrm{I}(T; R) \geq \mathrm{I}(T; c(R))$ (11).", "In other words, information can only be lost by processing data.", "A common adage associated with this inequality is 'garbage in, garbage out.' 3.1 Type-Level Control Functions: We focus on type-level control functions in this paper.", "These functions have the effect of decontextualizing the embeddings, being related to the common trend of analyzing probe results in comparison to input-layer embeddings (Belinkov and Glass, 2017; Liu et al., 2019; Hewitt and Manning, 2019; Tenney et al., 2019).", "Such functions allow us to inquire how much the contextual aspect of the contextual embeddings helps the probe perform the target task.", "To show that we may map from contextual embeddings to the identity of the word type, we need the following assumption.", "Assumption 1. Every contextualized embedding is unique, i.e., for any pair of sentences $s, s' \in V^*$, we have $(s \neq s') \lor (i \neq j) \Rightarrow \mathrm{BERT}(s)_i \neq \mathrm{BERT}(s')_j$ for all $i \in \{1, \ldots, |s|\}$ and $j \in \{1, \ldots, |s'|\}$.", "We note that Assumption 1 is mild.", "Contextualized word embeddings map words (in their context) to $\mathbb{R}^d$, which is an uncountably infinite space.", "However, there are only a countable number of sentences, which implies only a countable number of sequences of real vectors in $\mathbb{R}^d$ that a contextualized embedder may produce.", "The event that any two embeddings would be the same across two distinct sentences is infinitesimally small. (Indeed, even if we sampled every embedding randomly from a $d$-dimensional Gaussian, the probability that we would ever sample the same real vector is zero.)", "Assumption 1 yields the following corollary.", "Corollary 1. There exists a function $\mathrm{id} : \mathbb{R}^d \to V$ that maps a contextualized embedding to its word type.", "The function $\mathrm{id}$ is not a bijection since multiple embeddings will map to the same type.", "Using Corollary 1, we can show that any non-contextualized word embedding will contain no more information than a contextualized word embedding.", "More formally, we do this by constructing a look-up function $e : V \to \mathbb{R}^d$ that maps a word to a word embedding.", "This embedding may be one-hot, randomly generated ahead of time, or the output of a data-driven embedding method, e.g. fastText (Bojanowski et al., 2017).", "We can then construct a control function as the composition of the look-up function $e$ and the function $\mathrm{id}$.",
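Once a corpus pairs each token with its word type (the word type plays the role of id(r) in practice), a type-level control function c = e o id reduces to a dictionary lookup. The sketch below shows a one-hot instantiation; the names and the zero-vector fallback for unseen types are our assumptions.

```python
import numpy as np

def build_onehot_lookup(vocab):
    """e_onehot: assign each word type a distinct one-hot vector."""
    eye = np.eye(len(vocab))
    return {w: eye[i] for i, w in enumerate(vocab)}

def control_representations(word_types, lookup):
    """Apply c = e o id: map each token's word type to its type-level embedding."""
    d = len(next(iter(lookup.values())))
    return np.stack([lookup.get(w, np.zeros(d)) for w in word_types])
```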
"Using the data-processing inequality, we can prove that in a word-level prediction task, any non-contextual (type-level) word embedding will contain no more information than a contextualized (token-level) one, such as BERT and ELMo.", "Specifically, we have $\mathrm{I}(T; R) \geq \mathrm{I}(T; \mathrm{id}(R)) = \mathrm{I}(T; W) \geq \mathrm{I}(T; e(W))$ (12).", "This result is intuitive and, perhaps, trivial: context matters, information-theoretically.", "However, it gives us a principled foundation by which to measure the effectiveness of probes, as we will show in §3.2.", "We will now quantify how much a contextualized word embedding knows about a task with respect to a specific control function $c(\cdot)$.", "We term how much more information the contextualized embeddings have about a task than a control variable the gain, $\mathcal{G}$, which we define as $\mathcal{G}(T, R, c) = \mathrm{I}(T; R) - \mathrm{I}(T; c(R)) = \mathrm{H}(T \mid c(R)) - \mathrm{H}(T \mid R) \geq 0$ (13).", "The gain function will be our method for measuring how much more information contextualized representations have over a controlled baseline, encoded as the function $c$.", "We will empirically estimate this value in §6.", "Interestingly enough, the gain has a straightforward interpretation.", "Proposition 1. The gain function is equal to the following conditional mutual information: $\mathrm{I}(T; R \mid c(R)) = \mathcal{G}(T, R, c)$ (14).", "(Note that although the result in eq. (12) holds in theory, in practice the functions $\mathrm{id}$ and $e(\cdot)$ might be arbitrarily hard to estimate.", "This is discussed at length in §4.3.)", "Proof. $\mathrm{I}(T; R \mid c(R)) := \mathrm{I}(T; R) - \mathrm{I}(T; R; c(R)) = \mathrm{I}(T; R) - \mathrm{I}(T; c(R)) = \mathcal{G}(T, R, c)$. The jump from the first to the second equality follows since $R$ encodes, by construction, all the information about $T$ provided by $c(R)$.", "Proposition 1 gives us a clear understanding of the quantity we wish to estimate: It is how much information about a task is encoded in the representations, given some control knowledge.", "If properly designed, this control transformation will remove information from the probed representations.", "The gain, as defined in eq. (13), is intractable to compute.", "In this section we derive a pair of variational bounds on $\mathcal{G}(T, R, e)$, one upper and one lower.", "To approximate the gain, we will simultaneously minimize an upper and maximize a lower bound on eq. (13).", "We begin by approximating the gain in the following manner: $\mathcal{G}(T, R, e) \approx \mathrm{H}_{q_2}(T \mid c(R)) - \mathrm{H}_{q_1}(T \mid R) =: \mathcal{G}_{q}(T, R, e)$ (15); these cross-entropies can be empirically estimated.", "We will assume access to a corpus $\{(t_i, r_i)\}_{i=1}^{N}$ that is human-annotated for the target linguistic property; we further assume that these are samples $(t_i, r_i) \sim p(\cdot, \cdot)$ from the true distribution.", "This yields a second approximation that is tractable: $\mathrm{H}_q(T \mid R) \approx -\frac{1}{N} \sum_{i=1}^{N} \log q(t_i \mid r_i)$ (16).", "This approximation is exact in the limit $N \to \infty$ by the law of large numbers.", "We note the approximation given in eq. (15) may be either positive or negative, and its estimation error follows from eq. (9): $\epsilon = \mathbb{E}_{r \sim p(\cdot)}\, \mathrm{KL}(p(\cdot \mid r)\, \|\, q_1(\cdot \mid r)) - \mathbb{E}_{r \sim p(\cdot)}\, \mathrm{KL}(p(\cdot \mid c(r))\, \|\, q_2(\cdot \mid c(r))) = \mathrm{KL}_{q_1}(T, R) - \mathrm{KL}_{q_2}(T, c(R))$ (17), where we abuse the KL notation to simplify the equation.", "This is an undesired behavior since we know the gain itself is non-negative by the data-processing inequality, but we have yet to devise a remedy.",
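Putting eqs. (15) and (16) together, the estimated gain is a difference of held-out cross-entropies from two independently trained probes. The sketch below leaves train_probe and cross_entropy abstract, since the probe family is specified later in the paper; all names here are ours.

```python
def estimated_gain(train_probe, cross_entropy,
                   train_reps, train_ctrl_reps, train_labels,
                   test_reps, test_ctrl_reps, test_labels):
    """G_q(T, R, e) ~= H_{q2}(T | c(R)) - H_{q1}(T | R), cf. eq. (15).

    train_probe(X, y) fits a classifier q(t | r); cross_entropy(q, X, y)
    evaluates eq. (16) on held-out data. The two probes share no
    parameters, so they can be trained independently.
    """
    q1 = train_probe(train_reps, train_labels)        # probe on contextual reps
    q2 = train_probe(train_ctrl_reps, train_labels)   # probe on control reps
    h_q1 = cross_entropy(q1, test_reps, test_labels)
    h_q2 = cross_entropy(q2, test_ctrl_reps, test_labels)
    return h_q2 - h_q1   # may come out negative; see the error term in eq. (17)
```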
"We justify the approximation in eq. (15) with a pair of variational bounds.", "The following two corollaries are a result of Theorem 2 in App. A.", "Corollary 2. We have the following upper-bound on the gain: $\mathcal{G}(T, R, e) \leq \mathcal{G}_{q}(T, R, e) + \mathrm{KL}_{q_1}(T, R)$ (18).", "Corollary 3. We have the following lower-bound on the gain: $\mathcal{G}(T, R, e) \geq \mathcal{G}_{q}(T, R, e) - \mathrm{KL}_{q_2}(T, c(R))$ (19).", "The conjunction of Corollary 2 and Corollary 3 suggests a simple procedure for finding a good approximation: We choose $q_1(\cdot \mid r)$ and $q_2(\cdot \mid c(r))$ so as to minimize eq. (18) and maximize eq. (19), respectively.", "These distributions contain no overlapping parameters, by construction, so these two optimization routines may be performed independently.", "We will optimize both with a gradient-based procedure, discussed in §6.", "In §3, we developed an information-theoretic framework for thinking about probing contextual word embeddings for linguistic structure.", "However, we now cast doubt on whether probing makes sense as a scientific endeavour.", "We prove in §4.1 that contextualized word embeddings, by construction, contain no more information about a word-level syntactic task than the original sentence itself.", "Nevertheless, we do find a meaningful scientific interpretation of control functions.", "We expound upon this in §4.2, arguing that control functions are useful, not for understanding representations, but rather for understanding the influence of sentential context on word-level syntactic tasks, e.g., labeling words with their part of speech.", "Corollary 4. It directly follows from Assumption 1 that BERT is a bijection between sentences $s$ and sequences of embeddings $\langle r_1, \ldots, r_{|s|} \rangle$.", "As BERT is a bijection, it has an inverse, which we will denote as $\mathrm{BERT}^{-1}$.", "Theorem 1. $\mathrm{I}(T; S) = \mathrm{I}(T; \mathrm{BERT}(S))$.", "Proof. $\mathrm{I}(T; S) \geq \mathrm{I}(T; \mathrm{BERT}(S)) \geq \mathrm{I}(T; \mathrm{BERT}^{-1}(\mathrm{BERT}(S))) = \mathrm{I}(T; S)$ (20). This implies $\mathrm{I}(T; S) = \mathrm{I}(T; \mathrm{BERT}(S))$.", "This is not a BERT-specific result; it rests on the fact that the data-processing inequality is tight for bijections.", "While Theorem 1 is a straightforward application of the data-processing inequality, it has deeper ramifications for probing.", "It means that if we search for syntax in the contextualized word embeddings of a sentence, we should not expect to find any more syntax than is present in the original sentence.", "In a sense, Theorem 1 is a cynical statement: under our operationalization, the endeavour of finding syntax in contextualized embeddings of sentences is nonsensical.", "This is because, under Assumption 1, we know the answer a priori: the contextualized word embeddings of a sentence contain exactly the same amount of information about syntax as does the sentence itself.", "Information-theoretically, the interpretation of control functions is also interesting.", "As previously noted, our interpretation of control functions in this work does not provide information about the representations themselves.", "Indeed, the same reasoning used in Corollary 1 can be used to devise a function $\mathrm{id}_s(r)$ which maps a contextual representation of a token back to its sentence.", "For a type-level control function $c$, by the data-processing inequality, we have that $\mathrm{I}(T; W) \geq \mathrm{I}(T; c(R))$.", "Consequently, we can get an upper-bound on how much information we can get out of a decontextualized representation.", "If we assume we have perfect probes, then we get that the true gain function is $\mathrm{I}(T; S) - \mathrm{I}(T; W) = \mathrm{I}(T; S \mid W)$.", "This quantity is
interpreted as the amount of knowledge we gain about the word-level task $T$ by knowing $S$ (i.e., the sentence) in addition to $W$ (i.e., the word type).", "Therefore, a perfect probe provides insights about language and not about the actual representations.", "(Actually, Hewitt and Liang likely had an intuition about this in mind when they wrote that '[a] sufficiently expressive probe with enough training data could learn any task on top of it' (Hewitt and Liang, 2019).)", "We do acknowledge another interpretation of the work of Hewitt and Liang (2019) inter alia; BERT makes the syntactic information present in an ordered sequence of words more easily extractable.", "However, ease of extraction is not a trivial notion to operationalize, and indeed, we know of no attempt to do so (Xu et al. (2020) is a possible exception); it is certainly more complex to determine than the number of layers in a multi-layer perceptron (MLP).", "Indeed, an MLP with a single hidden layer can represent any function over the unit cube, with the caveat that we may need a very large number of hidden units (Cybenko, 1989).", "Although for perfect probes the above results should hold, in practice $\mathrm{id}(\cdot)$ and $c(\cdot)$ may be hard to approximate.", "Furthermore, if these functions were to be learned, they might require an unreasonably large dataset.", "Learning a random embedding control function, for example, would require a dataset containing all words in the vocabulary $V$; in an open-vocabulary setting, an infinite dataset would be required!", "Better representations should make their respective probes easily learnable, and consequently their encoded information is more accessible (Voita and Titov, 2020).", "We suggest that future work on probing should focus on operationalizing ease of extraction more rigorously, even though we do not attempt this ourselves.", "As previously argued by Saphra and Lopez (2019, §5), the advantage of simple probes is that they may reveal something about the structure of the encoded information, i.e., is it structured in such a way that it can be easily taken advantage of by downstream consumers of the contextualized embeddings?", "Many researchers who are interested in less complex probes have, either implicitly or explicitly, had this in mind.", "We agree with Hewitt and Liang (2019), and with both Zhang and Bowman (2018) and Tenney et al. (2019), that we should have controlled baselines when probing for linguistic properties.", "However, we disagree with parts of their methodology for constructing control tasks.", "We present these disagreements here.", "Hewitt and Liang (2019) introduces control tasks to evaluate the effectiveness of probes.", "We draw
inspiration from this technique, as evidenced by our introduction of control functions.", "However, we take issue with the suggestion that controls should have 'structure' and 'randomness', to use the terminology from Hewitt and Liang (2019).", "They define structure as 'the output for a word token is a deterministic function of the word type.'", "This means that they are stripping the language of ambiguity with respect to the target task.", "In the case of part-of-speech labeling, 'love' would either be a NOUN or a VERB in a control task, never both: this is a problem.", "The second feature of control tasks is randomness, i.e., 'the output for each word type is sampled independently at random.'", "In conjunction, structure and randomness may yield a relatively trivial task that does not look like natural language.", "What is more, there is a closed-form solution for an optimal, retrieval-based probe that has zero learned parameters: If a word type appears in the training set, return the label with which it was annotated there; otherwise, return the most frequently occurring label across all words in the training set.", "This probe will achieve an accuracy that is 1 minus the out-of-vocabulary rate (the number of tokens in the test set that correspond to novel types divided by the number of tokens) times the percentage of tags in the test set that do not correspond to the most frequent tag (the error rate of the guess-the-most-frequent-tag classifier).", "In short, the best model for a control task is a pure memorizer that guesses the most frequent tag for out-of-vocabulary words.", "Hewitt and Liang (2019) proposes that probes should be optimized to maximize accuracy and selectivity.", "Recall selectivity is given by the distance between the accuracy on the original task and the accuracy on the control task using the same architecture.", "Given their characterization of control tasks, maximising selectivity leads to the selection of a model that is bad at memorization.", "But why should we punish memorization?", "Much of linguistic competence is about generalization; however, memorization also plays a key role (Fodor et al., 1974; Nooteboom et al., 2002; Fromkin et al., 2018), with word learning (Carey, 1978) being an obvious example.", "Indeed, maximizing selectivity as a criterion for creating probes seems to artificially disfavor this property.", "Hewitt and Liang (2019) acknowledges that for", "the more complex task of dependency edge prediction, an MLP probe is more accurate and, therefore, preferable despite its low selectivity.", "However, they offer two counter-examples where the less selective neural probe exhibits drawbacks when compared to its more selective, linear counterpart.", "We believe both examples are a symptom of using a simple probe rather than of selectivity being a useful metric for probe selection.", "First, Hewitt and Liang (2019, §3.6) point out that, in their experiments, the MLP-1 model frequently mislabels words with suffix -s as NNPS on the POS labeling task.", "They present this finding as a possible example of a less selective probe being less faithful in representing what linguistic information the model has learned.", "Our analysis leads us to believe that, on the contrary, this shows that one should be using the best possible probe to minimize the chance of misinterpreting its encoded information.", "Since more complex probes achieve higher accuracy on the task, as evidenced by the findings of Hewitt and Liang (2019), we believe that
the overall trend of misinterpretation is higher for the probes with higher selectivity.", "The same applies for the second example in Hewitt and Liang (2019, §4.2), where a less selective probe appears to be less faithful.", "The paper shows that the representations on ELMo's second layer fail to outperform its word-type ones (layer zero) on the POS labeling task when using the MLP-1 probe.", "While the paper argues this is evidence for selectivity being a useful metric in choosing appropriate probes, we argue that this demonstrates, yet again, that one needs to use a more complex probe to minimize the chances of misinterpreting what the model has learned.", "The fact that the linear probe shows a difference only demonstrates that the information is perhaps more accessible with ELMo, not that it is not present.", "Despite our discussion in §4, we still wish to empirically vet our estimation technique for the gain, and we use this section to highlight the need to formally define ease of extraction (as argued in §4.3).", "We consider the tasks of POS and dependency labeling, using the universal POS tag (Petrov et al., 2012) and dependency label information from Universal Dependencies 2.5 (Zeman et al., 2019).", "We probe the multilingual release of BERT (we used Wolf et al. (2019)'s implementation) on eleven typologically diverse languages: Basque, Czech,", "English, Finnish, Indonesian, Korean, Marathi, Tamil, Telugu, Turkish and Urdu; and we compute the contextual representations of each sentence by feeding it into BERT and averaging the output word piece representations for each word, as tokenized in the treebank.", "We use two type-level control functions: $e_{\mathrm{fastText}}$ returns a fastText embedding, and $e_{\mathrm{onehot}}$ returns a one-hot embedding.", "(We initialize random embeddings at the type level, and let them train during the model's optimization.", "We also experiment with fixed random embeddings; results for this control are in the Appendix.)", "These functions can be considered type-level, as they remove the influence of context on the word.", "As expounded upon above, our purpose is to achieve the best bound on mutual information we can.", "To this end, we employ a deep MLP as our probe.", "We define the probe as $q(t \mid r) = \mathrm{softmax}\big(W^{(m)} \sigma\big(W^{(m-1)} \cdots \sigma(W^{(1)} r)\big)\big)$ (21), an $m$-layer neural network with the non-linearity $\sigma(\cdot) = \mathrm{ReLU}(\cdot)$.", "The initial projection matrix is $W^{(1)} \in \mathbb{R}^{r_1 \times d}$ and the final projection matrix is $W^{(m)} \in \mathbb{R}^{|\mathcal{T}| \times r_{m-1}}$, where $r_i = \frac{r_1}{2^{i-1}}$.", "The remaining matrices are $W^{(i)} \in \mathbb{R}^{r_i \times r_{i-1}}$, so we halve the number of hidden states in each layer.", "We optimize over the hyperparameters (number of layers, hidden size, one-hot embedding size, and dropout) by using random search.",
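A minimal PyTorch sketch of the probe in eq. (21), with widths halving at each layer; the defaults are illustrative, not the values found by the random search described above. The softmax is left to the cross-entropy loss during training, as is idiomatic in PyTorch.

```python
import torch.nn as nn

def build_probe(d, num_tags, m=3, r1=512, dropout=0.2):
    """The m-layer probe of eq. (21): q(t | r) = softmax(W^(m) s(... s(W^(1) r)))
    with s = ReLU and hidden widths halved at each layer."""
    layers, width = [], d
    for i in range(m - 1):
        next_width = max(r1 // (2 ** i), 1)  # r_1, r_1/2, r_1/4, ...
        layers += [nn.Linear(width, next_width), nn.ReLU(), nn.Dropout(dropout)]
        width = next_width
    layers.append(nn.Linear(width, num_tags))  # final projection W^(m)
    return nn.Sequential(*layers)

# For dependency labeling, the input is the concatenation [r_i; r_head(i)],
# so the input dimension doubles, e.g. probe = build_probe(2 * 768, num_labels)
```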
"We know BERT can generate text in many languages.", "Here we assess how much it actually knows about syntax in those languages, or at least how much we can extract from it given probes as powerful as we can train.", "We further evaluate how much it knows above and beyond simple type-level baselines.", "POS tags Table 1 presents these results, showing how much information BERT, fastText, and one-hot embeddings encode about POS tagging.", "We see that, in all analysed languages, type-level embeddings can already capture most of the uncertainty in POS tagging.", "We also see that BERT only shares a small amount of extra information with the task, having small gains in all languages; BERT even presents negative gains in some of them.", "Although this may seem to contradict the data processing inequality, it is actually caused by the difficulty of approximating id and c(·) with a finite training set, causing KL_q1(T | R) to be larger than KL_q2(T | c(R)).", "This highlights the need to formalize ease of extraction, as discussed in §4.3.", "Dependency labels BERT presents gains in all analysed languages on this task.", "Nonetheless, although this is a much more context-dependent task, we see BERT-based estimates reveal at most 12% more information than fastText in English, the highest-resource language in our set.", "If we look at the lower-resource languages, in five of them the gains are less than 5%.", "Discussion When put into perspective, multilingual BERT's representations do not seem to encode much more information about syntax than a simple baseline.", "On POS labeling, BERT only improves upon fastText in five of the eleven analysed languages, and by small amounts (less than 9%) when it does.", "Even at dependency labeling, a task considered to require more contextual knowledge, we could only decode from BERT at most (in English) 12% additional information, which again highlights the need to formalize ease of extraction.", "We propose an information-theoretic operationalization of probing that defines it as the task of estimating conditional mutual information.", "We introduce control functions, which put in context our mutual information estimates: how much more informative are contextual representations than some knowledge judged to be trivial?", "We further explored our operationalization and showed that, given perfect probes, probing can only yield insights into the language itself and cannot tell us anything about the representations under investigation.", "Keeping this in mind, we suggest a change of focus: instead of concentrating on probe size or information, we should pursue ease of extraction going forward.", "On a final note, we apply our formalization to evaluate multilingual BERT's syntactic knowledge on a set of eleven typologically diverse languages.", "Although it does encode a large amount of information about syntax (more than 76% and 65%, respectively, about POS and dependency labels in all languages; footnote 9), BERT only encodes at most 12% more information than a simple baseline (a type-level representation).", "On POS labeling, more specifically, our MI estimates based on BERT are higher than the control in less than half of the analyzed languages.", "This indicates that word-level POS labeling may not be ideal for contemplating the syntax contained in contextual word embeddings.", "The authors would like to thank Adam Poliak and John Hewitt for several helpful suggestions." ]
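Concretely, the gain estimates above reduce to a difference of the two probes' test cross-entropies, since the H(T) term cancels; a minimal sketch (our notation, not the paper's code):

```python
def estimated_gain(ce_control, ce_contextual):
    """Estimate G(c) = I(T; R) - I(T; c(R)) from test cross-entropies:
    I(T; R) is approximated by H(T) - ce_contextual and I(T; c(R)) by
    H(T) - ce_control, so the H(T) terms cancel.  Negative values can
    occur because each cross-entropy is only an upper bound on the
    true conditional entropy."""
    return ce_control - ce_contextual

# e.g., comparing BERT against a one-hot control on one language:
# gain = estimated_gain(ce_onehot, ce_bert)
```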
[ "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "result", "result", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "method", "abstain", "method", "abstain", "other" ]
[ "Machine reading comprehension has made great progress in recent years owing to large-scale annotated datasets.", "In the clinical domain, however, creating such datasets is quite difficult due to the domain expertise required for annotation.", "Recently, Pampari et al. (2018) tackled this issue by using expert-annotated question templates and existing i2b2 annotations to create emrQA , the first large-scale dataset for question answering (QA) based on clinical notes.", "In this paper, we provide an in-depth analysis of this dataset and the clinical reading comprehension (CliniRC) task.", "From our qualitative analysis, we find that", "(i) emrQA answers are often incomplete, and", "(ii) emrQA questions are often answerable without using domain knowledge.", "From our quantitative experiments, surprising results include that", "(iii) using a small sampled subset (5%-20%), we can obtain roughly equal performance compared to the model trained on the entire dataset,", "(iv) this performance is close to human expert's performance, and", "(v) BERT models do not beat the best performing base model.", "Following our analysis of the emrQA, we further explore two desired aspects of CliniRC systems: the ability to utilize clinical domain knowledge and to generalize to unseen questions and contexts.", "We argue that both should be considered when creating future datasets.", "1 1 Introduction Medical professionals often query over clinical notes in Electronic Medical Records (EMRs) to find information that can support their decision making (Demner-Fushman et al., 2009; Rosen-bloom et al., 2011; Wang et al., 2018).", "One way to facilitate such information seeking activities is to build a natural language question answering (QA) system that can extract precise answers from clinical notes (Cairns et al., 2011; Cao et al., 2011; Wren, 2011; Abacha and Demner-Fushman, 2016, 2019).", "Context: ...", "For HTN control, pt was given HCTZ and lopressor which sufficiently controlled his BP.", "Pt was sent home on HCTZ 25mg daily and atenolol 50mg daily.", "Answer : ADDITIONAL COMMENTS: 1.) Take hydrochlorothiazide 25mg daily and atenolol 50mg daily for your RECORD #992321, Date: 2145-09-22 Question : Why has the patient been prescribed hctz?", "...ADDITIONAL COMMENTS: 1.) 
Take hydrochlorothiazide 25mg daily and atenolol 50mg daily for your blood pressure.", "You should also take aspirin 81mg daily.", "Question : What was the dosage prescribed of hydrochlorothiazide?", "Answer : For HTN control, pt was given HCTZ and lopressor which sufficiently Figure 1: Examples from the emrQA dataset: Part of a clinical note as context and 2 question-answer pairs.", "Machine reading comprehension (RC) aims to automatically answer questions based on a given document or text corpus and has drawn wide attention in recent years.", "Many neural models (Cheng et al., 2016; Wang et al., 2017; Wang and Jiang, 2017; Seo et al., 2017; Chen et al., 2017; Devlin et al., 2019) have achieved very promising results on this task, owing to large-scale QA datasets (Hermann et al., 2015; Rajpurkar et al., 2016; Trischler et al., 2017; Joshi et al., 2017; Yang et al., 2018).", "Unfortunately, clinical reading comprehension (CliniRC) has not observed as much progress due to the lack of such QA datasets.", "In order to create QA pairs on clinical texts, annotators must have considerable medical expertise and data handling must be specifically designed to address ethical issues and privacy concerns.", "Due to these requirements, using crowdsourcing like in the open domain to create large-scale clinical QA datasets becomes highly impractical (Wei et al., 2018).", "Recently, Pampari et al. (2018) found a smart way to tackle this issue and created emrQA , the first large-scale QA dataset on clinical texts.", "Instead of relying on crowdsourcing, emrQA was semiautomatically generated based on annotated question templates and existing annotations from the n2c2 (previously called i2b2) challenge datasets 2 .", "Example QA pairs from the dataset are shown in Figure 1. In this paper, we aim to gain a deep understanding of the CliniRC task and conduct a thorough analysis of the emrQA dataset.", "We first explore the dataset directly by carrying out a meticulous qualitative analysis on randomly-sampled QA pairs and we find that: 1) Many answers in the emrQA dataset are incomplete and hence are hard to read and ineffective for training ( 3.1).", "2) Many questions are simple: More than 96% of the examples contain the same key phrases in both questions and answers.", "Though Pampari et al. 
(2018) claim that 39% of the questions may need knowledge to answer, our error analysis suggests only a very small portion of the errors (2%) made by a state-of-the-art reader might be due to missing external domain knowledge (§3.2).", "Following our qualitative analysis of the emrQA dataset, we conduct a comprehensive quantitative analysis based on state-of-the-art readers and BERT models (BERT-base (Devlin et al., 2019) as well as its biomedical and clinical versions: BioBERT (Lee et al., 2019) and ClinicalBERT (Alsentzer et al., 2019)) to understand how different systems behave on the emrQA dataset.", "Surprising results include: 1) Using a small sampled subset (5%-20%), we can obtain roughly equal performance compared to the model trained on the entire dataset, suggesting that many examples in the dataset are redundant (§4.1).", "2) The performance of the best base model is close to the human expert's performance (footnote 3) (§4.2).", "3) The performance of BERT models is around 1%-5% worse than the best performing base model (§4.3).", "After completing our analysis of the dataset, we explore two potential needs for systems doing CliniRC: 1) the need to represent and use clinical domain knowledge effectively (§5.1) and 2) the need to generalize to unseen questions and contexts (§5.2).", "To investigate the first one, we analyze several types of clinical questions that require domain knowledge and can frequently appear in the real clinical setting. (Footnote 2: https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/; footnote 3: obtained by comparing emrQA answers to answers created by our medical experts on sampled QA pairs.)", "We also carry out an experiment showing that adding knowledge explicitly yields around a 5% increase in F1 over the base model when tested on samples that we created by altering the original questions to involve semantic relations.", "To study generalizability, we ask medical experts to create new questions based on the unseen clinical notes from MIMIC-III (Johnson et al., 2016), a freely accessible critical care database.", "We find that the performance of the best model trained on emrQA drops by 40% under this new setting, showing how critical it is for us to develop more robust and generalizable models for the CliniRC task.", "In summary, given our analysis of the emrQA dataset and the task in general, we conclude that future work still needs to create better datasets to advance CliniRC.", "Such datasets should be not only large-scale, but also less noisy, more diverse, and allow researchers to directly evaluate a system's ability to encode domain knowledge and to generalize to new questions and contexts.", "Similar to the open-domain reading comprehension task, the Clinical Reading Comprehension (CliniRC) task is defined as follows:", "Definition 2.1.", "Given a patient's clinical note (context) C = {c_1, ..., c_n} and a question Q = {t_1, ..., t_m}, the CliniRC task aims to extract a continuous span A = {c_i, c_(i+1), ..., c_(i+k)} (1 ≤ i ≤ i + k ≤ n) from the context as the answer, where c_i, t_j are tokens.", "The emrQA dataset (Pampari et al., 2018) was semi-automatically generated from expert-annotated question templates and existing i2b2 annotations.", "More specifically, clinical question templates were first created by human experts.", "Then, manual annotations from the medication information extraction, relation learning, and coreference resolution i2b2 challenges were re-framed into answers for the question templates (example template: 'Has the patient ever been on |medication|?').",
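A sketch of this generation recipe (names hypothetical; simplified to a single annotation):

```python
def generate_qa(template, placeholder, entity, note_lines, line_idx):
    """E.g., template = 'Has the patient ever been on |medication|?',
    entity = 'Flagyl': the annotated entity fills the placeholder to
    form the question, and the full line around the annotation in the
    clinical note is extracted as the answer evidence."""
    question = template.replace(placeholder, entity)
    evidence = note_lines[line_idx]
    return question, evidence
```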
templates.", "After linking question templates to i2b2 annotations, the gold annotation entities were used to both replace place-holders in the question templates and extract the sentence around them as answers.", "An example of this generation process can be seen in Figure 2. The emrQA dataset contains 5 subsets: Medication , Relation , Heart Disease , Obesity and Smoking , which were generated from 5 i2b2 challenge datasets respectively.", "The answer format in each dataset is different.", "For the Obesity and Smoking datasets, answers are categorized into 7 classes and the task is to predict the question's class based on the context.", "For the Medication , Relation , and Heart Disease datasets, answers are usually short snippets from the text accompanied by a longer span around it which we refer to as an evidence.", "The short snippet is a single entity or multiple entities while the evidence contains the entire line around those entities in the clinical note.", "For questions that cannot be answered via entities, only the evidence is provided as an answer.", "Given that some questions do not have short answers and that entire evidence spans are usually important for supporting clinical decision making (Demner-Fushman et al., 2009), we treat the answer evidence 4 as our answer just as is done in (Pampari et al., 2018).", "In this work, we mainly focus on the Medication and Relation datasets because (1) they make up 80% of the entire emrQA dataset and (2) their format is consistent with the span extraction task, which is more challenging and meaningful for clinical decision making support.", "We filter the answers whose lengths (number of tokens) are more than 20.", "The detailed statistics of the two datasets are shown in Table 1. 4 For simplicity, we use answer directly henceforth.", "In this section, we carry out an in-depth analysis of the emrQA dataset.", "We aim to examine (1) the quality and (2) level of difficulty for the generated QA pairs in the emrQA dataset.", "Since the emrQA dataset was created via a generation framework unlike human-labeled or crowdsourcing datasets, the quality of the datasets remains largely unknown.", "In order to use this dataset to explore the CliniRC task, it is essential to determine whether it is meaningful.", "In order to do this, we randomly sample 50 QA pairs from the Medication and the Relation datasets respectively.", "Since some questions share the same answer due to automatic generation, we make sure all the samples have different answers.", "Since the questions were generated from expert created templates, most of them are human-readable and unambiguous.", "We therefore mainly focus on evaluating answer quality.", "We ask two human experts to score each answer from 1 to 5 depending on the relevance of the answer to the question (1: irrelevant or incorrect; 2: missing key parts; 3: contains key parts but is not human-readable or contains many irrelevant parts; 4: contains key parts and is only missing a few parts or has a few irrelevant extra segments; 5: perfect an-swer).", "We also ask human annotators to label the gold answers and then calculate the Exact Match (EM) and F1 score (F1) of the emrQA answers", "v.s.", "human gold answers.", "The answer quality score, EM and F1 in both datasets, are shown in Table 2. 
"The scores of the Medication dataset are low since most of the answers are broken sentences or contain unnecessary segments.", "For instance, in the Figure 2 example, the correct answer should be 'Clindamycin was changed to Flagyl'; however, the emrQA answer misses important parts ('Clindamycin was changed to') and contains irrelevant parts ('By discharge, the patient was afebrile').", "[Table 3: error types with example questions (e.g., span mismatch including key info: 'Does she have a history of known drug allergies?'), emrQA answers, predictions, and error ratios on Medication and Relation.]", "These issues are common in the Medication dataset and make it difficult to train a good system.", "To understand why the generated answers contain such noise, we explored the i2b2 2009 Medication challenge dataset which was used to create these QA pairs.", "We found that most documents in this dataset contain many complete sentences split into separate lines.", "Since the i2b2 annotations are token-based and emrQA obtains the full lines around the tokens as evidence spans, these lines often end up being broken sentences.", "We tried to relabel the answers with existing sentence segmentation tools and heuristic measures but found that it is very challenging to obtain concise and complete text spans as answers.", "Compared with the Medication dataset, the answer quality of the Relation dataset is much better.", "In most cases, the answers are complete and meaningful sentences with no unnecessary parts.", "Another observation from the 50 samples is that 96% of the answers in the Medication dataset and 100% of the answers in the Relation dataset contain the key phrase in the question.", "This is due to the generation procedure illustrated in Figure 2. In this example, the key phrase or entity (Flagyl) in the question is also included in the answer.", "This undoubtedly makes the answer easier to extract as long as the model can recognize significant words and do word matching.", "To further explore how much clinical language understanding is needed and what kinds of errors the state-of-the-art reader makes, we conduct error analysis using DocReader (Chen et al., 2017) (also used in Pampari et al. (2018)) on the emrQA dataset.", "More specifically, we randomly sample 50 questions that are answered incorrectly by the model (based on the exact match metric) from the Medication and Relation dev sets respectively (footnote 5).", "The results are shown in Table 3 (examples for each error type are also given for better understanding).", "Since emrQA answers are often incomplete in the dataset, we deem span mismatch errors acceptable as long as the predictions include the key part of the ground truths.", "Surprisingly, span mismatch-include key info errors, along with ambiguous questions, incorrect golds, and false negatives (the prediction is correct but it is not in the emrQA answers) errors, which are caused by the dataset itself, account for 90% of total errors, suggesting that the accuracy of these models is even higher than we report.", "(Footnote 5: note that these 100 samples are sampled from errors, which are different from the previously sampled ones.)", "Another interesting finding from the error analysis is that, to our surprise, only a very small amount (2%) of errors may have been caused by a lack of external domain knowledge, while Pampari et al.
(2018) claim that 39% of the questions in the emrQA dataset need domain knowledge.", "This surprising result might be due to: (1) neural models being able to encode relational or associative knowledge from the text corpora, as has also been reported in recent studies (Petroni et al., 2019; Bouraoui et al., 2020), and (2) questions and answers sharing key phrases (as we mentioned earlier in §3.1) in many samples, making it more likely that fewer questions need external knowledge to be answered than previously reported.", "In this section, we conduct comprehensive experiments on the emrQA dataset with state-of-the-art readers and the recently dominant BERT models.", "Full experimental settings are described in Appendix A. 4.1 How redundant are the emrQA pairs?", "Though there are more than 1 million questions in the emrQA dataset (as shown in Table 1), many questions and their patterns are very similar since they are generated from the same question templates.", "This observation leads to a natural question: do we really need so many questions to train a CliniRC system?", "If many questions are similar to each other, it is very likely that using a sampled subset can achieve roughly the same performance as using the entire dataset.", "To verify our hypothesis, we first split the two datasets into train, dev, and test sets with the proportion of 7:1:2", "w.r.t. the contexts (full statistics are shown in Appendix Table A1).", "Then we randomly sample {5%, 10%, 20%, 40%, 60%} and {1%, 3%, 5%, 10%, 15%} (footnote 6) of the QA pairs in each document (context) of the Medication and the Relation training sets respectively.", "We run DocReader (Chen et al., 2017) on the sampled subsets and evaluate them on the same dev and test set.", "As shown in Figure 3, using 20% of the questions in the Medication and 5% of the questions in the Relation dataset can achieve roughly the same performance as using the entire training sets.", "(Footnote 6: the sampling percentage of the Relation dataset is smaller than that of the Medication dataset since the former has more QA pairs (roughly 4 times).)", "These results verify our hypothesis, and illustrate that learning a good and robust reader system based on the emrQA dataset does not require so many question-answer pairs.", "While deep models are often data-hungry, it does not mean more data can always lead to better performance.", "In addition to the training size, diversity should also be considered as another important criterion for data quality.", "In the following experiments, we use the sampled subsets (20% for Medication and 5% for Relation) considering the time and memory cost as well as performance.", "Since the answers in emrQA are often incomplete, the performance of a model is more appropriately reflected by its F1 score.", "As shown in Table 2, we obtain F1 scores of 74% and 95% on the two datasets respectively when we test human-labeled answers against the emrQA answers on a sampled dataset.", "We can see from Table 4 that the best performing reader, DocReader, achieves around 70% and 94% F1 performance on the Medication and Relation test sets respectively, which are very close to the human performance just described.", "Though designing more complex and advanced models may achieve better scores, such scores are obtained w.r.t. noisy emrQA answers and may not translate meaningfully to real cases.",
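A sketch of the per-document subsampling used in the redundancy experiment above (our implementation guess, not the authors' code):

```python
import random

def sample_qa_pairs(doc_to_pairs, ratio, seed=0):
    """Keep a fixed fraction of the QA pairs within every document, so
    each context still appears in the sampled training set."""
    rng = random.Random(seed)
    return {doc: rng.sample(pairs, max(1, int(len(pairs) * ratio)))
            for doc, pairs in doc_to_pairs.items()}
```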
"BERT models have achieved very promising results recently in various NLP tasks including RC (Devlin et al., 2019).", "We follow their experiment setting of BERT for doing reading comprehension on the SQuAD (Rajpurkar et al., 2016) dataset.", "To our surprise, as shown in Table 4, BERT models (BERT-base, its biomedical version BioBERT (Lee et al., 2019), and its clinical version ClinicalBERT (Alsentzer et al., 2019)) do not dominate as they do in the open-domain RC tasks.", "Table 4: Overall performance of all models on the Medication and Relation dataset (EM/F1 on Dev and Test):
Model | Medication Dev | Medication Test | Relation Dev | Relation Test
BiDAF (Seo et al., 2017) | 25.50/68.13 | 23.35/67.18 | 81.51/90.84 | 82.74/91.27
DocReader (Chen et al., 2017) | 29.20/72.78 | 25.68/70.45 | 86.43/94.44 | 86.94/94.85
QANet (Yu et al., 2018) | 27.67/69.40 | 24.74/67.34 | 82.41/90.61 | 82.68/91.56
BERT-base (Devlin et al., 2019) | 26.62/68.75 | 24.00/67.49 | 80.17/90.01 | 83.29/92.38
BioBERT (Lee et al., 2019) | 27.81/71.90 | 24.75/69.97 | 81.57/91.38 | 83.61/92.62
ClinicalBERT (Alsentzer et al., 2019) | 27.14/71.84 | 24.06/69.05 | 83.12/91.96 | 85.33/93.06", "The reasons may be three-fold: 1) BERT benefits the most from large training corpora.", "The training corpora of BERT-base and BioBERT are Wikipedia + BookCorpus (Zhu et al., 2015) and PubMed articles respectively, both of which may have different vocabularies and use different language expressions from clinical texts.", "Though ClinicalBERT was pretrained on MIMIC-III (Johnson et al., 2016) clinical texts, the training size of the corpus (approx. 50M words) is far less than that used for BERT (approx. 3,300M words), which may make the model less powerful than it is on open-domain tasks.", "2) Longer Contexts.", "As can be seen from Table 1, the number of tokens in the contexts is commonly larger than in open-domain RC datasets like SQuAD (approx. 1,000", "vs.", "116 on average).", "We suspect that long contexts might make it more challenging to model sequential information.", "Sequences that are longer than the max length of the BERT model are truncated into a set of short sequences, which may hinder the model from capturing long dependencies (Dai et al., 2019) and global information in the entire document.", "3) Easy Questions.", "Another possible reason might be that the question patterns are too easy and a simpler reader with far fewer parameters can learn the patterns and obtain satisfying performance.", "Additionally, to further evaluate the models at a fine-grained level, inspired by Gururangan et al. (2018), we partition the Medication and Relation test sets into Easy and Hard subsets using a base model.", "The details of the Easy/Hard splits can be found in Appendix C.",
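Returning to the long-context issue in reason 2) above: a common workaround (not necessarily what was done here) is to split the clinical note into overlapping windows of word pieces and merge span predictions afterwards; a sketch:

```python
def sliding_windows(tokens, max_len=512, stride=128):
    """Split a long context into overlapping chunks so every token is
    seen with some surrounding context; span predictions are merged
    later, e.g. by keeping the highest-scoring span across chunks."""
    chunks, start = [], 0
    while start < len(tokens):
        chunks.append(tokens[start:start + max_len])
        if start + max_len >= len(tokens):
            break
        start += stride
    return chunks
```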
"As can be seen from Table A4, most of the questions in the two datasets are easy, which indicates the emrQA dataset might not be challenging for current QA models.", "More difficult datasets are needed to advance the Clinical Reading Comprehension task.", "Following our analysis of the emrQA dataset, we further study two aspects of clinical reading comprehension systems that we believe are crucial for their real-world applicability: the need to encode clinical domain knowledge and to generalize to unseen questions and documents.", "So far, we have shown that domain knowledge may not be very useful for models answering questions in the emrQA dataset; however, we argue that systems in real-world CliniRC need to be able to encode and use clinical domain knowledge effectively.", "Clinical text often contains high variability in many domain-specific words due to abbreviations and synonyms.", "The presence of different aliases in the question and context can make it difficult for a model to represent semantics accurately and choose the correct span.", "Besides, medical domain-specific relations (e.g., treats, caused by) and hierarchical relations (e.g., isa) between medical concepts are likely to appear.", "The process followed to generate the current emrQA dataset leads to these problems being largely under-represented, even though they can be very common in real cases.", "We use the following 3 examples as representatives to illustrate the real cases we may encounter.", "Synonym.", "For example, for the question in Figure 2, 'Has this patient ever been on Flagyl?', it is easy for the model to answer since Flagyl appears in the context.", "However, if we change Flagyl to its synonym Metronidazole (which may not appear in training) in the question, it is hard for the reader to extract the correct answer, as it is not possible for the model to capture that 'Metronidazole' has the same semantic meaning as 'Flagyl'.", "Clinical Relations.", "Another example is the question shown in Figure 1, 'Why has the patient been prescribed hctz?'.", "Currently, machines can easily find the answer since the keyword hctz is mentioned in the answer.", "However, given a situation where the drug hctz does not appear in the local context of HTN, our model may have a better chance of extracting the correct answers if it stores the relation (hctz, treats, HTN).", "Hierarchical Relation.", "For the question 'Is there a history of mental illness?', it is more likely that the medical report describes a specific type of psychological condition rather than mentioning the general phrase 'mental illness', since clinical support requires specifics.", "To obtain the correct answer in this case, 'Depression with previous suicidal ideation.', encoding the relation (depression, isa, mental illness) would probably help the model make a correct prediction.",
"These three cases help illustrate how complex medical relations affect the real CliniRC task.", "Without leveraging external domain knowledge, it is difficult for models to capture the semantic relations necessary to resolve such cases.", "In order to verify our claim quantitatively, we select synonym as a representative relation type and manipulate each question by replacing its entities with plausible synonyms or abbreviations.", "We then introduce external domain knowledge into current models and compare their performance against base models on these augmented questions.", "More specifically, we first detect entities in the questions and link them to a medical knowledge base (KB): UMLS (Bodenreider, 2004), using a biomedical and clinical text NLP pipeline tool, ScispaCy (Neumann et al., 2019).", "Synonyms of detected entities are then retrieved from UMLS and used to replace the original mention.", "We filter out the questions that do not contain entities or that contain entities with no synonyms.", "We focus on the Relation dataset and only modify the questions in the dev and test set; the questions in the training set are not modified.", "Finally, we get 69,912 and 125,338 questions in the dev and test set.", "We then introduce a simple Knowledge Incorporation Module (KIM) to evaluate the usefulness of external domain knowledge.", "Formally, given a question q: {w_1^q, w_2^q, ..., w_l^q} and its context c: {w_1^c, w_2^c, ..., w_m^c}, where w_i^q, w_j^c are words (tokens), all the words can be mapped to d_1-dimensional vectors via a word embedding matrix E_w ∈ R^(d_1 × |V|), where V denotes the word vocabulary.", "[Figure 4: EM and F1 (roughly 55-65) of DocReader without and with KIM on Dev and Test; performances of DocReader and DocReader + Knowledge Incorporation Module (KIM) on our created questions modified from the Relation dataset.]", "So we have q: w_1^q, ..., w_l^q ∈ R^(d_1) and c: w_1^c, ..., w_m^c ∈ R^(d_1).", "We then detect entities {e_1^q, e_2^q, ..., e_n^q} in the question and entities {e_1^c, e_2^c, ..., e_o^c} in the context and map them to a medical knowledge base (KB), UMLS (Bodenreider, 2004), using ScispaCy (Neumann et al., 2019).", "Note that l is not equal to n and m is not equal to o, since not every token can be mapped to an entity in the KB.", "For entities that contain multiple words, we align them to the first token, the same alignment as used in Zhang et al. (2019).", "We then map detected entities to d_2-dimensional vectors {e_1^q, e_2^q, ..., e_n^q} and {e_1^c, e_2^c, ..., e_o^c} via an entity embedding matrix E_e ∈ R^(d_2 × |U|), which is pretrained on the entire UMLS KB using the knowledge embedding method TransE (Bordes et al., 2013).", "U denotes the entity vocabulary.", "We merge the word embeddings with entity embeddings and feed them into a Multi-layer Perceptron (MLP): h_i^q = σ(W_c w_i^q + W_e e_i^q + b), h_j^c = σ(W_c w_j^c + W_e e_j^c + b)  (1), where σ is the activation function, W_c, W_e, b are trainable parameters, and h_i^q, h_j^c denote the integrated embeddings that contain information from both the word w and the entity e in the question and context respectively.", "For a word that is not mapped to an entity, e_j is set to 0.", "The merged embeddings are used as the input to the base reader.", "As shown in Figure 4, by adding a basic Knowledge Incorporation Module to the base model, we obtain around a 5% increase in F1 score on the manipulated questions in the test set.",
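A sketch of the Knowledge Incorporation Module of Eq. (1), assuming PyTorch; the entity table would be initialized from TransE embeddings pretrained on UMLS, index 0 is reserved for words without a linked entity, and the activation σ (unspecified in the excerpt) is taken to be tanh here:

```python
import torch
import torch.nn as nn

class KIM(nn.Module):
    """h = sigma(W_c w + W_e e + b): merge word and entity embeddings."""
    def __init__(self, d1, d2, d_out, n_words, n_entities):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, d1)
        # index 0 = "no linked entity" -> zero vector, as in Eq. (1)
        self.ent_emb = nn.Embedding(n_entities + 1, d2, padding_idx=0)
        self.W_c = nn.Linear(d1, d_out, bias=False)
        self.W_e = nn.Linear(d2, d_out, bias=False)
        self.b = nn.Parameter(torch.zeros(d_out))

    def forward(self, word_ids, entity_ids):
        w, e = self.word_emb(word_ids), self.ent_emb(entity_ids)
        return torch.tanh(self.W_c(w) + self.W_e(e) + self.b)
```

The merged vectors are then fed to the base reader in place of plain word embeddings.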
"This suggests that for questions that involve relations between medical concepts, external domain knowledge may be quite important.", "The aim of CliniRC is to build robust QA systems for doctors to retrieve information buried in clinical texts.", "When deploying a CliniRC system to a new environment (e.g., a new set of clinical records, a new hospital, etc.), it is infeasible to create new QA pairs for training every time.", "Thus, an ideal CliniRC system is able to generalize to unseen documents and questions after being fully trained.", "To test the generalizability of models trained on emrQA (we focus on the Relation dataset here), our medical experts created 50 new questions that were not present in the emrQA dataset and extracted answers from unseen patient notes in the MIMIC-III (Johnson et al., 2016) dataset.", "This dataset consists of three types of questions: 12 questions were made from emrQA question templates but contain entities which do not appear in the training set (e.g., 'How was the diagnosis of acute cholecystitis made?' was created from the template 'How was the diagnosis of |problem| made?').", "The other 38 questions have different forms from the existing question templates: 21 paraphrase existing questions from emrQA (e.g., 'Was an edema found in the physical exam?' was paraphrased from 'Does he have any evidence of |problem| in |test|?') and 17 are completely semantically different from the ones in the emrQA dataset (e.g., 'What chemotherapy drugs are being administered to the patient?').", "As could be expected, we see in Table 5 that the more the new questions deviate from the original emrQA, the more the models struggle to answer them.", "We observe a performance drop of roughly 20% compared to the Relation test set on questions made from emrQA templates using MIMIC-III clinical notes which were not in the original dataset.", "For questions that are more significantly different, we notice an approximate 40% and 60% loss in F1 score when predicting paraphrased questions and entirely new questions respectively.", "This steep drop in performance for these new settings, especially for paraphrased and new questions, shows how much work there is to be done on this front and highlights generalizability as an important future direction in CliniRC.", "We also notice that ClinicalBERT works slightly better than the base model DocReader.", "The reason might be that ClinicalBERT was pretrained on the MIMIC-III dataset, which might help the model have a better understanding of the context.", "Summary.", "Based on these two aspects and our previous thorough analysis of the emrQA dataset, it is clear that better datasets are needed to advance CliniRC.", "Such datasets should be not only large-scale, but also less noisy and more diverse, and should moreover allow researchers to systematically evaluate a model's ability to encode domain knowledge and to generalize to new questions and contexts.", "We present a brief overview of open-domain, biomedical and clinical question answering tasks, which are most related to our work:", "Question Answering (QA) aims to automatically answer questions asked by humans based on external sources, such as the Web (Sun et al., 2016), knowledge bases (Yih et al., 2015; Sun et al., 2015) and free text (Chen et al., 2016).", "As an important type of QA, reading comprehension intends to answer a question after reading the passage (Hirschman et al., 1999).", "Recently, the release of large-scale RC datasets, such as CNN & Daily Mail (Hermann et
al., 2015), Stanford Question-Answering Dataset (SQuAD) (Rajpurkar et al., 2016, 2018) makes it possible to solve RC tasks by building deep neural models (Hermann et al., 2015; Wang and Jiang, 2017; Seo et al., 2017; Chen et al., 2017).", "More recently, contextualized word representations and pretrained language models, such as ELMo (Peters et al., 2018), GPT (Radford et al., 2018), and BERT (Devlin et al., 2019), have been demonstrated to be very useful in various NLP tasks including RC.", "By seeing diverse contexts in large corpora, these pretrained language models can capture rich semantic meaning and produce more accurate and precise representations for words given different contexts.", "Even a simple classifier or score function built upon these pretrained contextualized word representations performs well in extracting answer spans (Devlin et al., 2019).", "Biomedical and Clinical QA.", "Due to the lack of large-scale annotated biomedical or clinical data, QA and RC systems in these domains are often rule-based and heuristic feature-based (Lee et al., 2006; Niu et al., 2006; Athenikos and Han, 2010).", "In recent years, the BioASQ challenges (Tsatsaronis et al., 2012) proposed the Biomedical Semantic QA task, where the participants need to respond to each test question with relevant articles, snippets and exact answers.", "Suster and Daelemans (2018) use summary points of clinical case reports to build a large-scale cloze-style dataset (CliCR), which is similar in style to the CNN & Daily Mail dataset.", "Jin et al. (2019b) present PubMedQA, which extracts question-style titles and their corresponding abstracts as the questions and contexts respectively.", "A few QA pairs are annotated by human experts, and most of them are annotated based on a simple heuristic rule with yes/no/maybe.", "Due to the great power of contextualized word representations, pretrained language models have also been introduced to the biomedical and clinical domains, e.g., BioELMo (Jin et al., 2019a), BioBERT (Lee et al., 2019), and ClinicalBERT (Alsentzer et al., 2019).", "They adopt architectures similar to the original models but are pretrained on medical and clinical corpora, such as PubMed articles and MIMIC-III (Johnson et al., 2016) clinical notes.", "We study the Clinical Reading Comprehension (CliniRC) task with the recently created emrQA dataset.", "Our qualitative and quantitative analysis as well as our exploration of the two desired aspects of CliniRC systems show that future clinical QA datasets should not only be large-scale but also less noisy and more diverse.", "Moreover, questions that involve complex relations and span different domains should be included, and then more advanced external knowledge incorporation methods as well as domain adaptation methods can be carefully designed and systematically evaluated.", "We thank our medical experts for their annotations.", "We thank Ping Zhang, Changchang Yin and the anonymous reviewers for their helpful comments.", "This research was sponsored in part by the Patient-Centered Outcomes Research Institute Funding ME-2017C1-6413, the Army Research Office under cooperative agreements W911NF-17-1-0412, NSF Grant IIS1815674, and the Ohio Supercomputer Center (Center, 1987).", "The views and conclusions contained herein are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office or the U.S. Government.", "The U.S.
Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notice herein." ]
[ "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "method", "result", "abstain", "abstain", "objective", "other", "abstain", "result", "abstain", "objective", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "result", "abstain", "other", "other", "other", "other", "other" ]
[ "Most modern Information Extraction (IE) systems are implemented as sequential taggers and only model local dependencies.", "Non-local and non-sequential context is, however, a valuable source of information to improve predictions.", "In this paper, we introduce GraphIE, a framework that operates over a graph representing a broad set of dependencies between textual units (i.e. words or sentences).", "The algorithm propagates information between connected nodes through graph convolutions, generating a richer representation that can be exploited to improve word-level predictions.", "Evaluation on three different tasks namely textual, social media and visual information extraction shows that GraphIE consistently outperforms the state-of-the-art sequence tagging model by a significant margin.", "1 1 Introduction Most modern Information Extraction (IE) systems are implemented as sequential taggers.", "While such models effectively capture relations in the local context, they have limited capability of exploiting non-local and non-sequential dependencies.", "In many applications, however, such dependencies can greatly reduce tagging ambiguity, thereby improving overall extraction performance.", "For instance, when extracting entities from a document, various types of non-local contextual information such as co-references and identical mentions may provide valuable cues.", "See for example Figure 1, in which the non-local relations are crucial to discriminate the entity type of the second mention of Washington (i.e. PERSON or LOCATION ).", "Most of the prior work looking at the non-local dependencies incorporates them by constraining 1 Our code and data are available at https://github.", "the output space in a structured prediction framework (Finkel et al., 2005; Reichart and Barzilay, 2012; Hu et al., 2016).", "Such approaches, however, mostly overlook the richer set of structural relations in the input space.", "With reference to the example in Figure 1, the co-referent dependencies would not be readily exploited by simply constraining the output space, as they would not necessarily be labeled as entities (e.g. pro-nouns).", "In the attempt to capture non-local dependencies in the input space, alternative approaches define a graph that outlines the input structure and engineer features to describe it (Quirk and Poon, 2017).", "Designing effective features is however challenging, arbitrary and time consuming, especially when the underlying structure is complex.", "Moreover, these approaches have limited capacity of capturing node interactions informed by the graph structure.", "In this paper, we propose GraphIE, a framework that improves predictions by automatically learning the interactions between local and non-local dependencies in the input space.", "Our approach integrates a graph module with the encoder-decoder architecture for sequence tagging.", "The algorithm operates over a graph, where nodes correspond to textual units (i.e. 
words or sentences) and edges describe their relations.", "At the core of our model, a recurrent neural network sequentially encodes local contextual representations, and then the graph module iteratively propagates information between neighboring nodes using graph convolutions (Kipf and Welling, 2016).", "The learned representations are finally projected back to a recurrent decoder to support tagging at the word level.", "We evaluate GraphIE on three IE tasks, namely textual, social media, and visual (Aumann et al., 2006) information extraction.", "For each task, we provide as input a simple task-specific graph, which defines the data structure without access to any major processing or external resources.", "Our model is expected to learn from the relevant dependencies to identify and extract the appropriate information.", "Experimental results on multiple benchmark datasets show that GraphIE consistently outperforms a strong and commonly adopted sequential model (SeqIE, i.e. a bidirectional long short-term memory (BiLSTM) followed by a conditional random fields (CRF) module).", "Specifically, in the textual IE task, we obtain an improvement of 0.5% over SeqIE on the CoNLL03 dataset, and an improvement of 1.4% on chemical entity extraction (Krallinger et al., 2015).", "In the social media IE task, GraphIE improves over SeqIE by 3.7% in extracting the EDUCATION attribute from Twitter users.", "In visual IE, finally, we outperform the baseline by 1.2%.", "The problem of incorporating non-local and non-sequential context to improve information extraction has been extensively studied in the literature.", "The majority of methods have focused on enforcing constraints in the output space during inference, through various mechanisms such as posterior regularization or generalized expectations (Finkel et al., 2005; Mann and McCallum, 2010; Reichart and Barzilay, 2012; Li et al., 2013; Hu et al., 2016).", "Non-local dependencies in the input space have instead been captured with feature-based approaches.", "Roberts et al. (2008) and Swampillai and Stevenson (2011) have designed intra- and inter-sentential features based on discourse and syntactic dependencies (e.g., shortest paths) to improve relation extraction.", "Quirk and Poon (2017) used document graphs to flexibly represent multiple types of relations between words (e.g., syntactic, adjacency and discourse relations).", "Graph-based representations can also be learned with neural networks.", "The most related work to ours is the graph convolutional network by Kipf and Welling (2016), which was developed to encode graph structures and perform node classification.", "In our framework, we adapt GCN as an intermediate module that learns non-local context, which, instead of being used directly for classification, is projected to the decoder to enrich local information and perform sequence tagging.", "A handful of other information extraction approaches have used graph-based neural networks.", "Miwa and Bansal (2016) applied Tree LSTM (Tai et al., 2015) to jointly represent sequences and dependency trees for entity and relation extraction.", "On the same line of work, Peng et al. (2017) and Song et al. (2018) introduced Graph LSTM, which extended the traditional LSTM to graphs by enabling a varied number of incoming edges at each memory cell.", "Zhang et al.
(2018) exploited graph convolutions to pool information over pruned dependency trees, outperforming existing sequence and dependency-based neural models in a relation extraction task.", "These studies differ from ours in several respects.", "First, they can only model word-level graphs, whereas our framework can learn non-local context either from word- or sentence-level graphs, using it to reduce ambiguity during tagging at the word level.", "Second, all these studies achieved improvements only when using dependency trees.", "We extend the graph-based approach to validate the benefits of using other types of relations in a broader range of tasks, such as co-reference in named entity recognition, the followed-by link in social media, and layout structure in visual information extraction.", "We formalize information extraction as a sequence tagging problem.", "Rather than simply modeling inputs as sequences, we assume there exists a graph structure in the data that can be exploited to capture non-local and non-sequential dependencies between textual units, namely words or sentences.", "[Figure 2: Overview of GraphIE: input text flows through an Encoder (BiLSTM), a Graph Module, and a Decoder (BiLSTM + CRF) to produce output tags, instantiated for both sentence-level and word-level graphs.]",
, w ( i ) k ) of length k , each word w ( i ) t is represented by a vector x ( i ) t , which is the concatenation of its word embedding and a feature vector learned with a character-level convolutional neural network (CharCNN; Kim et al. (2016)).", "We encode the sentence with a recurrent neural network (RNN), defining it as h ( i ) 1: k = RNN (cid:16) x ( i ) 1: k ; 0 , enc (cid:17) , (1) where x ( i ) 1: k denotes the input sequence [ x ( i ) 1 , , x ( i ) k ] , h ( i ) 1: k denotes the hidden states [ h ( i ) 1 , , h ( i ) k ] , 0 indicates the initial hidden state is zero, and enc represents the encoder parameters.", "We implement the RNN as a bi-directional LSTM (Hochreiter and Schmidhuber, 1997), and encode each sentence independently.", "We obtain the sentence representation for s i by averaging the hidden states of its words, i.e. Enc ( s i ) = 1 k (cid:16)(cid:80) kt =1 h ( i ) t (cid:17) .", "The sentence representations are then fed into the graph module.", "The graph module is designed to learn the nonlocal and non-sequential information from the graph.", "We adapt the graph convolutional network (GCN) to model the graph context for information extraction.", "Given the sentence-level graph G = ( V, E ) , where each node v i (i.e. sentence s i ) has the encoding Enc ( s i ) capturing its local information, the graph module enriches such representation with neighbor information derived from the graph structure.", "Our graph module is a GCN which takes as input the sentence representation, i.e. g (0) i = Enc ( s i ) , and conducts graph convolution on every node, propagating information between its neighbors, and integrating such information into a new hidden representation.", "Specifically, each layer of GCN has two parts.", "The first gets the information of each node from the previous layer, i.e. ( l ) i = W ( l ) v g ( l 1) i , (2) where W ( l ) v is the weight to be learned.", "The second aggregates information from the neighbors of each node, i.e. for node v i , we have ( l ) i = 1 d ( v i ) W ( l ) e (cid:32) (cid:88) e i,j E g ( l 1) j (cid:33) , (3) where d ( v i ) is the degree of node v i (i.e. the number of edges connected to v i ) and is used to normalize ( l ) i , ensuring that nodes with different degrees have representations of the same scale.", "3 In the simplest case, where the edges in the graph are undirected and have the same type, we use the same weight W ( l ) e for all of them.", "In a more general case, where multiple edge types exist, we expect them to have different impacts on the aggregation.", "Thus, we model these edge types with different weights in Eq.", "3, similar to the relational GCN proposed by Schlichtkrull et al. (2018).", "When edges are directed, i.e. edge e i,j is different from e j,i , the propagation mechanism should mirror such difference.", "In this case, we consider directed edges as two types of edges (for-ward and backward), and use different weights for them.", "where ( ) is the non-linear activation function, and b ( l ) is a bias parameter.", "Because each layer only propagates information between directly connected nodes, we can stack multiple graph convolutional layers to get a larger receptive field, i.e. 
each node can be aware of more distant neighbors.", "After L layers, for each node v i we obtain a contextual representation, GCN ( s i ) = g ( L ) i , that captures both local and non-local information.", "To support tagging, the learned representation is propagated to the decoder.", "3 We choose this simple normalization strategy instead of the two-sided normalization in Kipf and Welling (2016), as it performs better in the experiments.", "The same strategy is also adopted by Zhang et al. (2018).", "In our work, the decoder is instantiated as a BiLSTM+CRF tagger (Lample et al., 2016).", "The output representation of the graph module, GCN ( s i ) , is split into two vectors of the same length, which are used as the initial hidden states for the forward and backward LSTMs, respectively.", "In this way, the graph contextual information is propagated to each word through the LSTM.", "Specifically, we have z ( i ) 1: k = RNN (cid:16) h ( i ) 1: k ; GCN ( s i ) , dec (cid:17) , (5) where h ( i ) 1: k are the output hidden states of the encoder, GCN ( s i ) represents the initial state, and dec is the decoder parameters.", "A simpler way to incorporate the graph representation into the decoder is concatenating with its input, but the empirical performance is worse than using as the initial state.", "Finally, we use a CRF layer (Lafferty et al., 2001) on top of the BiLSTM to perform tagging, y i = arg max y Y k p (cid:16) y | z ( i ) 1: k ; crf (cid:17) , (6) where Y k is the set of all possible tag sequences of length k , and crf represents the CRF parameters, i.e. transition scores of tags.", "CRF combines the local predictions of BiLSTM and the transition scores to model the joint probability of the tag sequence.", "4 4.4 Adaptation to Word-level Graphs GraphIE can be easily adapted to model word-level graphs.", "In such case, the nodes represent words in the input, i.e. the number of nodes M equals the total number of words in the N sentences.", "At this point, each word's hidden state in the encoder can be used as the input node vector g (0) i of the graph module.", "GCN can then conduct graph convolution on the word-level graph and generate graph-contextualized representations for the words.", "Finally, the decoder directly operates on the GCN's outputs, i.e. we change the BiLSTM decoder to z ( i ) 1: k = RNN (cid:16)(cid:104) GCN ( w ( i ) 1 ) , , GCN ( w ( i ) k ) (cid:105) ; 0 , dec (cid:17) , 4 In GraphIE, the graph module models the input space structure, i.e. the dependencies between textual units (i.e. 
"As can be seen in Figure 2(c), the word-level graph module differs from the sentence-level one in that it directly takes the word representations from the encoder and feeds its output to the decoder.",
"In the sentence-level graph, the GCN operates on sentence representations, which are then used as the initial states of the decoder BiLSTM.",
"We evaluate the model on three tasks, including two traditional IE tasks, namely textual information extraction and social media information extraction, and an under-explored task, visual information extraction.",
"For each of these tasks, we created a simple task-specific graph topology, designed to easily capture the underlying structure of the input data without any major processing.",
"Table 1 summarizes the three tasks.",
"Table 1: Comparison of the graph structures in the three IE tasks used for evaluation. Textual IE: word-level graph; nodes are words; edges are (1) non-local consistency links (identical mentions) and (2) local sentential forward and backward links. Social Media IE: sentence-level graph; nodes are a user's tweets; edges are followed-by links. Visual IE: sentence-level graph; nodes are text boxes; edges follow the spatial layout (horizontal and vertical).",
"In this task, we focus on named entity recognition at the discourse level (DiscNER).",
"In contrast to traditional sentence-level NER (SentNER), where sentences are processed independently, in DiscNER long-range dependencies and constraints across sentences play a crucial role in the tagging process.",
"For instance, multiple mentions of the same entity are expected to be tagged consistently in the same discourse.",
"Here we propose to use this (soft) constraint to improve entity extraction.",
"Dataset We conduct experiments on two NER datasets: the CoNLL-2003 dataset (CONLL03) (Tjong et al., 2003), and the CHEMDNER dataset for chemical entity extraction (Krallinger et al., 2015).",
"We follow the standard split of each corpus.",
"Statistics are shown in Table 2.",
"Graph Construction In this task, we use a word-level graph where nodes represent words.",
"We create two types of edges for each document: Local edges: forward and backward edges are created between neighboring words in each sentence, allowing local contextual information to be utilized.",
"Non-local edges: re-occurrences of the same token other than stop words are connected, so that tagging consistency among identical mentions can be encouraged. 5",
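The two edge types can be constructed with a few lines; this is an illustrative sketch, not the paper's code (the stop-word list in particular is a placeholder).

from collections import defaultdict

STOP_WORDS = {"the", "a", "of", "and", "to"}   # illustrative only

def build_discner_graph(sentences):
    # sentences: list of token lists; nodes are (sentence_idx, token_idx) pairs.
    edges, seen = [], defaultdict(list)
    for i, sent in enumerate(sentences):
        for j, tok in enumerate(sent):
            if j + 1 < len(sent):                  # local forward/backward edges
                edges.append(((i, j), (i, j + 1), "forward"))
                edges.append(((i, j + 1), (i, j), "backward"))
            if tok.lower() not in STOP_WORDS:      # non-local edges between
                for prev in seen[tok.lower()]:     # re-occurrences of a token
                    edges.append((prev, (i, j), "nonlocal"))
                    edges.append(((i, j), prev, "nonlocal"))
                seen[tok.lower()].append((i, j))
    return edges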
"Social media information extraction refers to the task of extracting information from users' posts in online social networks (Benson et al., 2011; Li et al., 2014).",
"In this paper, we aim at extracting education and job information from users' tweets.",
"Given a set of tweets posted by a user, the goal is to extract mentions of the organizations to which they belong.",
"The fact that the tweets are short, highly contextualized and show special linguistic features makes this task particularly challenging.",
"Dataset We construct two datasets, EDUCATION and JOB, from the Twitter corpus released by Li et al. (2014).",
"The original corpus contains millions of tweets generated by 10 thousand users, where the education and job mentions are annotated using distant supervision (Mintz et al., 2009).",
"We sample the tweets from each user, maintaining the ratio between positive and negative posts. 6",
"The obtained EDUCATION dataset consists of 443,476 tweets generated by 7,208 users, and the JOB dataset contains 176,043 tweets generated by 1,772 users.",
"Dataset statistics are reported in Table 3.",
"5 Note that other non-local relations such as co-references (cf. the example in Figure 1) may be used for further improvement. However, these relations require additional resources to obtain, and we leave them to future work.",
"6 Positive and negative refer here to whether or not the education or job mention is present in the tweet.",
"The datasets are both split into 60% for training, 20% for development, and 20% for testing.",
"We perform 5 different random splits and report the average results.",
"Graph Construction We construct the graph as ego-networks (Leskovec and Mcauley, 2012), i.e. when we extract information about one user, we consider the subgraph formed by the user and his/her direct neighbors.",
"Each node corresponds to a Twitter user, who is represented by the set of posted tweets. 7",
"Edges are defined by the followed-by link, under the assumption that connected users are more likely to come from the same university or company.",
"An example of the social media graph is reported in the appendices.",
"Visual information extraction refers to the extraction of attribute values from documents formatted in various layouts.",
"Examples include invoices and forms, whose format can be exploited to infer valuable information to support extraction.",
"Dataset The corpus consists of 25,200 Adverse Event Case Reports (AECR) recording drug-related side effects.",
"Each case contains an average of 9 pages.",
"Since these documents are produced by multiple organizations, they exhibit large variability in the layout and presentation styles (e.g. text, table, etc.). 8",
"7 As each node is a set of tweets posted by the user, we encode every tweet with the encoder, and then average them to obtain the node representation. In the decoding phase, the graph module's output is fed to the decoder for each tweet.",
"The collection is provided with a separate human-extracted ground-truth database that is used as a source of distant supervision.",
"Our goal is to extract eight attributes related to the patient, the event, the drug and the reporter (cf. Table 6 for the full list).",
"Attribute types include dates, words and phrases which can be directly extracted from the document.",
"Graph Construction We first turn the PDFs to text using PDFMiner, 9 which provides words along with their positions in the page (i.e. bounding-box coordinates).",
"Consecutive words are then geometrically joined into text boxes.",
"Each text box is considered as a sentence in this task, and corresponds to a node in the graph.",
"Since the page layout is the major structural factor in these documents, we work on a page-by-page basis, i.e. each page corresponds to a graph.",
"The edges are defined to horizontally or vertically connect nodes (text boxes) that are close to each other (i.e. when the overlap of their bounding boxes, in either the vertical or horizontal direction, is over 50%).",
"Four types of edge are considered: left-to-right, right-to-left, up-to-down, and down-to-up.",
"When multiple nodes are aligned, only the closest ones are connected.",
"An example of a visual document graph is reported in the appendices.",
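A sketch of this edge construction, assuming boxes are given as (x0, y0, x1, y1) tuples; measuring the 50% overlap relative to the smaller span is our reading of the text, and the closest-only pruning for aligned nodes is omitted for brevity.

def overlap(a0, a1, b0, b1):
    # Fraction of the smaller interval covered by the intersection.
    inter = min(a1, b1) - max(a0, b0)
    return inter / min(a1 - a0, b1 - b0)

def box_edges(boxes):
    edges = []
    for i, a in enumerate(boxes):
        for j, b in enumerate(boxes):
            if i == j:
                continue
            # Horizontal neighbors: vertical spans overlap > 50%, b to the right of a.
            if overlap(a[1], a[3], b[1], b[3]) > 0.5 and b[0] >= a[2]:
                edges.append((i, j, "left-to-right"))
                edges.append((j, i, "right-to-left"))
            # Vertical neighbors: horizontal spans overlap > 50%, b below a.
            if overlap(a[0], a[2], b[0], b[2]) > 0.5 and b[1] >= a[3]:
                edges.append((i, j, "up-to-down"))
                edges.append((j, i, "down-to-up"))
    return edges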
"We implement a two-layer BiLSTM with a conditional random field (CRF) tagger as the sequential baseline (SeqIE).",
"This architecture and its variants have been extensively studied and demonstrated to be successful in previous work on information extraction (Lample et al., 2016; Ma and Hovy, 2016).",
"In the textual IE task (Task 1), our baseline is shown to obtain results competitive with the state-of-the-art method on the CONLL03 dataset.",
"In the visual IE task (Task 3), in order to further increase the competitiveness of the baseline, we sequentially concatenate the horizontally aligned text boxes, thereby fully modeling the horizontal edges of the graph.",
"computational cost.",
"In Task 1, we apply GraphIE with the word-level graph module (cf. Figure 2(c)), and in Task 2 and Task 3, we apply GraphIE with the sentence-level graph module (cf. Figure 2(b)).",
"The models are trained with Adam (Kingma and Ba, 2014) to minimize the CRF objective.",
"For regularization, we choose dropout with a ratio of 0.1 on both the input word representation and the hidden layer of the decoder.",
"The learning rate is set to 0.001.",
"We use the development set for early stopping and the selection of the best-performing hyperparameters.",
"For the CharCNN, we use 64-dimensional character embeddings and 64 filters of width 2 to 4 (Kim et al., 2016).",
"The 100-dimensional pretrained GloVe word embeddings (Pennington et al., 2014) are used in Tasks 1 and 2, and 64-dimensional randomly initialized word embeddings are used in Task 3.",
"We use a two-layer GCN in Task 1, and a one-layer GCN in Task 2 and Task 3.",
"The encoder and decoder BiLSTMs have the same dimension as the graph convolution layer.",
"In Task 3, we concatenate a positional encoding to each text box's representation by transforming its bounding-box coordinates to a vector of length 32, and then applying a tanh activation.",
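A sketch of that positional encoding; the use of a learned linear projection before the tanh is our assumption, since the paper only specifies the output length and the activation.

import torch
import torch.nn as nn

pos_proj = nn.Linear(4, 32)   # (x0, y0, x1, y1) -> 32 dimensions

def add_positional_encoding(box_repr, bbox):
    # box_repr: [dim] text box representation; bbox: [4] page coordinates.
    pos = torch.tanh(pos_proj(bbox))
    return torch.cat([box_repr, pos], dim=-1)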
"Table 4 reports the NER accuracy on the CONLL03 (Tjong et al., 2003) and CHEMDNER (Krallinger et al., 2015) datasets.",
"For CONLL03, we list the performance of existing approaches.",
"Our baseline SeqIE obtains competitive scores compared to the best methods.",
"The fact that GraphIE significantly outperforms it highlights once more the importance of modeling non-local and non-sequential dependencies, and confirms that our approach is an appropriate method to achieve this goal. 10",
"Table 5: Extraction accuracy on the EDUCATION and JOB datasets (Task 2); each column group gives P/R/F1. Dictionary: EDUCATION 78.7/93.5/85.4, JOB 55.7/70.2/62.1. SeqIE: EDUCATION 85.2/93.6/89.2, JOB 66.2/66.7/66.2. GraphIE: EDUCATION 92.9/92.8/92.9, JOB 67.1/66.1/66.5.",
"10 We achieve the best reported performance among methods not using the recently introduced ELMo (Peters et al., 2018) and BERT (Devlin et al., 2018), which are pretrained on extra-large corpora and computationally demanding.",
"For CHEMDNER, we show the best performance reported in Krallinger et al. (2015), obtained with a feature-based method.",
"Our baseline outperforms the feature-based method, and GraphIE further improves the performance by 1.4%.",
"Analysis To understand the advantage of GraphIE, we first investigate the importance of the graph structure to the model.",
"As shown in Figure 3, using random connections clearly hurts the performance, bringing down the F1 score of GraphIE from 95.12% to 94.29%.",
"This indicates that the task-specific graph structures introduce a beneficial inductive bias.",
"Trivial feature augmentation also does not work well, confirming the necessity of learning the graph embedding with the GCN.",
"We further conduct error analysis on the test set to validate our motivation that GraphIE resolves tagging ambiguity by encouraging consistency among identical entity mentions (cf. Figure 1).",
"Here we examine the word-level tagging accuracy.",
"We define the words that have more than one possible tag in the dataset as ambiguous.",
"We find that among the 1.78% tagging errors of SeqIE, 1.16% are ambiguous and 0.62% are unambiguous.",
"GraphIE reduces the error rate to 1.67%, with 1.06% ambiguous and 0.61% unambiguous.",
"We can see that most of the error reduction is indeed attributable to the ambiguous words.",
"Table 5 shows the results for the social media information extraction task.",
"We first report a simple dictionary-based method as a baseline.",
"Neural IE models achieve much better performance, showing that meaningful patterns are learned by the models rather than simply memorizing the entities in the training set.",
"The proposed GraphIE outperforms SeqIE on both the EDUCATION and JOB datasets, and the improvements are more significant for the EDUCATION dataset (3.7% versus 0.3%).",
"The reason for this difference is the variance in the affinity scores (Mislove et al., 2010) between the two datasets.",
"Li et al. (2014) underline that the affinity value for EDUCATION is 74.3 while for JOB it is only 14.5, which means that in these datasets neighbors are 5 times more likely to have studied at the same university than to have worked at the same company.",
"We can therefore expect that a model like GraphIE, which exploits neighbors' information, obtains larger advantages in a dataset characterized by higher affinity.",
"Table 6 shows the results for the visual information extraction task.",
"GraphIE outperforms the SeqIE baseline on most attributes, and achieves a 1.2% improvement in the micro-average F1 score.",
"This confirms the benefits of using the layout graph structure in visual information extraction.",
"The extraction performance varies across the attributes, ranging from 61.4% for Drug Name to 95.8% for Patient Birthday (similar variations are visible in the baseline).",
"Table 6: Extraction accuracy on the visual IE task; each column group gives P/R/F1 for SeqIE and GraphIE. P. Initials: 93.5/92.4/92.9 vs. 93.6/91.9/92.8; P. Age: 94.0/91.6/92.8 vs. 94.8/91.1/92.9; P. Birthday: 96.6/96.0/96.3 vs. 96.9/94.7/95.8; Drug Name: 71.2/51.2/59.4 vs. 78.5/50.4/61.4; Event: 62.6/65.2/63.9 vs. 64.1/68.7/66.3; R. First Name: 78.3/95.7/86.1 vs. 79.5/95.9/86.9; R. Last Name: 84.5/68.4/75.6 vs. 85.6/68.2/75.9; R. City: 88.9/65.4/75.4 vs. 92.1/66.3/77.1.",
"Similarly, the gap between GraphIE and SeqIE varies in relation to the attributes, ranging between 0.5% in Patient Birthday and 2.4% in Event.",
"In the ablation test described in Table 7, we can see the contributions of: using separate weights for different edge types (+0.8%), horizontal edges (+3.1%), vertical edges (+5.4%), and the CRF (+5.7%).",
"Generalization We also assess GraphIE's capacity to deal with unseen layouts through an additional analysis.",
"From our dataset, we sample 2,000 reports containing the three most frequent templates, and train the models on this subset.",
"Then we test all models in two settings: 1) seen templates, consisting of 1,000 additional reports in the same templates used for training; and 2) unseen templates, consisting of 1,000 reports in two new template types.",
"The performance of GraphIE and SeqIE is reported in Figure 4.",
"Both models achieve good results on seen templates, with GraphIE still scoring 2.8% higher than SeqIE.",
"Figure 4: Micro-average F1 scores tested on seen and unseen templates (Task 3); SeqIE scores 80.3 (seen) and 13.4 (unseen), GraphIE scores 83.1 (seen) and 33.7 (unseen).",
"The gap becomes even larger when our model and the sequential one are tested on unseen templates (i.e. 20.3%), demonstrating that by explicitly modeling the richer structural relations, GraphIE achieves better generalizability.",
"We introduced GraphIE, an information extraction framework that learns local and non-local contextual representations from graph structures to improve predictions.",
"The system operates over a task-specific graph topology describing the underlying structure of the input data.",
"GraphIE jointly models the node representations (i.e. textual units, namely words or sentences) and their dependencies.",
"Graph convolutions propagate information through neighboring nodes to ultimately support the decoder during tagging at the word level.",
"We evaluated our framework on three IE tasks, namely textual, social media and visual information extraction.",
"Results show that it efficiently models non-local and non-sequential context, consistently enhancing accuracy and outperforming the competitive SeqIE baseline (i.e. BiLSTM+CRF).",
"Future work includes the exploration of automatically learning the underlying graphical structure of the input data.",
"We thank the MIT NLP group and the reviewers for their helpful comments.",
"This work is supported by the MIT-IBM Watson AI Lab.",
"Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors, and do not necessarily reflect the views of the funding organizations." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "objective", "other", "objective", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "other", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "other", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "other", "other" ]
[ "Riyaz Ahmad Bhat Interaction Labs, Bangalore, India", "Dipti Misra Sharma LTRC, IIIT-H, Hyderabad, India [email protected]", "Abstract Code-switching is a phenomenon of mixing grammatical structures of two or more languages under varied social constraints.", "The code-switching data differ so radically from the benchmark corpora used in NLP community that the application of standard technologies to these data degrades their performance sharply.", "Unlike standard corpora, these data often need to go through additional processes such as language identification, normalization and/or back-transliteration for their efficient processing.", "In this paper, we investigate these indispensable processes and other problems associated with syntactic parsing of code-switching data and propose methods to mitigate their effects.", "In particular, we study dependency parsing of code-switching data of Hindi and English multilingual speakers from Twitter.", "We present a treebank of Hindi-English code-switching tweets under Universal Dependencies scheme and propose a neural stacking model for parsing that effi-ciently leverages part-of-speech tag and syntactic tree annotations in the code-switching treebank and the preexisting Hindi and English treebanks.", "We also present normalization and back-transliteration models with a decoding process tailored for code-switching data.", "Results show that our neural stacking parser is 1.5% LAS points better than the augmented parsing model and our decoding process improves results by 3.8% LAS points over the first-best normalization and/or back-transliteration.", "1 Code-mixing is another term in the linguistics literature used interchangeably with code-switching.", "Both terms are often used to refer to the same or similar phenomenon of mixed language use.", "belonging to two or more different languages (Gumperz, 1982).", "The phenomenon is prevalent in multilingual societies where speakers share more than one language and is often prompted by multiple social factors (Myers-Scotton, 1995).", "Moreover, code-switching is mostly prominent in colloquial language use in daily conversations, both online and offline.", "Most of the benchmark corpora used in NLP for training and evaluation are based on edited monolingual texts which strictly adhere to the norms of a language related, for example, to orthography, morphology, and syntax.", "Social media data in general and CS data, in particular, deviate from these norms implicitly set forth by the choice of corpora used in the community.", "This is the reason why the current technologies often perform miserably on social media data, be it monolingual or mixed language data (Solorio and Liu, 2008b; Vyas et al., 2014; Cetinoglu et al., 2016; Gimpel et al., 2011; Owoputi et al., 2013; Kong et al., 2014).", "CS data offers additional challenges over the monolingual social media data as the phenomenon of code-switching transforms the data in many ways, for example, by creating new lexical forms and syntactic structures by mixing morphology and syntax of two languages making it much more diverse than any monolingual corpora (Cetinoglu et al., 2016).", "As the current computational models fail to cater to the complexities of CS data, there is often a need for dedicated techniques tailored to its specific characteristics.", "Given the peculiar nature of CS data, it has been widely studied in linguistics literature (Poplack, 1980; Gumperz, 1982; Myers-Scotton, 1995), and more recently, there has been a surge in studies 
"Besides the individual computational works, a series of shared tasks and workshops on preprocessing and shallow syntactic analysis of CS data has also been conducted at multiple venues such as Empirical Methods in NLP (EMNLP 2014 and 2016), International Conference on NLP (ICON 2015 and 2016) and Forum for Information Retrieval Evaluation (FIRE 2015 and 2016).",
"Most of these works have attempted to address preliminary tasks such as language identification, normalization and/or back-transliteration as these data often need to go through these additional processes for their efficient processing.",
"In this paper, we investigate these indispensable processes and other problems associated with syntactic parsing of code-switching data and propose methods to mitigate their effects.",
"In particular, we study dependency parsing of Hindi-English code-switching data of multilingual Indian speakers from Twitter.",
"Hindi-English code-switching presents an interesting scenario for the parsing community.",
"Mixing among typologically diverse languages intensifies structural variation, which makes parsing more challenging.",
"For example, there will be many sentences containing: (1) both SOV and SVO word orders, 2 (2) both head-initial and head-final genitives, (3) both prepositional and postpositional phrases, etc.",
"More importantly, neither the Hindi nor the English treebank would provide any training instances for these mixed structures within individual sentences.",
"In this paper, we present the first code-switching treebank that provides syntactic annotations required for parsing mixed-grammar syntactic structures.",
"Moreover, we present a parsing pipeline designed explicitly for Hindi-English CS data.",
"The pipeline comprises several modules such as a language identification system, a back-transliteration system, and a dependency parser.",
"The gist of these modules and our overall research contributions are listed as follows: (i) back-transliteration and normalization models based on encoder-decoder frameworks with sentence decoding tailored for code-switching data; (ii) a dependency treebank of Hindi-English code-switching tweets under the Universal Dependencies scheme; and (iii) a neural parsing model which learns POS tagging and parsing jointly and also incorporates knowledge from the monolingual treebanks using neural stacking.",
"2 Order of Subject, Object and Verb in transitive sentences.",
"As preliminary steps before parsing of CS data, we need to identify the language of tokens and normalize and/or back-transliterate them to enhance the parsing performance.",
"These steps are indispensable for processing CS data and without them the performance drops drastically, as we will see in the Results section.",
"We need normalization of non-standard word forms and back-transliteration of Romanized Hindi words for addressing the out-of-vocabulary problem, and the lexical and syntactic ambiguity introduced by contracted word forms.",
"As we will train separate normalization and back-transliteration models for Hindi and English, we need language identification for selecting which model to use for inference for each word form separately.",
"Moreover, we also need language information for decoding the best word sequences.",
"For the language identification task, we train a multilayer perceptron (MLP) stacked on top of a recurrent bidirectional LSTM (Bi-LSTM) network, as shown in Figure 1.",
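A compact sketch of this architecture, assuming PyTorch (the paper's implementation is in DyNet) and omitting the character-level features; the layer sizes follow the hyperparameters reported later in the paper, but the exact wiring is our assumption.

import torch.nn as nn

class LangID(nn.Module):
    def __init__(self, vocab_size, num_tags, emb_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, 64, bidirectional=True, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(128, 64), nn.Tanh(),
                                 nn.Linear(64, num_tags))

    def forward(self, word_ids):
        h, _ = self.bilstm(self.embed(word_ids))   # [batch, n, 128]
        return self.mlp(h).log_softmax(-1)         # per-token language tag scores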
"Finally, the output layer uses a feed-forward neural network with a softmax function to produce a probability distribution over the language tags.",
"We train the network on our CS training set concatenated with the data set provided in the ICON 2015 shared task 3 (728 Facebook comments) on language identification and evaluate it on the datasets from Bhat et al. (2017).",
"We achieved state-of-the-art performance on both the development and test sets (Bhat et al., 2017).",
"The results are shown in Table 1.",
"We learn two separate but similar character-level models for normalization-cum-transliteration of noisy Romanized Hindi words and normalization of noisy English words.",
"We treat both the normalization and back-transliteration problems as a general sequence-to-sequence learning problem.",
"In general, our goal is to learn a mapping from non-standard English and Romanized Hindi word forms to standard forms in their respective scripts.",
"In the case of Hindi, we address the problem of normalization and back-transliteration of Romanized Hindi words using a single model.",
"We use the attention-based encoder-decoder model of Luong et al. (2015) with global attention for learning.",
"For Hindi, we train the model on the transliteration pairs (87,520) from the Libindic transliteration project 4 and Brahmi-Net (Kunchukuttan et al., 2015), which are further augmented with noisy transliteration pairs (175,668) for normalization.",
"Similarly, for normalization of noisy English words, we train the model on noisy word forms (429,715) synthetically generated from the English vocabulary.",
"We use simple rules such as dropping non-initial vowels and replacing consonants based on their phonological proximity to generate synthetic data for normalization.",
"3 http://ltrc.iiit.ac.in/icon2015/",
"4 https://github.com/libindic/indic-trans",
"Figure 2 shows some of the noisy forms generated from standard word forms using simple and finite rules, which include vowel elision (please → pls), interchanging similar consonants and vowels (cousin → couzin), replacing consonant or vowel clusters with a single letter (Twitter → Twiter), etc.",
"From here onwards, we will refer to both normalization and back-transliteration as normalization.",
"At inference time, our normalization models will predict the most likely word form for each input word.",
"However, the single-best output from the model may not always be the best option considering the overall sentential context.",
"Contracted word forms in social media content are quite often ambiguous and can represent different standard word forms.",
"For example, the noisy form 'pt' can expand to different standard word forms such as 'put', 'pit', 'pat', 'pot' and 'pet'.",
"The choice of word selection will solely depend on the sentential context.",
"To select contextually relevant forms, we use exact search over the n-best normalizations from the respective models, extracted using beam-search decoding.",
"The best word sequence is selected using Viterbi decoding over b^n word sequences scored by a trigram language model, where b is the beam width and n is the sentence length.",
"The language models are trained on the monolingual data of Hindi and English using the KenLM toolkit (Heafield et al., 2013).",
"For each word, we extract the five best normalizations (b = 5).",
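The n-best selection can be implemented as exact Viterbi decoding over trigram states; a sketch, where lm_score stands in for a KenLM-style conditional log-probability query rather than a real KenLM call.

def viterbi_select(candidates, lm_score):
    # candidates: one list of candidate word forms per position (here, up to 5).
    # lm_score(w1, w2, w): trigram log-probability log p(w | w1 w2).
    beams = {("<s>", "<s>"): (0.0, [])}
    for options in candidates:
        new_beams = {}
        for (w1, w2), (lp, path) in beams.items():
            for w in options:
                score = lp + lm_score(w1, w2, w)
                key = (w2, w)
                if key not in new_beams or score > new_beams[key][0]:
                    new_beams[key] = (score, path + [w])
        beams = new_beams
    return max(beams.values(), key=lambda t: t[0])[1]

Because only the last two words matter for a trigram model, this dynamic program covers all b^n sequences without enumerating them.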
"Decoding the best word sequence is a nontrivial problem for CS data due to the lack of normalized and back-transliterated CS data for training a language model.",
"One obvious solution is to apply decoding on individual language fragments in a CS sentence (Dutta et al., 2015).",
"One major problem with this approach is that the language models used for scoring are trained on complete sentences but are applied on sentence fragments.",
"Figure 3: The figure shows a 3-step decoding process for the sentence 'Yar cn anyone tel me k twitr acount bnd ksy krty hn plz' (Friend can anyone tell me how to close twitter account please).",
"Scoring individual CS fragments might often lead to wrong word selection due to incomplete context, particularly at fragment peripheries.",
"We solve this problem by using a 3-step decoding process that works on two separate versions of a CS sentence, one in Hindi, and one in English.",
"In the first step, we replace the first-best back-transliterated forms of Hindi words by their translation equivalents using a Hindi-English bilingual lexicon. 5",
"An exact search is used over the top 5 normalizations of English words, the translation equivalents of Hindi words and the actual word itself.",
"In the second step, we decode the best word sequence over the Hindi version of the sentence by replacing the best English word forms decoded in the first step with their translation equivalents.",
"An exact search is used over the top 5 normalizations of Hindi words, the dictionary equivalents of the decoded English words and the original words.",
"In the final step, English and Hindi words are selected from their respective decoded sequences using the predicted language tags from the language identification system.",
"Note that the bilingual mappings are only used to aid the decoding process by making the CS sentences lexically monolingual so that the monolingual language models can be used for scoring.",
"They are not used in the final decoded output.",
"The overall decoding process is shown in Figure 3.",
"We evaluate our normalization models on the evaluation set of Bhat et al. (2017).",
"Results of our systems are reported in Table 2 with a comparison of accuracies based on the nature of decoding used.",
"The results clearly show the significance of our 3-step decoding over first-best and fragment-wise decoding.",
"Table 2: Normalization accuracy based on the number of noisy tokens in the evaluation set (FB = first best, FW = fragment-wise). Hindi Dev: 1,549 tokens, FB 82.82, FW 87.28, 3-step 90.01; Hindi Test: 1,465 tokens, FB 83.54, FW 88.19, 3-step 90.64; English Dev: 34 tokens, FB 82.35, FW 88.23, 3-step 88.23; English Test: 28 tokens, FB 71.42, FW 75.21, 3-step 81.71.",
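A schematic of the 3-step decoding; every helper here (candidate lists, bilingual dictionaries, per-language decoders) is a stand-in for the components described above, not the released code.

def three_step_decode(tokens, lang_tags, en_cands, hi_cands,
                      hi2en, en2hi, decode_en, decode_hi):
    # Step 1: English-side decoding. Hindi words are replaced by translation
    # equivalents of their first-best transliterations; English words keep
    # their top-5 normalization candidates plus the original form.
    en_options = [en_cands[t] + [t] if tag == "en" else hi2en[hi_cands[t][0]]
                  for t, tag in zip(tokens, lang_tags)]
    en_best = decode_en(en_options)   # e.g. viterbi_select with the English LM

    # Step 2: Hindi-side decoding. The best English forms from step 1 are
    # swapped for Hindi dictionary equivalents; Hindi words keep candidates.
    hi_options = [hi_cands[t] + [t] if tag == "hi" else en2hi[en_best[i]]
                  for i, (t, tag) in enumerate(zip(tokens, lang_tags))]
    hi_best = decode_hi(hi_options)   # e.g. viterbi_select with the Hindi LM

    # Step 3: pick each word from the decoded sequence of its own language.
    return [en_best[i] if tag == "en" else hi_best[i]
            for i, tag in enumerate(lang_tags)]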
"3 Universal Dependencies for Hindi-English Recently, Bhat et al. (2017) provided a CS dataset for the evaluation of their parsing models, which they trained on the Hindi and English Universal Dependencies (UD) treebanks.",
"We extend this dataset by annotating 1,448 more sentences.",
"Following Bhat et al. (2017), we first sampled CS data from a large set of tweets of Indian language users that we crawled from Twitter using Tweepy, 6 a Twitter API wrapper.",
"6 http://www.tweepy.org/",
"We then used a language identification system trained on the ICON dataset (see Section 2) to filter Hindi-English CS tweets from the crawled Twitter data.",
"Only those tweets were selected that satisfied a minimum code-switching ratio of 30:70 (%).",
"From this dataset, we manually selected 1,448 tweets for annotation.",
"The selected tweets were thoroughly checked for their code-switching ratio.",
"For POS tagging and dependency annotation, we used Version 2 of the Universal Dependencies guidelines (De Marneffe et al., 2014), while language tags are assigned based on the tag set defined in Solorio et al. (2014) and Jamatia et al. (2015).",
"The dataset was annotated by two expert annotators who have been associated with annotation projects involving syntactic annotations for around 10 years.",
"Nonetheless, we also ensured the quality of the manual annotations by carrying out an inter-annotator agreement analysis.",
"We randomly selected a dataset of 150 tweets which were annotated by both annotators for both POS tagging and dependency structures.",
"The inter-annotator agreement has a 96.20% accuracy for POS tagging and a 95.94% UAS and a 92.65% LAS for dependency parsing.",
"We use our dataset for training while the development and evaluation sets from Bhat et al. (2017) are used for tuning and evaluation of our models.",
"Since the annotations in these datasets follow version 1.4 of the UD guidelines, we converted them to version 2 by using carefully designed rules.",
"The statistics about the data are given in Table 3.",
"We adapt the transition-based parser of Kiperwasser and Goldberg (2016) as our base model and incorporate POS tag and monolingual parse tree information into the model using neural stacking, as shown in Figures 4 and 6.",
"Our parsing models are based on an arc-eager transition system (Nivre, 2003).",
"The arc-eager system defines a set of configurations for a sentence w_1, ..., w_n, where each configuration C = (S, B, A) consists of a stack S, a buffer B, and a set of dependency arcs A.",
"For each sentence, the parser starts with an initial configuration where S = [ROOT], B = [w_1, ..., w_n] and A = ∅, and terminates with a configuration C if the buffer is empty and the stack contains the ROOT.",
"The parse trees derived from transition sequences are given by A.",
"To derive the parse tree, the arc-eager system defines four types of transitions (t): Shift, Left-Arc, Right-Arc, and Reduce.",
"We use the training-by-exploration method of Goldberg and Nivre (2012) for decoding a transition sequence, which helps mitigate error propagation at evaluation time.",
"We also use the pseudo-projective transformations of Nivre and Nilsson (2005) to handle a higher percentage of non-projective arcs in the CS data (about 2%).",
"We use the most informative scheme of head+path to store the transformation information.",
"Our base model is a stack of a tagger network and a parser network inspired by the stack-propagation model of Zhang and Weiss (2016).",
"The parameters of the tagger network are shared and act as a regularizer on the parsing model.",
"The model is trained by minimizing a joint negative log-likelihood loss for both tasks.",
"Unlike Zhang and Weiss (2016), we compute the gradients of the log-loss function simultaneously for each training instance.",
"While the parser network is updated given the parsing loss only, the tagger network is updated with respect to both the tagging and parsing losses.",
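A sketch of this joint update as a PyTorch-style training step (the actual implementation is in DyNet); the model's output interface and all names are assumptions.

import torch.nn.functional as F

def train_step(model, optimizer, words, gold_tags, gold_transitions):
    # model(words) is assumed to return per-token tag log-probabilities and
    # per-configuration transition log-probabilities.
    optimizer.zero_grad()
    tag_logps, transition_logps = model(words)
    tagging_loss = F.nll_loss(tag_logps, gold_tags)
    parsing_loss = F.nll_loss(transition_logps, gold_transitions)
    # The parser-specific parameters receive gradients only from parsing_loss;
    # the shared tagger parameters receive gradients from both losses.
    (tagging_loss + parsing_loss).backward()
    optimizer.step()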
"Both the tagger and parser networks comprise an input layer, a feature layer, and an output layer, as shown in Figure 4.",
"Following Zhang and Weiss (2016), we refer to this model as stack-prop.",
"Tagger network: The input layer of the tagger encodes each input word in a sentence by concatenating a pre-trained word embedding with its character embedding given by a character Bi-LSTM.",
"In the feature layer, the concatenated word and character representations are passed through two stacked Bi-LSTMs to generate a sequence of hidden representations which encode the contextual information spread across the sentence.",
"The first Bi-LSTM is shared with the parser network while the other is specific to the tagger.",
"Finally, the output layer uses a feed-forward neural network with a softmax function to produce a probability distribution over the Universal POS tags.",
"We only use the forward and backward hidden representations of the focus word for classification.",
"Parser Network: Similar to the tagger network, the input layer encodes the input sentence using word and character embeddings which are then passed to the shared Bi-LSTM.",
"The hidden representations from the shared Bi-LSTM are then concatenated with the dense representations from the feed-forward network of the tagger and passed through the Bi-LSTM specific to the parser.",
"This ensures that the tagging network is penalized for the parsing error caused by error propagation by back-propagating the gradients to the shared tagger parameters (Zhang and Weiss, 2016).",
"Finally, we use a non-linear feed-forward network to predict the labeled transitions for the parser configurations.",
"From each parser configuration, we extract the top node in the stack and the first node in the buffer and use their hidden representations from the parser-specific Bi-LSTM for classification.",
"Figure 5: Code-switching tweet showing grammatical fragments from Hindi and English: 'dis rat ki barish alwayz scares me' ('This night of rain always scares me').",
"It seems reasonable that limited CS data would complement large monolingual data in parsing CS data, and that a parsing model which leverages both would significantly improve parsing performance.",
"While a parsing model trained on our limited CS data might not be enough to accurately parse the individual grammatical fragments of Hindi and English, the preexisting Hindi and English treebanks are large enough to provide sufficient annotations to capture their structure.",
"Similarly, parsing model(s) trained on the Hindi and English data may not be able to properly connect the divergent fragments of the two languages as the model lacks evidence for such mixed structures in the monolingual data.",
"This will happen quite often as Hindi and English are typologically very diverse (see Figure 5).",
"As we discussed above, we adapted feature-level neural stacking (Zhang and Weiss, 2016; Chen et al., 2016) for joint learning of POS tagging and parsing.",
"Similarly, we also adapt this stacking approach for incorporating the monolingual syntactic knowledge into the base CS model.",
"Recently, Wang et al. (2017) used neural stacking for injecting syntactic knowledge of English into a graph-based Singlish parser, which led to significant improvements in parsing performance.",
"Unlike Wang et al. (2017), our base stacked models allow us to transfer POS tagging knowledge along with the parse tree knowledge.",
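A sketch of the feature-level augmentation used for this knowledge transfer; all module names are ours and the source (bilingual) model's internals are treated as black boxes.

import torch

def stacked_inputs(cs_word_repr, cs_tagger_hidden, source_model, words):
    # Activations of the source model computed on the same input words.
    src_tag_mlp = source_model.tagger_mlp(words)          # augments CS tagger input
    src_parse_hidden = source_model.parser_bilstm(words)  # augments CS parser input
    tagger_in = torch.cat([cs_word_repr, src_tag_mlp], dim=-1)
    parser_in = torch.cat([cs_word_repr, cs_tagger_hidden,
                           src_parse_hidden], dim=-1)
    return tagger_in, parser_in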
"As shown in Figure 6, we transfer both POS tagging and parsing information from the source model trained on the augmented Hindi and English data.",
"For tagging, we augment the input layer of the CS tagger with the MLP layer of the source tagger.",
"For transferring parsing knowledge, hidden representations from the parser-specific Bi-LSTM of the source parser are augmented with the input layer of the CS parser, which already includes the hidden layer of the CS tagger, and word and character embeddings.",
"In addition, we also add the MLP layer of the source parser to the MLP layer of the CS parser.",
"The MLP layers of the source parser are generated using raw features from CS parser configurations.",
"Apart from the addition of these learned representations from the source model, the overall CS model remains similar to the base model shown in Figure 4.",
"The tagging and parsing losses are back-propagated by traversing back the forward paths to all trainable parameters in the entire network for training, and the whole network is used collectively for inference.",
"We train all of our POS tagging and parsing models on the training sets of the Hindi and English UD-v2 treebanks and our Hindi-English CS treebank.",
"For tuning and evaluation, we use the development and evaluation sets from Bhat et al. (2017).",
"We conduct multiple experiments in gold and predicted settings to measure the effectiveness of the sub-modules of our parsing pipeline.",
"In predicted settings, we use the POS taggers separately trained on the Hindi, English and CS training sets.",
"All of our models use word embeddings from transformed Hindi and English embedding spaces to address the problem of lexical differences prevalent in CS sentences.",
"Word Representations For the language identification, POS tagging and parsing models, we include the lexical features in the input layer of our neural networks using 64-dimensional pre-trained word embeddings, while we use randomly initialized embeddings within a range of [-0.1, +0.1] for non-lexical units such as POS tags and dictionary flags.",
"We use 32-dimensional character embeddings for all three models and 32-dimensional POS tag embeddings for the pipelined parsing models.",
"The distributed representations of the Hindi and English vocabularies are learned separately from the Hindi and English monolingual corpora.",
"The English monolingual data contains around 280M sentences, while the Hindi data is comparatively smaller and contains around 40M sentences.",
"The word representations are learned using the Skip-gram model with negative sampling, as implemented in the word2vec toolkit (Mikolov et al., 2013).",
"We use the projection algorithm of Artetxe et al. (2016) to transform the Hindi and English monolingual embeddings into the same semantic space using a bilingual lexicon (about 63,000 entries).",
"The bilingual lexicon is extracted from the ILCI and Bojar Hindi-English parallel corpora (Jha, 2010; Bojar et al., 2014).",
"For the normalization models, we use 32-dimensional character embeddings uniformly initialized within a range of [-0.1, +0.1].",
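The projection step can be sketched as an orthogonal Procrustes solution over the bilingual lexicon, in the spirit of Artetxe et al. (2016); the preprocessing here is simplified to length normalization.

import numpy as np

def learn_projection(X, Z):
    # X: Hindi vectors and Z: English vectors for the same lexicon entries.
    X = X / np.linalg.norm(X, axis=1, keepdims=True)   # length normalization
    Z = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    u, _, vt = np.linalg.svd(X.T @ Z)
    return u @ vt                                      # orthogonal map W

# All Hindi embeddings are then mapped into the English space:
# hi_transformed = hi_vectors @ learn_projection(X, Z)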
"Hidden dimensions The POS tagger-specific Bi-LSTMs have 128 cells, while the parser-specific Bi-LSTMs have 256 cells.",
"The Bi-LSTM in the language identification model has 64 cells.",
"The character Bi-LSTMs have 32 cells for all three models.",
"The hidden layer of the MLP has 64 nodes for the language identification network, 128 nodes for the POS tagger and 256 nodes for the parser.",
"We use hyperbolic tangent as the activation function in all tasks.",
"In the normalization models, we use single-layer Bi-LSTMs with 512 cells for both encoding and decoding of character sequences.",
"Learning For the language identification, POS tagging and parsing networks, we use momentum SGD for learning with a minibatch size of 1.",
"The LSTM weights are initialized with random orthonormal matrices as described in Saxe et al. (2013).",
"We set the dropout rate to 30% for the POS tagger and parser Bi-LSTM and MLP hidden states, while for the language identification network we set the dropout to 50%.",
"All three models are trained for up to 100 epochs, with early stopping based on the development set.",
"In the case of normalization, we train our encoder-decoder models for 25 epochs using vanilla SGD.",
"We start with a learning rate of 1.0 and, after 8 epochs, halve it every epoch.",
"We use a mini-batch size of 128, and the normalized gradient is rescaled whenever its norm exceeds 5.",
"We use a dropout rate of 30% for the Bi-LSTM.",
"The language identification, POS tagging and parsing code is implemented in DyNet (Neubig et al., 2017), and for normalization without decoding we use the OpenNMT toolkit for neural machine translation (Klein et al., 2017).",
"All the code is available at https://github.com/irshadbhat/nsdp-cs and the data is available at https://github.com/CodeMixedUniversalDependencies/UD_Hindi_English .",
"In Table 4, we present the results of our main model that uses neural stacking for learning POS tagging and parsing and also for knowledge transfer from the Bilingual model.",
"Transferring POS tagging and syntactic knowledge using neural stacking gives a 1.5% LAS improvement 7 over a naive approach of data augmentation.",
"7 The improvements discussed in the running text are for the models that are evaluated in auto settings.",
"The Bilingual model, which is trained on the union of the Hindi and English data sets, is the least accurate of all our parsing models.",
"However, it achieves better or near state-of-the-art results on the Hindi and English evaluation sets (see Table 5).",
"As compared to the best system in the CoNLL 2017 Shared Task on Universal Dependencies (Zeman et al., 2017; Dozat et al., 2017), our results for English are around 3% better in LAS, while for Hindi they are only 0.5% LAS points worse.",
"The CS model trained only on the CS training data is slightly more accurate than the Bilingual model.",
"Augmenting the CS data to the Hindi-English data complements their syntactic structures relevant for parsing mixed-grammar structures, which are otherwise missing in the individual datasets.",
"The average improvements of around 5% LAS clearly show their complementary nature.",
"Table 6 summarizes the POS tagging results on the CS evaluation set.",
"The tagger trained on the CS training data is 2.5% better than the Bilingual tagger.",
"Adding CS training data to the Hindi and English training sets further improves the accuracy by 1%.",
"However, our stack-prop tagger achieves the highest accuracy of 90.53% by leveraging POS information from the Bilingual tagger using neural stacking.",
"Table 5: POS and parsing results for the Hindi and English monolingual test sets using pipeline and stack-prop models. Hindi: gold POS UAS 95.66, LAS 93.08; pipeline (auto POS) POS 97.52, UAS 94.08, LAS 90.69; stack-prop (auto POS) POS 97.65, UAS 94.36, LAS 91.02. English: gold POS UAS 89.95, LAS 87.96; pipeline (auto POS) POS 95.75, UAS 87.71, LAS 84.59; stack-prop (auto POS) POS 95.80, UAS 88.30, LAS 85.30.",
"Pipeline vs Stack-prop Table 7 summarizes the parsing results of our pipeline models, which use predicted POS tags as input features.",
"As compared to our stack-prop models (Table 4), the pipeline models are less accurate (an average 1% LAS improvement across models), which clearly emphasizes the significance of back-propagating the parsing loss to the tagging parameters as well.",
"Significance of normalization We also conducted experiments to evaluate the impact of normalization on both POS tagging and parsing.",
"The results are shown in Table 8.",
"As expected, tagging and parsing models that use normalization without decoding achieve an average of 1% improvement over the models that do not use normalization at all.",
"However, our 3-step decoding leads to higher gains in tagging as well as parsing accuracies.",
"We achieved around 2.8% improvements in tagging and around 4.6% in parsing over the models that use first-best word forms from the normalization models.",
"More importantly, there is a moderate drop in accuracy (1.4% LAS points) caused by normalization errors (see the results in Table 4 for gold vs. auto normalization).",
"Monolingual vs Cross-lingual Embeddings We also conducted experiments with monolingual and cross-lingual embeddings to evaluate the need for transforming the monolingual embeddings into the same semantic space for processing CS data.",
"Results are shown in Table 9.",
"Cross-lingual embeddings have brought around 0.5% improvements in both tagging and parsing.",
"Cross-lingual embeddings are essential for removing lexical differences, which are one of the problems encountered in CS data.",
"Addressing the lexical differences will help in better learning by exposing syntactic similarities between languages.",
"Table 9: Impact of monolingual and cross-lingual embeddings on stacking model performance. Monolingual: POS 90.07, UAS 79.46, LAS 70.53; Cross-lingual: POS 90.53, UAS 80.23, LAS 71.03.",
"In this paper, we have presented a dependency parser designed explicitly for Hindi-English CS data.",
"The parser uses the neural stacking architecture of Zhang and Weiss (2016) and Chen et al. (2016) for learning POS tagging and parsing and for knowledge transfer from Bilingual models trained on the Hindi and English UD treebanks.",
"We have also presented normalization and back-transliteration models with a decoding process tailored for CS data.",
"Our neural stacking parser is 1.5% LAS points better than the augmented parsing model and 3.8% LAS points better than the one which uses first-best normalizations.",
"References",
"Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2016. Learning principled bilingual mappings of word embeddings while preserving monolingual invariance. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2289-2294.",
"Irshad Bhat, Riyaz A. Bhat, Manish Shrivastava, and Dipti Sharma. 2017. Joining hands: Exploiting monolingual treebanks for parsing of code-mixing data. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 324-330. Association for Computational Linguistics, Valencia, Spain.",
"Ozlem Cetinoglu, Sarah Schulz, and Ngoc Thang Vu. 2016. Challenges of computational processing of code-switching. In Proceedings of the Second Workshop on Computational Approaches to Code Switching, pages 1-11. Association for Computational Linguistics, Austin, Texas." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "objective", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "objective", "objective", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "other", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "objective" ]
[ "Ethan Wilcox 1 , Peng Qian 2 , Richard Futrell 3 , Miguel Ballesteros 4 , and Roger Levy 2", "1 Department of Linguistics, Harvard University, [email protected] 2 Department of Brain and Cognitive Sciences, MIT, { pqian,rplevy } @mit.edu 3 Department of Language Science, UC Irvine, [email protected] 4 IBM Research, MIT-IBM Watson AI Lab [email protected]", "Abstract", "State-of-the-art LSTM language models trained on large corpora learn sequential contingencies in impressive detail and have been shown to acquire a number of non-local grammatical dependencies with some success.", "Here we investigate whether supervision with hierarchical structure enhances learning of a range of grammatical dependencies, a question that has previously been addressed only for subject-verb agreement.", "Using controlled experimental methods from psycholinguistics, we compare the performance of word-based LSTM models versus two models that represent hierarchical structure and deploy it in left-to-right processing: Recurrent Neural Network Grammars (RNNGs) (Dyer et al., 2016) and a incrementalized version of the Parsing-as-Language-Modeling configuration from Charniak et al. (2016).", "Models are tested on a diverse range of configurations for two classes of non-local grammatical dependencies in English Negative Polarity licensing and FillerGap Dependencies .", "Using the same training data across models, we find that structurally-supervised models outperform the LSTM, with the RNNG demonstrating best results on both types of grammatical dependencies and even learning many of the Island Constraints on the fillergap dependency.", "Structural supervision thus provides data efficiency advantages over purely string-based training of neural language models in acquiring human-like generalizations about non-local grammatical dependencies.", "Long Short-Term Memory Recurrent Neural Networks (LSTMs) (Hochreiter and Schmidhuber, 1997) have achieved state of the art language modeling performance (Jozefowicz et al., 2016) and have been shown to indirectly learn a number of non-local grammatical dependencies, such as", "subject-verb number agreement and filler-gap licensing (Linzen et al., 2016; Wilcox et al., 2018), although they fail to learn others, such as Negative Polarity Item and anaphoric pronoun licensing (Marvin and Linzen, 2018; Futrell et al., 2018).", "LSTMs, however, require large amounts of training data and remain relatively uninterpretable.", "One model that attempts to address both these issues is the Recurrent Neural Network Grammar (Dyer et al., 2016).", "RNNGs are generative models, which represent hierarchical syntactic structure and use neural control to deploy it in left-to-right processing.", "They can achieve state-of-the-art broad-coverage scores on language modeling and phrase structure parsing tasks, learn Noun Phrase headedness (Kuncoro et al., 2016), and outperform linear models at learning subject-verb number agreement (Kuncoro et al., 2018).", "In this work, we comparatively evaluate LSTMs, RNNGs and a third model trained using syntactic supervisionsimilar to the Parsing-as-Language-Modeling configuration from Charniak et al. (2016)by conducting side-by-side tests on two novel English grammatical dependencies, deploying methodology from psycholinguistics.", "In this paradigm, the language models are fed with hand-crafted sentences, designed to draw out behavior that belies whether they have learned the underlying syntactic dependency.", "For example, Linzen et al. 
"If the model assigns a relatively higher probability to the grammatical plural verb 'are' than to the ungrammatical singular 'is', it can be said to have learned the agreement dependency.",
"Here, we investigate two non-local dependencies that remain untested for RNNGs.",
"Negative Polarity Item (NPI) licensing is the dependency between a negative licensor, such as 'not' or 'none', and a Negative Polarity Item, such as 'any' or 'ever'.",
"The filler-gap dependency is the dependency between a filler, such as 'who' or 'what', and a gap, which is an empty syntactic position.",
"Both dependencies have been shown to be learnable by LSTMs trained on large amounts of data (Wilcox et al., 2018; Marvin and Linzen, 2018).",
"Here, we investigate whether, after controlling for the size of the training data, explicit hierarchical representation results in learning advantages.",
"Recurrent Neural Network LMs model a sentence on a purely sequential basis, without explicitly representing the latent syntactic structure.",
"We use the LSTM architecture in Hochreiter and Schmidhuber (1997), deploying a 2-layer LSTM language model with hidden layer size 256, input embedding size 256, and dropout rate 0.3.",
"We refer to this model as the LSTM model in the following sections.",
"Recurrent Neural Network Grammars (Dyer et al., 2016) predict the joint probability of a sentence and its syntactic parse.",
"RNNGs contain three sub-components, all of which are LSTMs: the neural stack, which keeps track of the current parse; the output buffer, which keeps track of previously seen terminals; and the history of actions.",
"At each timestep the model can take three different actions: NT, which introduces a nonterminal symbol, such as a VP or NP, onto the stack; SHIFT, which places a terminal symbol onto the top of the stack; or REDUCE.",
"REDUCE pops terminal symbols (words) off the stack until a nonterminal phrasal boundary is encountered; it then combines the terminals into a single representation via a bidirectional LSTM and pushes the newly-reduced constituent back onto the stack.",
"By reducing potentially unbounded constituents within the neural stack, the RNNG is able to create structural adjacency between co-dependent words that may be linearly distal.",
"Following Dyer et al. (2016), we use 2-layer LSTMs with hidden layer size 256 for the stack-LSTM, action LSTM, and terminal LSTM, and dropout rate 0.3.",
"ActionLSTM: It is the combination of the neural stack and the REDUCE function that may give the RNNG an advantage over purely sequential models (such as LSTMs) or models that deploy syntactic supervision without explicit notions of compositionality.",
"In order to assess the gains from explicitly modeling compositionality, we compare the previous two models against an incrementalized version of the Parsing-as-Language-Modeling configuration presented in Charniak et al. (2016).",
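For illustration, here is a hand-written action sequence of this kind for a toy sentence; it is an example of the action space, not the output of a trained model.

# Derives the bracketing (S (NP the hungry cat) (VP meows)).
actions = [
    "NT(S)",
    "NT(NP)", "SHIFT(the)", "SHIFT(hungry)", "SHIFT(cat)", "REDUCE",
    "NT(VP)", "SHIFT(meows)", "REDUCE",
    "REDUCE",
]
# For the RNNG, each REDUCE composes the finished constituent with a
# bidirectional LSTM and pushes it back on the stack; for the ActionLSTM
# described next, REDUCE is only a generic phrasal boundary marker.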
(2016).", "In this model, we strip an RNNG of its neural stack and output buffer , and train it to jointly predict the action sequence of a parse tree as well as the upcoming word.", "The action space of the model contains a set of non-terminal nodes ( NT ), terminal generations ( GEN ), as well as a ( REDUCE ) action, which functions only as a generic phrasal boundary marker.", "The model was trained using embedding size 256, dropout 0.3, and was able to achieve a parsing F1 score of 92.81 on the PTB, which is only marginally better than the performance of the original architecture on the same test set, as reported in Kuncoro et al. (2016).", "We will refer to this model as the ActionLSTM model in the following sections.", "All three models are trained on the training-set portion of the English Penn Treebank standardly used in the parsing literature (PTB; sections 2-21), which consists of about 950,000 tokens of English language news-wire text (Marcus et al., 1993).", "The RNNG and Action models get supervision from syntactic annotationcrucially, only constituent boundaries and major syntactic categories, with functional tags and empty categories stripped awaywhereas the LSTM language model only uses the sequences of terminal words.", "We train the models until performance converges on the held-out PTB development-set data.", "The surprisal , or negative log-conditional probability, S ( x i ) of a sentence's i th word x i , tells us how strongly x i is expected in context and is also known to correlate with human processing diffi-culty (Smith and Levy, 2013; Hale, 2001; Levy, 2008).", "For sentences out of context, surprisal is: S ( x i ) = log p ( x i | x 1 ... x i 1 ) We investigate a model's knowledge of a grammatical dependency, which is the co-variance between an upstream licensor and a downstream licensee , by measuring the effect that an upstream licensor has on the surprisal of a downstream licensee.", "The idea is that grammatical licensors should set up an expectation for the licensee thus reducing its surprisal compared to minimal pairs in which the licensor is absent.", "We derive the word surprisal from the LSTM language model by directly computing the negative log value of the predicted conditional probability p ( x i | x 1 ... x i 1 ) from the softmax layer.", "Following the method in Hale et al. (2018) for estimating word surprisals from RNNG, we use word-synchronous beam search (Stern et al., 2017) to find a set of most likely incremental parses and sum their forward probabilities to approximate P ( x 1 ,... x i ) and P ( x 1 ,... x i 1 ) for computing the surprisal.", "We set the action beam size to 100 and word beam size to 10.", "We ensured that the correct incremental RNNG parses were present on the beam immediately before and throughout the material over which surprisal was calculated through manual spot inspection; the correct parse was almost always at the top of the beam.", "Unlike NPI, licensing, the fillergap dependency is the covariance between a piece of extant material, a filler, and a piece of absent material, a gap.", "Here, we employ the methodology from Wilcox et al. (2018), which introduces the Wh-Licensing Interaction .", "To compute the wh-licensing interaction for a sentence, Wilcox et al. (2018) construct four variants, given in (1), that exhibit the four possible combinations of fillers and gaps for a specific syntactic position.", "The underscores are for presentational purposes only and were not included in experimental materials.", "(1)", "a. 
I know that the lion devoured the gazelle at sunrise.", "[-FILLER, -GAP]", "b. *I know what the lion devoured the gazelle at sunrise.", "[+FILLER, -GAP]", "c. *I know that the lion devoured at sunrise.", "[-FILLER, +GAP]", "d. I know what the lion devoured at sunrise.", "[+FILLER, +GAP] If a filler sets up an expectation for a gap, then filled syntactic positions should be more surprising in the context of a filler than in minimally-different, non-filler variants.", "We measure this expectation by calculating the difference in surprisal between (1-b) and (1-a).", "Similarly, if gaps require fillers to be licensed, transitions from transitive verbs to adjunct clauses that skip an obligatory argument should be less surprising in the context of a filler than in minimally-different, non-filler variants.", "We measure this expectation by computing the difference in surprisal between (1-c) and (1-d).", "Because the filler-gap dependency is a two-way interaction, the wh-licensing interaction consists of the difference of these two differences, which is given in (2).", "(2) (S(1-b) - S(1-a)) - (S(1-c) - S(1-d)). For basic filler-gap dependencies, we expect the presence of a filler to set up a global expectation for a gap; thus we measure the summed licensing interaction across the entire embedded clause, which we expect to be significantly above zero if the model is learning the dependency.", "Our experimental materials include only vocabulary items within the PTB, avoiding the need for out-of-vocabulary handling.", "We determine statistical significance using a mixed-effects linear regression model with sum-coded conditions (Baayen et al., 2008).", "For within-model comparison we use surprisal as the dependent variable and experimental conditions as predictors; for between-model comparison, we use the wh-licensing interaction as the dependent variable with model type and experimental conditions as predictors.", "All figures depict by-item means, with error bars representing 95% confidence intervals, computed by subtracting out the within-item means from each condition as advocated by Masson and Loftus (2003).", "The strength of a wh-licensing interaction can be interpreted as either its mean size in bits, or as its mean size normalized by its standard deviation across items.", "The latter is Cohen's d, rooted in signal-detection theory; because all our experiments involve similar numbers of items, it is roughly proportional to the size of the wh-interaction relative to the size of the associated confidence interval. 3 Negative Polarity Item Licensing In English, Negative Polarity Items (NPIs), such as 'any' and 'ever', must be in the SCOPE of a negative LICENSOR such as 'no', 'none', or 'not' (Ladusaw, 1979).", "Crucially, the scope of a licensor is characterized structurally, not in purely linear terms; for present purposes, a sufficient approximation is that an NPI is in the proper scope of a licensor if it is c-commanded by it.", "Thus while 'ever' in (3-b) and (3-d) is grammatical because it is licensed by 'no' in the main-clause subject, 'ever' is ungrammatical in (3-c) despite the linearly preceding 'no', because inside a subject-modifying relative clause is not a valid position for an NPI licensor; we call this a DISTRACTOR position.", "(Footnote 1: All of our experiments were pre-registered online at http://aspredicted.org/blind.php?x={xd9cw9, 3xv2du, jd384m, cy6zp6, 2hk4gf, zt73qt, f9pk9f, ab9f3h, yt6pi4}.)
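The surprisal and wh-licensing definitions above can be made concrete with a short sketch. This is not code from the paper: `lm_logprob` is a hypothetical callable returning a model's natural-log conditional probability of a token given its prefix, and the toy numbers are invented.

```python
import math

def surprisal(lm_logprob, tokens, i):
    # S(x_i) = -log p(x_i | x_1 ... x_{i-1}), converted to bits.
    return -lm_logprob(tokens[:i], tokens[i]) / math.log(2)

def summed_surprisal(lm_logprob, tokens, region):
    # Summed surprisal over a region, e.g. the entire embedded clause.
    return sum(surprisal(lm_logprob, tokens, i) for i in region)

def wh_licensing_interaction(s_1a, s_1b, s_1c, s_1d):
    # Equation (2): (S(1-b) - S(1-a)) - (S(1-c) - S(1-d)), where the inputs are
    # summed surprisals of the [-F-G], [+F-G], [-F+G], [+F+G] item variants.
    return (s_1b - s_1a) - (s_1c - s_1d)

# Toy check: a filler that makes a filled position more surprising and a gap
# less surprising yields a positive interaction.
print(wh_licensing_interaction(s_1a=10.0, s_1b=14.0, s_1c=13.0, s_1d=11.0))  # 2.0
```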
"(3)", "a. *The senator that supported the measure has ever found any support from her constituents.", "b. No senator that supported the measure has ever found any support from her constituents.", "c. *The senator that supported no measure has ever found any support from her constituents.", "d. No senator that supported no measure has ever found any support from her constituents.", "Learning of NPI licensing conditions by LSTM language models trained on large corpora has previously been investigated by Marvin and Linzen (2018) and Futrell et al. (2018).", "Futrell et al. found that the language models of both Gulordava et al. (2018) and Jozefowicz et al. (2016) (hereafter called 'Large Data LSTMs') learned a contingency between licensors and NPIs: the NPIs in examples like (3) were lower in surprisal when linearly preceded by negative licensors.", "However, both papers reported that these models failed to constrain the contingency along the correct structural lines: NPI surprisal was decreased at least as much by a preceding negative distractor as by a negative licensor.", "Syntactic supervision might plausibly facilitate learning of NPI licensing conditions.", "We tested this following the method of Futrell et al. (2018), constructing 27 items on the design of (3), with two variants: one including 'ever' and omitting 'any', and one including 'any' and omitting 'ever'.", "Figure 1, left panel, shows the results.", "For the RNNG and the ActionLSTM, negative licensors and distractors alike reduced surprisal of both NPIs (p < 0.05 for the RNNG, p < 0.001 for the ActionLSTM).", "For the LSTM, negative licensors and distractors alike reduced surprisal of 'ever' (both p < 0.01), but not 'any'.", "This may seem surprising as 'any' is considerably more frequent than 'ever' (123 vs. 727 instances in the training data), but the non-NPI uses of 'any' (e.g., 'I will eat anything fried') may complicate its learning.", "From Figure 1 it is also apparent that the RNNG and ActionLSTM show signs of stronger NPI licensing effects from negation in the licensor position than in the distractor position, at least for 'ever'.", "To quantify this, we follow Marvin and Linzen (2018) in computing item-mean classification accuracies, with classification being considered correct if the NPI is assigned higher probability in context for (3-b) than for (3-c).", "Results are shown in Figure 1, right panel.", "[Figure 1: NPI licensing. Left: the y-axis shows surprisal at the NPI, the x-axis indicates the polarity of the c-commanding licensor, and color indicates distractor polarity. Right: item-mean classification accuracies.]", "No model is significantly above chance for 'any', but for 'ever' the syntactically supervised models perform much better: the RNNG reaches 85% performance and the ActionLSTM 88%, both significantly above chance (p < 0.001 by binomial test for each); they are not significantly different from each other, but both are better than the LSTM (p < 0.01 for the RNNG/LSTM and p < 0.001 for the ActionLSTM/LSTM by Fisher's exact test).
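The accuracy comparison just described reduces to a per-item sign test. A hedged sketch, assuming per-item NPI log-probabilities are available (the numbers below are toy values, and SciPy 1.7+ is assumed for `binomtest`):

```python
from scipy.stats import binomtest

def npi_accuracy(licensor_logps, distractor_logps):
    """Item-mean accuracy: an item counts as correct when the NPI gets higher
    probability in the licensor context (3-b) than in the distractor context (3-c)."""
    correct = sum(l > d for l, d in zip(licensor_logps, distractor_logps))
    n = len(licensor_logps)
    # Two-sided binomial test against chance (p = 0.5).
    return correct / n, binomtest(correct, n, p=0.5).pvalue

acc, p = npi_accuracy([-4.1, -3.2, -5.0], [-5.3, -3.9, -4.7])
print(f"accuracy={acc:.2f}, p={p:.3f}")
```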
"To our knowledge this is the first demonstration of a language model learning the licensing conditions for an NPI without direct supervision.", "Overall, we find that syntactic supervision facilitates the contingency of NPIs on a negative licensor in context, but is not sufficient for clean generalization of the structural conditions on NPI licensing with the training dataset used here.", "The dependency between a FILLER, which is a wh-word such as 'who' or 'what', and a GAP, which is an empty syntactic position, is characterized by a number of properties, some of which were tested for large-data LSTMs by Wilcox et al. (2018).", "Here we investigate the effect of syntactic supervision on filler-gap dependency learning.", "Syntactic annotation of the dependency itself is stripped from the training data (Figure 2), so syntactic supervision can play only an indirect facilitatory role for the models' neural learning mechanisms.", "The filler-gap dependency is flexible: a filler can license a gap in any of a number of syntactic positions, including the argument positions of subject, object, and indirect object, as illustrated in (4), as well as in other positions (e.g., the adjunct position for 'how' in Figure 2).", "(4)", "a. I know who introduced the accountant to the guests after lunch.", "b. I know who the CEO introduced to the guests after lunch.", "c. I know who the CEO introduced the accountant to after lunch.", "These gap positions differ in frequency, however (Table 1): the majority (63.1%) are in some argument structure position, of which the vast majority (75.6%) are subject position (mostly subject-extracted relative clauses), 23.7% are object position, and 0.7% are indirect object position.", "Using the wh-interaction measure described in Section 2.2, Wilcox et al. (2018) showed that large-data LSTMs learn filler-gap dependencies for all three argument positions, with the size of the wh-interaction generally largest for subject gaps and smallest for indirect-object gaps.", "Table 1 suggests that this gradation may reflect the frequency of the learning signal, with the dependency being learned more robustly the more frequent the extraction type.", "We applied the same method,", "adapting Wilcox et al.'s items to the smaller training dataset.", "The results can be seen in the upper-left panel of Figure 3.", "All three models learn the filler-gap dependency for subject and object positions, and there is suggestive but inconclusive evidence for learning in the rare indirect object position.", "We see stronger dependency learning for more frequent gap types, as was found for large-data LSTMs, and the supervised models show a much stronger wh-licensing effect than the LSTM.", "As with NPIs, the filler-gap dependency is subject to a number of hierarchical, structural constraints.", "The most basic of these constraints is that the filler must be above the gap in the appropriate structural sense (to a first approximation, the filler must c-command the gap, though there are qualifications).", "Hence 'who' in (5-a) is a legitimate extraction from the relative clause, but (5-b) is ungrammatical as the gap is in the matrix clause, above the filler.", "(5)", "a.
The policeman who the criminal shot with his gun shocked the jury during the trial.", "A model that properly generalizes this constraint on the filler-gap dependency should not show a wh-interaction for cases like (5-b): an undischarged 'who' filler should not make the matrix-clause gap particularly more expected.", "As far as we are aware, no prior work has investigated this property of the filler-gap dependency in language models; we do so here.", "Because the context in (5) does not allow for an immediate 'that'-clause initiation for the -FILLER condition as in (1), we instantiate this condition by contrasting the +FILLER, +GAP condition of (5-b) with the variants in (6), where the 'who' filler is immediately discharged as the RC verb's extracted subject: (6)", "a. *The policeman who knows that the criminal shot the politician with his gun shocked during the trial.", "[-FILLER, +GAP]", "b. *The policeman who the criminal shot the politician with his gun shocked the jury during the trial.", "[+FILLER, -GAP]", "c. The policeman who knows that the criminal shot the politician with his gun shocked the jury during the trial.", "[-FILLER, -GAP] We created 22 items following the templates of (5-a) (Subject condition) and (5-b) (Matrix condition); results are shown in the top-right panel of Figure 3.", "[Figure 3: Model results for the basic properties of filler-gap licensing.]", "The supervised models show a large wh-licensing interaction effect for a gap inside the subject-modifying relative clause (with the RNNG demonstrating more licensing interaction than the ActionLSTM), and neither model inappropriately generalizes this licensing effect to a matrix-clause gap.", "The LSTM shows no wh-licensing effects in either position, suggesting that syntactic supervision facilitates appropriately generalized filler-gap dependencies for subject-modifying relative clauses.", "4.3 Robustness to Intervening Material. For a model that learns human-like syntactic generalizations and maintains accurate phrase-like representations throughout a string, filler-gap dependencies should be robust to linearly intervening material that does not change the tree-structural relationship between the filler and the gap.", "Wilcox et al. (2018) found that the large-data RNNs described earlier exhibit a robust wh-interaction of this type, by introducing an optional postnominal modifier between filler and gap to sentence templates like (7), with no modification (7-a), short (3-5 word) modifiers (7-b), medium (6-8 word) modifiers (7-c), and long (8-12 word) modifiers (7-d). (Footnote 2: Results for the Larger Data LSTM models for the Hierarchy and Unboundedness experiments presented here can be found in the appendix.)", "(7)", "a. I know what your friend gave to Alex last weekend.", "b. I know what your friend in the hat gave to Alex last weekend.", "c. I know what your friend who you ate lunch with yesterday gave to Alex last weekend.", "d. I know what your friend who recently took you on a walking tour of the city gave to Alex last weekend.", "We adapted their materials for the small training dataset and tested our three models; results are shown in Figure 3, bottom-left panel.", "The RNNG shows a robust licensing interaction that does not diminish with additional intervening material (all d > 1.3).", "The LSTM shows smaller wh-licensing interactions across the board; these are still substantial in the No Modifier and Short Modifier conditions (d = 0.88 and d = 0.98, respectively), but are smaller in the Medium Modifier and Long Modifier conditions (d = 0.45 and d = 0.37, respectively), suggesting less robustness to intervening material.
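The effect sizes reported here are the by-item summaries defined in Section 2.2. A small sketch of how mean licensing strength and Cohen's d could be computed over per-item wh-licensing interactions; the values below are invented:

```python
import statistics

def licensing_strength(interactions):
    # Two summaries of by-item wh-licensing interactions: the mean size in bits,
    # and Cohen's d, i.e. the mean normalized by the standard deviation across items.
    mean = statistics.fmean(interactions)
    return mean, mean / statistics.stdev(interactions)

mean_bits, d = licensing_strength([1.9, 2.4, 1.1, 2.8, 1.7])
print(f"mean={mean_bits:.2f} bits, Cohen's d={d:.2f}")
```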
"The ActionLSTM shows strong interactions in the No Modifier condition (d = 1.02), but weak interactions once any modifying material is introduced (d < 0.4 in all other conditions).", "This result is significant, as it indicates that the RNNG is able to leverage the structural locality afforded by the neural stack to maintain robust gap expectancy.", "For humans, filler-gap dependencies are not only robust to linearly intervening material that does not change their tree-structural relationship, they can be STRUCTURALLY NON-LOCAL as well, propagating through intervening syntactic structures (subject to constraints examined in Section 5).", "For example, a filler can be extracted from multiply-nested complement clauses as in (8-b): (8)", "a. I know who your aunt insulted at the party.", "Humans show sensitivity to a single layer of sentential embedding when processing filler-gap dependencies in an offline 'complexity rating' task (Phillips et al., 2005).", "This may be due to the relative frequency of single versus doubly-embedded filler-gap dependencies.", "In our training data there were 13,907 examples of filler-gap dependencies; however, only 758 examples spanned two layers of sentential embedding and 19 spanned three layers.", "There were no instances of filler-gap dependencies spanning more than three sentential embeddings, as in (8-b).", "The unboundedness of filler-gap dependencies has not previously been tested for contemporary language models.", "To do this, we constructed 22 test items like (8), varying embedding depth within-item between zero, one, two, three, and four levels, and measured the resulting licensing interactions.", "The results are in Figure 3, bottom-right panel.", "No model's filler-gap dependency is perfectly robust to clausal embedding.", "The LSTM's wh-licensing interaction starts out small and diminishes with embedding depth.", "The RNNG and ActionLSTM show strong wh-licensing interactions in the unembedded condition but no significant wh-licensing interaction after even a single layer of embedding.", "Since these experimental materials are new, we also tested the large-data LSTMs on them, which exhibited much larger and more robust filler-gap dependency effects (Appendix B).", "[Figure 4: Anatomy of an island constraint.]", "Hence the syntactic supervision explored here is not sufficient to guarantee that learned filler-gap dependencies can be structurally unbounded.", "A crucial exception to the flexibility and unboundedness of filler-gap dependencies is that ISLAND CONSTRAINTS prevent association of a filler and a gap through certain types of syntactic nodes, illustrated in Figure 4 (Ross, 1967).", "Contemporary theories variously attribute island effects to grammatical rules, incremental processing considerations, or discourse-structural factors (Ambridge and Goldberg, 2008; Hofmeister and Sag, 2010; Sprouse and Hornstein, 2013).", "In our setting, a language model is sensitive to an island constraint if it fails to show a wh-licensing interaction between a filler and a gap that cross an island.", "Wilcox et al. (2018) found evidence that large-data LSTMs are sensitive to some island constraints (although see Chowdhury and Zamparelli (2018) for a contrasting view), but not to others.
"Here we investigate whether LSTMs would learn these from smaller training datasets, and whether an RNNG's syntactic supervision provides a learning advantage for island constraints.", "In this section we measure the wh-licensing interaction in the material immediately following the potential gap site, which is guaranteed to implicate the model's (lack of) expectation for a gap inside the island, rather than throughout the entire embedded clause, which also implicates filler-driven expectations after the end of the island.", "Adjunct clauses block the filler-gap dependency.", "Wilcox et al. (2018) found evidence that large-data LSTMs are sensitive to adjunct islands, as evidenced by attenuated and often fully eliminated wh-licensing interactions for materials like (9-b) and (9-c) relative to (9-a) below.", "(In this and the subsequent subsections, the post-gap material used for wh-interaction computation is in bold.) (9)", "a. The director discovered what the robbers stole last night.", "[OBJECT]", "b. *The director discovered what the security guard slept while the robbers stole last night.", "[ADJ-BACK]", "c. *The director discovered what, while the robbers stole last night, the security guard slept.", "[ADJ-FRONT] We adapted these materials; results are in Figure 5, upper-left panel.", "The RNNG shows a strong licensing interaction in the baseline main-clause object extraction position, but no licensing interaction for a gap in an adjunct either at the back or front of the main clause.", "Because RNNGs failed our test for unboundedness of the filler-gap dependency, however (Section 4.4), this result is inconclusive as to whether anything corresponding to an island constraint is learned.", "The LSTM and the ActionLSTM show no sign of filler-gap dependency attenuation from adjunct islands, in contrast to previous findings using the LSTM architecture on much larger training datasets.", "Embedded sentences introduced by wh-words are also islands; hence, (10-c) is anomalous but (10-a) and (10-b) are not.", "(10) a.", "I know what the guide said the lion devoured yesterday.", "[NULL COMP]", "b. I know what the guide said that the lion devoured yesterday.", "[THAT COMP]", "c. *I know what the guide said whether the lion devoured yesterday.", "[WH COMP] Wilcox et al. (2018) found that the large-data LSTMs learned this island constraint: the wh-licensing interaction was eliminated or severely attenuated for the wh-complementizer variant but not for the other variants.", "Results for our three models are in Figure 5, top-right panel.", "These materials paint a slightly more optimistic picture than the results of Section 4.4 for the RNNG's ability to propagate a gap expectation from a filler down one level of clausal embedding.", "However, no model shows an appreciable attenuation in the WH COMP condition that would suggest an island constraint-like generalization.", "Complex Noun Phrase Constraint.", "For example, (11-b) and (11-c) are unacceptable object extractions compared with (11-a); the same acceptability pattern holds for subject extractions.", "(11) a.", "I know what the collector bought last week.", "[ARGUMENT extraction]", "b. *I know what the collector bought the painting which depicted last week.", "[WH COMPLEX NP]", "c.
*I know what the collector bought the painting that depicted last week.", "[THAT COMPLEX NP] Wilcox et al. (2018) found that large-data LSTM behavior reflected this island constraint, with attenuated wh-licensing interactions for complex NPs like (11-b) and (11-c) and for analogous complex NPs involving subject extractions.", "Our results for adaptations of their materials are shown in Figure 5, bottom-left panel.", "All three models show attenuated wh-licensing interactions inside complex NPs in subject position, with the licensing interaction in the grammatical ARGUMENT STRUCTURE position greatest for the RNNG and ActionLSTM.", "This may be taken as an indication of Complex NP Constraint-like learning, but is inconclusive due to the models' general failure to propagate gap expectations into embedded clauses (Section 4.4).", "Prepositional phrases attaching to subjects are islands: this is the Subject Constraint, and accounts for the unacceptability of (12-d) compared to (12-c) (Huang, 1998).", "(12) a.", "I know what the collector bought yesterday.", "[OBJ, VERBAL-ARG]", "b. I know what the collector bought a painting of yesterday.", "[OBJ, PREP-ARG]", "c. I know what sold for a high price at auction.", "[SUBJ, VERBAL-ARG]", "d. *I know what a painting of sold for a high price at auction.", "[SUBJ, PREP-ARG] Wilcox et al. (2018) found that the wh-licensing interactions of large-data LSTMs fail to distinguish between subject-modifying PPs, which cannot be extracted from, and object-modifying PPs, which can.", "Our results for adaptations of their materials can be seen in Figure 5, bottom-right panel.", "The syntactically supervised models show a significant decrease between the verbal argument and prepositional argument conditions in subject position (p < 0.001 for the RNNG; p < 0.01 for the ActionLSTM), and no significant difference between the two conditions in object position (however, note that the licensing in object position is significantly less than the licensing in the grammatical Verbal Argument Subject position, following the pattern in Section 4.1).
"[Figure 5: Model results for Syntactic Islands.", "One marker indicates grammatical conditions in which models should display a strong wh-licensing interaction; the other indicates ungrammatical conditions in which models should display a reduced wh-licensing interaction.]", "LSTMs fare worse, showing a clear wh-licensing interaction for subject-modifying PPs, which should be islands, and no wh-licensing interaction for object-modifying PPs.", "In this paper we have argued that structural supervision provides advantages over purely string-based training of neural language models in acquiring more human-like generalizations about non-local grammatical dependencies.", "We have also demonstrated how the neural compositionality of the RNNG architecture can provide even further advantages, especially at maintaining expectations into structurally-local but linearly distant material.", "We compared RNNG, ActionLSTM, and LSTM models using recently developed controlled experimental materials, and developed additional experimental materials to further test several characteristics of grammatical dependency learning for neural language models (Sections 4.2, 4.4).", "We found advantages for syntactic supervision in learning conditions for Negative Polarity Item licensing and a majority of tests involving filler-gap dependencies, showing particularly strong wh-licensing effects in tree-structurally-local contexts.", "On basic filler-gap dependency properties, the RNNG significantly outperformed the LSTM in 8/13 cases and the ActionLSTM outperformed the LSTM in 5/13 cases where a strong licensing interaction was expected.", "While the RNNG, and to some extent the ActionLSTM, exhibited more human-like behavior than the LSTM for a number of Island Constraints, the tests were inconclusive due to the models' failure to propagate gap expectations into embedded clauses: island-like behavior may merely be sensitivity to general syntactic complexity, not the highly-specific syntactic arrangements that constitute the family of island constructions.", "Thus, major-category supervision does not provide enough information for the neural component to learn fully robust and human-like filler-gap dependencies from 1 million words alone.", "However, for some dependencies tested (i.e., NPIs), structural supervision on 1 million words provides better outcomes than even large-data LSTMs.", "Scaling the gains derived from structural supervision is a challenge for data-scarce NLP and is the basis for future work.", "This work was supported by the MIT-IBM Watson AI Lab." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "objective", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "objective", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other" ]
[ "The recent surge of text-based online counseling applications enables us to collect and analyze interactions between counselors and clients.", "A dataset of those interactions can be used to learn to automatically classify the client utterances into categories that help counselors in diagnosing client status and predicting counseling outcome.", "With proper anonymization, we collect counselor-client dialogues, define meaningful categories of client utterances with professional counselors, and develop a novel neural network model for classifying the client utterances.", "The central idea of our model, ConvMFiT, is a pre-trained conversation model which consists of a general language model built from an out-of-domain corpus and two role-specific language models built from unlabeled in-domain dialogues.", "The classification result shows that ConvMFiT outperforms state-of-the-art comparison models.", "Further, the attention weights in the learned model confirm that the model finds expected linguistic patterns for each category.", "Some mental disorders are known to be treated effectively through psychotherapy.", "However, people in need of psychotherapy may find it challenging to visit traditional counseling services because of time, money, emotional barriers, and social stigma (Bearse et al., 2013).", "Recently, technology-mediated psychotherapy services emerged to alleviate these barriers.", "Mobile-based psychotherapy programs (Mantani et al., 2017), fully automated chatbots (Ly et al., 2017; Fitzpatrick et al., 2017), and intervention through smart devices (Torrado et al., 2017) are examples.", "Among them, text-based online counseling services with professional counselors are becoming popular because clients can receive these services without traveling to an office and with reduced financial burden compared to traditional face-to-face counseling sessions (Hull, 2015).", "In text-based counseling, the communication environment changes from face-to-face counseling sessions.", "The counselor cannot read nonverbal cues from their clients, and the client uses text messages rather than spoken utterances to deliver their thoughts and feelings, resulting in changes of dynamics in the counseling relationship.", "Previous studies explored computational approaches to analyzing the dynamic patterns of relationship between the counselor and the client by focusing on the language of counselors (Imel et al., 2015; Althoff et al., 2016), clustering topics of client issues (Dinakar et al., 2014), and looking at therapy outcomes (Howes et al., 2014; Hull, 2015).", "Unlike previous studies, we take a computational approach to analyze client responses from the counselor's perspective .", "Client responses in counseling are crucial factors for judging the counseling outcome and for understanding the status of the client.", "So we build a novel categorization scheme of client utterances, and we base our categorization scheme on the cognitive behavioral theory (CBT), a widely used theory in psychotherapy.", "Also, in developing the categories, we consider whether they are adequate for the unique text-only communication environment, and appropriate for the annotation of the dialogues as training data.", "Then using the corpus of text-based counseling sessions annotated according to the categorization scheme, we build a novel conversation model to classify the client utterances.", "This paper presents the following contributions: First, we build a novel categorization method as a labeling scheme for client 
"Second, we propose a new model, Conversation Model Fine-Tuning (ConvMFiT), to classify the utterances.", "We explicitly integrate pre-trained language-specific word embeddings, language models, and a conversation model to take advantage of the pre-trained knowledge in our model.", "Third, we empirically evaluate our model in comparison with other models, including a state-of-the-art neural network text classification model.", "Also, we show typical phrases of counselors and clients for each category by investigating the attention layers.", "Client responses provide essential clues to understanding the client's internal status, which can vary throughout counseling sessions.", "For example, clients' responses describing their problems prevail at the early stage of counseling (Hill, 1978; E. Hill et al., 1983), but as counseling progresses, problem descriptions decrease while insights and discussions of plans continue to increase (Seeman, 1949).", "Client responses can also help in predicting counseling outcomes.", "For example, a higher proportion of insights and plans in client utterances indicates a high positive effect of counseling (Hill, 1978; E. Hill et al., 1983).", "Categorization Objective.", "Our final aim is to build a machine learning model to classify the client utterances.", "Thus, the categorization of the utterances should satisfy the following criteria. Suitable for the text-only environment: categories should be detectable using only the text responses of clients.", "Available as a labeling scheme: the number of categories should be small enough for manual annotation by counseling experts.", "Meaningful to counselors: categories should be meaningful for outcome prediction or counseling progress tracking.", "Previous studies in psychology proposed nine and fourteen categories for client and counselor verbal responses, respectively, by analyzing transcriptions from traditional face-to-face counseling sessions (Hill, 1978; Hill et al., 1981).", "But these categories were developed for face-to-face spoken interactions, and we found that for online text-only counseling dialogues, these categories are not directly applicable.", "Using text without non-verbal cues, a client's responses are inherently different from the transcriptions of verbally spoken responses, which include categories such as 'silence' (no response for more than 5 seconds) and 'non-verbal referent' (physically pointing at a person).", "Another relevant study, derived from text-based counseling sessions with suicidal adolescents, proposes 19 categories, which we judged to be too many to be practical for manual annotation (Kim, 2010).", "The last criterion, 'meaningful to counselors', is perhaps the most important.", "To meet that criterion, we base the categorization process on the cognitive behavioral theory (CBT), which is the underlying theory behind psychotherapy counseling.", "The details of using CBT for the categorization process are explained next.", "Categorization Process and Results.", "In developing the categories, we follow the Consensual Qualitative Research method (Hill et al., 1997).", "Two professional counselors with clinical experience participated in this qualitative research method to define the categorization.", "To begin, we randomly sample ten client cases considering demographic information including age, gender, education, job, and previous counseling experiences.", "We then start the categorization process with the fundamental components of CBT, which are events, thoughts, emotions, and behavior (Hill et al., 1981).
"The professional counselors annotate every client utterance with those initial component categories, adding tags for detail.", "For example, if an utterance is annotated as 'emotion', we add 'positive/negative' or a concrete label such as 'hope'.", "If these tags constitute a new category, we add that category to the list, until the list is saturated.", "When the number of categories becomes more than 40, we define higher-level categories that cover the existing categories.", "In the second stage, annotators discuss and merge these categories into five high-level categories.", "Categories 1 and 2 are informative responses to counselors, providing factual information and anecdotal experiences.", "Categories 3 and 4 are related to client factors, expressing appealing problems and psychological changes.", "The last category is about the logistics of the counseling sessions, including scheduling the next session.", "The categories in detail are as follows. [Table 1: Final categorization of client utterances. Informative: Factual Information (Fact.), brief mention of categorical information (examples: objective fact, living conditions, demographic information, limited conditions); Informative: Anecdotal Experience (Anec.), the client's experience contributing to the appealing problem (examples: experience with others, comments from others, trauma, interpersonal situations); Client Factors: Appealing Problem (Prob.), client factors related to the appealing problem (examples: negative emotion, cognitive distortion, interpersonal problems, family problems); Client Factors: Psychological Change (Chan.), statements at the resolution stage of the appealing problem (examples: positive prediction, expectation, determination, coping behaviors, self-awareness); Process: Counseling Process (Proc.), statements about counseling structure and relationship (examples: a message to the counselor, gratitude, greetings, time appointment, questions about the consultation).]", "Factual Information (Fact.): informative responses to the counselor's utterances, including age, gender, occupation, education, family, previous counseling experience, etc.", "Anecdotal Experience (Anec.): responses describing past incidents and current situations related to the formation of appealing problems.", "Responses include traumatic experiences, interactions with other people, comments from other people, and other anecdotal experiences.", "Appealing Problems (Prob.): utterances addressing the main appealing problem, which is yet to be resolved, including the client's internal factors or behaviors related to the problems.", "Specifically, the utterances include cognition, emotion, physiological reactions, and diagnostic features of the problem, and the desire to be changed.", "Psychological Change (Chan.): utterances describing insights and recognition of small and big changes in internal factors or behaviors.", "That is, an utterance at the point where the appealing problem is being resolved.", "Counseling Processes (Proc.):
utterances that include the objective of counseling, requests to the counselor, plans for the counseling sessions, and the counseling relationship.", "This category also covers greetings and making an appointment for the next session.", "We summarize the category explanations and examples in Table 1.", "In this section, we explain how counseling dialogues differ from general dialogues, describe the dialogues we collected and annotated, and explain how we preprocessed the data.", "The counseling dialogues consist of multiple turns taken by a counselor and a client, and each turn can contain multiple utterances.", "Here we describe two unique characteristics of text-based online counseling conversations compared to general non-goal-oriented conversations.", "Distinctive roles of speakers.", "Counseling conversations are goal-oriented, aiming to produce positive counseling outcomes, and the two speakers have distinctive roles.", "The client gives objective information about themselves, as well as subjective experiences and feelings, to the counselor to appeal the problems they are suffering from.", "Then the counselor establishes a therapeutic relationship with the client and deploys various strategies to induce psychological changes in the client.", "These distinct roles of the conversational participants distinguish counseling conversations from general conversations. [Figure 1: Translated example conversation between a counselor and a client. Counselor utterances include 'It sounds like you want to be the owner of your life,' and 'Why can't you do that?'; client utterances include 'money, families, relationships... makes me feel down,' 'I want to be active in maintaining my relationships,' and 'But it just bothers me.', with client utterances tagged 'Appealing Problem'.]", "Multiple utterances in a turn.", "We define an utterance as a single text bubble.", "Counselors and clients can generate multiple utterances in a turn.
"Especially in a client turn, various pieces of information not to be missed by a counselor may occur across multiple utterances.", "Thus, we treat every utterance separately, as shown in Fig. 1. 3.2 Collected Dataset. Total dialogues.", "We collect counseling dialogues of clients with their corresponding professional counselors from the Korean text-based online counseling platform Trost.", "Overall, we use 1,448 counseling dialogues, which are anonymized by removing personally identifiable information before researchers obtain access to the data.", "No metadata of the dialogues is provided to the researchers, and named entities such as client and counselor names are replaced with random numeric identifiers.", "The research process, including data anonymization and pre-processing, is validated by the KAIST Institutional Review Board (IRB).", "Based on these categories, five professional counselors annotated every utterance of their own clients in the conversations, as shown in Fig. 1.", "Note that each utterance can have multiple labels if it includes information across multiple categories.", "Table 2", "shows the descriptive statistics of our labeled dataset.", "The first two rows present the average lengths of counselor and client utterances in terms of words and characters, showing only a small difference between counselor and client utterances.", "On the other hand, the average number of utterances in a single counseling session differs; on average, clients write more utterances than counselors.", "We intentionally leave in punctuation and emojis, treating them as separate tokens, since they can help to infer the categories of the client utterances.", "Then we construct triples from the labeled dialogues, consisting of 1) counselor's utterances (blue in Fig. 1), 2) client's context utterances (green), and 3) client's target utterances to be categorized", "(yellow).", "We split the dataset into train, validation, and test sets.", "Table 3", "shows the number of triples in each set; the factual information (Fact.) and psychological change (Chan.) categories appear less frequently than the others.
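The triples described above map naturally onto a small container type. This is not from the paper, just an illustrative Python dataclass; the example utterances are taken from the translated conversation in Figure 1:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CounselingTriple:
    # One training example: counselor utterances and client context utterances
    # paired with a single client target utterance and its (possibly multiple) labels.
    counselor_utterances: List[str]
    client_context: List[str]
    client_target: str
    labels: List[str] = field(default_factory=list)  # e.g. ["Prob."] or ["Fact.", "Proc."]

example = CounselingTriple(
    counselor_utterances=["It sounds like you want to be the owner of your life,"],
    client_context=["I want to be active in maintaining my relationships,"],
    client_target="But it just bothers me.",
    labels=["Prob."],
)
```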
"We introduce ConvMFiT (Conversation Model Fine-Tuning), which fine-tunes a pre-trained seq2seq-based conversation model to classify the client's utterances.", "Background.", "Our corpus of 100 labeled conversations is not large enough to fully capture the linguistic patterns of the categories without external knowledge.", "The small size of the dataset is difficult to overcome because the labeling by professional counselors is costly.", "We found a potentially effective solution in pre-trained language models. [Figure 2: ConvMFiT (Conversation Model Fine-Tuning) model architecture. The pre-trained conversation model (counselor's LM, client's LM, and seq2seq layers) takes the counselor's utterances and the client's context and target utterances as inputs; task-specific seq2seq layers with attention, dense, and sigmoid classification layers are stacked on top.]", "We first stack pre-trained seq2seq layers, which represent a conversation model (see Fig. 2, lower colored part).", "Then we stack additional seq2seq layers and classification layers over it to capture task-specific features, and an attention layer is added between the two to enhance the model's interpretability (see Fig. 2, upper white part).", "The central idea is using a language-model-based conversation model for transfer learning.", "The model can be regarded as an extended version of ULMFiT (Howard and Ruder, 2018) with modifications to fit our task.", "In ConvMFiT, the model accepts a pre-trained seq2seq conversation model that builds on two pre-trained language models, one as an encoder and one as a decoder, learning the dependencies between them in the seq2seq layers.", "This approach is shown to be effective for machine translation, applying a source language model for the encoder and a target language model for the decoder (Ramachandran et al., 2017).", "Word Vectors.", "Counseling dialogues consist of natural Korean text, which is morphologically rich, so we train word vectors specifically developed for the Korean language (Park et al., 2018a).", "We train the vectors over a Korean corpus including out-of-domain general documents, 1) Korean Wikipedia, 2) online news articles, and 3) the Sejong Corpus, as well as in-domain unlabeled counseling dialogues.", "The corpus contains 0.13 billion tokens.", "To validate the trained vector quality, we check performance on the word similarity task (WS353) for Korean (Park et al., 2018a).", "The Spearman correlation is 0.682, which is comparable to the state-of-the-art performance.", "These vectors are used as inputs (see Fig. 2, yellow).", "Pre-trained Language Models.", "We assume that counselors and clients have different language models because they have distinctive roles in the dialogue.", "Therefore, we train a counselor language model and a client language model separately.", "We collect counselor utterances in the total dialogue dataset, except dialogues in the test set, to train a counselor language model, and the client utterances are used for training a client language model.", "Then, we fine-tune the two trained LMs with utterances in the labeled dialogues.", "For each model, we train word-level language models using multilayer LSTMs.", "We apply various regularization techniques: weight tying (Inan et al., 2017), embedding dropout, and variational dropout (Gal and Ghahramani, 2016), which are used to regularize LSTM language models (Inan et al., 2017; Ramachandran et al., 2017; Merity et al., 2018).", "We use a 3-layer LSTM model with 300 hidden units in every layer.", "We set embedding dropout and output dropout for each layer to 0.2 and 0.1, respectively.", "The pre-trained language models generate inputs to the seq2seq layers of a conversation model (Fig. 2, blue and red).", "Pre-trained Conversation Model.", "Next, we train a seq2seq conversation model (Vinyals and Le, 2015).", "We use the pre-trained counselor language model as an encoder, and the client language model as a decoder.", "The dependency of the decoder on the encoder is trained by seq2seq layers, stacked over the pre-trained models", "(Fig. 2, green).
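A hedged PyTorch sketch of one role-specific language model with the sizes given above (3 layers, 300 hidden units, embedding dropout 0.2) and weight tying; standard inter-layer dropout stands in for the paper's variational dropout, and all names are illustrative:

```python
import torch
import torch.nn as nn

class RoleLM(nn.Module):
    """One role-specific LM (counselor or client): embedding -> 3-layer LSTM ->
    tied softmax projection."""
    def __init__(self, vocab_size, dim=300, layers=3, emb_drop=0.2, out_drop=0.1):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.emb_drop = nn.Dropout(emb_drop)
        self.lstm = nn.LSTM(dim, dim, num_layers=layers,
                            dropout=out_drop, batch_first=True)
        self.decoder = nn.Linear(dim, vocab_size)
        self.decoder.weight = self.embed.weight  # weight tying (Inan et al., 2017)

    def forward(self, tokens, state=None):
        x = self.emb_drop(self.embed(tokens))
        hidden, state = self.lstm(x, state)
        return self.decoder(hidden), state      # next-token logits

counselor_lm = RoleLM(vocab_size=30000)
client_lm = RoleLM(vocab_size=30000)
logits, _ = client_lm(torch.randint(0, 30000, (2, 12)))  # (batch=2, seq=12, vocab)
```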
"We stack 2-layer LSTMs over the pre-trained counselor and client language models, respectively.", "The final states of the LSTMs over the counselor language model are used as the initial states of the LSTMs over the client language model.", "We set the hidden unit size to 300 for all LSTMs and set output dropout to 0.05.", "The outputs of the pre-trained conversation model are used as inputs to the seq2seq layers of the task-specific layers.", "While training the conversation model, we regularize the parameters of the model by adding the cross-entropy losses of the pre-trained counselor and client language models to the seq2seq cross-entropy loss of the conversation model.", "The three losses are weighted equally.", "This prevents catastrophic forgetting of the pre-trained language models and is important for achieving high performance (Ramachandran et al., 2017).", "Also, there is room for improving the architecture of the conversation model, which integrates pre-trained language models, to capture dialogue patterns better and thus lead to higher classification performance.", "We will explore the architecture in future work.", "Task-specific layers.", "Leveraging the pre-trained language-model-based conversation model, we finally add layers for classification.", "In order to capture task-specific features, we first stack seq2seq layers over the conversation model.", "Then we add an attention mechanism for document classification (Yang et al., 2016).", "Lastly, we use a sigmoid function as the output layer to predict whether each category's information is included in the utterance, because multiple categories can appear in a single utterance.", "(Fig. 2, gray)", "We use a 2-layer LSTM model for the seq2seq layers.", "We set 300 hidden units for all LSTMs and set output dropout to 0.05.", "The size of the attention layer is set to 500.", "Thus the model is trained in three steps: 1) training word vectors and two language models, 2) training the seq2seq conversation model with the pre-trained LMs, and 3) fine-tuning the task-specific classification layers after removing the softmax of the conversation model.", "In the last step, we compute the binary logistic loss between the predicted probability for each category and its label as the loss function.", "We use Adam with default parameters as the optimizer for training the language models and the conversation model, and for fine-tuning the classifier.", "Also, gradual unfreezing is applied while training the model, starting parameter updates from the task-specific layers and then unfreezing the next lower frozen layer group.", "We unfreeze layers every other epoch until all layers are tuned, and we stop training when the validation loss is minimized.", "All hyperparameters are tuned over the development set.
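The combined loss and the unfreezing schedule can be sketched as follows; this is an illustration of the scheme described above, not the authors' code, and all tensor shapes and names are assumptions:

```python
import torch.nn.functional as F

def conversation_loss(seq2seq_logits, counselor_lm_logits, client_lm_logits,
                      counselor_targets, client_targets):
    # Seq2seq cross-entropy regularized by the two LM losses, weighted equally,
    # to prevent catastrophic forgetting. Logits: (batch, seq, vocab); targets: (batch, seq).
    ce = lambda logits, tgt: F.cross_entropy(logits.transpose(1, 2), tgt)
    return (ce(seq2seq_logits, client_targets)
            + ce(counselor_lm_logits, counselor_targets)
            + ce(client_lm_logits, client_targets))

def apply_unfreezing(layer_groups, epoch):
    # Gradual unfreezing: only the task-specific (top) group trains at first;
    # one lower frozen group is unfrozen every other epoch.
    for depth, group in enumerate(reversed(layer_groups)):  # top group = depth 0
        for p in group.parameters():
            p.requires_grad = (epoch // 2) >= depth
```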
"We compare our model with the baseline models in Table 4.", "Models 1-5 are classifiers that use only the target client utterance to classify it, and models 6-8 are conversation-model-based classifiers that also consider the counselor's utterances and the client's context utterances.", "All models use the same pre-trained word vectors for a fair comparison.", "(1) Random Forest, (2) SVM with RBF kernel.", "We represent an utterance by the average of all word vectors in it, which is then fed to the classifier as input.", "(3) CNN for text classification (Kim, 2014).", "In the convolution layer, we use filter sizes 1-10 with 30 filters each, then apply max-over-time pooling.", "Then, a dense layer and a sigmoid activation are applied on top.", "(4) RNN. A bidirectional LSTM is used, where the final states of the two directions are concatenated to represent the utterance.", "Then, a dense layer and a sigmoid activation are stacked.", "(5) ULMFiT.", "A pre-trained client language model is used as a universal language model.", "Details are the same as described in Section 5.3.", "In addition, two LSTM layers and a dense layer with sigmoid activations are stacked over the LM for classification.", "Gradual unfreezing is applied during training (Howard and Ruder, 2018).", "(6) Seq2Seq.", "As in an encoder-decoder conversation model (Vinyals and Le, 2015), three LSTMs are assigned to the counselor's utterances and the client's context and target utterances, respectively.", "The initial states of the client-side LSTMs are set to the final states of the preceding utterances.", "Then dense and sigmoid layers for classification are stacked over the final state of the client's target utterance.", "(7) HRED.", "A hierarchical encoder-decoder model (HRED) is used as the conversation model (Serban et al., 2016).", "For the encoder RNN, the counselor's utterances and the client's context utterances are given as inputs, and their information is stored in a context RNN, which delivers it to the decoder accepting the client's target utterance.", "As in (6) Seq2Seq, dense and sigmoid layers for classification are stacked over the final state of the client's target utterance.", "(1-4) Adding pre-trained models.", "Models 1-4 have the same architecture as ConvMFiT, which is Model (8) in Table 4.", "Model (1) in Table 5", "initializes every parameter randomly.", "Model (2) starts training only with pre-trained word vectors.", "Model (3) leverages the counselor and client language models as well, and Model (4) shows the performance of ConvMFiT.", "As Models (3) and (4) use more than two pre-trained components, gradual unfreezing is applied, unfreezing shallower layers first during training.", "(4-1)", "Task-specific Seq2Seq Layers.", "Model (4-1) removes the task-specific seq2seq layers from the model, which leaves only the attention and dense layers to capture task-specific features.", "This may result in insufficient model capacity to capture relevant features for the task.", "(4-2)", "Effect of Gradual Unfreezing.", "Model (4-2) is trained without gradual unfreezing, allowing the parameters of every layer in the model to change by the gradients from the first epoch.", "We show the performance of our model and the comparison models in Table 4.
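For concreteness, a sketch of baselines (1)-(2): each utterance is represented by the average of its pre-trained word vectors and fed to an off-the-shelf classifier. The `word_vectors` lookup and the toy data are hypothetical:

```python
import numpy as np
from sklearn.svm import SVC

def average_vector(tokens, word_vectors, dim=300):
    # Utterance representation: mean of the word vectors of its in-vocabulary tokens.
    vecs = [word_vectors[t] for t in tokens if t in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

word_vectors = {"hello": np.ones(300), "there": np.full(300, 0.5)}  # toy lookup
X = np.stack([average_vector(["hello", "there"], word_vectors), np.zeros(300)])
y = [1, 0]  # one binary category; multi-label data would use one such classifier per category
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X))
```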
"(1) Random Forests and (2) Support Vector Machines underperform, failing to correctly classify utterances which belong to rarely occurring classes.", "Compared to (1) and (2), (3) CNN and (4) RNN show better performance since they look at the sequence of words (.416 and .431, respectively).", "The RNN shows slightly better performance than the CNN.", "(5) ULMFiT outperforms the others by using a pre-trained client language model (.455).", "The client target utterances have their context, and they also depend on the counselor's preceding utterances, so using the preceding counselor utterance as well as the client context utterances helps to improve the classification performance.", "When the information is integrated using the simple (6) Seq2Seq model, it shows better performance (.530) than (5) ULMFiT.", "(7) HRED adds a higher-level RNN to the seq2seq model, but we find there is little performance gain (.001), and the model overfits easily.", "(8) ConvMFiT employs the pre-trained conversation model based on pre-trained LMs and so outperforms all other baseline models (.642).", "This is because ConvMFiT integrates conversational contexts and the counselor language model, which helps to capture the patterns of the client's language better.", "Also, the improvement is higher for the (Fact.) and (Chan.) classes, which have small numbers of examples, since ConvMFiT can leverage pre-trained knowledge to classify them.", "With an ablation study applying pre-trained components step by step to our model, we show the sources of performance improvement", "in Table 5.", "Model (1) uses the same architecture, but no pre-trained models are applied, showing poor performance due to overfitting (.455).", "The F1 score is lower than that of the simple Seq2Seq model.", "Model (2) only uses pre-trained word vectors, and this helps to increase the performance slightly (.494).", "Also, Model (3) adds the two pre-trained LMs with gradual unfreezing, resulting in a substantial performance increase (.626), so we find pre-trained LMs are essential to the improvement.", "Lastly, Model (4) adds the dependency between the two language models to fully leverage the pre-trained conversation model.", "This also helps classify utterances better (.642).", "In addition, we find that adding only attention and dense layers is not sufficient to learn task-specific features, so providing the model with more capacity helps improve the performance.", "Without task-specific seq2seq layers, Model (4-1) shows a decreased F1 score (.563).", "Meanwhile, Model (4-2) shows that a careful training scheme affects the performance as well.", "To investigate how linguistic patterns of counselors and clients differ across the various categories, we report qualitative results based on the activation values of the attention layer.", "To protect the anonymity of the clients, we explore key phrases from utterances rather than publishing parts of the conversations in any form.", "To this end, we compute the relative importance of n-grams.", "For any n-gram of length n in an utterance of length N, where n < N, the relative importance r is computed as follows: r = (∏_{i=1}^{n} a_i) / (1/N)^n (1), where a_i is the attention value of the corresponding word and ∏_{i=1}^{n} a_i is the product of the values for every word in the n-gram, normalized by the expected attention weights (1/N)^n.", "The normalization term accounts for the length of the utterance, since a word in a short utterance tends to have a high attention value because ∑_{i=1}^{N} a_i = 1.", "We name this measure 'relative importance' r, meaning that the degree of the n-gram is how much it is attended to compared to the expectation.
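Equation (1) is easy to reproduce directly from an utterance's attention weights. A minimal sketch (the attention values are toy numbers that sum to 1):

```python
def relative_importance(attn, start, n):
    # r for the n-gram attn[start:start+n] in an utterance of length N:
    # the product of its attention weights divided by the expected
    # product (1/N)^n under uniform attention (Equation 1).
    N = len(attn)
    product = 1.0
    for a in attn[start:start + n]:
        product *= a
    return product / (1.0 / N) ** n

attn = [0.05, 0.40, 0.35, 0.10, 0.10]
print(relative_importance(attn, 1, 2))  # 0.40*0.35 / (1/5)^2 = 3.5
```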
"Based on this, we select examples from the 100 top-ranked key phrases for each category, and the results are presented in the Appendix (Table 6).", "All of the presented key phrases are translated from Korean to English.", "Factual Information.", "Clients provide demographic information and previous experience of visits to counselors or psychiatrists.", "In some cases, clients talk more about their motivation for counseling.", "Since this information is explored in the early stage of counseling sessions, we also find counselor greetings that use clients' names.", "Anecdotal Experience.", "Clients describe their experiences using past-tense verbs.", "Usually, utterances include phrases such as 'I thought that' and 'I was totally wrong'.", "Counselors show simple responses ('well..') and reflections.", "Appealing Problem.", "As with anecdotal experience, clients describe their problems, but using present-tense verbs.", "They are appealing their thoughts and emotions.", "Counselors also show simple responses or reflections.", "Since some clients start pouring out their problems immediately after a counseling session starts, greetings from counselors also appear in the key phrases.", "Psychological Change.", "Clients explicitly report changes in their feelings, emotions or thoughts.", "This includes looking back on the past and then resolving to change in the future.", "Counselors give supportive responses and empathetic understanding.", "Counseling Process.", "Clients and counselors exchange greetings with each other.", "They also discuss making an appointment for the next session.", "In some cases, counselors respond to clients' questions about the logistics of the sessions.", "Researchers have explored psycho-linguistic patterns of people with mental health problems (Gkotsis et al., 2016), depression (Resnik et al., 2015), Asperger's and autism (Ji et al., 2014) and Alzheimer's disease (Orimaye et al., 2014).", "In addition, these linguistic patterns can be quantified, for example, for overall mental health (Loveys et al., 2017; Coppersmith et al., 2014) and schizophrenia (Mitchell et al., 2015).", "To aid people with such mental issues, a large portion of studies is dedicated to detecting these issues from natural language.", "Depression (Morales et al., 2017; Jamil et al., 2017; Fraser et al., 2016), anxiety (Shen and Rudzicz, 2017), distress (Desmet et al., 2016), and self-harm risk (Yates et al., 2017) can be effectively detected from narratives or social media postings.", "In this paper, we developed five categories of client utterances and built a labeled corpus of counseling dialogue.", "We then developed ConvMFiT for classifying the client utterances into the five categories, leveraging a pre-trained conversation model.", "Our model outperformed the comparison models, owing to the knowledge transferred from the pre-trained models.", "We also explored and showed typical linguistic patterns of counselors and clients for each category.", "Our ConvMFiT model will be useful in other classification tasks based on dialogues.", "ConvMFiT is a seq2seq model for counselor-client conversation; another approach would be to build on existing non-goal-oriented conversation models incorporating Variational Autoencoders (VAE) (Serban et al., 2017; Park et al., 2018b; Du et al., 2018).", "We plan to attempt these models in future work.", "We expect to apply our trained model to various text-based
psychotherapy applications, such as extracting and summarizing counseling dialogues or using the information to build a model addressing the privacy issue of training data.", "We hope our categorization scheme and our ConvMFiT model become a stepping stone for future computational psychotherapy research.", "The study reported in this paper was approved by the KAIST Institutional Review Board (#IRB-17-95).", "This research was supported by the Engineering Research Center Program through the National Research Foundation of Korea (NRF) funded by the Korean Government MSIT (NRF-2018R1A5A1059921)." ]
[ "method", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "objective", "objective", "objective", "objective", "method", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "objective", "objective", "result", "objective", "abstain", "abstain", "abstain", "objective", "abstain", "method", "other" ]
[ "Multilingual neural machine translation (NMT) enables training a single model that supports translation from multiple source languages into multiple target languages.", "In this paper, we push the limits of multilingual NMT in terms of the number of languages being used.", "We perform extensive experiments in training massively multilingual NMT models, translating up to 102 languages to and from English within a single model.", "We explore different setups for training such models and analyze the trade-offs between translation quality and various modeling decisions.", "We report results on the publicly available TED talks multilingual corpus where we show that massively multilingual many-to-many models are effective in low resource settings, outperforming the previous state-of-the-art while supporting up to 59 languages.", "Our experiments on a large-scale dataset with 102 languages to and from English and up to one million examples per direction also show promising results, surpassing strong bilingual baselines and encouraging future work on massively multilingual NMT.", "Neural machine translation (NMT) (Kalchbren-ner and Blunsom, 2013; Bahdanau et al., 2014; Sutskever et al., 2014) is the current state-of-the-art approach for machine translation in both academia (Bojar et al., 2016, 2017, 2018) and industry (Wu et al., 2016; Hassan et al., 2018).", "Recent works (Dong et al., 2015; Firat et al., 2016a; Ha et al., 2016; Johnson et al., 2017) extended the approach to support multilingual translation, i.e. training a single model that is capable of translating between multiple language pairs.", "of the number of required models and model parameters, enabling simpler deployment.", "Another benefit is transfer learning; when low-resource language pairs are trained together with high-resource ones, the translation quality may improve (Zoph et al., 2016; Nguyen and Chiang, 2017).", "An extreme case of such transfer learning is zero-shot translation (Johnson et al., 2017), where multilingual models are able to translate between language pairs that were never seen during training.", "While very promising, it is still unclear how far one can scale multilingual NMT in terms of the number of languages involved.", "Previous works on multilingual NMT typically trained models with up to 7 languages (Dong et al., 2015; Firat et al., 2016b; Ha et al., 2016; Johnson et al., 2017; Gu et al., 2018) and up to 20 trained directions (Cettolo et al., 2017) simultaneously.", "One recent exception is Neubig and Hu (2018) who trained many-to-one models from 58 languages into English.", "While utilizing significantly more languages than previous works, their experiments were restricted to many-to-one models in a low-resource setting with up to 214k examples per language-pair and were evaluated only on four translation directions.", "In this work, we take a step towards practical universal NMT training massively multilingual models which support up to 102 languages and with up to one million examples per language-pair simultaneously.", "Specifically, we focus on training English-centric many-to-many models, in which the training data is composed of many language pairs that contain English either on the source side or the target side.", "This is a realistic setting since English parallel data is widely available for many language pairs.", "We restrict our experiments to Transformer models (Vaswani et al., 2017) as they were shown to be very effective in recent benchmarks (Ott et al., 2018), also in the context of 
multilingual models (Lakew et al., 2018; Sachan and Neubig, 2018).", "We evaluate the performance of such massively multilingual models while varying factors like model capacity, the number of trained directions (tasks) and low-resource vs. high-resource settings.", "Our experiments on the publicly available TED talks dataset (Qi et al., 2018) show that massively multilingual many-to-many models with up to 58 languages to-and-from English are very effective in low-resource settings, allowing the use of high-capacity models while avoiding overfitting and achieving results superior to the current state-of-the-art on this dataset (Neubig and Hu, 2018; Wang et al., 2019) when translating into English.", "We then experiment with models trained on 103 languages in a high-resource setting.", "For this purpose we compile an English-centric in-house dataset, including 102 languages aligned to-and-from English with up to one million examples per language pair.", "We then train a single model on the resulting 204 translation directions and find that such models outperform strong bilingual baselines by more than 2 BLEU averaged across 10 diverse language pairs, both to-and-from English.", "Finally, we analyze the trade-offs between the number of involved languages and translation accuracy in such settings, showing that massively multilingual models generalize better to zero-shot scenarios.", "We hope these results will encourage future research on massively multilingual NMT.", "The main question we wish to answer in this work is how well a single NMT model can scale to support a very large number of language pairs.", "The answer is not trivial: on the one hand, training multiple language pairs together may result in transfer learning (Zoph et al., 2016; Nguyen and Chiang, 2017).", "This may improve performance as we increase the number of language pairs, since more information can be shared between the different translation tasks, allowing the model to learn which information to share.", "On the other hand, adding many language pairs may result in a bottleneck: the model has limited capacity while it needs to handle a large number of translation tasks, and sharing all parameters between the different languages can be sub-optimal (Wang et al., 2018), especially if they are not from the same typological language family (Sachan and Neubig, 2018).", "We begin tackling this question by experimenting with the TED Talks parallel corpus compiled by Qi et al. (2018), which is unique in that it includes parallel data from 59 languages.", "For comparison, this is significantly more multilingual than the data available from all previous WMT news translation shared task evaluations throughout the years, the latest being Bojar et al. (2016, 2017, 2018), which included 14 languages so far.", "We focus on the setting where we train English-centric models, i.e.
training on all language pairs that contain English in either the source or the target, resulting in 116 translation directions.", "This dataset is also highly imbalanced, with language pairs containing between 3.3k and 214k sentence pairs for training.", "Table 9 in the supplementary material details the languages and training set sizes for this dataset.", "Since the dataset is already tokenized, we did not apply additional preprocessing other than joint subword segmentation (Sennrich et al., 2016) with 32k symbols.", "Regarding the languages we evaluate on, we begin with the same four languages as Neubig and Hu (2018): Azerbaijani (Az), Belarusian (Be), Galician (Gl) and Slovak (Sk).", "These languages present an extreme low-resource case, with as few as 4.5k training examples for Belarusian-English.", "In order to better understand the effect of training set size in these settings, we evaluate on four additional languages that have more than 167k training examples each: Arabic (Ar), German (De), Hebrew (He) and Italian (It).", "Using the same data, we trained three massively multilingual models: a many-to-many model which we train using all 116 translation directions with 58 languages to-and-from English, a one-to-many model from English into 58 languages, and a many-to-one model from 58 languages into English.", "We follow the method of Ha et al. (2016) and Johnson et al. (2017) and add a target-language prefix token to each source sentence to enable many-to-many translation.", "(Footnotes: the TED talks corpus is available at github.com/neulab/word-embeddings-for-nmt; the 14 WMT languages are Chinese, Czech, English, Estonian, Finnish, French, German, Hindi, Hungarian, Latvian, Romanian, Russian, Spanish and Turkish, according to http://www.statmt.org/wmtXX.)", "These different setups enable us to examine the effect of the number of translation tasks on the translation quality, as measured in BLEU (Papineni et al., 2002).", "We also compare our massively multilingual models to bilingual baselines and to two recently published results on this dataset (Neubig and Hu (2018); Wang et al. (2019)).", "Regarding the models, we focused on the Transformer in the Base configuration.", "We refer the reader to Vaswani et al. (2017) for more details on the model architecture.", "Specifically, we use 6 layers in both the encoder and the decoder, with the model dimension set at 512, a hidden dimension size of 2048 and 8 attention heads.", "We also applied dropout at a rate of 0.2 in the following components: on the sum of the input embeddings and the positional embeddings, on the output of each sub-layer before it is added to the previous layer input (residual connection), on the inner layer output after the ReLU activation in each feed-forward sub-layer, and on the attention weights in each attention sub-layer.", "This results in a model with approximately 93M trainable parameters.", "For all models we used the inverse square root learning rate schedule from Vaswani et al. (2017), with the learning rate set at 3 and 40k warmup steps.",
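Two of the details above lend themselves to short sketches: the target-language prefix token of Ha et al. (2016) and Johnson et al. (2017), and the inverse square root learning rate schedule of Vaswani et al. (2017). The '<2xx>' token format and the placement of the model-dimension factor are our assumptions, not necessarily the papers' exact conventions.

```python
def add_target_prefix(source_sentence: str, target_lang: str) -> str:
    """Prepend a target-language token so a single many-to-many model
    knows which language to translate into (token format assumed)."""
    return f"<2{target_lang}> {source_sentence}"

def learning_rate(step: int, constant=3.0, warmup_steps=40_000, d_model=512):
    """Inverse square root schedule: linear warmup for `warmup_steps`,
    then decay proportional to step**-0.5."""
    step = max(step, 1)
    return constant * d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

# e.g. an En->De training pair: add_target_prefix("Hello world", "de") -> "<2de> Hello world"
```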
"All models are implemented in Tensorflow-Lingvo (Shen et al., 2019).", "In all cases we report test results for the checkpoint that performed best on the development set in terms of BLEU.", "For the multilingual models we create a development set that includes examples uniformly sampled from a concatenation of all the individual language-pair development sets, resulting in 13k development examples per model.", "Another important detail regarding multilingual training is the batching scheme.", "In all of our multilingual models we use heterogeneous batching, where each batch contains examples which are uniformly sampled from a concatenation of all the language pairs the model is trained on.", "Specifically, we use batches of 64 examples for sequences shorter than 69 tokens and batches of 16 examples for longer sequences.", "We did not use over-sampling as the dataset is relatively small.", "We use tokenized BLEU in order to be comparable with Neubig and Hu (2018).", "Table 1 shows the results of our experiments when evaluating on the same language pairs as they did.", "[Table 1 reports BLEU for Az-En, Be-En, Gl-En, Sk-En and their average.]",
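The heterogeneous batching scheme described above can be sketched as follows; the example structure (a `src_tokens` field) and the shuffling seed are assumptions.

```python
import random

def heterogeneous_batches(datasets, seed=0):
    """Uniformly sample from the concatenation of all language pairs, then
    bucket by length: 64 examples per batch for sequences shorter than 69
    tokens, 16 per batch for longer ones (a sketch of the described scheme)."""
    rng = random.Random(seed)
    pool = [ex for pair_examples in datasets.values() for ex in pair_examples]
    rng.shuffle(pool)
    short = [ex for ex in pool if len(ex.src_tokens) < 69]
    long_ = [ex for ex in pool if len(ex.src_tokens) >= 69]
    for bucket, batch_size in ((short, 64), (long_, 16)):
        for i in range(0, len(bucket), batch_size):
            yield bucket[i:i + batch_size]
```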
(2019) which use Soft Decoupled Encod-ing for the input tokens, while our models use a simple subword segmentation.", "One possible explanation is that the many-to-one model overfits the English side of the corpus as it is multi-way-parallel: in such setting the English sentences are overlapping across the different language pairs, making it much easier for the model to memorize the training set instead of generalizing (when enough capacity is available).", "On the other hand, the many-to-many model is trained on additional target languages other than English, which can act as regularizers for the X En tasks, reducing such overfitting.", "To further illustrate this, Figure 1 tracks the BLEU scores on the individual development sets during training for Italian (It), Romanian (Ro), Dutch (Nl), German (De) and Arabic (Ar) into English (left), together with BLEU scores on a subset of the training set for each model.", "We can see that while the many-to-one model degrades in performance on the development set, the many-to-many model still improves.", "Note the large gap in the many-to-one model between the training set BLEU and the development set BLEU, which points on the generalization issue that is not present in the many-to-many setting.", "We also note that our many-to-one model is on average 0.75 BLEU behind the best many-to-one models in Neubig and Hu (2018).", "We attribute this to the fact that their models are fine-tuned using similar-language-regularization while our model is not.", "We find an additional difference between the results on the resource-scarce languages (Ta-ble", "1) and the higher-resource languages (Table 2).", "Specifically, the bilingual baselines outperform the many-to-one models only in the higher-resource setting.", "This makes sense as in the low-En-Az En-Be En-Gl En-Sk Avg.", "resource setting the baselines have very few training examples to outperform the many-to-one models, while in the higher resource setting they have access to more training data.", "This corroborates the results of Gu et al. 
(2018), which showed the sensitivity of such models to similar low-resource conditions and the improvements gained from using many-to-one models (however with much fewer language pairs).", "Table 3 shows the results of our massively multilingual models and bilingual baselines when evaluated out-of-English.", "In this case we see an opposite trend: the many-to-many model performs worse than the one-to-many model by 2.53 BLEU on average.", "While previous works (Wang et al., 2018; Sachan and Neubig, 2018) discuss the phenomenon of quality degradation in English-to-many settings, this shows that increasing the number of source languages also causes additional degradation in a many-to-many model.", "This degradation may be due to the English-centric setting: since most of the translation directions the model is trained on are into English, this leaves less capacity for the other target languages (while still performing better than the bilingual baselines on all 8 language pairs).", "We also note that in this case the results are consistent among the higher- and lower-resource pairs: the one-to-many model is better than the many-to-many model, which outperforms the bilingual baselines in all cases.", "This is unlike the difference we saw in the X-to-En experiments, since here we do not have the multi-way-parallel overfitting issue.", "From the above experiments we learn that NMT models can scale to 59 languages in a low-resource, imbalanced, English-centric setting, with the following observations: (1) massively multilingual many-to-many models outperform many-to-one and bilingual models with similar capacity and identical training conditions when averaged over 8 language pairs into English.", "We attribute this improvement over the many-to-one models to the multiple target language pairs, which may act as regularizers, especially in this low-resource multi-way-parallel setting that is prone to memorization.", "(2) many-to-many models are inferior in performance when going out-of-English in comparison to a one-to-many model.", "We attribute this to English being over-represented in the English-centric many-to-many setting, where it appears as a target language in 58 out of 116 trained directions, which may harm the performance on the rest of the target languages as the model capacity is limited.", "It is important to stress that we compared the different models under identical training conditions and did not perform extensive hyper-parameter tuning for each setting separately.", "However, we believe that such tuning may improve performance even further, as the diversity in each training batch is very different between the different settings.", "For example, while the baseline model batches include only one language on the source side and one language on the target side, the many-to-many model includes 59 languages on each side with a strong bias towards English.", "These differences may require tailored hyper-parameter choices for each setting (i.e. different batch sizes, learning rate schedules, dropout rates etc.)
which would be interesting to explore in future work.", "In the following experiments we investigate whether these observations hold using (1) an even larger set of languages, and (2) a much larger, balanced training corpus that is not multi-way parallel.", "In this setting we scale the number of languages and examples per language pair further when training a single massively multilingual model.", "Since we are not aware of a publicly available resource for this purpose, we construct an in-house dataset.", "This dataset includes 102 language pairs which we mirror to-and-from English, with up to one million examples per language pair.", "This results in 103 languages in total, and 204 translation directions which we train simultaneously.", "More details about this dataset are available in Table 4, and Table 10 in the supplementary material details all the languages in the dataset.", "Similarly to our previous experiments, we compare the massively multilingual models to bilingual baselines trained on the same data.", "We tokenize the data using an in-house tokenizer and then apply joint subword segmentation to achieve an open vocabulary.", "In this setting we used a vocabulary of 64k subwords rather than 32k.", "Since the dataset contains 24k unique characters, a 32k-symbol vocabulary would consist of mostly characters, thereby increasing the average sequence length.", "Regarding the model, for these experiments we use a larger Transformer model with 6 layers in both the encoder and the decoder, the model dimension set to 1024, a hidden dimension size of 8192, and 16 attention heads.", "This results in a model with approximately 473.7M parameters.", "(Footnotes: the average number of examples per language pair is 940k, as for 13 out of the 102 pairs we had less than one million examples available; this model is larger than the Transformer Big configuration, which includes approximately 213M trained parameters.)", "Since the model and data are much larger in this case, we used a dropout rate of 0.1 for our multilingual models and tuned it to 0.3 for our baseline models, as this improved the translation quality on the development set.", "We evaluate our models on 10 languages from different typological families: Semitic: Arabic (Ar), Hebrew (He); Romance: Galician (Gl), Italian (It), Romanian (Ro); Germanic: German (De), Dutch (Nl); Slavic: Belarusian (Be), Slovak (Sk); and Turkic: Azerbaijani (Az) and Turkish (Tr).", "We evaluate both to-and-from English, where each language pair is trained on up to one million examples.", "As in the previous experiment, we report test results from the model that performed best in terms of BLEU on the development set.", "Table 5 describes the results when translating into English.", "First, we can see that both multilingual models perform better than the baselines in terms of average BLEU.", "This shows that massively multilingual many-to-many models can work well in realistic settings with millions of training examples, 102 languages and 204 jointly trained directions to-and-from English.", "Looking more closely, we note several different behaviors in comparison to the low-resource experiments on the TED Talks corpus.", "First, the many-to-one model here performs better than the many-to-many model.", "This shows that the previous result was indeed due to the pathologies of the low-resource dataset; when the training data is large enough and not multi-way parallel, there is no overfitting in the many-to-one model, and it outperforms the many-to-many model in most cases while they are trained identically.",
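As a sanity check, the stated sizes roughly reproduce the 473.7M figure under our assumption of separate source-embedding, target-embedding and softmax matrices over the 64k vocabulary (biases and layer-norm parameters account for the small remainder):

```python
d, ffn, vocab, layers = 1024, 8192, 64000, 6
attn = 4 * d * d                    # Q, K, V and output projections
enc_layer = attn + 2 * d * ffn      # self-attention + feed-forward
dec_layer = 2 * attn + 2 * d * ffn  # self-attention + cross-attention + feed-forward
emb = 3 * vocab * d                 # source/target embeddings + softmax (untied, assumed)
total = layers * (enc_layer + dec_layer) + emb
print(f"{total / 1e6:.1f}M")        # -> 473.4M, close to the reported 473.7M
```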
"One particular outlier in this case is German-to-English, where the many-to-one model is 2 BLEU points below the many-to-many model.", "We examine the BLEU score of this language pair on its dedicated German-English development set during training in the many-to-one model and find that it fluctuates highly.", "We then measure the performance on the test set for this language pair by choosing the best checkpoint on the dedicated German-English development set (instead of on the mixed multilingual development set) and find it to be 38.07, which is actually higher by 1 BLEU than the best result of the many-to-many model.", "This shows that when training many languages together there is no silver bullet: some languages may suffer from severe interference during training (i.e. a reduction of 3 BLEU in this case, from 38.07 to 35.05) while other languages continue to improve with more updates.", "Table 6 describes the results when translating out-of-English.", "Again, both of the massively multilingual models perform better than the baselines when averaged across the 10 evaluated language pairs, while handling up to 102 languages to-and-from English and 204 translation tasks simultaneously.", "In this case the results are similar to those we observed on the TED talks corpus, where the one-to-many model performs better than the many-to-many model.", "Again, this advantage may be due to the one-to-many model handling a smaller number of tasks while not being biased towards English on the target side like the many-to-many model.", "The above results show that massively multilingual NMT is indeed possible in large-scale settings and can improve performance over strong bilingual baselines.", "However, this was shown in a somewhat extreme case with more than 100 languages trained jointly, where we saw that in some cases the joint training may harm the performance for some language pairs (i.e. German-English above).", "In the following analysis we would like to better understand the trade-off between the number of languages involved and the translation accuracy while keeping the model capacity and training configuration fixed.", "We first study the effect of varying the number of languages on the translation accuracy in a supervised setting, where we focus on many-to-many models.", "We create four subsets of the in-house dataset by sub-sampling it to a different number of languages in each subset.", "In this way we create four additional English-centric datasets, containing 5, 25, 50 and 75 languages each, to-and-from English.", "We make sure that each subset contains all the languages from the next smaller subsets, i.e. the 25-language subset contains the 5-language subset, the 50-language subset contains the 25-language subset, and so on.", "We train a similar-capacity large Transformer model (with 473.7M parameters) on each of these subsets and measure the performance for each model on the 8 supervised language pairs from the smallest subset: {Arabic, French, Russian, Ukrainian} to-and-from English.", "In this way we can analyze to what extent adding more languages improves or harms translation quality while keeping the model capacity fixed, testing the capacity vs. accuracy saturation point.",
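A possible construction of these nested subsets is sketched below; the seed and the exact sampling procedure are assumptions on our part, chosen only to illustrate the nesting property.

```python
import random

def nested_subsets(all_langs, sizes=(5, 25, 50, 75), seed=0):
    """Return nested language subsets: languages are shuffled once and each
    subset is a prefix of that order, so every smaller subset is contained
    in the next. The four evaluation languages are kept in all subsets."""
    core = ["ar", "fr", "ru", "uk"]  # evaluation languages (assumed placement)
    rest = [l for l in all_langs if l not in core]
    random.Random(seed).shuffle(rest)
    order = core + rest
    return {k: order[:k] for k in sizes}
```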
"Table 7 shows the results of this experiment, reporting the test results for the models that performed best on the multilingual development set.", "We can see that in most cases the best results are obtained using the 5-to-5 model, showing that there is indeed a trade-off between the number of languages and translation accuracy when using a fixed model capacity and the same training setup.", "One may expect the gaps between the different models to become smaller and even close with more updates, as the models with more languages see fewer examples per language in each batch, thus requiring more updates to improve in terms of BLEU.", "However, in our setting these gaps did not close even after the models converged, leaving a 2.73 average BLEU difference between the 5-to-5 and the 103-to-103 model.", "We then study the effect of the number of languages on zero-shot translation accuracy.", "Since we regard zero-shot accuracy as an interesting measure of model generalization, we hypothesize that by adding more languages, the model is forced to create a more generalized representation to better utilize its capacity, which may improve zero-shot performance.", "We choose four language pairs for this purpose: Arabic-French, which are distant languages, and Ukrainian-Russian, which are similar.", "Table 8 shows the results of our models on these language pairs.", "For Arabic-French the BLEU scores are very low in all cases, with the 50-to-50 and 25-to-25 models being slightly better than the rest on Ar-Fr and Fr-Ar, respectively.", "On Russian-Ukrainian we see clear improvements when increasing the number of languages to more than five.", "Figure 2 further illustrates this, showing the better generalization performance of the massively multilingual models under this zero-shot setting.", "[Figure 2: Zero-shot BLEU during training for Ukrainian to Russian.]", "While the zero-shot performance in this case is low and unstable for the 5-to-5 and 25-to-25 models, it is much better for the 50-to-50, 75-to-75 and 103-to-103 models.", "Given these results, we can say that the balance between capacity and generalization here favors the mid-range 50-to-50 model, even when using models with more than 473M trained parameters.", "This may hint at the necessity of even larger models for such settings, which is a challenging avenue for future work.", "We also note that our 103-language corpus includes up to one million examples per language pair, while in real-world MT deployments systems are trained on many more examples per pair.", "This again emphasizes the need for better
techniques for training such massively multilingual models, as we may already be hitting the capacity barrier in our setting.", "Dong et al. (2015) extended the NMT model of Bahdanau et al. (2014) to one-to-many translation (from English into 4 languages) by adding a dedicated decoder per target language, showing improvements over strong single-pair baselines.", "Firat et al. (2016a,b) proposed many-to-many models (with up to 6 languages) by using separate encoders and decoders per language while sharing the attention mechanism.", "They also introduced the notion of zero-resource translation, where they use synthetic training data generated through pivoting to train translation directions without available training data.", "Ha et al. (2016) and Johnson et al. (2017) proposed to use a shared encoder-decoder-attention model for many-to-many translation (with up to 7 languages in the latter).", "In order to determine the target language in such scenarios, they proposed adding dedicated target-language symbols to the source.", "This method enabled zero-shot translation, showing the ability of the model to generalize to unseen pairs.", "Recent works propose different methods for parameter sharing between language pairs in multilingual NMT.", "Blackwood et al. (2018) propose sharing all parameters but the attention mechanism and show improvements over sharing all parameters.", "Sachan and Neubig (2018) explore sharing various components in self-attentional (Transformer) models.", "Lu et al. (2018) add a shared interlingua layer while using separate encoders and decoders.", "Zaremoodi et al. (2018) utilize recurrent units with multiple blocks together with a trainable routing network.", "Platanios et al. (2018) propose to share the entire network, while using a contextual parameter generator that learns to generate the parameters of the system given the desired source and target languages.", "Gu et al. (2018) propose a Universal Language Representation layer together with a Mixture-of-Language-Experts component to improve a many-to-one model from 5 languages into English.", "While the mentioned studies provide valuable contributions to improving multilingual models, they apply their models to only up to 7 languages (Johnson et al., 2017) and 20 trained directions (Cettolo et al., 2017) in a single model, whereas we focus on scaling NMT to much larger numbers of languages and trained directions.", "Regarding massively multilingual models, Neubig and Hu (2018) explored methods for rapid adaptation of NMT to new languages by training multilingual models on the 59-language TED Talks corpus and fine-tuning them using data from the new languages.", "While modeling significantly more languages than previous studies, they only train many-to-one models, which we show are inferior in comparison to our proposed massively multilingual many-to-many models when evaluated into English on this dataset.", "Tiedemann (2018) trained an English-centric many-to-many model on translations of the Bible including 927 languages.", "While this work pointed to an interesting phenomenon in the latent space learned by the model, where it clusters representations of typologically similar languages together, it did not include any evaluation of the produced translations.", "Similarly, Malaviya et al.
(2017) trained a many-to-English system including 1017 languages from Bible translations, and used it to infer typological features for the different languages (without evaluating the translation quality).", "In another relevant work, Artetxe and Schwenk (2018) trained an NMT model on 93 languages and used the learned representations to perform cross-lingual transfer learning.", "Again, they did not report the performance of the translation model learned in that massively multilingual setting.", "We showed that NMT models can successfully scale to 102 languages to-and-from English with 204 trained directions and up to one million examples per direction.", "Such models improve the translation quality over similar single-pair baselines when evaluated to and from English by more than 2 BLEU when averaged over 10 diverse language pairs in each case.", "We show a similar result on the low-resource TED Talks corpus with 59 languages and 116 trained directions.", "We analyze the trade-offs between translation quality and the number of languages involved, pointing to capacity bottlenecks even with very large models and showing that massively multilingual models can generalize better to zero-shot settings.", "We hope this work will encourage future research on massively multilingual NMT, enabling easier support for systems that can serve more people around the globe.", "There are many possible avenues for future work, including semi-supervised learning in such settings, exploring ways to reduce the performance degradation when increasing the number of languages, or using such models for multilingual transfer learning (McCann et al., 2017; Eriguchi et al., 2018; Artetxe and Schwenk, 2018).", "Understanding and improving zero-shot performance in such scenarios is also a promising direction for future work.", "We would like to thank the Google Brain and Google Translate teams for their useful inputs and discussions.", "We would also like to thank the entire Lingvo development team for their foundational contributions to this project." ]
[ "abstain", "method", "method", "objective", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "result", "method", "result", "method", "objective", "result", "result", "result", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "other", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "method", "method", "result", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "objective", "other", "method", "other", "other", "other", "result", "abstain", "result", "result", "method", "abstain", "abstain", "other", "other" ]
[ "The goal of database question answering is to enable natural language querying of real-life relational databases in diverse application domains.", "Recently, large-scale datasets such as Spider and WikiSQL facilitated novel modeling techniques for text-to-SQL parsing, improving zero-shot generalization to unseen databases.", "In this work, we examine the challenges that still prevent these techniques from practical deployment.", "First, we present KaggleDBQA, a new cross-domain evaluation dataset of real Web databases, with domain-specific data types, original formatting, and unrestricted questions.", "Second, we re-examine the choice of evaluation tasks for text-to-SQL parsers as applied in real-life settings.", "Finally, we augment our in-domain evaluation task with database documentation , a naturally occurring source of implicit domain knowledge.", "We show that KaggleDBQA presents a challenge to state-of-the-art zero-shot parsers but a more realistic evaluation setting and creative use of associated database documentation boosts their accuracy by over 13.2%, doubling their performance.", "Text-to-SQL parsing is a form of database question answering (DBQA) that answers a user's natural-language (NL) question by converting it into a SQL query over a given relational database.", "It can facilitate NL-based interfaces for arbitrary end-user applications, thereby removing the need for domain-specific UX or learning query languages.", "As such, DBQA attracted significant attention in academia and industry, with development of supervised datasets (Yu et al., 2018), large-scale models (Wang et al., 2020b; Zeng et al., 2020), and novel modeling techniques (Yu et al., 2020; Deng et al., 2020).", "The key challenge of text-to-SQL parsing is zero-shot generalization to unseen domains, i.e. to new database schemas and dierently distributed NL questions.", "Large-scale annotated datasets like Spider (Yu et al., 2018) and WikiSQL (Zhong et al., 2017) evaluate cross-domain generalization of text-to-SQL parsers by restricting overlap between train and test domains.", "Such challenging benchmarks facilitate rapid progress in DBQA.", "State-of-the-art (SOTA) accuracy on Spider rose from 12.4% to 70.5% in just two years since its release, demonstrating the value of well-chosen evaluation settings.", "Despite impressive progress in DBQA, deployment of SOTA parsers is still challenging.", "They often lack robustness necessary to deploy on real-life application domains.", "While many challenges underlie the gap between SOTA DBQA and its real-life deployment, we identify three specific discrepancies.", "First, Spider and WikiSQL datasets normalize and preprocess database schemas or rely on academic example databases that originate with human-readable schemas (Suhr et al., 2020).", "In contrast, industrial databases feature abbreviated and obscure naming of table, columns, and data values, often accrued from legacy development or migrations.", "Figure 1 shows a characteristic example.", "After deployment, text-to-SQL parsers struggle with schema linking to domain-specific entities because they do not match the distribution seen in their pre-training ( e.g. BERT) or supervised training ( e.g. 
Spider).", "Second, the NL questions of Spider and WikiSQL have high column mention percentage (Deng et al., 2020), which makes their language unrealistic.", "This can be an artifact of rule-generated NL templates (as in WikiSQL) or annotation UIs that prime the annotators toward the schema (as in Spider).", "Either way, real-world deployment of a text-to-SQL parser optimized on Spider faces a distribution shift in NL, which reduces its realistic performance.", "Finally, the standard evaluation setting of cross-domain text-to-SQL parsing assumes no in-domain Database: Student Math Score Table FINREV_FED_17 : state_code school_district yr_data t_fed_rev c14 c15 33 NEW YORK CITY SCHOOL DISTRICT 17 2061297 956851 439209 47 FAIRFAX CO SCHS 17 126916 21035 36886 Column Descriptions: t_fed_rev Total federal revenue through the state to each school district c14 Federal revenue through the state-Title 1 (no child left behind act) c15 Federal revenue through the state Child Nutrition A Table FINREV_FED_17_KEY : state_code state #_Records 1 Alabama 137 50 Wisconsin 425 51 Wyoming 48 Example Question: Which school district received the most of federal revenue through state in Wisconsin?", "supervision.", "This simplifies parser evaluation and raises the challenge level for zero-shot generalization.", "However, it does not leverage knowledge sources commonly present in real-world applications, both explicit (annotated in-domain examples) and implicit ( e.g. database documentation, SQL queries in the application codebase, or data dis-tributions).", "A well-chosen alternative evaluation setting would facilitate development of DBQA technologies that match their real-world evaluation.", "KaggleDBQA We introduce KaggleDBQA, a new dataset and evaluation setting for text-to-SQL parsers to bridge the gap between SOTA DBQA research and its real-life deployment.", "1 It systematically addresses three aforementioned challenges: To test database generalization, it includes real-world databases from Kaggle, 2 a platform for data science competitions and dataset distribution.", "They feature abbreviated and obscure column names, domain-specific categorical values, and minimal preprocessing (Section 3.1).", "To test question generalization, we collected unrestricted NL questions over the databases in KaggleDBQA.", "Importantly, the annotators were not presented with original column names, and given no task priming (Section 3.2).", "Out of 400 collected questions, one-third were out of scope for SOTA text-to-SQL parsers.", "The remaining 1 Available at https://aka.ms/KaggleDBQA .", "Finally, we augment KaggleDBQA with database documentation , common metadata for real-world databases and a rich source of implicit domain knowledge.", "Database documentation includes column and table descriptions, categorical value descriptions (known as data dictionaries ), SQL examples, and more (Section 3.3).", "We present a technique to augment SOTA parsers with column and value descriptions, which significantly improves their out-of-domain accuracy (Section 4).", "Figure 1 shows a representative example from the dataset.", "Aligning federal revenue and t_fed_rev is hard without domain knowledge.", "In addition to more realistic data and questions, we argue that evaluation of real-world text-to-SQL performance should assume few-shot access to 10 in-domain question-SQL examples rather than measuring zero-shot performance.", "In practical terms, few-shot evaluation assumes up to 1-2 hours of effort by a target database administrator 
"In a few-shot evaluation setting, augmenting a SOTA text-to-SQL parser (RAT-SQL by Wang et al. (2020b)) with database documentation almost doubled its performance from 13.56% to 26.77%.", "See Section 4.", "Text-to-SQL Semantic Parsing. Semantic parsing has been studied extensively for decades (Liang, 2016).", "Key in-domain datasets such as GeoQuery (Zelle and Mooney, 1996) and ATIS (Dahl et al., 1994) acted as the initial catalysts for the field by providing an evaluation measure and a training set for learned models.", "Applying a system to a domain with a different distribution of questions or parses required out-of-domain data or domain transfer techniques.", "Recently, the cross-domain datasets WikiSQL (Zhong et al., 2017) and Spider (Yu et al., 2018) proposed a zero-shot evaluation methodology that requires out-of-domain generalization to unseen database domains.", "This inspired rapid development of domain-conditioned parsers that work out of the box, such as RAT-SQL (Wang et al., 2020b) and IRNet (Guo et al., 2019).", "We use the same exact match accuracy metric as these works.", "Recent work (Zhong et al., 2020) has proposed evaluating SQL prediction via semantic accuracy instead, by computing denotation accuracy on automatically generated databases.", "Few-shot learning. In this paper, we propose a few-shot evaluation to inspire future research on practical text-to-SQL parsers.", "Like zero-shot, few-shot has access to many out-of-domain examples, but it also has access to a small number of in-domain examples.", "Few-shot learning has been applied to text classification (Mukherjee and Awadallah, 2020) and has also been applied to semantic parsing.", "Common techniques include meta-learning (Huang et al., 2018; Wang et al., 2020a; Li et al., 2021; Sun et al., 2020) and adversarial learning (Li et al., 2020).", "Generalization and practical usability. Recent work has begun to question whether existing datasets are constructed in a way that will lead to models that generalize well to new domains.", "Suhr et al. (2020) identified a number of challenges with text-to-SQL datasets, one of which is an artificially high overlap between words in a question and words in the tables.", "This issue appears in Spider and is a byproduct of the fact that question authors view the database schema as they write their questions.", "The Spider-Realistic dataset (Deng et al., 2020) aims to reduce this by explicitly rewriting the questions to avoid overlapping terms.", "Other works have studied the gap between academic datasets and their practical usability (de Vries et al., 2020; Radhakrishnan et al., 2020; Zhang et al., 2020), including highlighting the need for data to be real.", "Our goal was to create an evaluation dataset and metric that minimizes this gap; our dataset is constructed from real data found on Kaggle that has been used for competitions or other analyses.", "Another direction of generalization being explored is compositionality.", "Keysers et al. (2020) used rules to generate a large-scale semantic parsing dataset that specifically tests models for composability.", "Leveraging other resources for learning. Rastogi et al. (2020) provide NL descriptions for slots and intents to help dialogue state tracking.", "Logeswaran et al. (2019) use descriptions to facilitate zero-shot learning for entity linking.", "Weller et al.
(2020) use descriptions to develop a system that can perform zero-shot learning on new tasks.", "We follow suit by including documentation on each included real-world database.", "Notably, this documentation was written for human consumption of the database rather than prepared for KaggleDBQA, and thus is a natural source of domain knowledge.", "It provides benefits similar to codebase documentation and comments, which improve source code encoding for AI-assisted software engineering tasks (Panthaplackel et al., 2020; Wei et al., 2019).", "The goal of the KaggleDBQA evaluation dataset is to more closely reflect the data and questions a text-to-SQL parser might encounter in a real-world setting.", "As such, it expands upon contemporary cross-domain text-to-SQL datasets in three key aspects:", "(i) its databases are pulled from real-world data sources and not normalized;", "(ii) its questions are authored in environments that mimic natural question answering;", "(iii) its evaluation assumes the type of system augmentation and tuning that could be expected from domain experts who carry out text-to-SQL parser deployment.", "We describe each of these components in turn in this section.", "We chose to obtain databases from Kaggle, a popular platform for hosting data science competitions and sharing datasets and code.", "Their hosted datasets are by definition real, as they are used by members of the site for research.", "Competition hosts upload their data unnormalized, and the data content and formatting match their domain-specific usage (see Figure 1 for an example).", "[Table 1: Comparison of text-to-SQL datasets.]", "To construct KaggleDBQA, we randomly selected 8 Kaggle datasets that satisfied the following criteria:", "(a) contained a SQLite database;", "(b) licensed under a republishing-permissive license;", "(c) had associated documentation that described the meaning of the tables and columns.", "For each database, we asked five annotators to write ten domain-specific questions that they think someone might be interested in and that can be answered using the database.", "We use five annotators per database to help guarantee diversity of questions.", "Each annotated two databases, for a total of 20 annotators and 400 questions.", "The annotators are not required to possess SQL knowledge, so their questions are more reflective of natural user interests.", "Importantly, to discourage users from using the same terms from the database schema in their questions, we replace the original column names with the column descriptions.", "When annotating the questions, the annotators are shown a paragraph description of the database, table names, column descriptions and ten sampled rows for each table.", "We do not provide any constraints or templates other than asking them to avoid using exact phrases from the column headings in their questions.", "Appendix A.2.3 shows the full guidelines.", "Separately, each question is annotated with its SQL equivalent by independent SQL experts.", "They are given full access to all of the data content and the database schema.", "One-third of the questions were yes/no, percentage, temporal, or unexpressible in SQL and were not considered in our evaluation of SOTA models (see Appendix A.2.2 for details), leaving 272 questions in total.", "Each database has associated plain-text documentation that can assist text-to-SQL parsing.", "It is commonly found as internal documentation for database administrators or external documentation accompanying a dataset release.", "The contents
vary but often contain an overview of the database domain, descriptions of tables and columns, sample queries, original sources, and more.", "While all of these types of information could be leveraged to assist with domain transfer, in this work we focus on the column descriptions.", "They help address the schema linking problem of text-to-SQL parsing, i.e. aligning entity references in the question with database columns (Wang et al., 2020b).", "For example, federal revenue in Figure 1 must be aligned to the column t_fed_rev even though its abbreviated name makes the alignment non-obvious.", "We manually extract the column descriptions from the database documentation and provide the mapping from column to description as part of KaggleDBQA.", "The descriptions are free text and sometimes contain additional information, such as defining the values in a categorical column.", "Such information could help with the value-linking problem (mapping a value in the question to the column that likely contains it).", "[Table 2: Average partial match % of column descriptions across examples; we check whether 1- to 3-grams in the question are part of any column descriptions. For 1-/2-/3-grams respectively: % cols matched in golden SQL: 56.27/21.47/4.80; # cols matched in golden SQL: 1.06/0.37/0.07; # cols matched not in the SQL: 4.69/1.29/0.13.]", "We leave the entire description as a single field and leave it to future work to explore these uses further.", "In addition to the column descriptions, we also include the original unstructured documentation, which can be used for future research on automatically extracting descriptions or leveraging other domain knowledge.", "The current cross-domain datasets Spider (Yu et al., 2018) and WikiSQL (Zhong et al., 2017) evaluate models in a zero-shot setting, meaning the model is trained on one set of domains and evaluated on a completely disjoint set.", "This evaluation encourages the development of systems that work well \"out of the box\" and has spurred great development in cross-domain text-to-SQL systems that are able to generalize to new domains.", "However, we believe the zero-shot setting is overly restrictive compared to how text-to-SQL systems are likely to be actually used in practice.", "We postulate that it is more realistic to assume a setting where an application author spends 1-2 hours authoring examples and adapting existing database documentation.", "This time investment is a small fraction of the time required to prepare an application itself, and so we believe application authors would devote the time if it resulted in increased text-to-SQL accuracy.", "In informal experiments, we have found SQL annotators can author 10-20 examples in an hour.", "Thus, the KaggleDBQA evaluation setting is few-shot: 30% of the questions for each domain (6-15 depending on the domain) are designated as in-domain and may be used as part of training for that domain, along with documentation.", "The remaining 70% are used for evaluation.", "We report accuracy in both the few-shot as well as the standard zero-shot (cross-domain) setting in this paper, but consider the few-shot setting to be the primary evaluation setting for KaggleDBQA.", "Evaluation is conducted on the same 70% portion regardless of setting, to ensure comparable results.", "We compare KaggleDBQA with previous benchmark datasets using key metrics in Table 1.",
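The partial-match statistic behind Table 2 amounts to an n-gram containment check between the question and each column description; the tokenization and stop-word handling in the sketch below are simplified assumptions.

```python
def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as strings."""
    return {" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def columns_matched(question_tokens, descriptions, n):
    """Count columns whose description shares at least one n-gram with the
    question (stop-words assumed already removed from both sides)."""
    q = ngrams([t.lower() for t in question_tokens], n)
    return sum(1 for desc in descriptions if q & ngrams(desc.lower().split(), n))
```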
"We compare KaggleDBQA with previous benchmark datasets using key metrics in Table 1. KaggleDBQA has the lowest value mention percentage among all datasets, and also exhibits a low overlap between question terms and column names, similar to that in all of the datasets besides Spider, making it more in line with what would be expected in a real-world setting where the people asking questions are not familiar with the actual database schema and terminology.", "This is likely a result of replacing column names with descriptions in the question annotation task.", "We also analyze the overlap between question terms and column descriptions in Table 2. Because the descriptions are significantly longer than column names, we require only that they share an n-gram in common (ignoring stop-words) rather than requiring an exact match, as was done for the column mention percentage.", "Unigram overlap is reasonably high (56% of correct columns match the question) but also results in many false-positive matches with other columns.", "Increasing the n-gram size decreases false positives but also rapidly decreases the correct-column match percentage.", "Thus, column descriptions may help guide the model, but are not as strong a signal as in Spider, which suffers from high exact column-name match overlap.", "This was our intention in asking our annotators to avoid using the descriptions verbatim when writing questions.", "To measure the complexity of SQL in KaggleDBQA, we adopt the hardness criteria of Spider and report the numbers in Figure 2. The queries are on average more complex than Spider's, with significantly more hard and extra-hard ones.", "EditSQL (Zhang et al., 2019): EditSQL (with BERT) is the highest-performing model on the Spider dataset that also provides an open-source implementation (https://github.com/ryanzhumich/editsql) along with a downloadable trained model.", "The model was built for edit-based multi-turn parsing tasks, but can also be used as a single-turn parser for Spider or KaggleDBQA.", "It employs a sequence-to-sequence model with a question-table co-attention encoder for schema encoding.", "RAT-SQL (Wang et al., 2020b): RAT-SQL (v3 + BERT) is the model with the highest accuracy on the Spider leaderboard (as of one month before paper authoring) that also provides an open-source implementation.", "It adds string matching to the encoder through the use of relation-aware self-attention and adopts a tree-based decoder to ensure the correctness of the generated SQL.", "Throughout this paper, we use the same exact-match accuracy metric introduced by the Spider dataset.", "Although our primary evaluation setting is few-shot, we first examine the traditional zero-shot setting to present an unbiased comparison with previous results.", "Table 3 compares the performance of these two models (both trained on Spider).", "As can be seen, the performance of both models is significantly lower on KaggleDBQA.", "This echoes the findings of Suhr et al.
(2020), who found that a model trained on Spider did not generalize well to other datasets.", "Also, KaggleDBQA has far fewer column mentions and much more complex SQL than Spider (see Table 1 and Figure 2).", "For all further experiments on KaggleDBQA that emulate real-world evaluation, we choose RAT-SQL as the best-performing parser.", "To apply RAT-SQL to KaggleDBQA's few-shot setting, for each domain we create a model by fine-tuning on its 30% in-domain data.", "See Appendix A.3 for implementation details.", "This fine-tuning is always performed as the last step before evaluation.", "As Table 4 shows, fine-tuning on a small amount of in-domain data dramatically increases overall accuracy from 13.56% to 17.96% (rows", "(a) and", "(e)). Although the few-shot setting is our primary setting, we also present results in the zero-shot setting to compare to previous work (Table 4 rows", "(e)-(h)).", "However, in the remainder of the paper we focus on the few-shot setting.", "The database schemas in KaggleDBQA are obscure, making the task difficult without leveraging the database documentation.", "We consider only the column descriptions, but other portions of the documentation may prove useful in future work.", "The best approach for incorporating column descriptions into a text-to-SQL model is model-specific.", "RAT-SQL makes use of relations between question tokens and schema terms to assist with schema linking.", "We extend the same functionality to column descriptions by appending the column descriptions to the column names (separated by a period) and recomputing matching relations.", "The concatenated column name is also presented to the transformer encoder for schema encoding.", "Simply adding these descriptions results in a mismatch between the training set (Spider), which does not have descriptions, and the evaluation set (KaggleDBQA), which does.", "To alleviate this, we first augment the schemas in Spider with artificial descriptions.", "For a column c of table t, the description for c is \"the c of the t\".", "We then retrain RAT-SQL on Spider with these artificial descriptions.", "Since the artificial descriptions simply restate information from the schema, the model may not learn to leverage them for any further information about schema linking and may simply treat them as noise.", "Therefore, we also evaluate RAT-SQL adapted to the general domain of KaggleDBQA so that it", "[Table 4: Exact match accuracy and standard error on KaggleDBQA, mean of three runs with different random seeds.]", "(a) experiences useful descriptions and", "(b) adapts to the language distribution of KaggleDBQA.", "We evaluate the benefits of this adaptation using leave-one-out: for each domain in KaggleDBQA, we fine-tune the model on all other domains except for the target (with the same fine-tuning parameters as for few-shot learning).", "Adapting in this way is predictive of the performance on a novel domain with similar characteristics.", "As with the other few-shot results, the model is then fine-tuned on the few examples of target-domain data.", "Adaptation and fine-tuning are two separate training processes.", "Adaptation is meant to adapt to the real-world distribution.", "Fine-tuning is meant to adjust for in-domain knowledge.", "The most effective setting for a target database in our experiments is to conduct adaptation first, followed by fine-tuning.", "Table 4 (row", "(d)) shows the results.", "Using column descriptions in the
context of adaptation increases model accuracy from 17.96% to 26.77%.", "Ablations show that adaptation and descriptions each contribute approximately half of this gain (row", "(c)).", "Descriptions provide no benefit without adaptation (row", "(b)), likely due to the train-test mismatch between artificial descriptions and real ones.", "[Table 5: Exact match accuracy and standard error on schema-normalized KaggleDBQA, average of three runs with different random seeds.]", "Without any artificial descriptions, accuracy drops even further, so they are critical to leveraging in-domain knowledge.", "Overall, incorporating in-domain data (i.e. a few-shot setting and database documentation) nearly doubles model accuracy from 13.56% to 26.77% on KaggleDBQA.", "One of the major challenges in KaggleDBQA is that column names are often obscure or abbreviated.", "A natural question is whether this creates difficulty because the model struggles to understand the meaning of a column or because it leads to a low overlap between question and column terms.", "In an attempt to tease these factors apart, we created a normalized version of KaggleDBQA by replacing the obscure column names with normalized column names such as one might find in the Spider dataset.", "This was done manually, using column descriptions to help clarify each column and without introducing any extra knowledge into the column names except for the expansion of abbreviations (e.g. t_fed_rev becomes total federal revenue).", "In Table 5 we give the results of evaluation on the normalized KaggleDBQA, following the same setup as Table 4.", "Normalization provides a significant boost in performance (row", "(c) vs. row", "(a)).", "The trend is similar to that in Table 4.", "Without adaptation, models with descriptions are not better than those without (row", "(b) vs. row", "(a), row", "(d) vs. row", "(c)).", "After adaptation, the train-test mismatch is partly mitigated and the performance improves (row", "(f) vs. row", "(e), row", "(h) vs. row", "(g)).", "Normalization and descriptions provide complementary knowledge augmentation, jointly improving accuracy by 5% (row", "(h) vs. row", "(e)), more than either alone.", "Normalization helps clarify the obscure column names of KaggleDBQA.", "[Table 6: Examples where description-augmented (desc.) models solve a question that unaugmented models (no desc.) do not.]", "Both models are adapted and fine-tuned.", "Both omit values, as per the official Spider metric.", "Challenges such as a low column mention percentage and in-domain schema conventions still leave significant room for improvement.", "We provide the full experimental results on normalized tables in the Appendix.", "Table 6 shows examples of improvements due to descriptions.", "First, column descriptions help the parser correctly identify columns to select.", "For instance, it chooses STAT_CAUSE_CODE over STAT_CAUSE_DESCR when asked for the most common cause of the fire (code).", "Second, they clarify necessary constraints.", "For instance, when asked \"how many samples come from other countries?\", the parser chooses the correct origin column rather than the superficially matching country column, in the clause WHERE sampledata15.origin = \"2\".", "Table 7 shows a distribution of error types in KaggleDBQA using 10 randomly-selected erroneous predictions for each domain.", "The error categories mostly follow Suhr et al.
(2020), modulo", "(a) removing unobserved categories,", "(b) separating semantically equivalent predictions into their own Equivalent category, and", "(c) categorizing significant structural errors as Understanding Errors.", "We also provide more characteristics of each database in Table 8 in an attempt to understand the difference in performance across databases.", "Our model performs worst on the databases with the most columns (Pesticide, Baseball, and Soccer).", "The only database with lower accuracy is MathScore, which has multiple tables and a relatively small fine-tuning set.", "The most common error types and their examples are summarized in Table 9.", "(i) The most common type is Incorrect Final Column (33.75%), illustrating the difficulty of schema linking in KaggleDBQA even with documentation and fine-tuning.", "(ii) 32.5% of the errors are in Missing Constraints.", "In KaggleDBQA questions, users sometimes use implications instead of directly mentioning the desired constraint, e.g. \"in preparation\" for Status = \"Under Construction\".", "(iii) 31.25% of the errors are in Incorrect Constraint, e.g. failing to parse \"highest\" into the top-1 result in descending order.", "[Table 8: Statistics of each database in KaggleDBQA.]", "(iv) 15% of the errors are in Entity-column matching, e.g. aligning Salford to Location rather than LSOA.", "This illustrates the difficulty of value linking, partly mitigated by value descriptions for categorical columns in the database documentation.", "KaggleDBQA provides two resources to facilitate real-world applications of text-to-SQL parsing.", "First, it encourages an evaluation regime that bridges the gap between academic and industrial settings, leveraging in-domain knowledge and a more realistic database distribution.", "We encourage adopting this regime for established text-to-SQL benchmarks.", "Second, it is a new dataset of more realistic databases and questions, presenting a challenge to state-of-the-art parsers.", "Despite the addition of domain knowledge in the form of database documentation, our baselines reach only 26.77% accuracy, struggling to generalize to harder questions.", "We hope that better use of documentation and new modeling and domain adaptation techniques will help further advance the state of the art.", "The KaggleDBQA dataset is available at https://aka.ms/KaggleDBQA.", "Dataset Collection The data collection process was pre-approved by an IRB.", "Each annotator agreed to a consent form before having access to the labeling task.", "Each annotator was rewarded with a $20 e-gift card for the approximately one hour of their time.", "The authors of this paper acted as the SQL annotators and received no additional compensation.", "The databases collected for KaggleDBQA were individually reviewed to ensure they were properly licensed for redistribution.", "For other details of dataset construction, please refer to Section 3.
Aside from email addresses, no personal information of annotators was collected during our study.", "Email addresses were not shared and were promptly deleted after compensation had been provided.", "The association between annotator and annotation was deleted before any analysis or distribution was conducted.", "Language Distribution KaggleDBQA only includes question annotations and databases in English; thus, evaluating multi-lingual text-to-SQL models on it will require translation.", "The set of annotators included both native and second-language speakers of English, all fluent.", "Usage of DBQA Technology Our goal with KaggleDBQA is to encourage the development of DBQA that will work in real-world settings.", "The actual deployment of a text-to-SQL parser must be conducted with appropriate safeguards in place to ensure users understand that the answers may be incorrect, especially if those answers are to be used in decision making." ]
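As a companion to the Table 2 statistic reported above, here is a minimal sketch of the relaxed n-gram matching test between a question and a column description; the whitespace tokenizer and the stop-word list are illustrative assumptions, not the dataset authors' implementation.

```python
def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as a set of tuples."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

# A tiny illustrative stop-word list; the actual list used is not specified.
STOP_WORDS = {"the", "a", "an", "of", "in", "is", "are", "what", "which", "how", "many"}

def partially_matches(question, description, n=1):
    """True if the question and a column description share any n-gram
    after dropping stop-words, i.e. a partial match rather than the
    exact match used for the column mention percentage."""
    q = [t for t in question.lower().split() if t not in STOP_WORDS]
    d = [t for t in description.lower().split() if t not in STOP_WORDS]
    return bool(ngrams(q, n) & ngrams(d, n))

# For example:
# partially_matches("what is the total federal revenue", "total federal revenue", n=2) -> True
```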
[ "abstain", "abstain", "abstain", "objective", "method", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "method", "objective", "objective", "objective", "objective", "objective", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "method", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain" ]
[ "Although pretrained language models can be fine-tuned to produce state-of-the-art results for a very wide range of language understanding tasks, the dynamics of this process are not well understood, especially in the low data regime.", "Why can we use relatively vanilla gradient descent algorithms (e.g., without strong regularization) to tune a model with hundreds of millions of parameters on datasets with only hundreds or thousands of labeled examples?", "In this paper, we argue that analyzing fine-tuning through the lens of intrinsic dimension provides us with empirical and theoretical intuitions to explain this remarkable phenomenon.", "We empirically show that common pre-trained models have a very low intrinsic dimension; there exists a low dimension reparameterization that is as effective for fine-tuning as the full parameter space.", "For example, by optimizing only 200 trainable parameters randomly projected back into the full space, we can tune a RoBERTa model to achieve 90% of the full parameter performance levels on MRPC.", "Furthermore, we empirically show that pretraining implicitly minimizes intrinsic dimension and, perhaps surprisingly, larger models tend to have lower intrinsic dimension after a fixed number of pre-training updates, at least in part explaining their extreme effectiveness.", "Lastly, we connect intrinsic dimensionality with low dimensional task representations and compression based generalization bounds to provide generalization bounds that are independent of the full parameter count.", "Pre-trained language models (Radford et al., 2019; Devlin et al., 2018; Liu et al., 2019; Lewis et al., 2019, 2020) provide the defacto initialization for modeling most existing NLP tasks.", "However, the process of fine-tuning them on often very small target task datasets remains somewhat mysterious.", "Why can we use relatively vanilla gradient descent algorithms (e.g., without strong regularization) to tune a model with hundreds of millions of parameters on datasets with only hundreds or thousands of labeled examples?", "We propose intrinsic dimensionality as a new lens through which fine-tuning can be analyzed (Li et al., 2018).", "An objective function's intrinsic dimensionality describes the minimum dimension needed to solve the optimization problem it de-fines to some precision level.", "In the context of pre-trained language models, measuring intrinsic dimensional will tell us how many free parameters are required to closely approximate the optimization problem that is solved while fine-tuning for each end task.", "For example, we will show that 200 parameters (randomly projected back into the full parameter space) are enough to represent the problem of tuning a RoBERTa model to within 90% of the performance of the full model.", "More generally, we also describe a set of strong empirical and theoretical connections between intrinsic dimensionality, number of parameters, pre-training, and generalization.", "We first empirically show that standard pretrained models can learn a large set of NLP tasks with very few parameters and that the process of pre-training itself implicitly minimizes the intrinsic dimension of later tuning for different NLP tasks.", "We study over a dozen different pre-trained models to show that the number of parameters strongly inversely correlates with intrinsic dimensionality, at least in part justifying the extreme effectiveness of such models.", "We interpret pre-training as providing a framework that learns how to compress the average NLP 
task.", "Finally, we connect intrinsic dimensional with low dimensional task representations and compression-based generalization bounds to provide intrinsic-dimension-based generalization bounds independent of the full parameter count, further justifying why these methods generalize so well in practice across tasks.", "The contributions of our paper are the following: We empirically show that common NLP tasks within the context of pre-trained representations have an intrinsic dimension several orders of magnitudes less than the full parameterization.", "We propose a new interpretation of intrinsic dimension as the downstream fine-tuning task's minimal description length within the framework of the pre-trained model.", "Within this interpretation, we empirically show that the process of pre-training implicitly optimizes the description length over the average of NLP tasks, without having direct access to those same tasks.", "We measure the intrinsic dimension of a large set of recently developed pre-training method, and how that larger models tend to have smaller intrinsic dimension.", "Lastly, we show that compression based generalization bounds can be applied to our intrinsic dimension framework to provide generalization bounds for large pre-trained models independent of the pre-trained model parameter count.", "Calculating the intrinsic dimension of an objective function in the context of deep-learning was first proposed by Li et al. (2018).", "They analyzed the impact of various architectures on the intrinsic dimensionality of their objective.", "Our work is a direct extension of this approach, focusing on analyzing pre-trained representations instead.", "There is a large collection of literature analyzing pre-trained models from the perspective of capacity.", "For example, a recent line of work has shown that pre-trained models such as BERT are redundant in their capacity, allowing for significant sparsification without much degradation in end metrics (Chen et al., 2020; Prasanna et al., 2020; Desai et al., 2019).", "Houlsby et al. 
(2019) showed that fine-tuning only the top layers of pre-trained models is not effective and that alternate methods allow effective fine-tuning with a couple of percent of the parameters.", "Furthermore, we can view computing the intrinsic dimensionality as a continuous relaxation of the sparsification problem.", "There also exist connections between intrinsic dimensionality, knowledge distillation, and other model compression methods.", "Fundamentally, intrinsic dimensionality attempts to find the smallest set of parameters needed to tune to reach satisfactory solutions, which can be thought of as a sparsification or distillation problem (Hinton et al., 2015; Chen et al., 2020).", "Unlike distillation approaches, the approach of intrinsic dimensionality does not change parameter count, sparsity, or architecture but instead looks at the underlying rank of the objective function (Li et al., 2018).", "There are also connections between representing multiple tasks within a pre-trained model and compression, which we explore in Section 5.", "Moreover, standard approaches towards fine-tuning seem to have non-trivial effects on the generalization of pre-trained representations (Aghajanyan et al., 2020, 2021).", "A holistic explanatory picture of the successes of fine-tuning has not yet been painted.", "A clear understanding of the underlying mechanisms which lead to the incredible generalization of fine-tuned pre-trained representations is currently missing.", "Moreover, we still do not understand why various pre-training methodologies manifest in universally useful representations, although a recent line of work has attempted to cover this gap by looking at loss landscapes and the learned linguistic properties of pre-trained models (Hao et al., 2019; Clark et al., 2019a).", "Background The intrinsic dimension of an objective function measures the minimum number of parameters needed to reach satisfactory solutions to the respective objective (Li et al., 2018).", "Alternatively, the intrinsic dimension represents the lowest-dimensional subspace in which one can optimize the original function to within a certain level of approximation error.", "Computing the exact intrinsic dimension of the objective function is computationally intractable; therefore, we resort to heuristic methods to calculate an upper bound.", "Let $\theta^{D} = [\theta_0, \theta_1, \dots, \theta_m]$ be a set of $D$ parameters that parameterize some model $f(\cdot, \theta)$.", "Instead of optimizing the empirical loss in the original parameterization ($\theta^{D}$), the subspace method fine-tunes the model via the following reparameterization in the lower-dimensional $d$ dimensions: $\theta^{D} = \theta^{D}_{0} + P(\theta^{d})$ (1), where $P : \mathbb{R}^{d} \to \mathbb{R}^{D}$ projects a parameter from the lower-dimensional $d$ to the higher-dimensional $D$, and $\theta^{D}_{0}$ is the original model parameterization.", "Intuitively, we project onto a much smaller space using an arbitrary random projection (usually a linear one), and we then solve the optimization problem in that smaller subspace.", "If we reach a satisfactory solution, we say the dimensionality of that subspace is the intrinsic dimension.", "This methodology was proposed in the seminal paper by Li et al. (2018).", "Concretely, Li et al.
(2018) proposed three different parametric forms for $P$: a random linear dense projection ($\theta^{d}W$), a random linear sparse projection ($\theta^{d}W_{\mathrm{sparse}}$), and a random linear projection via the Fastfood transform (Le et al., 2013).", "The factorization of $M = HG\Pi HB$ consists of $H$, a Hadamard matrix; $G$, a random diagonal matrix with independent standard normal entries; $B$, a random diagonal matrix with equal-probability $\pm 1$ entries; and $\Pi$, a random permutation matrix.", "Furthermore, the matrix multiplication with a Hadamard matrix can be computed in $O(D \log d)$ via the Fast Walsh-Hadamard Transform.", "Everything except $\theta^{d}$ is fixed; therefore, the optimization problem lies only in $d$ dimensions.", "We use the Fastfood transform due to its computational complexity.", "Specifically, using Hadamard matrices instead of dense matrices allows us to compute a linear projection significantly faster than a dense matrix projection.", "Furthermore, when working with large models such as RoBERTa, the memory required to store even a low-dimensional dense matrix to calculate intrinsic dimension is unreasonable ($d = 1000$: $330{,}000{,}000 \times 1000 \times 4$ bytes $= 1.32$ terabytes).", "The standard method of measuring the intrinsic dimensionality of an objective, as proposed by Li et al. (2018), requires searching over various $d$, training using standard SGD over the subspace reparameterization $\theta^{D}$, and selecting the smallest $d$ which provides us with a satisfactory solution ($d_{90}$).", "Li et al. (2018) defined a satisfactory solution as reaching 90% of the full training metric.", "(If we constrain $M$ to be a binary matrix, we recover the sparsification problem; therefore, we can also view finding the intrinsic dimensionality as a continuous relaxation of the sparsification problem.)", "For example, if we reach 85% accuracy training a model with all of its parameters, the goal is to find the smallest $d$ which would reach $0.9 \times 85\% = 76.5\%$ accuracy; we call this dimension $d_{90}$.", "The way Li et al. (2018) define a satisfactory solution reduces the dependence of the calculation of intrinsic dimension on the dataset size.", "For a small dataset, we will generally have worse end metrics; therefore, we have a lower $d_{90}$ cut-off; inversely, a larger dataset will require a more non-trivial $d_{90}$ cut-off.", "Structure-Aware Intrinsic Dimension Due to the large size of pre-trained language models (generally in the hundreds of millions of parameters), the only computationally reasonable subspace optimization method is one that utilizes the Fastfood transform.", "For example, if we are interested in subspace training with $d = 1000$ for the RoBERTa-Large model using a dense matrix, we would require 1.42 terabytes of memory to store just the projection matrix.", "Unfortunately, the method of finding the intrinsic dimension proposed by Li et al. (2018) is unaware of the layer-wise structure of the function parameterized by $\theta$.", "Existing literature argues that in attention-based pre-trained models, individual layers specialize separately (Clark et al., 2019b); therefore, it is useful to incorporate a notion of structure when computing $d_{90}$.", "We define Structure-Aware Intrinsic Dimension (SAID) as the following: $\theta^{D}_{i} = \theta^{D}_{0,i} + \lambda_i P(\theta^{d-m})_i$ (3). For $m$ layers, we trade $m$ parameters from our subspace parameter $\theta^{d}$ to allow for layer-wise scaling through a jointly learned $\lambda$; thus $\theta^{d}$ becomes $[\theta^{d-m}, \lambda]$.", "This allows the SAID method to focus a larger capacity of $\theta^{d-m}$ towards specific layers that might carry more relevant information for the task
at hand.", "Conversely, we will refer to the layer unaware method (Equation 2) as the Direct Intrinsic Dimension (DID) method.", "We first empirically calculate the intrinsic dimension of various pre-trained models on a set of sentence prediction tasks from the GLUE Benchmark 2 Initializing d = 0 we recover the original parameterization", "parameterization D 0 which in the context of fine-tuning represents the original weights of the pre-trained model.", "(Wang et al., 2018).", "We focus on analyzing BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019) at both the base and large model sizes.", "We chose to experiment with MRPC (Dolan and Brockett, 2005) and QQP (Iyer et al., 2017) as reference examples of small and large tuning datasets.", "MRPC is a binary classification task for predicting semantic equivalency for two paraphrases with roughly 3700 training samples, while QQP is a binary classification task for predicting semantic equality of two questions, with roughly 363k samples.", "For every dataset and every model, we run 100 subspace trainings with d ranging from 10 to 10000 on a log scale.", "For every training run, we do a small hyperparameter search across four learning rates.", "We initialize every d to the zero vector to allow for our starting point to be the original pretrained model.", "Our subspace optimization method also operates over the randomly initialized sentence classification head to ensure we have exactly d parameters to optimize.", "We use both the SAID and DID subspace optimization methods, which we implemented in the Huggingface Transformers library (Wolf et al., 2019).", "We present the results in Figure", "1. 4.2 Analysis The first takeaway is the incredible low dimensionality of viable solutions.", "With RoBERTa-Large, we can reach 90% of the full fine-tuning solution of MRPC using roughly 200 parameters and 800 parameters for QQP (Table 1).", "Recall that our approximation of intrinsic dimension is necessarily crude by using random projections and restricting them to the use of Fastfood transform; therefore, it is likely that the true intrinsic dimension is much lower.", "Furthermore, RoBERTa consistently outperforms BERT across various subspace dimensions d while having more parameters.", "We leave a more in-depth analysis of model parameter size on intrinsic dimensionality to a later section (5.2).", "Lastly, we see that adding a notion of structure in the computation of intrinsic dimension is beneficial with the SAID method consistently improving over the structure unaware DID method.", "One interpretation of the intrinsic parameter vector is that it encodes the task at hand with respect to the original pre-trained representations.", "Therefore, we can interpret d as the minimal description length of the task within the framework dictated by the pretrained representations (Hinton and Zemel, 1993).", "Under this interpretation of intrinsic dimensionality, we hypothesize that pre-training is implicitly lowering the intrinsic dimensionality of the average NLP task, and therefore compressing the minimal description length of those same tasks.", "What do we more precisely mean by intrinsic parameter encoding a task within the framework provided by the pre-trained representations?", "Traditionally, a finetuned model (e.g. 
for a classification task) simply consists of a classification head $g$, parameterized by $w_g$, applied to fine-tuned representations $f$, parameterized by $w_f$, per sample $x$.", "Therefore, to fully describe a task, we need to pack together parameterizations and weights $\{g, f, w_g, w_f\}$.", "This model description is completely decoupled from the original weights of the pre-trained representation $w_{f_0}$; therefore, to represent $n$ classification tasks, we need to maintain $n$ copies of $\{w_g, w_f\}$; additionally, the task representation is incredibly high-dimensional.", "Conversely, fine-tuning utilizing SAID in $d$ dimensions requires storing only $\theta^{d}$ per task, a single random seed used to generate $M$, and the original pre-trained weights $w_{f_0}$.", "Therefore, we can represent arbitrary NLP tasks within a single pre-trained model framework with $d + 1$ parameters.", "For example, in the last section, we represented MRPC with roughly 200 parameters, which translates to needing less than a kilobyte of data to encode a complex natural language task within the framework provided by RoBERTa.", "We hypothesize that the better the pre-trained models are, the fewer bits (description length) are needed to represent the average NLP task, as we will demonstrate empirically in the next section.", "To verify our hypothesis that pre-training optimizes intrinsic dimension, we retrain a RoBERTa-Base from scratch and measure the intrinsic dimension of various NLP tasks at different training checkpoints, using the SAID method.", "We completely replicate the setting as described by Liu et al. (2019) apart from only training for a total of 200k steps (instead of 500k) with half the batch size (1k).", "To calculate the intrinsic dimension more efficiently, we reuse the best learning rates discovered in Section 4 for $d < 10000$ and use a fixed learning rate for anything else.", "To find $d_{90}$ we do a binary search across $d$ for each checkpoint, with a minimum $d$ of 100 and a maximum of 4 million.", "The full solution that we use when deciding the $d_{90}$ cutoff is computed by fine-tuning the checkpointed model in the standard way.", "We compute SAID on six datasets: MRPC, QQP, Yelp Polarity (Zhang et al., 2015), SST-2 (Socher et al., 2013), MNLI (Williams et al., 2018), and ANLI using all rounds of data (Nie et al., 2019).", "Although we focus on benchmarking sentence classification tasks, the selected set of tasks contains variety, from sentiment classification (Yelp Polarity, SST-2) to natural language inference (MNLI, ANLI) to question similarity (QQP).", "We present our results in Figure", "2. The intrinsic dimensionality of RoBERTa-Base monotonically decreases as we continue pre-training.", "We do not explicitly optimize for intrinsic dimensionality during pre-training (the language model does not have access to the downstream datasets!), but nonetheless the intrinsic dimension of these downstream tasks continues to decrease.", "Moreover, tasks that are easier to solve consistently show lower intrinsic dimensionality across all checkpoints; for example, Yelp Polarity vs.
the notoriously tough ANLI dataset.", "The correlation between challenging tasks for RoBERTa and their large intrinsic dimension hints at a connection between generalization and intrinsic dimension.", "We will discuss generalization further in Section 5.3.", "Given our task representation interpretation of intrinsic dimensionality, we argue that the large-scale training of masked language models (MLM) learns representations that are generic and distributed enough to facilitate downstream learning of highly compressed task representations.", "Furthermore, we argue for another perspective: pre-training learns representations that form a compression framework with respect to various NLP tasks.", "We also measure the relationships between the parameter count of arbitrary pre-trained models and the intrinsic dimension of downstream NLP tasks.", "The optimal experiment to run would be to fix the pre-training method, e.g., RoBERTa-style MLM, vary the architecture size from small to very big, and compute the intrinsic dimension of a group of tasks at every size of the model.", "Unfortunately, such an experiment is computationally infeasible due to the need to train many RoBERTa models.", "Instead, we do an empirical study of many existing pre-trained models, regardless of the pre-training method.", "We show that the trend is strong enough to overcome differences in training methodology.", "We select the following models: BERT (Devlin et al., 2018), RoBERTa (Liu et al., 2019), BART (Lewis et al., 2019), Electra (Clark et al., 2020), Albert (Lan et al., 2019), XLNet (Yang et al., 2019), T5 (Raffel et al., 2019), and XLM-R (Conneau et al., 2019).", "Furthermore, we selected various sizes of these models, as available publicly within the HuggingFace Transformers library (Wolf et al., 2019).", "We use the MRPC dataset and compute the intrinsic dimension for every pre-trained model utilizing the same binary search methodology mentioned in the previous section, with additional small hyperparameter searches across the learning rate (due to the wide range of learning rates needed by various models).", "We present our results in Figure 3.",
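The binary search for $d_{90}$ used above (a minimum $d$ of 100, a maximum of 4 million, and feasibility defined as reaching 90% of the full fine-tuning metric) can be sketched as follows. This is an illustrative implementation under the simplifying assumption that subspace performance is monotone in $d$, which holds only approximately in practice; `evaluate_subspace` is a hypothetical stand-in for a full subspace fine-tuning run.

```python
def find_d90(evaluate_subspace, full_metric, d_min=100, d_max=4_000_000):
    """Binary search for the smallest subspace dimension d whose
    subspace-trained performance reaches 90% of the full fine-tuning
    metric; evaluate_subspace(d) fine-tunes only d intrinsic
    parameters and returns the resulting task metric."""
    target = 0.9 * full_metric
    best = None
    lo, hi = d_min, d_max
    while lo <= hi:
        mid = (lo + hi) // 2
        if evaluate_subspace(mid) >= target:
            best = mid    # feasible: try an even smaller subspace
            hi = mid - 1
        else:
            lo = mid + 1  # infeasible: a larger subspace is needed
    return best           # None if even d_max fails to reach the target
```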
"There is a strong general trend that as the number of parameters increases, the intrinsic dimension of fine-tuning on MRPC decreases.", "We ran this experiment on other datasets to ensure that this is not a data artifact.", "Our experiments showed the same trend; we refer to the Appendix for all trends per dataset.", "Within the same window of parameter counts, the pre-training methodology becomes more important.", "For example, in the regime of $10^8$ parameters, RoBERTa pre-training dominates similarly sized pre-training methods.", "However, there does not seem to be a method that can overcome the limitations induced by the number of parameters.", "Interpreting these results through the lens of learning a compression framework for NLP tasks is straightforward; the more parameters we have in the model, the fewer parameters we need to represent a task.", "We have shown strong empirical evidence connecting pre-training, fine-tuning, and intrinsic dimensionality.", "However, we have yet to argue the connection between intrinsic dimensionality and generalization.", "Given that we have seen pre-training minimize intrinsic dimension, we hypothesize that generalization improves as the intrinsic dimension decreases.", "To do so, we will empirically experiment with the connections between $d_{90}$ and evaluation set performance by looking at various checkpoints from our RoBERTa experiments in Section 5.1.", "We also plot the relative generalization gap (the delta between train-time performance and test-time performance).", "In Figure 4 we plot the evaluation accuracies achieved by our pre-training experiment in Section 5.1.", "A lower intrinsic dimension is strongly correlated with better evaluation performance.", "Additionally, we are interested in measuring the relative generalization gap, $\frac{acc_{train} - acc_{eval}}{1 - acc_{eval}}$, across intrinsic dimension.", "We select the training accuracy that provides us with the best evaluation metrics when computing this figure.", "We present our results in Figure 5.", "Lower intrinsic dimension once again correlates strongly with a smaller relative generalization gap.", "If we interpret the intrinsic dimension as a measure of complexity, we expect the generalization gap to decrease with intrinsic dimension.", "By applying standard compression-based generalization bounds, we can provide theoretical backing to the empirical connection between intrinsic dimension and generalization (Arora et al., 2018).", "Consider the following definition of multi-class classification loss with an optional margin over our supervised dataset $D$.", "Theorem", "1.
Let $f$ be a function which is parameterized by $\theta^{D}$, as described in Equation 1, with a total of $d$ trainable intrinsic parameters, on a dataset with $m$ samples.", "Then, with high probability, we can state the following asymptotic generalization bound: $L_0(f) \leq \hat{L}_0(f) + O\big(\sqrt{d/m}\big)$ (4). Proof.", "We defer the proof to Section A.1 in the Appendix.", "We note that this is an extension of the well-known compression-based generalization bound (Arora et al., 2018).", "This generalization bound is independent of the underlying parameter count ($D$) of the pre-trained model but depends on the ability to compress the downstream task ($d$).", "Moreover, given that our previous section shows larger models compress better, our bounds are aligned with general intuition and recent empirical evidence that larger pre-trained models generalize better.", "[Figure 5: The intrinsic dimension and the respective relative generalization gap across a set of varied tasks.]", "Explicitly, these bounds only apply to pre-trained models trained with the intrinsic dimension subspace method; research has yet to show that standard SGD optimizes in this low-dimensional space (although experimentally,", "this seems to be confirmed).", "We leave the theoretical contribution of showing SGD optimizes in this space, possibly resembling the intrinsic subspace, for future work.", "We want to highlight that generalization is not necessarily measured by the pre-trained model's parameter count or measure of complexity, but by the pre-trained model's ability to facilitate the compression of downstream tasks.", "In some sense, if we want to compress downstream tasks better, we must expect pre-trained representations to have a considerable measure of complexity.", "In conclusion, we proposed viewing the various phenomena surrounding fine-tuning and pretraining through the lens of intrinsic dimensionality.", "We empirically showed that common natural language tasks could be learned with very few parameters, sometimes on the order of hundreds, when utilizing pre-trained representations.", "We provided an interpretation of pre-training as providing a compression framework for minimizing the average description length of natural language tasks and showed that pre-training implicitly minimizes this average description length.", "We continued by doing an empirical study of existing pre-training methods and their respective intrinsic dimension, uncovering the phenomenon that intrinsic dimensionality decreases as we increase the number of pre-trained representation parameters.", "This phenomenon provides some intuition for the trend of growing pre-trained representations.", "We connected intrinsic dimensionality with generalization by first showing that pre-trained models with lower intrinsic dimensions across various tasks achieve higher evaluation accuracies and lower relative generalization gaps.", "Furthermore, we explain these empirical results by applying well-known generalization bounds to the intrinsic dimension to get generalization bounds that grow on the order of the intrinsic dimension, not the parameter count.", "Intrinsic dimensionality is a useful tool for understanding the complex behavior of large models.", "We hope that future work will make explicit theoretical connections between SGD and optimizing the intrinsic dimension as well
as explain exactly why pre-training methods optimize the intrinsic dimensionality of tasks not seen before." ]
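To make the subspace reparameterization $\theta^{D} = \theta^{D}_{0} + P(\theta^{d})$ of Equation 1 concrete, here is a minimal PyTorch sketch. For readability it uses a dense random projection rather than the Fastfood transform relied on above (a dense $P$ is exactly what the memory estimate rules out at RoBERTa scale), so it is only illustrative for small models; the class and method names are hypothetical.

```python
import torch

class IntrinsicSubspace(torch.nn.Module):
    """Only theta_d is trainable; the full flat parameter vector is
    reconstructed as theta_D = theta_D_0 + P @ theta_d, with both the
    pre-trained parameters and the random projection held fixed."""

    def __init__(self, theta_D_0, d):
        super().__init__()
        D = theta_D_0.numel()
        # Frozen pre-trained parameter vector and frozen projection matrix.
        self.register_buffer("theta_D_0", theta_D_0.detach().clone())
        self.register_buffer("P", torch.randn(D, d) / d ** 0.5)
        # Initializing theta_d = 0 recovers the original parameterization.
        self.theta_d = torch.nn.Parameter(torch.zeros(d))

    def flat_params(self):
        return self.theta_D_0 + self.P @ self.theta_d
```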
[ "abstain", "method", "method", "result", "result", "result", "method", "abstain", "abstain", "method", "abstain", "abstain", "objective", "result", "abstain", "objective", "result", "method", "method", "objective", "objective", "result", "objective", "result", "other", "other", "method", "other", "other", "other", "method", "other", "other", "other", "objective", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "objective", "result", "result", "objective", "abstain", "objective", "result", "abstain", "result" ]
[ "Semantic parsing is the task of producing structured meaning representations for natural language sentences.", "Recent research has pointed out that the commonly-used sequence-to-sequence (seq2seq) semantic parsers struggle to generalize systematically, i.e. to handle examples that require recombining known knowledge in novel settings.", "In this work, we show that better systematic generalization can be achieved by producing the meaning representation directly as a graph and not as a sequence.", "To this end we propose LAGr ( L abel A ligned Gr aphs), a general framework to produce semantic parses by independently predicting node and edge labels for a complete multi-layer input-aligned graph.", "The strongly-supervised LAGr algorithm requires aligned graphs as inputs, whereas weakly-supervised LAGr infers alignments for originally unaligned target graphs using approximate maximum-a-posteriori inference.", "Experiments demonstrate that LAGr achieves significant improvements in systematic generalization upon the baseline seq2seq parsers in both stronglyand weakly-supervised settings.", "Recent research has shown that neural models struggle to systematically generalize to examples with unseen combinations of seen rules from the training set (Lake and Baroni, 2018; Finegan-Dollak et al., 2018; Hupkes et al., 2019).", "Systematic generalization is especially important for the task of semantic parsing, which requires models to translate natural language sentences to structured meaning representations (MRs), such as SPARQL database queries or lambda calculus logical forms.", "To generalize systematically in this task, the model must be capable of producing MRs for examples that feature new combinations of meaning construction rules, such as the rule that maps a noun like Corresponding author.", "Work partly done during an internship at ServiceNow Research.", "hedgehog in Figure 1 to its respective predicate hedgehog ( . ) , and the rule that defines which semantic role with respect to the verb (e.g. 
agent or theme) the resulting predicate takes.", "Using synthetic (Bahdanau et al., 2019; Kim and Linzen, 2020a; Keysers et al., 2020) and natural benchmarks (Finegan-Dollak et al., 2018; Shaw et al., 2020), researchers have been studying systematic generalization of existing semantic parsing methods as well as proposing new approaches such as using meta-learning (Conklin et al., 2021), pretrained models (Furrer et al., 2020), or intermediate meaning representations (Herzig et al., 2021).", "The dominant framework in these studies is sequence-to-sequence (seq2seq, Sutskever et al., 2014; Bahdanau et al., 2015) learning, whereby the model produces a serialized MR in an autoregressive fashion, by predicting one token at a time, while conditioning on all previously generated tokens.", "We hypothesize that for semantic parsing, constructing the MR by combining independent predictions that are not conditioned on each other can generalize more systematically than seq2seq.", "For example, consider the sentence The dog liked that the hippo danced.", "Arguably, the predictions that dog is the agent of like and that hippo is the agent of danced can be made independently of each other.", "Our intuition is that a model that predicts such aspects of meaning independently of each other can be better at learning context-insensitive rules because the overall context for each individual prediction is reduced.", "Following this intuition, we propose LAGr (Label Aligned Graphs), a framework to produce semantic parses by independently labelling the nodes and edges of a fully-connected multi-layer output graph that is aligned with the input utterance.", "While the general idea of predicting semantic parses as graphs is not new (Lyu and Titov, 2018), the systematic generalization benefits of doing so have not been investigated prior to this work.", "Importantly, LAGr retains most of the flexibility that seq2seq models have, without the complexity and rigidity that comes with other alternatives to seq2seq, such as grammar-based methods (Herzig and Berant, 2020).", "We first introduce LAGr in the strongly-supervised setting where output graphs are aligned to the input sequences, thus allowing for standard supervised training.", "For the weakly-supervised case when the alignment is not available, we treat it as a latent variable.", "We infer the latent alignment with a simple and novel approximate maximum-a-posteriori (MAP) inference approach which involves solving several minimum-cost bipartite matching problems with the Hungarian algorithm (Kuhn, 1955a).", "We then use the resulting aligned graphs to train the model.", "Our experiments demonstrate that in both strongly- and weakly-supervised settings LAGr significantly improves upon comparable seq2seq semantic parsers on the COGS and CFQ datasets (Kim and Linzen, 2020a; Keysers et al., 2020).", "We present LAGr (Label Aligned Graphs), a framework for constructing meaning representations (MRs) directly as graphs (i.e., MR graphs).", "When LAGr is used to output logical forms, the graph nodes can be variables, entities, categories and predicates, and graph edges can be the Neo-Davidsonian-style semantic role relations that the nodes appear in, e.g.
is-agent-of or is-theme-of (Parsons, 1990).", "While this work focuses on predicting logical forms, LAGr can, in principle, also be used to output other kinds of graphs, such as abstract syntax tree parses of SQL queries.", "As illustrated in Figure 2, LAGr predicts the output by labeling the nodes and edges of a fully-connected multi-layer output graph that is aligned with the input utterance.", "We label a multi-layer as opposed to a single-layer graph because some MR graphs have more nodes than the number of input tokens (see Section 4.2 for an example).", "Notation and Terminology Formally, let $x = x_1, x_2, \dots, x_N$ denote a natural language utterance of $N$ tokens.", "LAGr produces an MR graph $G$ by labeling the nodes and edges of a complete graph $a$ with $M = L \cdot N$ nodes that are arranged in $L$ layers.", "The layers are aligned with the input sequence $x$ in a way that for each input position $i$ there is a unique corresponding output node in each layer.", "We say that nodes from different layers that are aligned with the position $i$ form a column (an example column in Figure 2b contains the nodes labeled as actor and ?x0 for the word star at the position $i = 3$).", "We write $a = (z, \xi)$ to indicate that a complete labeled graph $a$ is characterized by its node labels $z \in V_n^{M}$ and edge labels $\xi \in V_e^{M \times M}$, where $V_n$ and $V_e$ are node and edge label vocabularies, respectively.", "Both vocabularies also include additional null labels that we use as padding (e.g. grey nodes in Figure 2 are labeled as null).", "To produce the output MR graph $G$ from $a$, we remove all null nodes and null edges.", "Lastly, we use the notations $z_j$ and $\xi_{jk}$ to refer to the labels of node $j$ and of the edge $(j, k)$, where $j = (l - 1)N + i$ is a one-dimensional index that corresponds to the $i$-th node in the $l$-th layer.", "To label the nodes of $a$ we encode the input utterance $x$ as a matrix of $N$ $d$-dimensional vectors $H = f_{enc}(x) \in \mathbb{R}^{N \times d}$, where $f_{enc}$ can be an arbitrary encoder model such as an LSTM (Hochreiter and Schmidhuber, 1997) or a Transformer (Vaswani et al., 2017).", "LAGr then defines a factorized distribution $p(z|x)$ over the node labels $z$ as follows: $O = \big\Vert_{l=1}^{L} H W_l$ (1), $\pi = \mathrm{softmax}(O)$ (2), $p(z|x) = \prod_{j=1}^{M} p(z_j|x) = \prod_{j=1}^{M} \pi_{j, z_j}$ (3),
(2017), with the key difference that softmax is applied across the edge labels and not across positions: H q = L || l =1 HU ,l , H k = L || l =1 HV ,l , = softmax (cid:20) stack V e (cid:104) H q H kT (cid:105)(cid:21) , where H q and H k contain concatenated key and query vectors for the label V e across all L graph layers, U ,l , V ,l R d | Ve | , d | Ve | are the weights for the edge label , and the stack operator stacks the matrices into a 3D tensor to which softmax is subsequently applied.", "Similarly to p ( z | x ) , we obtain p ( | x ) as follows: p ( | x ) = M (cid:89) j =1 M (cid:89) k =1 p ( jk | x ) = M (cid:89) j =1 M (cid:89) k =1 jk jk .", "The factorized nature of Equations 3 and 4 makes the argmax inference z, = arg max p ( z, | x ) trivial to perform.", "When the groundtruth aligned graph a = ( z , ) for the MR graph G is available, LAGr can be trained by directly optimizing log p ( z = z , = | x ) .", "We refer to this training setting as strongly-supervised LAGr .", "In many practical settings, the alignment between the MR graph G and the sequence x is unavailable, making the aligned graph a unknown.", "To address this common scenario, we propose a weakly-supervised LAGr algorithm based on a latent alignment model.", "Similarly to the strongly-supervised case, we assume that the MR graph can be represented as a labeled complete, multi-layer graph na = ( s V Mn , e VM M e ) , with the difference that in this case the alignment between x and na is not known.", "We assume a generative process whereby na is obtained by permuting the columns of the latent aligned graph a with a random permutation a , where a j is the index of the column in a that becomes the j -th column in na .", "For the rest of this section we focus on the single layer ( L = 1 ) case to simplify the formulas.", "For this case our probabilistic model defines the following distribution over na = ( s, e ) : p ( e, s | x ) = (cid:88) a (cid:88) z (cid:88) p ( e, s, a, z, | x ) = (cid:88) a p ( a ) (cid:89) j p ( z a j = s j | x ) (cid:89) j (cid:89) k p ( a j a k = e jk | x ) , (5) where p ( a ) = 1 /N !", ".", "Computing p ( e, s | x ) exactly is intractable.", "For this reason, we train LAGr by using an approximation of p ( e, s | x ) in which instead of summing over all possible aligments a , we only consider the maximum-a-posteriori (MAP) alignment a = arg max a p ( a | e, s, x ) .", "This approach is sometimes called the hard Expectation-Maximization algorithm in the literature on probabilistic models (Svensen and Bishop, 2007).", "The 3297 training objective thus becomes p ( e, s | a, x ) = (cid:89) j p ( z a j = s j | x ) (cid:89) j (cid:89) k p ( a j , a k = e jk | x ) .", "a = arg max a p ( a | e, s, x ) = arg max a log p ( s | a, x ) + log p ( e | a, x ) = arg max a (cid:104) (cid:88) j log p ( z a j = s j | x ) + (cid:88) j (cid:88) k log p ( a j ,a k = e j,k | x ) (cid:105) (6)", "We are not aware of an exact algorithm for solving the above optimization problem, however if the edge log-likelihood term log p ( e | a, x ) is dropped in the equations above, maximizing the node label probability p ( s | a, x ) is equivalent to a standard minimum cost bipartite matching problem.", "This optimization problem can be solved by a polynomial-time Hungarian algorithm (Kuhn, 1955b).", "We can thus use an approximate MAP alignment a 1 = arg max a (cid:80) j log p ( z a j = s j | x ) .", "While dropping p ( e | a, x ) from Equation 6 is a drastic simplifica-tion, in situations where node labels s are unique and 
the model is sufficiently trained to output sharp probabilities $p(z_j \mid x)$ we expect $a^1$ to often match $a^*$.", "To further improve the MAP alignment approximation and alleviate the reliance on the node label uniqueness, we generate a shortlist of $K$ candidate alignments by solving $K$ noisy matching problems of the form $\arg\max_a \sum_j \log p(z_{a_j} = s_j \mid x) + \epsilon_{j a_j}$, where $\epsilon_{j a_j} \sim \mathcal{N}(0, \sigma)$ (see the sketch after this section).", "We then select the alignment candidate $a$ that yields the highest full log-likelihood $\log p(s \mid a, x) + \log p(e \mid a, x)$.", "We refer the reader to Algorithm 1 for a detailed presentation of weakly-supervised LAGr.", "The LAGr approach is heavily inspired by graph-based dependency parsing algorithms (McDonald, 2006).", "In neural graph-based dependency parsers (Kiperwasser and Goldberg, 2016; Dozat and Manning, 2017) the model is trained to predict the existence and the label of each of the possible edges between the input words.", "The Abstract Meaning Representation (AMR) parser by Lyu and Titov [Algorithm 1: Training LAGr with weak supervision. Init: let $K$ be the number of alignment candidates, $T$ the number of training steps, and $\theta_t$ the model parameters after $t$ steps.]", "(2018) brings similar methodology to the realm of semantic parsing, although they do not consider the systematic generalization implications of using a graph-based parser instead of a seq2seq one.", "Lyu and Titov (2018) only output single-layer graphs, which requires aggressive graph compression; in LAGr we allow the model to output a multiple-layer graph instead.", "Lastly, the amortized Gumbel-Sinkhorn alignment inference used by Lyu and Titov (2018) is much more complex than the Hungarian-algorithm-based approximate MAP inference that we employ here.", "Another important inspiration for LAGr is the UDepLambda method (Reddy et al., 2016) that converts dependency parses into graph-like logical forms.", "LAGr can be seen as an algorithm that produces UDepLambda graphs directly with the neural model, side-stepping the intermediate dependency parsing step.", "Another alternative to seq2seq semantic parsers are span-based parsers that predict span-level actions for building MR expressions from subexpressions (Pasupat et al., 2019; Herzig and Berant, 2020; Liu et al., 2021).", "A prerequisite for using a span-based parser is an MR that can be viewed as a recursive composition of MRs for subspans.", "While this strong compositionality assumption holds for the logical forms used in earlier semantic parsing research (e.g. Zettlemoyer and Collins (2005)), an intermediate MR would be required to produce other meaning representations, such as e.g. SPARQL or SQL queries, with a span-based parser.", "The designer of an intermediate MR for a span-based parser must think about MRs for spans and how they should be composed.", "This can sometimes lead to non-trivial corner cases, such as e.g. ternary grammar rules in Herzig and Berant (2020).", "On the contrary, a graph-based parser can in principle produce any graph, although in practice in our experiments we compress the raw graphs slightly to make the learning problem easier.", "Other related semantic parsing approaches include the semantic labeling method by Zheng and Lapata (2020) and the structured reordering approach by Wang et al.
(2021).", "Zheng and Lapata (2020) show that labelling the input sequence prior to feeding it to the seq2seq semantic parser improves systematic generalization.", "Compared to that study, our work goes one step further by adding edge labeling, which allows us to let go of the seq2seq model entirely.", "Wang et al. (2021) model semantic parsing as structured permutation of the input sequence followed by monotonic segment-level transduction.", "This approach achieves impressive results, but is considerably more complex than LAGr.", "Finally, Guo et al. (2020) achieve a very high performance on CFQ by combining the sketch prediction approach (Dong and Lapata, 2018) with an algorithm that outputs the MR as a directed acyclic graph (DAG).", "Unlike LAGr, their algorithm produces the DAG in a sequential left-to-right fashion.", "Notably, the non-hierarchical version of this algorithm without sketch prediction performs poorly.", "Concurrently with this work, Ontanon et al. (2021) show that semantic parsing by sequence tagging improves systematic generalization.", "Their sequence tags are similar to the aligned graphs that we predict with LAGr when using a single graph layer.", "Ontanon et al. (2021) do not discuss how to infer sequence tags from logical forms when the former are not available.", "We demonstrate the effectiveness of LAGr on two systematic generalization benchmarks for semantic parsing: COGS (Kim and Linzen, 2020a) and Compositional Freebase Questions (CFQ, Keysers et al. (2020)).", "Dataset: COGS (Kim and Linzen, 2020a) is a semantic parsing benchmark that requires models to translate English sentences to Neo-Davidsonian lambda calculus logical forms.", "As shown in Figure 1, the out-of-distribution generalization set of COGS features novel combinations of words and syntactic structures from the training dataset (more examples available in Appendix A.4).", "Graph Construction: In order to study LAGr on COGS, we first convert the logical forms to UDepLambda-style (Reddy et al., 2016) MR graphs.", "Specifically, we construct the graph nodes using the one- and two-place predicates and definite articles (e.g. hedgehog, apple, eat and the * nodes in Figure 2a).", "We do not create dedicated nodes for variables, as every variable in COGS is either an argument to a unique one-place predicate (e.g. $x_1$ for hedgehog($x_1$)), or the first argument to a unique two-place predicate (e.g. $x_2$ for eat in eat.agent($x_2$, $x_1$)).", "Instead, we let the respective predicate node represent the variable.", "The labeled edges for our graphs are defined by the Neo-Davidsonian role predicates of the logical forms (such as agent, theme, recipient, ccomp, nmod.on, nmod.in, xcomp, nmod.beside).", "For example, the conjunct eat.agent($x_2$, $x_1$) results in an agent edge between the eat and hedgehog nodes.", "We also add special article edges to connect definite article nodes (denoted by the * label) to their respective nouns (e.g.
hedgehog in Figure 2a).", "We take advantage of the correspondence between variable names and input positions ($x_i$ corresponds to the $i$-th token) to construct single-layer ($L = 1$) aligned graphs $a$ for COGS that are suitable for strongly-supervised LAGr, as described in Section 2.1.", "The node and edge vocabularies for the aligned graphs contain 645 and 10 labels respectively, each including a null label.", "Training Details: Hyperparameter tuning on COGS is challenging since the performance on the in-distribution development set always saturates to near 100%.", "We adopt the hyperparameter tuning procedure discussed in Conklin et al. (2021) to find the best configuration for our baselines and strongly-supervised LAGr models.", "Specifically, we create a Gen Dev dataset by sampling 1000 random examples from the generalization set and use them to find the best hyperparameter configuration.", "[Table 1: Exact match accuracy (%) on COGS.]", "We find that our Transformer-based seq2seq and LAGr models perform better when embeddings are initialized following He et al. (2015) and when positional embeddings are scaled down by $1/\sqrt{\mathrm{dim}}$.", "The latter techniques were adopted following the recent work of Csordas et al. (2021) under the PED (Positional Embedding Downscaling) name.", "We report the exact match accuracy, i.e., the percentage of examples for which the predicted graphs after serialization yielded the same logical form, as well as the standard deviation over at least 10 random seeds.", "We tune the hyperparameters for strongly-supervised LAGr first; we then use the same configuration for weakly-supervised LAGr and only tune the inference hyperparameters, i.e. the number of candidates $K$ and the noise level $\sigma$.", "Since weakly-supervised LAGr does not always converge on the training set, we implement a restart mechanism that relaunches experiments with a new random seed where a training performance of at least 95% is not achieved.", "Setting $K = 10$ and $\sigma = 1.0$ allows us to achieve a convergence rate of around 50%.", "For more details on our hyperparameter search, and best configurations, we refer the reader to Appendix A.1.", "Additionally, we observe that the training loss does not go to 0 in the weakly-supervised setting.", "We attribute this to a significant (2.7%) percentage of training examples in which there are three or more nodes with the same label (namely * for definite articles), which presents a challenge to our alignment inference mechanism.", "To remedy this, we cache and append the previously used alignment as the $(K{+}1)$-st alignment candidate (see lines 3-8 in Algorithm 1).", "This allows the model to remember low-loss alignments and thereby helps achieve full convergence.", "Lastly, we also run weakly-supervised LAGr with retraining, in which we take the final learned alignments for all examples and retrain models with the learned alignments being used as strong supervision.", "Baselines: We compare LAGr to LSTM- and Transformer-based seq2seq semantic parsers that produce logical forms as sequences of tokens.", "In addition to training our own seq2seq baselines, we also include baseline results from the original COGS paper by Kim and Linzen (2020a) and from follow-up works by Akyurek and Andreas (2021), and Csordas et al.
(2021).", "We also compare LAGr to a lexicon-based seq2seq model LSTM+Lex by Akyurek and Andreas (2021) that leverages the copy mechanism in the seq2seq decoder to perform a lexical lookup to generate the output token.", "Results: Table 1 shows that our best Transformers trained with LAGr outperform the original (35% from Kim and Linzen (2020b) and 81% from Csordas et al. (2021)) and our reproduced (80.6%) seq2seq Transformer baselines, obtaining 82.5% and 82.3% exact match accuracy in the strongly- and weakly-supervised settings, respectively.", "We experiment with two variations of LAGr: using shared encoders and separating encoders for syntax (i.e., node predictions) and semantics (i.e., edge predictions), reflected in Table 1 by the subindex sh versus sep in the model names respectively.", "We achieve the best result in the strongly-supervised setting using separate encoders.", "While this setting significantly improves the performance of LAGr in all cases, for the strongly-supervised LSTM-based LAGr models, separating encoders seems to be crucial (71.4% vs 39.0%).", "The use of retraining in weakly-supervised LAGr is helpful.", "It allows us to increase the accuracy of weakly-supervised LAGr to match our strongly-supervised result.", "Finally, LAGr is able to match the performance of the LSTM+Lex approach by Akyurek and Andreas (2021) without relying on the use of lexicons, a result we further discuss in Section 5.", "Dataset: CFQ (Keysers et al., 2020) is a benchmark for systematic generalization in semantic parsing that requires models to translate English sentences to SPARQL database queries.", "We use CFQ's Maximum Compound Divergence (MCD) splits, which were generated by making the distribution of compositional structures in the train and test sets as divergent as possible.", "SPARQL queries contain two components: a SELECT and a WHERE clause.", "The SELECT clause is either of the form SELECT count(*) for yes/no questions or SELECT DISTINCT ?x0 for wh-questions (those starting with which, what, who, etc.).", "The WHERE clause can contain constraints of three kinds: filter constraints ensuring two variables or entities are distinct (e.g. FILTER ?x0 != M0), two-place predicates expressing a relation between two entities (e.g. ?x0 parent ?x1), and one-place predicates expressing if an entity belongs to a category (e.g. ?x0 a ns:film.actor).", "Graph Construction: Before constructing the graphs, similarly to prior work (Furrer et al., 2020; Guo et al., 2020), we compress the SPARQL queries by merging some triples in the WHERE clauses.", "As an example, consider the question Were M2 and M3 directed by a screenwriter that executive produced M1?, where the original MR contains both [M2 directed by ?x0, M3 directed by ?x0] conjuncts.", "To make it easier to align SPARQL queries to the input question, we merge triples by concatenating their subjects and objects, e.g. yielding [[M2, M3] directed by ?x0] for the above example.", "With this compression, the SPARQL queries can now contain an arbitrary number of entities in the triples.", "To convert the compressed SPARQL queries to graphs we first remove the SELECT clauses.", "To preserve the question type information, for wh-questions we replace the ?x0 variable in the WHERE clause with a special select ?x0 variable.", "As the example in Figure 2b shows, we define the graph nodes by taking the entities (including variables, e.g.
?x0, M1) and all predicates (parent, sibling, actor) from the triples.", "For one-place predicates, we connect the entity nodes to the predicate node with an agent edge label.", "For triples with two-place predicates, we connect the predicate to the left-hand side and right-hand side entities with the agent and theme edge respectively.", "We add a FILTER edge between the variables or entities that participate in a filter constraint.", "The resulting node and edge vocabularies contain 84 and 4 labels respectively, each also including a null label.", "Training Details: Unlike COGS, we use $L = 2$ graph layers in LAGr in order to accommodate the larger MR graphs in CFQ.", "This is because CFQ contains examples such as Who married M1's female German executive producer?, which contains 8 tokens but induces the following 10 nodes: ?x1, executive produced, M1, gender, ns:m.02zsn, nationality, ns:m.0345h, select ?x0", "In all our CFQ experiments we use a shared Transformer encoder for both node and edge prediction.", "To assess performance, we use exact graph accuracy, which we define as the percentage of examples where the predicted and true graphs are isomorphic.", "The predicted graphs contain enough information to exactly reconstruct the SPARQL query, hence our exact graph accuracy can be compared to the exact match accuracy from the prior work.", "For hyperparameter tuning, we follow Keysers et al. (2020) and use CFQ's in-distribution random split to find the best model configuration.", "We do this by first fixing the number of candidate alignments at $K = 1$ to search for the best hyperparameters.", "Once we find the best configuration, we tune $K$ and $\sigma$.", "For the best found configuration of $K = 5$, $\sigma = 10$, as well as for the base configuration $K = 1$, $\sigma = 0$, we report the average graph accuracy and standard deviation for 8-11 runs of weakly-supervised LAGr on the MCD1, MCD2, MCD3 and the random split.", "[Table 2: Graph accuracy with standard deviations on CFQ's random split and the MCD1, MCD2, MCD3 test splits, plus the mean MCD test accuracy. Recoverable rows: HPD (mean MCD 67.3 ±4.1; MCD1 72.0 ±7.5; MCD2 66.1 ±6.4; MCD3 63.9 ±5.7); HPD w/o Hierarchical Mechanism (MCD1/2/3: 21.3 / 6.4 / 10.1); T5-small + IR (mean MCD 47.9); LSTM + Attn (random test 97.4 ±0.3; mean MCD 14.9 ±1.1; MCD1/2/3: 28.9 ±1.8 / 5.0 ±0.8 / 10.8 ±0.6); Transformer (98.5 ±0.2; 17.9 ±0.9; 34.9 ±1.1 / 8.2 ±0.3 / 10.6 ±1.1); Universal Transformer (98.0 ±0.3; 18.9 ±1.4; 37.4 ±2.2 / 8.1 ±1.6 / 11.3 ±0.3); Evol. (remaining rows truncated).]", "Similarly to COGS, we use the PED initialization technique from Csordas et al.
(2021), and discard runs where weakly-supervised LAGr does not reach at least 99.5% graph accuracy on the training set (around 12% of all runs).", "For further details on our CFQ experiments we refer the reader to Appendix A.2.", "Results: We compare LAGr to seq2seq semantic parsing results reported in prior work (Keysers et al., 2020; Furrer et al., 2020), as well as results obtained with compressed SPARQL queries (Guo et al., 2020; Herzig et al., 2021).", "As shown in Table 2, weakly-supervised LAGr outperforms all comparable baselines on all of CFQ's out-of-distribution MCD splits.", "While both $K = 1$ and $K = 5$ with $\sigma = 10$ yield impressive performance gains compared to the baselines, we obtain mixed results about the impact of a higher $K$ and the use of noise.", "Specifically, the best result on MCD1 is achieved with $K = 1$, in contrast to MCD2 and MCD3 where $K = 5$ with $\sigma = 10$ performs significantly better than when using $K = 1$.", "For reference, Table 2 also includes the state-of-the-art Hierarchical Poset Decoding (HPD, Guo et al., 2020) method (see Section 3), which arguably is not a fair baseline to LAGr because of its use of sketch prediction and lexicons.", "Notably, when these techniques are not used, LAGr performs much better than their base HPD algorithm.", "We also tuned the number of alignment candidates $K$ and the noise level $\sigma$.", "One can see that choosing the best alignment out of $K > 1$ candidates is indeed helpful, and that noise of high magnitude ($\sigma = 10$) brings the best improvement on the random split.", "These improvements also translate into systematic generalization gains for MCD2 and MCD3, as shown in Table 2 where we see that $K = 5$ achieves better performance than $K = 1$.", "The positive effect of a larger $K$ on these splits is in line with our expectation since 3.7-5.7% of examples in each CFQ split have at least two predicates with identical node labels, which can make it hard to align the MR graph to the input by looking at node labels only.", "Interestingly, in contrast to our intuition, when using ten candidate alignments, the random split test performance is slightly worse than when using five.", "We show examples of the node labels that weakly-supervised LAGr predicts in the learned aligned CFQ graphs as well as the corresponding SPARQL queries in Figure 3 (Appendix A.3).", "In this work we have shown that performing semantic parsing by labeling aligned graphs brings significant gains in systematic generalization.", "In our COGS and CFQ experiments, LAGr significantly improves upon sequence-to-sequence baselines in both strongly- and weakly-supervised settings.", "Specifically, on COGS, LAGr outperforms our carefully-tuned seq2seq baselines and performs similarly to LSTMs that leverage lexicons.", "Lexicons can also be integrated into LAGr, although we do not expect this to improve LAGr's performance on COGS, as our best performing models already predict node labels perfectly.", "Lexicons also bring their own challenges of dealing with context-dependency and ambiguity, hence it is notable that LAGr matches the performance of a lexicon-equipped model while making fewer assumptions about the nature of the input-to-output mapping.", "On CFQ, LAGr outperforms all seq2seq baselines on all MCD splits.", "Based on our error analysis (see Appendix A.3), we believe that a modification of LAGr that conditions edge predictions on node labels could bring further improvements.", "Importantly, this modification would be compatible with our current alignment inference algorithm.",
"Another obvious direction to improve LAGr's performance is by using a pretrained encoder.", "Lastly, while the current alignment inference algorithm is effective, applying more advanced discrete optimization or amortized inference methods could be an interesting direction for future work.", "We are thankful to Joelle Pineau, Siva Reddy and Christopher Manning for early discussions on this project.", "Furthermore, we also thank Nitarshan Ra-jkumar, Torsten Scholak and the rest of the Human-Machine Interaction Through Language group at ServiceNow for their invaluable feedback, reviews and contributions to this paper.", "This research was supported in part by Canada CIFAR AI Chairs held by Prof. Pineau and Prof.Hamilton, as well as gift grants from Microsoft Research and Samsung AI." ]
[ "abstain", "abstain", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "objective", "abstain", "abstain", "method", "objective", "abstain", "abstain", "objective", "abstain", "objective", "result", "abstain", "method", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "other", "other", "other", "other", "abstain", "method", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "method", "other", "other", "other", "other", "other", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "other", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "result", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "other", "other" ]
[ "Recent progress in Natural Language Understanding (NLU) has seen the latest models outperform human performance on many standard tasks.", "These impressive results have led the community to introspect on dataset limitations, and iterate on more nuanced challenges.", "In this paper, we introduce the task of HeadLine Grouping (HLG) and a corresponding dataset (HLGD) consisting of 20,056 pairs of news headlines, each labeled with a binary judgement as to whether the pair belongs within the same group.", "On HLGD, human annotators achieve high performance of around 0.9 F-1, while current state-of-the art Transformer models only reach 0.75 F-1, opening the path for further improvements.", "We further propose a novel unsupervised Headline Generator Swap model for the task of HeadLine Grouping that achieves within 3 F-1 of the best supervised model.", "Finally, we analyze high-performing models with consistency tests, and find that models are not consistent in their predictions, revealing modeling limits of current architectures.", "Headlines are a key component in everyday news consumption.", "As the first piece of text the user interacts with when learning about a story, the headline can play many roles, including: summarize the main points of the story, promote a particular detail, and convince the reader to choose one source over another (Bonyadi and Samuel, 2013).", "News aggregators amass content from many disparate news sources and have become popular, in part because they offer news readers access to diverse sources (Chowdhury and Landoni, 2006).", "Flaxman et al. (2016) find that news aggregators help news readers access content they are unfamiliar with, and potentially on opposite sides of the Author emails: {phillab, lucasbandarkar,hearst}@berkeley.edu Date Headline Group 03/27 Russian-U.S. 
crew makes belated arrival at space station A 03/28 Russian spacecraft brings 3-man crew to ISS after 2-day delay A 03/29 Space 'makes the heart grow rounder' B 04/01 Astronauts' hearts become spherical during prolonged trips in space, study finds B Figure 1: Snippet of timeline in the HeadLine Grouping dataset (HLGD).", "political spectrum.", "At the heart of a news aggregator is the ability to group relevant content together, to support a reader in finding varying views and angles on the news.", "Natural Language Understanding (NLU) has seen rapid progress in recent years.", "The creation of multi-task benchmarks such as the General Language Understanding Evaluation collection (GLUE), paired with fast-paced progress in Transformer-based architectures has led to models outperforming human baseline performance on many tasks, such as paraphrase identification (Dolan et al., 2004), semantic similarity (Cer et al., 2017), and extractive question-answering (QA) (Rajpurkar et al., 2018).", "This success has led to the questioning of the composition of benchmarks, and the subsequent creation of ever-more challenging datasets, for example by increasing the diversity of texts in textual entailment datasets (Williams et al., 2018), or introducing unanswerable questions in QA datasets (Rajpurkar et al., 2018).", "In this paper, we propose the novel task of HeadLine Grouping.", "Although news articles may discuss several topics, because of length constraints, headlines predominantly describe a single event.", "Therefore, for the task of headline grouping, we define two headlines to be in the same group if they describe the same event: an action that occurred at a specific time and place.", "We do not require headlines to contain fully identical information to be placed into the same group.", "For example, one headline might report an exact number of deaths, while another might report a rounded number, or omit the number altogether.", "Figure 1 shows an example from our dataset.", "The first two headlines are in group A, and the third and fourth are part of group B. The headlines are divided into groups A versus B because they describe different events in the timeline (astronauts arriving at the space station vs. 
a study about hearts in space).", "The two headlines in B show the lexical and syntactic diversity of groups in this dataset: they appear in the same group because they describe the same underlying event.", "Appendix D gives a longer excerpt.", "We build a large dataset for the task of HeadLine Grouping, crowd-sourcing the annotation of large timelines of news headlines in English.", "We cast the task as a binary classification: given a pair of headlines, determine whether they are part of a headline group (1) or whether they relate to distinct events (0).", "Our main contribution, described in Section 3, is the design of the HeadLine Grouping task (HLG), and the creation of the HeadLine Grouping Dataset (HLGD) that is focused on detecting when headlines refer to the same underlying event.", "We show that the human annotations in our dataset have strong inter-annotator agreement (average 0.81), and a human annotator can achieve high performance on our corpus (around 0.9 F-1), while current state-of-the-art Transformer-based model performance stands around 0.75 F-1.", "A second contribution is a novel unsupervised approach for the task of HeadLine Grouping relying on modifying a headline generator model.", "The model achieves the best performance on HLGD amongst unsupervised methods.", "Section 4 presents the performance of this algorithm compared to several baselines, including supervised and unsupervised methods.", "Our final contribution, presented in Section 5, is an analysis of the consistency of the best performing model on HLGD.", "We specifically analyze whether the model follows the commutative and transitive behavior expected to be trivially true in HeadLine Grouping.", "Paraphrase Identification, Textual Entailment and Semantic Similarity are three common NLP tasks that resemble HeadLine Grouping.", "In Paraphrase Identification (PI) (Ozmutlu, 2016; Xu et al., 2014), the objective is to determine whether two sentences are semantically equivalent.", "We show in Table 2 that only one third of positive headline pairs in HLGD qualify as paraphrases.", "We further show in Section 4 that a model trained on MRPC (Dolan et al., 2004), a PI dataset of news text, performs poorly on HLGD.", "Textual entailment (Bentivogli et al., 2009), or Natural Language Inference (NLI) (Williams et al., 2018), determines whether a premise text implies a hypothesis.", "Apart from the non-symmetric nature of the entailment relationship, we believe entailment is not well-suited to the domain of headlines because of the strict nature of the relationship.", "A large portion of headlines in a group differ in level of detail, and under an entailment task, would need to be labeled as neutral or contradicting.", "Finally, semantic similarity assigns a strength of similarity between two candidate sentences; for example in the Semantic Textual Similarity Benchmark (STS-B) (Cer et al., 2017), similarity is ranked from 1 to 5.", "This flexibility seems like a good fit; however, the lexical and syntactic diversity of headlines about the same underlying content do not correspond well to a similarity range.", "Topic Detection and Tracking (TDT) (Allan, 2002) was a DARPA-sponsored initiative to investigate methods to group news articles by topics (referred to as timelines in this paper).", "We view TDT as a precursor to the task of HeadLine Grouping: in TDT, the focus is on detecting and tracking a timeline of related events, while in HeadLine Grouping, the timeline is given, and the focus is on subdividing it into finer
groups.", "We considered using the TDT datasets and annotating them for our purposes.", "However, the TDT developers acknowledge (Graff et al., 2006) several important errors in the way the TDT datasets were acquired (e.g., some publication dates were not properly attributed) that could have an impact on the quality 1 The code, model checkpoints and dataset are available at: https://github.com/tingofurro/headline_grouping of the final dataset.", "News Headlines in NLP.", "Headlines are popular as a challenging source for generation tasks such as summarization (Rush et al., 2015), style transfer (Jin et al., 2020), and style-preserving translation (Joshi et al., 2013).", "Headlines have been leveraged to detect political bias (Gangula et al., 2019), clickbait and fake news phenomena (Bourgonje et al., 2017).", "Finally, sentiment analysis of headlines has received attention (Bostan et al., 2020), with some work showing headline sentiment can be a useful signal in finance (Moore and Rayson, 2017).", "Grouping Headlines has been explored in prior work.", "Wubben et al. (2009) propose a TF-IDF based clustering algorithm, but do not evaluate its agreement with human annotations.", "Pronoza et al. (2015) build a corpus of Russian headlines pairs, but limit pairs in the dataset by filtering out headlines that are distant syntactically.", "We find that headline groups often contain syntactically distant headlines (see Figure 3).", "Bouamor et al. (2012) and Shinyama et al. (2002) present a simple strategy, relying on the assumption that all articles on a topic published on the same day form a group.", "As will be shown below, this assumption is not always correct (see Figure 2).", "Several of the most-used news aggregators, such as Yahoo News 2 , Google News 3 , and Bloomberg's NSTM (Bambrick et al., 2020) present headlines in groups.", "As these systems do not have published algorithms, we cannot comment on their methods; nonetheless we hope that the release of the HLG dataset offers a common evaluation test-bed to benchmark systems.", "We now present the HeadLine Grouping Dataset.", "We describe the dataset of news articles we collected for annotation, our annotation procedure, an analysis of the resulting dataset, and the challenges we propose to the community.", "We collect a set of 10 news timelines from an existing open-source news collection in English (Laban and Hearst, 2017).", "A timeline is a collection of news articles about an evolving topic, consisting of a series of events.", "The timelines we use to build HLGD consist of time-stamped English news arti-2 https://news.yahoo.com 3 https://news.google.com Story Name Size Groups + pairs IAA Tunisia Protests 111 46 219 0.758 Ireland Abortion Vote 180 81 406 0.727 Ivory Coast Army Mutiny 128 45 329 0.781 International Space Station 257 107 499 0.831 US Bird Flu Outbreak 79 36 91 0.924 Human Cloning 119 55 259 0.830 Facebook Privacy Scandal 194 105 274 0.753 Equifax Breach 159 81 261 0.855 Brazil Dam Disaster 273 132 634 0.818 Wikileaks Trials 180 101 550 0.859 Total / Average 1679 789 3522 0.814 Table 1: Names and statistics of the ten news timelines in HLGD .", "cles originating from 34 international news sources.", "The timelines range in size from 80 to 274 news articles, and span 18 days to 10 years.", "We choose to use timelines as the source for the dataset for two reasons.", "First, news timelines center around a theme, and as successive events occur, many pairs of headlines will be semantically close, yielding challenging samples 
for the dataset.", "Second, this task requires annotating headlines by pairs.", "If there are n headlines, there could be on the order of n 2 headline pairs to annotate.", "By having annotators assign group labels to a chronologically organized timeline, the annotation procedure requires only one label per headline, or n labels total.", "We attempted to diversify topics and geographical locations represented in the 10 selected timelines.", "Topics and statistics are shown in Table 1.", "To reduce the effects of varying judgement inherent to the task, annotations were obtained from five independent judges and merged using the procedure described in the following subsection.", "Annotators worked on an entire timeline at a time, using the following procedure: The timeline was presented in a spreadsheet, in chronological order, with a single headline per row, and the corresponding publication date (year, month, day), Annotators went over the timeline one headline at a time in chronological order, If the headline being annotated did not match a previously created group, the annotator assigned it a new group number , Otherwise, the annotator could assign the headline to a previous group number , grouping it with previously added headlines.", "We note that the annotation relied on annotators' ability to discern an event described by a news headline.", "However, a headline is not always written in an event-centric manner for example when the headline is vague (e.g., A way forward in gene editing in the Human Cloning timeline), or overly brief (e.g., Waste not, want not in the International Space Station timeline).", "Annotators were instructed to create a separate group for such cases, isolating non-event-centric headlines.", "Roughly one fifth of the annotations were produced by authors of the paper, and the remaining annotations were obtained by recruiting 8 crowd-workers on the Upwork platform.", "4 The crowd-workers were all native English speakers with experience in either proof-reading or data-entry, and were remunerated at $14/hour.", "Annotators were first trained by reading a previously annotated timeline, and given the opportunity to clarify the task before starting to annotate.", "Exact instructions given to the annotators are transcribed in Appendix A. 
3.3 Merging Annotations: In order to merge the five annotations, we follow a standard procedure to produce a single grouping that represents an aggregate of annotations.", "We create a graph $G$, with each headline in a timeline represented by a node $n_i$.", "An edge $(n_i, n_j)$ is added to $G$ if a majority of the annotators (three or more of the five) put the two headlines in the same group.", "We apply a community detection algorithm, the Louvain method (Blondel et al., 2008), to $G$ to obtain a grouping of the headlines that we call the global groups (a code sketch of this merging step is given after this section).", "We compare the groups of each annotator to the global groups for each timeline, and measure agreement between annotator groups and a leave-one-out version of the global groups using the standard Adjusted Mutual Information (Vinh et al., 2010).", "The average inter-annotator agreement is 0.814, confirming that consensus amongst annotators is high.", "Inter-annotator agreement is reported for each timeline in Table 1.", "Section 4 provides individual annotator performance on HLGD, which obtains the highest performance of about 0.9 F-1, further confirming that the task is well defined for human annotators.", "We transform the global groups into a binary classification task by generating pairs of headlines in the timelines: labeling the pair with a 1 if it belongs to the same group, and 0 otherwise.", "With this procedure, we obtain 3,522 distinct positive headline pairs, and 154,156 negative pairs.", "This class imbalance is expected: two headlines picked at random in a timeline are unlikely to be in the same group.", "In order to reduce class imbalance, we down-sample negative pairs in the dataset.", "Figure 2 shows the distribution of differences in publication dates for pairs of headlines in the final dataset.", "Publication date is indeed a strong signal to determine whether headlines are in the same group, as most positive pairs are published on the same day or one day apart.", "However, we show in Section 4 that using time as a sole indicator is not enough to perform well on the dataset.", "In Figure 2, it can also be observed that 98% of positive headline pairs are published within 4 days of each other.", "Therefore, we only retain negative pairs that are within a 4-day window, filtering out simpler negative pairs from the final dataset.", "This final dataset has a class imbalance of roughly 1 positive pair to 5 negative pairs, for a total of 20,056 labeled headline pairs.", "This is similar in size to other NLU datasets, such as MRPC (5,801 pairs). [Table 2: Reasoning behind positive examples (Reasoning / Description / Example Headline Pair / Percentage): Difference in Detail, a headline conveys additional details, such as a name or a cause, e.g. NASA delays work on Moon rocket during virus pandemic vs. Nasa's Moon plans take a hit, 37%; Exact Paraphrase, both headlines convey the same information, e.g. Equifax takes web page offline after reports of new cyber attack vs. Equifax takes down web page after reports of new hack, 30%; Difference in Focus, headlines focus on a different aspect of the event group, e.g. Astronauts to Get Thanksgiving Feast in Space vs. A Brief History of Thanksgiving Turkey in Space, 26%; Pun, Play-on-word, etc. (row truncated).]", "Figure 3 shows the distribution of Levenshtein Ratio (Levenshtein, 1966) defined as: $\mathrm{Ratio}(S_1, S_2) = 1 - \frac{\mathrm{Levenshtein}(S_1, S_2)}{\max(|S_1|, |S_2|)}$ (1) for positive pairs $(S_1, S_2)$ in MRPC and STS-B, two common NLU datasets, as well as HLGD, computed at the character level (a sketch of this ratio is given after this section).", "The average similarity in HLGD (0.51) is lower than in the two others (0.72 and
0.74, respectively).", "Furthermore, a classifier using solely the Levenshtein Ratio obtains an F-1 score of 0.81 on MRPC, but only 0.485 on HLGD.", "This suggests lexical distance alone does not contain a strong signal for good performance on HLGD.", "To gain insight into the linguistic phenomena that occur within and outside headline groups, the first author manually inspected 200 positive and 200 negative headline pairs in HLGD.", "Positive pairs were selected from randomly sampled large groups, and negative samples from same-day negative pairs, because headlines that appear on the same day but are not in the same group cannot be distinguished using time information and are likely to overlap semantically the most.", "In Table 2, we list the phenomena we observed, give an example for each, and show the frequency in our sample.", "Within a group, headlines can be exact paraphrases, differ in detail level, differ in the element of focus, or involve stylistic elements such as puns.", "Negative headline pairs analyzed were either about independent events, related sub-events, or involved a headline that was not specific enough.", "Additionally, around 4% of the negative samples analyzed were judged as borderline, interpretable as either positive or negative, showing that some ambiguity in the task is unavoidable.", "We believe these diverse phenomena are ingredients that make HeadLine Grouping challenging and interesting for NLU research.", "To allow for diversity in approaches to HeadLine Grouping, we propose to sub-divide HLGD into several challenges, limiting in each the data used to solve the classification task:", "Challenge 1: Headline-only.", "Access to the headline pairs only; similar to Paraphrase Identification and Textual Similarity tasks.", "Challenge 2: Headline + Time.", "Access to the headline pairs and their publication dates.", "Challenge 3: Headline + Time + Other.", "Access to the headline pairs, publication dates, and other information such as full content, author(s), and news source (a URL to the original article provides this access).", "We believe these different challenges provide flexibility to probe a diversity of methods on the HLGD task.", "Challenge 1 fits the standard text-pair classification of NLU, similar to paraphrase identification, textual similarity and NLI, while additional meta-data available in Challenge 3 might be more compatible with the goals of the information retrieval community.", "In Table 3, we report the performance of a human annotator and a baseline, as well as unsupervised and supervised methods on HLGD.", "We chose Electra (Clark et al., 2020) for experiments based on a bi-directional Transformer (Vaswani et al., 2017), as initial experiments with other BERT (Devlin et al., 2019) variants performed similarly.", "Implementation details, model sizes and hyper-parameters are listed in Appendix B.
4.1 Human Performance and Baseline: Human Performance reports the F-1 score of human annotators performing the task.", "Human performance is estimated by obtaining a sixth set of annotations for each timeline in the development and testing set, beyond the five used for dataset creation.", "These annotations were completed after several hours of practice on the training set timelines.", "Human performance is distinct from the inter-annotator agreement (IAA) analysis presented in 3.4.", "IAA was performed on the five annotations used to create the dataset.", "[Figure 4: Schematic of the Headline Generator Swap model.]", "We note that human performance can theoretically achieve a perfect F-1 score of 1.0 if the sixth annotator grouped the headlines identically to the global group.", "Time only reports the performance of a logistic regression baseline based on the difference in days of publication between the two headlines.", "Data plotted in Figure 2 shows that a majority of positive pairs are published within two days of each other.", "Electra MRPC Zero-shot stands for an Electra model trained on the Microsoft Research Paraphrase Corpus (MRPC), achieving an F-1 of 0.92 on its development set.", "The objective is to evaluate whether a competitive paraphrase identification system achieves high performance on HLGD.", "The threshold to predict a label of one is tuned on the training portion of HLGD.", "This model only accesses headlines, and falls under Challenge 1.", "Electra MRPC Zero-shot + Time corresponds to the previous model, adding publication time into the model in the following way: $P'(Y{=}1 \mid X) = P(Y{=}1 \mid X)\, e^{-\alpha T}$ (2) where $X$ represents the pair of headline inputs, $P(Y{=}1 \mid X)$ represents the model's confidence of the headline pair being in the same group, and $T$ the difference in days of publication of the headlines.", "$\alpha$ is tuned on the training set.", "Because this method leverages headline and time information, it falls under Challenge 2.", "Headline Generator Swap is a novel approach we propose for zero-shot headline grouping, summarized in Figure 4 (a code sketch is given after this section).", "[Table 3: Challenge / HLGD Dev F-1 / HLGD Test F-1 for each evaluated model.]", "A language model estimates the likelihood of a word sequence.", "As the first step in Headline Generator Swap, we use the GPT-2 model to create a headline generator to estimate the likelihood of a (headline, content) pair: $P_{LM}(H \mid C)$.", "In more detail, we finetune a GPT-2 model to read through the first 512 words of a news article and generate its headline.", "The headline generator is trained with teacher-forcing supervision, and a large corpus of 6 million (content, headline) pairs (Laban and Hearst, 2017), not overlapping HLGD.", "The second step in Headline Generator Swap is to use this probability to produce a symmetric score for two articles $A_1 = (H_1, C_1)$ and $A_2 = (H_2, C_2)$: $S(A_1, A_2) = P_{LM}(H_2 \mid C_1) + P_{LM}(H_1 \mid C_2)$ (3)", "This score evaluates the likelihood of a swap of headlines between articles $A_1$ and $A_2$, according to the GPT-2 language model.", "We argue that if the model believes a swap is likely, the headlines must be part of the same group.", "The threshold above which $S(A_1, A_2)$ predicts a 1 is determined using the training portion of the data.", "Because this model uses the headline and content of the article, it falls under Challenge 3.", "Headline Gen.
Swap + Time corresponds to the Headline Generator Swap model, adding publication date information similarly to the Electra MRPC Zero-shot + Time model: $S'(A_1, A_2) = S(A_1, A_2)\, e^{-\alpha T}$ (4)", "This model uses the headline, publication date and content of the article, and falls under Challenge 3.", "Unsupervised models were allowed to pick a single hyper-parameter based on training set performance: to learn the threshold in score differentiating between class 1 and class 0.", "Strictly speaking, because we tune this single parameter, the methods could be seen as supervised.", "However, we label them as unsupervised because model parameters were not modified.", "Electra Finetune stands for an Electra model fine-tuned on the training set of HLGD, inputting the two headlines, divided by a separator token.", "Headline order is chosen randomly at each epoch.", "Because we train a model for several epochs (see Appendix B), a model is likely to see pairs in both orders.", "This model only uses headlines of articles for prediction, and falls under Challenge 1.", "Electra Finetune on content represents a similar model to that described above, with the difference that the model makes predictions based on the first 255 words of the contents of the two news articles, instead of the headline.", "This evaluates the informativeness of contents in determining headline groups.", "This experiment requires the contents and falls under Challenge 3.", "Electra Finetune + Time corresponds to an Electra model with time information.", "The model's output goes through a 768 × 1 feed-forward layer, and is concatenated with the day difference of publication, which is run through a 2 × 2 feed-forward, and a softmax layer.", "This model uses headline and time information, and falls under Challenge 2.", "Human performance can be high, close to 0.9 F-1 both on development and test timelines.", "Using time alone gives a lower-bound baseline on HLGD, achieving an F-1 of 0.585 on the test set, and confirming that publication date of an article is not enough to perform competitively on HLGD.", "Regarding Unsupervised and Zero-shot approaches, the Headline Generator Swap outperforms Electra MRPC Zero-shot.", "With additional time information (+ time), the generator-based model is able to get close to strong supervised models.", "The model benefits from pre-training on a large corpus of (content, headline) pairs, having learned a good representation for headlines.", "Unsurprisingly, best performance on HLGD is achieved by a supervised approach, Electra Finetune HLGD + Time, which uses both headline and time information.", "With an F-1 performance on the development set of 0.753, the model is still 0.13 F-1 points below human performance (0.07 F-1 difference on the test set).", "When finetuning the Electra model with contents instead of headlines, performance drops by 0.07 F-1 points.", "This is particularly surprising as it could be expected that content contains strictly more information than the headline.", "We interpret this performance of the content-based model as evidence that the contents are more broad and do not solely focus on the distinguishing fact that is necessary to perform the grouping.", "Finally, publication date yields a performance gain of 0.025 to 0.1 F-1 points over models without time information.", "This confirms that even though time information alone does not achieve high performance, it can be used to enhance models effectively.", "Because human annotators read timelines
chronologically and had access to publication date while annotating, we do not have an upper-bound of human performance without using time.", "Checking whether deep learning models are consistent across predictions has recently become a subject of interest, for example with QA systems with text (Ribeiro et al., 2019) and image (Shah et al., 2019) inputs.", "We analyze model consistency by probing the Electra Finetune + Time model, which achieves highest performance in terms of F-1 score.", "We propose a commutative test and transitive test, both illustrated in Figure 5.", "In order to evaluate consistency across training runs, we trained six versions of the Electra Finetune + Time model with the same hyper-parameters.", "Because each training run processes through the data in a different order, the models are distinct from each other.", "With regard to performance, the models perform very similarly, achieving within 0.01 F-1 of each other on the development and test sets.", "The HeadLine Grouping task requires two sentences to be compared, both playing a symmetric role.", "Most model architectures process the headline pair as a single sequence, and an arbitrary ordering of the pair is chosen for processing.", "We study whether this arbitrary choice has an impact on the model's prediction.", "Specifically, we make predictions for all pairs of headlines in the development portion of HLGD, running each pair in both (A, B) and (B, A) order.", "On average across the 6 model checkpoints, swapping the order of headlines is enough to make the model change its prediction (put higher probability on 0 in one case and 1 in the other) on 6.3% (±0.5) of the pairs.", "Furthermore, in other cases when the prediction does not change, the probability of the predicted class fluctuates by 0.061 (±0.005) on average, showing the impact sentence order has on all predictions.", "The relatively small standard deviations across training runs indicate that this phenomenon is inherent to the training procedure and not only existent in a subset of models.", "A remedy is to build a symmetric classifier: $P_S(Y \mid A, B) = \frac{P(Y \mid A, B) + P(Y \mid B, A)}{2}$ (5) where $P_S$ follows the symmetric rule by design, by predicting for both $(H_1, H_2)$ and $(H_2, H_1)$ and averaging (a code sketch is given after this section).", "When applying this patch to models presented in Section 4, we observe an average gain in F-1 performance of 0.01.", "Even though encouraging, this gain is a post hoc fix, and enforcing symmetry during training might yield further gains.", "Transitivity involves triplets of headlines A, B and C.
The assumption is that if A and B are part of the same group, and A and C are part of the same group, then B and C must be in the same group as well.", "The procedure followed during annotation assigning group IDs to headlines implies that the transitivity is preserved, as all headline pairs within the same group are positive pairs.", "To test a model's consistency with regards to the transitive rule, we use the Electra Finetune + Time model to produce a prediction for all pairs of headlines in the development portion of HLGD.", "For each triplet (A, B, C) of headlines in the timeline, the model produces three predictions for the (A, B), (A, C), and (B, C) pairs.", "We focus our attention on triplets where the model has predicted at least 2 positive pairs: if the third pair is predicted to be positive, transitivity is conserved (a 111 triangle), but if it is predicted to be negative, the triplet breaks the transitivity rule (a 110 triangle).", "On average across the six model checkpoints, we find that of the 60,660 triplets for which the model predicted at least 2 positive pairs, 44,627 triplets had a negative third prediction, and 16,033 had a positive one.", "In short, the model is consistent only 26.4% (±1.4) of the time on these triplets.", "Improving model consistency with regards to transitivity is challenging, as it would involve presenting the model with triples in some way.", "Imposing this constraint could yield performance improvements on the task.", "We note however that transitivity is a strong assumption, as it is possible for groups of headlines to have stronger and weaker subgroups.", "It is possible that human annotations would not always follow transitivity if tasked to do so.", "For this reason, we do not expect models to be 100% consistent, but there is room for improvement.", "In this work we present HeadLine Grouping (HLG), a new challenging NLU task, with", "an accompanying dataset (HLGD).", "Even though state-of-the-art NLU models have achieved close to human performance on many NLU tasks, we show that there is a considerable gap between best model performance (0.75 F-1) and human performance (about 0.9 F-1) on HLGD.", "We therefore propose this dataset as a challenge for future NLU benchmarks.", "We propose to repurpose a Headline Generator for the task of headline grouping, based on prompting it for the likelihood of a headline swap, and achieve within 3 F-1 of the best supervised model, paving the way for other unsupervised methods to repurpose generators for NLU.", "Analysis of models on HLGD reveals that they are not consistent in trivial ways, suggesting further improvements needed to NLU models.", "We would like to thank the Upwork crowd-workers for their assistance in creating HLGD, as well as Katie Stasaski, Dongyeop Kang and the ACL reviewers for their helpful comments.", "This work was supported by a Bloomberg Data Science grant.", "We also gratefully acknowledge support received from an Amazon Web Services Machine Learning Research Award and an NVIDIA Corporation GPU grant." ]
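The annotation-merging step described above (an edge wherever at least three of the five annotators co-grouped two headlines, followed by Louvain community detection) can be sketched as follows. The sketch assumes the networkx and python-louvain packages and our own data layout; it illustrates the described procedure rather than reproducing the released code:

```python
from itertools import combinations

import networkx as nx
import community as community_louvain  # pip install python-louvain

def merge_annotations(annotations, majority=3):
    """annotations: list of dicts, one per annotator, each mapping a
    headline id to the group id that annotator assigned within one
    timeline. Returns the 'global groups' as headline id -> community."""
    headlines = list(annotations[0])
    G = nx.Graph()
    G.add_nodes_from(headlines)
    for h1, h2 in combinations(headlines, 2):
        votes = sum(ann[h1] == ann[h2] for ann in annotations)
        if votes >= majority:  # a majority of annotators co-grouped them
            G.add_edge(h1, h2)
    # Louvain community detection yields the aggregate grouping
    return community_louvain.best_partition(G)
```

The leave-one-out agreement analysis then compares each annotator's grouping against global groups recomputed from the other four annotators.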
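Equations 3 and 4, the symmetric headline-swap score with an optional exponential time decay, can also be sketched. The `headline_likelihood` helper below is a hypothetical stand-in for the finetuned GPT-2 generator's P_LM(H|C), and the decay parameter name `alpha` is our assumption, since the symbol is illegible in the source:

```python
import math

def headline_likelihood(headline, content):
    """Hypothetical interface for the finetuned generator's P_LM(H | C):
    the likelihood of `headline` conditioned on the first 512 words of
    `content`. Not an assumed API of any released model."""
    raise NotImplementedError

def swap_score(article1, article2, alpha=0.0):
    """Equations 3 and 4: S(A1, A2) = P_LM(H2|C1) + P_LM(H1|C2),
    optionally decayed by e^(-alpha * T), with T the gap in days
    between the two publication dates (alpha = 0 recovers Eq. 3)."""
    h1, c1, date1 = article1
    h2, c2, date2 = article2
    s = headline_likelihood(h2, c1) + headline_likelihood(h1, c2)
    t = abs((date1 - date2).days)
    return s * math.exp(-alpha * t)

# A pair is predicted as 'same group' when swap_score exceeds a
# threshold tuned on the training timelines.
```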
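Equation 5's symmetric patch and the triangle-based transitivity test from Section 5 are easy to reproduce around any pairwise classifier. A minimal sketch, assuming `prob(a, b)` returns the model's probability that two headlines belong to the same group:

```python
from itertools import combinations

def symmetric_prob(prob, a, b):
    """Equation 5: average the two input orders so that the
    classifier is commutative by construction."""
    return 0.5 * (prob(a, b) + prob(b, a))

def transitivity_counts(prob, headlines, threshold=0.5):
    """Among triplets with at least two predicted positive pairs,
    count 111 triangles (consistent) versus 110 triangles (violations)."""
    pred = {}
    for a, b in combinations(headlines, 2):
        pred[(a, b)] = symmetric_prob(prob, a, b) > threshold
    consistent = violated = 0
    for a, b, c in combinations(headlines, 3):
        positives = pred[(a, b)] + pred[(a, c)] + pred[(b, c)]
        if positives == 3:
            consistent += 1
        elif positives == 2:
            violated += 1
    return consistent, violated
```

Applied to the six checkpoints in the paper, the ratio consistent / (consistent + violated) corresponds to the reported 26.4% consistency figure.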
[ "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "other", "abstain", "abstain", "other", "method", "other", "other", "other", "abstain", "method", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "abstain", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "result", "objective", "objective", "abstain", "other", "other", "other" ]
[ "Code search is to search reusable code snippets from source code corpus based on natural languages queries.", "Deep learning-based methods on code search have shown promising results.", "However, previous methods focus on retrieval accuracy, but lacked attention to the efficiency of the retrieval process.", "We propose a novel method CoSHC to accelerate code search with deep hashing and code classification, aiming to perform efficient code search without sacri-ficing too much accuracy.", "To evaluate the effectiveness of CoSHC, we apply our method on five code search models.", "Extensive experimental results indicate that compared with previous code search baselines, CoSHC can save more than 90% of retrieval time meanwhile preserving at least 99% of retrieval accuracy.", "Code reuse is a common practice during software development process.", "It improves programming productivity as developers' time and energy can be saved by reusing existing code.", "According to previous studies (Brandt et al., 2009; Lv et al., 2015), many developers tend to use natural language to describe the functionality of desired code snippets and search the Internet/code corpus for code reuse.", "Many code search approaches (Brandt et al., 2009; McMillan et al., 2011; Lv et al., 2015; Du et al., 2021) have been proposed over the years.", "With the rapid growth of open source code bases and the development of deep learning technology, recently deep learning based approaches have become popular for tackling the code search problem (Gu et al., 2018; Husain et al., 2019; Gu et al., 2021).", "Some of these approaches adopt neural network models to encode source code and query descriptions into representation vectors in the same Work done while this author was an intern at Microsoft Research.", "embedding space.", "The distance between the representation vectors whose original code or description are semantically similar should be small.", "Other approaches (Feng et al., 2020; Guo et al., 2021; Du et al., 2021) regard the code search task as a binary classification task, and calculate the probability of code matching the query.", "In the past, deep learning-based methods focused on retrieval accuracy, but lacked attention to the efficiency of retrieval on large-scale code corpus.", "However, both types of these deep learning-based approaches directly rank all the source code snippets in the corpus during searching, which will incur a large amount of computational cost.", "For the approaches that separately encode code and description representation vectors, the similarity of the target query vector with all code representation vectors in the corpus needs to be calculated for every single retrieval.", "In order to pursue high retrieval accuracy, a high dimension is often set for the representation vectors.", "For example, in CodeBERT, the dimension of the final representation vector is 768.", "The similarity calculation between a pair of code and query vectors will take 768 multiplications and 768 additions between two variables with double data type.", "The total calculation of single linear scan for the whole code corpus containing around 1 million code snippets is extremely large around 1 billion times of multiplications and additions.", "As for the approaches adopting binary classification, there is no representation vectors stored in advance and the inference of the target token sequence with all the description token sequences needs to be done in real time for every single retrieval.", "Due to the large number of 
parameters in the current deep learning models, the computation cost will be significant.", "Hashing is a promising approach for improving retrieval efficiency and is widely adopted in other retrieval tasks such as image-text search and image-image search.", "Hashing techniques can convert high-dimensional vectors into low-dimensional binary hash codes, which greatly reduces the cost of storage and calculation (Luo et al., 2020).", "The Hamming distance between two binary hash codes can also be calculated very efficiently by running XOR instructions on modern computer architectures (Wang et al., 2016).", "However, performance degradation is still unavoidable during the conversion from representation vectors to binary hash codes, even when state-of-the-art hashing models are adopted.", "Most users have a low tolerance for performance degradation, and few of them are willing to trade accuracy for efficiency.", "In order to preserve, as much as possible, the performance of the original code search models that adopt bi-encoders for code-query encoding, we integrate deep hashing techniques with code classification, which mitigates the performance degradation of the hashing model in the recall stage by filtering out irrelevant data.", "Specifically, in this paper, we propose a novel approach, CoSHC (Accelerating Semantic Code Search with Deep Hashing and Code Classification), for accelerating the retrieval efficiency of deep learning-based code search approaches.", "CoSHC first clusters the representation vectors into different categories.", "It then generates binary hash codes for both source code and queries according to the representation vectors from the original models.", "Finally, given a query, CoSHC predicts a normalized probability for each category and decides the number of code candidates to recall from each category according to these probabilities.", "Comprehensive experiments have been conducted to validate the performance of the proposed approach.", "The evaluation results show that CoSHC can preserve more than 99% of the performance of most baseline models.", "We summarize the main contributions of this paper as follows: We propose a novel approach, CoSHC, to improve the retrieval efficiency of previous deep learning-based approaches.", "CoSHC is the first approach that adopts the recall and re-rank mechanism with the integration of code clustering and deep hashing to improve the retrieval efficiency of deep learning-based code search models.", "We conduct a comprehensive experimental evaluation on public benchmarks.", "The results demonstrate that CoSHC can greatly improve retrieval efficiency while preserving almost the same performance as the baseline models.", "In this subsection, we briefly review some deep learning-based code search approaches.", "Sachdev et al. (2018) first propose the neural-network-based model NCS to retrieve source code from a large source code corpus according to given natural language descriptions.", "Cambronero et al. (2019) propose a neural network model, UNIF, based on bag-of-words, which embeds code snippets and natural language descriptions into a shared embedding space.", "Gu et al. (2018) propose to encode the source code representation with API sequences, method name tokens, and code tokens.", "Yao et al. (2019) treat code annotation and code search as dual tasks and utilize the generated code annotations to improve code search performance.", "Husain et al. 
(2019) explore different neural architectures for source code representation and discover that the self-attention model achieves the best performance.", "Gu et al. (2021) extract the program dependency graph from the source code and adopt long short-term memory (LSTM) networks to model this relationship.", "Feng et al. (2020) propose a pre-trained model for source code representation and demonstrate its effectiveness on the code search task.", "In this subsection, we briefly introduce some representative unsupervised cross-modal hashing methods.", "In order to learn a unified hash code, Ding et al. (2014) propose to adopt collective matrix factorization with a latent factor model across different modalities to merge multi-view information sources.", "Zhou et al. (2014) first utilize sparse coding and matrix factorization to extract the latent features for images and texts, respectively.", "The learned latent semantic features are then mapped to a shared space and quantized into binary hash codes.", "Wang et al. (2014) suggest using stacked auto-encoders to capture the intra- and inter-modal semantic relationships of data from heterogeneous sources.", "He et al. (2017) and Zhang et al. (2018) adopt adversarial learning for cross-modal hash code generation.", "Wu et al. (2018) propose an approach named UDCMH that integrates deep learning and matrix factorization with binary latent factor models to generate binary hash codes for multi-modal data retrieval.", "By incorporating Laplacian constraints into the objective function, UDCMH preserves not only the nearest neighbors but also the farthest neighbors of the data.", "Unlike approaches using Laplacian constraints in the loss function, Su et al. (2019) construct a joint-semantic affinity matrix that integrates the original neighborhood information from different modalities to guide the learning of unified binary hash codes.", "We propose a general framework to accelerate existing Deep Code Search (DCS) models by decoupling the search procedure into a recall stage and a re-rank stage.", "Our main technical contribution lies in the recall stage.", "Figure 1 illustrates the overall framework of the proposed approach.", "CoSHC consists of two components, i.e., an offline one and an online one.", "In the offline component, we take the code and description embeddings learned by the given DCS model as input and learn the corresponding hash codes by preserving the relations between the code and description embeddings.", "In the online component, we recall a candidate set of code snippets according to the Hamming distance between the query and the code, and then we use the original DCS model to re-rank the candidates.", "Multiple Code Hashing Design with Code Classification Module: Since the capacity of the binary hashing space is very limited compared to the Euclidean space, the Hamming distances between similar code snippets will be too small to be distinguishable if we adopt a single hashing model.", "To be specific, we cluster the codebase using the K-Means algorithm on the code embeddings learned from the given DCS model.", "Source code snippets whose representation vectors are close to each other will be classified into the same category after clustering.", "Deep Hashing Module: The deep hashing module aims at generating the corresponding binary hash codes for the embeddings of code and description from the original DCS model.", "Figure 2 illustrates the framework of the deep hashing module.", "To be specific, three fully-connected (FC) layers with tanh(·) activation functions are adopted to replace the output 
layer in the original DCS model to convert the original representation vectors into a soft binary hash code.", "The goal is to keep the similarities among the binary representations of code pairs and description pairs close to the corresponding similarities among the original embeddings.", "Thus, we first need to calculate the ground-truth similarity matrix between code pairs and description pairs.", "For performance reasons, we calculate the similarity matrix within a mini-batch.", "To construct such a matrix, we first define the code representation vectors and the description representation vectors in the original code search model as $V_C = \{v_c^{(1)}, \dots, v_c^{(n)}\}$ and $V_D = \{v_d^{(1)}, \dots, v_d^{(n)}\}$, respectively.", "$V_C$ and $V_D$ represent the representation vector matrices for the entire batch, while $v_c^{(i)}$ and $v_d^{(i)}$ represent the representation vectors of a single code snippet or query.", "After normalizing $V_C, V_D$ to $\tilde{V}_C, \tilde{V}_D$ with the $\ell_2$-norm, we can calculate the code similarity matrix $S_C = \tilde{V}_C \tilde{V}_C^T$ and the description similarity matrix $S_D = \tilde{V}_D \tilde{V}_D^T$ to describe the similarities among code representation vectors and among description representation vectors, respectively.", "In order to integrate the similarity information in both $S_C$ and $S_D$, we combine them with a weighted sum: $S = \alpha S_C + (1-\alpha) S_D, \ \alpha \in [0, 1]$, (1) where $\alpha$ is the weight parameter.", "Since the pairwise similarities among the code representation vectors and description representation vectors still cannot comprehensively reflect their distribution in the whole embedding space, we introduce the matrix $S S^T$ to describe a high-order neighborhood similarity: two vectors with high similarity should also have similar similarities to other vectors.", "Finally, we utilize a weighted equation to combine these two matrices as follows: $\hat{S} = (1-\gamma) S + \gamma \frac{S S^T}{m}$, (2) where $\gamma$ is a hyper-parameter and $m$ is the batch size, which is used to normalize the second term in the equation.", "Since we hope the binary hash codes of the source code and its corresponding description to be the same, we set the diagonal elements of the similarity matrix to one.", "The final high-order similarity matrix is: $S^F_{ij} = 1$ if $i = j$, and $S^F_{ij} = \hat{S}_{ij}$ otherwise. (3)", "Figure 1: Overview of the proposed CoSHC.", "Binary Hash Code Training: We propose to replace the output layer of the original code search model with three FC layers with tanh(·) activation functions.", "We define the trained binary hash codes for code and description as $B_C = \{b_c^{(1)}, \dots, b_c^{(n)}\}$ and $B_D = \{b_d^{(1)}, \dots, b_d^{(n)}\}$, respectively.", "To ensure that the relative distribution of the binary hash codes is similar to the distribution of the representation vectors in the original embedding space, the following equation is utilized as the loss function of the deep hashing module (an illustrative code sketch of Eqs. (1)-(4) follows this record's sentence list): $\mathcal{L}(\theta) = \min_{B_C, B_D} \|\min(\mu S^F, 1) - \frac{B_C B_D^T}{d}\|_F^2 + \lambda_1 \|\min(\mu S^F, 1) - \frac{B_C B_C^T}{d}\|_F^2 + \lambda_2 \|\min(\mu S^F, 1) - \frac{B_D B_D^T}{d}\|_F^2$, s.t. 
$B_C, B_D \in \{-1, +1\}^{m \times d}$, (4) where $\theta$ denotes the model parameters, $\mu$ is a weight parameter that adjusts the similarity scores between different pairs of code and description, $\lambda_1$ and $\lambda_2$ are trade-off parameters that weight the different terms in the loss function, and $d$ is the dimension of the binary hash code generated by this deep hashing module.", "The three terms in the loss function constrain the similarity between the binary hash codes of source code and description, the similarity among the binary hash codes of the source code, and the similarity among the binary hash codes of the descriptions, respectively.", "Note that we adopt $B_C B_D^T / d$ to replace $\cos(B_C, B_D)$ because $\cos(B_C, B_D)$ only measures the angle between two vectors but neglects their lengths, so $\cos(B_C, B_D)$ can still be very large even when the value of every hash bit is close to zero.", "Unlike $\cos(B_C, B_D)$, $B_C B_D^T / d$ can only achieve a high value when every bit of the binary hash code is 1 or -1, since the value of $B_C B_D^T / d$ will be close to zero if the value of every hash bit is close to zero.", "Since it is impractical to force the output of a neural network to take discrete values like 1 and -1, we adopt the following equation to convert the output of the deep hashing module into strict binary hash codes: $B = \mathrm{sgn}(H) \in \{-1, +1\}^{m \times d}$, (5) where $H$ is the output of the last hidden layer (without the activation function) in the deep hashing module, and $\mathrm{sgn}(\cdot)$ is the sign function, whose output is 1 if the input is positive and -1 otherwise.", "However, the sign function provides no useful gradient, which causes the vanishing gradients problem and affects model convergence.", "To address this problem, we follow previous research (Cao et al., 2017; Hu et al., 2019) and adopt a scaling function: $B = \tanh(\beta H) \in (-1, +1)^{m \times d}$, (6) where $\beta$ is a parameter that is increased during training.", "The function $\tanh(\beta H)$ approximates $\mathrm{sgn}(H)$ when $\beta$ is large enough.", "Therefore, the output of Eq.", "(6) finally converges to 1 or -1 as $\beta$ increases during training, and the above problem is addressed.", "Recall and Re-rank Mechanism: The incoming query from a user is first fed into the description category prediction module to calculate the normalized probability distribution over categories.", "Then the number of code candidates $R_i$ for each category $i$ is determined according to this probability distribution.", "The Hamming distance between the hash code of the given query and the hash codes of all code snippets in the database is then calculated.", "The code candidates are then sorted by Hamming distance in ascending order, and the top $R_i$ code candidates in each category $i$ are recalled.", "In the re-rank step, the original representation vectors of these recalled code candidates are retrieved and used for the cosine similarity calculation.", "Finally, code snippets are returned to the user in descending order of cosine similarity.", "Description Category Prediction Module: This module aims to predict the category of the source code that meets the user's requirement according to the given natural language description.", "The model adopted for category prediction is the same as the original code search model, except that the output layer is replaced with a one-hot category prediction layer and the cross-entropy function is adopted as the loss function of the model.", "Since the accuracy of the description category prediction module is not perfect, we use the probability distribution over the categories, instead of only the category with the highest predicted probability, as the recall strategy for code search.", 
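The per-category Hamming-distance recall just described is cheap because, with bit-packed codes, each distance is one XOR plus a popcount. The sketch below is illustrative rather than the authors' implementation: the bit-packing scheme, the helper names, and the data layout (a dict from category id to (snippet id, packed code) pairs) are our own assumptions.

```python
# Illustrative sketch of Hamming-distance recall over binary hash codes.
# Hash codes in {-1,+1}^d are packed into Python integers so that the
# distance computation is a single XOR followed by a popcount.

def pack_bits(code):
    """Pack a {-1,+1} hash code (a list of ints) into one integer bitmask."""
    mask = 0
    for i, b in enumerate(code):
        if b > 0:
            mask |= 1 << i
    return mask

def hamming(a, b):
    """Hamming distance between two packed hash codes via XOR + popcount."""
    return bin(a ^ b).count("1")

def recall_candidates(query_code, db_codes_by_category, recall_numbers):
    """Recall the top-R_i snippets per category by ascending Hamming distance.

    db_codes_by_category: category id -> list of (snippet_id, packed_code)
    recall_numbers:       category id -> R_i, as given by Eq. (7)
    """
    recalled = []
    for cat, entries in db_codes_by_category.items():
        ranked = sorted(entries, key=lambda e: hamming(query_code, e[1]))
        recalled.extend(sid for sid, _ in ranked[: recall_numbers[cat]])
    return recalled  # these ids are then re-ranked with the original embeddings
```

The recalled ids would subsequently be re-ranked by cosine similarity over the original representation vectors, as the text describes.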
"We define the total recall number of source code as $N$ and the normalized predicted probability for each code category as $P = \{p_1, \dots, p_k\}$, where $k$ is the number of categories.", "The recall number of source code in each category is: $R_i = \max(\lfloor p_i (N - k) \rfloor, 1), \ i \in \{1, \dots, k\}$, (7) where $R_i$ is the recall number of source code in category $i$.", "To ensure that the proposed approach can recall at least one source code snippet from each category, we set the minimum recall number for a single category to 1.", "4 Experiments, 4.1 Dataset: We use two datasets (Python and Java) provided by CodeBERT (Feng et al., 2020) to evaluate the performance of CoSHC.", "CodeBERT selects the data from the CodeSearchNet (Husain et al., 2019) dataset and creates both positive and negative examples of <description, code> pairs.", "Since all the baselines in our experiments are bi-encoder models, we do not need to predict the relevance score for the mismatched pairs, so we remove all the negative examples from the dataset.", "Finally, we get 412,178 <description, code> pairs as the training set, 23,107 <description, code> pairs as the validation set, and 22,176 <description, code> pairs as the test set in the Python dataset.", "We get 454,451 <description, code> pairs as the training set, 15,328 <description, code> pairs as the validation set, and 26,909 <description, code> pairs as the test set in the Java dataset.", "In the code classification module, we set the number of clusters to 10.", "In the deep hashing module, we add three fully connected (FC) layers to all the baselines; the hidden size of each FC layer is the same as the dimension of the original representation vectors.", "Specifically, the hidden size of the FC layers for CodeBERTa, CodeBERT, and GraphCodeBERT is 768.", "The hidden size of the FC layers for UNIF is 512 and for RNN is 2048.", "The size of the output binary hash code for all the baselines is 128.", "The hyper-parameters $\alpha$, $\gamma$, $\mu$, $\lambda_1$, and $\lambda_2$ are 0.6, 0.4, 1.5, 0.1, and 0.1, respectively.", "The parameter $\beta$ is set to the epoch number and thus increases linearly during training.", "In the query category prediction module, the cross-entropy function is adopted as the loss function, and the total recall number is 100.", "The learning rate for CodeBERTa, CodeBERT, and GraphCodeBERT is 1e-5, and the learning rate for UNIF and RNN is 1.34e-4.", "All the models are trained via the AdamW algorithm (Kingma and Ba, 2015).", "We train our models on a server with four Tesla V100 GPUs (with NVLink) and 32GB of memory.", "Each module based on CodeBERT, GraphCodeBERT, and CodeBERTa is trained for 10 epochs, and each module based on RNN and UNIF is trained for 50 epochs.", "The early stopping strategy is adopted to avoid overfitting for all the baselines.", "The time efficiency experiment is conducted on a server with a 20-core Intel Xeon E5-2698 v4 CPU at 2.2GHz.", "The evaluation program is written in C++ and is restricted to a single CPU thread.", "We apply CoSHC to several state-of-the-art and representative baseline models.", "UNIF (Cambronero et al., 2019) regards code as a sequence of tokens and embeds the sequences of code tokens and description tokens into representation vectors via fully connected layers with an attention mechanism.", "The RNN baseline adopts a two-layer bi-directional LSTM (Cho et al., 2014) to encode the input sequences.", "CodeBERTa is a 6-layer, Transformer-based model trained on the CodeSearchNet dataset.", "CodeBERT (Feng et al., 2020) is a 
pre-trained model based on the Transformer with 12 layers.", "Similar to CodeBERT, GraphCodeBERT (Guo et al., 2021) is a pre-trained Transformer-based model trained with not only token information but also the data flow of the code snippets.", "As we introduced, the inference efficiency of cross-encoder-based models like CodeBERT is quite low, and the purpose of our approach is to improve the efficiency of the similarity calculation between the representation vectors of code and queries.", "Here we slightly change the model structure of CodeBERTa, CodeBERT, and GraphCodeBERT.", "Rather than concatenating code and query together and inputting them into a single encoder to predict the relevance score of the pair, we adopt the bi-encoder architecture for these baselines, which uses independent encoders to encode the code and the queries into representation vectors.", "Also, the cosine similarity between representation vector pairs is adopted in the training loss function, replacing the cross-entropy loss over the output relevance score.", "SuccessRate@k is widely used by many previous studies (Haldar et al., 2020; Shuai et al., 2020; Fang et al., 2021; Heyman and Cutsem, 2020).", "The metric is calculated as follows: $\mathrm{SuccessRate@}k = \frac{1}{|Q|} \sum_{q=1}^{|Q|} \delta(\mathrm{FRank}_q \le k)$, (8) where $Q$ denotes the query set and $\mathrm{FRank}_q$ is the rank of the correct answer for query $q$.", "If the correct result is within the top $k$ returned results, $\delta(\mathrm{FRank}_q \le k)$ returns 1; otherwise, it returns 0.", "A higher R@k indicates better performance.", "In this section, we present the experimental results and evaluate the performance of CoSHC from the aspects of retrieval efficiency, overall retrieval performance, and the effectiveness of the internal classification module.", "Table 1 illustrates the results of the efficiency comparison between the original code search models and CoSHC.", "Once the representation vectors of code and descriptions are stored in memory, the retrieval efficiency mainly depends on the dimension of the representation vectors rather than the complexity of the original retrieval model.", "Therefore, we select CodeBERT as the baseline model to illustrate the efficiency comparison.", "Since the code search process in both approaches consists of vector similarity calculation and array sorting, we split the retrieval process into these two steps to calculate the time cost.", "In the vector similarity calculation step, CoSHC reduces the time cost by 97.29% and 96.90% on the Python and Java datasets, respectively, which demonstrates that binary hash codes can effectively reduce the vector similarity calculation cost in the code retrieval process.", "In the array sorting step, CoSHC reduces the time cost by 53.61% and 37.74% on the Python and Java datasets, respectively.", "The classification module makes the main contribution to the improvement in sorting efficiency.", "The sorting algorithm applied in both the original code search models and CoSHC is quicksort, whose time complexity is $O(n \log n)$.", "The classification module divides a large code dataset into several small code datasets, reducing the average time complexity of sorting to $O(n \log \frac{n}{m})$ when the data is divided into $m$ subsets.", "The reason why the improvement in sorting on the Java dataset is not as significant as on the Python dataset is that the size of the Java dataset is much smaller than that of the Python dataset.", "However, the combination of divide-and-conquer algorithms and max-heaps, rather than quicksort, is widely applied in big-data sorting, which can 
greatly shrink the retrieval efficiency gap between these two approaches.", "Therefore, the improvement in efficiency in the sorting process will not be as large as what is shown in Table 1.", "In the overall code retrieval process, the time cost is reduced by 94.09% and 93.51% on the Python and Java datasets, respectively.", "Since the vector similarity calculation takes most of the time in the code retrieval process, CoSHC can still reduce the time cost by at least 90%, which demonstrates its effectiveness in improving the efficiency of the code search task.", "Table 2 illustrates the retrieval performance comparison between the original code search models and CoSHC.", "We have noticed that the performance of conventional approaches like BM25 (Robertson and Zaragoza, 2009) is not good enough.", "For example, we set the token length for both code and queries to 50, which is the same as the setting in CodeBERT, and apply BM25 to recall the top 100 code candidates for the re-rank step on the Python dataset.", "BM25 can only retain 99.3%, 95.6%, and 92.4% of the retrieval accuracy of CodeBERT in terms of R@1, R@5, and R@10 on the Python dataset.", "Here we only compare the performance of our approach with the original code search models, since the purpose of our approach is to preserve the performance of the original code search models.", "As can be observed, CoSHC retains at least 99.5%, 99.0%, and 98.4% of the retrieval accuracy of most original code search models in terms of R@1, R@5, and R@10 on the Python dataset.", "CoSHC also retains 99.2%, 98.2%, and 97.7% of the retrieval accuracy of all original code search baselines in terms of R@1, R@5, and R@10 on the Java dataset, respectively.", "We can find that CoSHC retains more than 97.7% of the performance in all metrics.", "R@1 is the most important and useful metric among these, since most users hope that the first returned answer is the correct one during the search.", "CoSHC retains at least 99.2% of the performance on R@1 in both datasets, which demonstrates that CoSHC retains almost the same performance as the original code search models.", "It is interesting that CoSHC presents relatively better performance when the performance of the original code retrieval model is worse.", "CoSHC with CodeBERTa even outperforms the original baseline model on the Java dataset.", "CoSHC with RNN and CoSHC with UNIF outperform the original models on both the Python and Java datasets.", "The integration of code classification and deep hashing in the recall stage contributes to this result.", "Worse performance indicates more misalignment between the code representation vectors and the description representation vectors.", "Since code classification and deep hashing filter out most of the irrelevant code in the recall stage, some irrelevant code representation vectors that have high cosine similarity with the target description representation vectors are filtered out, which leads to the improvement in the final retrieval performance.", "Table 2 also illustrates the performance comparison between the CoSHC variants that adopt different recall strategies based on the query category prediction results.", "CoSHC w/o classification represents CoSHC without the code classification and description prediction modules.", "CoSHC one classification represents the CoSHC variant that recalls $N - k + 1$ candidates in the code category with the highest prediction probability and one candidate in each of the remaining categories.", "CoSHC ideal classification is an ideal classification 
situation we set.", "Assuming the correct description category is known, $N - k + 1$ candidates are recalled in the correct category and one candidate is recalled in each of the remaining categories.", "Note that CoSHC ideal classification is presented only to explore the upper bound of the performance improvement from the category prediction module and is not counted as a CoSHC variant in our comparison.", "By comparing the experimental results between CoSHC ideal classification and CoSHC w/o classification, we can find that correct classification can significantly improve the retrieval performance.", "With the ideal category labels, CoSHC can even outperform all baseline models.", "As mentioned in Sec. 4.5.2, code classification can mitigate the problem of vector pair misalignment by filtering out, in the recall stage, wrong candidates whose representation vectors have high cosine similarity with the target representation vectors.", "The more serious the misalignment problem, the more effective the code classification.", "That is the reason why the improvement of CoSHC with ground-truth labels on UNIF, RNN, and CodeBERTa is more significant than its improvement on CodeBERT and GraphCodeBERT: the retrieval accuracy of the former models is much lower than that of the latter.", "Similar conclusions can also be drawn for the distribution of the binary hash codes by comparing CoSHC and CoSHC ideal classification, since CoSHC uses the distribution of the original representation vectors as guidance for model training.", "Therefore, the distribution of the binary hash codes will be similar to the distribution of the original representation vectors.", "Having explored the theoretical upper limit of the effectiveness of code classification for code retrieval, we now validate its effectiveness in a realistic setting.", "By comparing the experimental results between CoSHC w/o classification and CoSHC one classification, we can find that the performance of CoSHC with predicted labels is even worse than that of CoSHC without the code classification module.", "The reason is that the accuracy of the description category prediction is far from satisfactory.", "Table 3 illustrates the accuracy of the description category prediction module for all baseline models.", "We regard the category with the highest probability as the predicted category from the description category prediction module and check whether the module gives a correct prediction.", "It can be seen that the classification accuracy is not very high (less than 75%).", "By observing the experimental results of CoSHC with GraphCodeBERT on the Java dataset, we can also find that low accuracy greatly affects the performance of CoSHC one classification, causing performance drops of 7.8%, 11.6%, and 13.9% in terms of R@1, R@5, and R@10, respectively.", "Fortunately, although the description category prediction module cannot always tell the exact category that a description belongs to, it can still assign a relatively high predicted probability to the correct category.", "By comparing the experimental results among all the variants of CoSHC, we can find that performance increases significantly once the recall strategy is replaced with one in which the number of code candidates for each category is determined by the normalized prediction probability.", "CoSHC with the new recall strategy achieves almost the best performance in all metrics on all baseline models.", "Even on RNN on the Python dataset, CoSHC still 
achieves the same performance as CoSHC without classification under R@1 and achieves similar performance in the other metrics.", "The above experimental results demonstrate the effectiveness of adopting code classification in code search.", "To accelerate code search, we present CoSHC, a general method that incorporates deep hashing techniques and code classification.", "We leverage the two-stage recall and re-rank paradigm from the information retrieval field and apply deep hashing techniques for fast recall.", "Furthermore, we propose to utilize a code classification module to retrieve higher-quality code snippets.", "Experiments on five code search models show that compared with the original code search models, CoSHC can greatly improve retrieval efficiency while preserving almost the same performance.", "Wenchao Gu's and Michael R. Lyu's work described in this paper was in part supported by the Research Grants Council of the Hong Kong Special Administrative Region, China (CUHK 14210920 of the General Research Fund)." ]
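As referenced in the loss description above, here is a minimal numpy sketch of the high-order target similarity of Eqs. (1)-(3) and the clamped Frobenius-norm loss of Eq. (4). It is a reading aid under stated assumptions, not the paper's code: the function names, the default values, and the symbols alpha/gamma/mu/lam1/lam2 standing in for the reconstructed Greek letters are ours.

```python
# Minimal numpy sketch of the high-order target similarity of Eqs. (1)-(3)
# and the clamped Frobenius-norm loss of Eq. (4).
import numpy as np

def target_similarity(V_c, V_d, alpha=0.6, gamma=0.4):
    """Build S^F from batch code embeddings V_c and description embeddings V_d."""
    m = V_c.shape[0]
    Vc = V_c / np.linalg.norm(V_c, axis=1, keepdims=True)  # row-wise l2 norm
    Vd = V_d / np.linalg.norm(V_d, axis=1, keepdims=True)
    S = alpha * (Vc @ Vc.T) + (1 - alpha) * (Vd @ Vd.T)    # Eq. (1)
    S_hat = (1 - gamma) * S + gamma * (S @ S.T) / m        # Eq. (2)
    np.fill_diagonal(S_hat, 1.0)                           # Eq. (3)
    return S_hat

def hashing_loss(B_c, B_d, S_F, mu=1.5, lam1=0.1, lam2=0.1):
    """Eq. (4): match scaled hash-code similarities to min(mu * S^F, 1)."""
    d = B_c.shape[1]
    target = np.minimum(mu * S_F, 1.0)
    loss = np.linalg.norm(target - B_c @ B_d.T / d) ** 2           # code vs. description
    loss += lam1 * np.linalg.norm(target - B_c @ B_c.T / d) ** 2   # code vs. code
    loss += lam2 * np.linalg.norm(target - B_d @ B_d.T / d) ** 2   # description vs. description
    return loss
```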
[ "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "objective", "abstain", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "other", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "objective", "abstain", "other" ]
[ "The multimodality problem has become a major challenge of existing non-autoregressive generation (NAG) systems.", "A common solution often resorts to sequence-level knowledge distillation by rebuilding the training dataset through autoregressive generation (hereinafter known as teacher AG).", "The success of such methods may largely depend on a latent assumption, i.e., the teacher AG is superior to the NAG model.", "However, in this work, we experimentally reveal that this assumption does not always hold for the text generation tasks like text summarization and story ending generation.", "To provide a feasible solution to the multimodality problem of NAG, we propose incorporating linguistic structure (Part-of-Speech sequence in particular) into NAG inference instead of relying on teacher AG.", "More specifically, the proposed POS-constrained Parallel Decoding (POSPD) method aims at providing a specific POS sequence to constrain the NAG model during decoding.", "Our experiments demonstrate that POSPD consistently improves NAG models on four text generation tasks to a greater extent compared to knowledge distillation.", "This observation validates the necessity of exploring the alternatives for sequence-level knowledge distillation.", "Unlike autoregressive generation (AG) that generates tokens step-by-step, non-autoregressive generation (NAG) parallelly generates all tokens in one time step and thus the inference could be significantly speeded up (Ma et al., 2019; Ran et al., 2020; Susanto et al., 2020).", "Despite the computational advantage of NAG, it has faced the multimodality problem (Gu et al., 2018) caused by the conditionally independent decoding.", "A typical example of the problem is illustrated in Figure 1, where either Correspondence to Wenqiang Lei.", "of Thank you. and Many Thanks. is the correct translation (i.e., generation modes ).", "In this example, a mixed mode Many you. / Thank Thanks. will be generated by NAG.", "It is because the conditional dependence among target words will be broken in parallel decoding.", "A typical manifestation is that words are usually missing (e.g., Many you.) and repeating (e.g., Thank Thanks.) 
in NAG's sentences.", "To solve this problem, the key is to help NAG models deal with the various generation modes.", "To date, one of the most widely used solutions is sequence-level knowledge distillation (Kim and Rush, 2016), which aims to reduce the generation modes of the raw data (Zhou et al., 2019).", "Taking machine translation as an example, the knowledge distillation-based methods rebuild the target sequences in the training set by employing an AG model to translate the training samples.", "The assumption is that the target sentences generated by a single AG model tend to have fewer generation modes.", "Despite the success of the above studies, there are still two major limitations: (1) Most existing works mainly focus on machine translation, where the performance of AG is generally assumed to be better than that of NAG.", "Clearly, such a solution will degrade the performance of NAG on tasks where the AG model cannot obtain a better result.", "As demonstrated in our experiments (see Sec. 4.5), there are a number of such tasks beyond this assumption, like text summarization and story ending generation.", "(2) The knowledge distillation-based methods may cost a tremendous amount of time to rebuild a large-scale training set with AG, which runs counter to NAG's initial goal of improving speed.", "To overcome the aforementioned limitations, we explore alleviating the multimodality problem in a different manner.", "In short, we aim to constrain NAG generation modes in the inference stage, rather than directly reducing generation modes in the training stage.", "More specifically, our basic idea is that the linguistic structure of the target sentence can help alleviate the multimodality problem.", "In this paper, we show that the Part-of-Speech (POS) sequence, one of the simplest ways to model linguistic structure (Cutting et al., 1992), can effectively verify our idea and shows promising performance on four different tasks.", "In more detail, the proposed POS-constrained Parallel Decoding (POSPD) trains a POS predictor to obtain POS tags of target sequences.", "In the inference stage, POSPD constrains NAG models to choose final outputs that satisfy the pre-specified POS sequence.", "As the POS predictor with a shallow decoder is trained separately, our POSPD can act as a plug-and-play method to assist NAG models with negligible extra time.", "Meanwhile, our method keeps its speed advantage even when considering the time cost of building the POS dataset, since POS tagging is much faster than sentence generation due to the small POS dictionary.", "To conduct a comprehensive empirical evaluation, we examine the generalizability of POSPD by applying it to two widely-used NAG models (i.e., CMLM and DisCo) over four text generation tasks, including text summarization, story ending generation, question generation, and machine translation.", "Experiments demonstrate that POSPD significantly and consistently improves the two NAG models and beats sequence-level knowledge distillation with a considerable performance gap.", "The main contributions of this work can be summarized as follows: For the first time, we experimentally reveal that the implicit assumption of knowledge distillation does not always hold across tasks (e.g., text summarization and story ending generation, as demonstrated in our experiments).", "In other words, AG cannot guarantee better performance than NAG, thus resulting in undesirable NAG performance if using knowledge distillation to alleviate 
the multimodality problem.", "This empirical result could provide novel insight for revisiting the role of knowledge distillation in NAG.", "To alleviate the multimodality problem in various tasks, we propose POSPD, which employs POS sequences to constrain the NAG generation modes in the inference stage.", "It is simple but effective, being able to act as a plug-and-play assistant for NAG models.", "Such a linguistic-structure-based solution offers an effective and efficient alternative to the knowledge distillation paradigm in alleviating the multimodality problem (the source code and dataset are available at https://github.com/yangkexin/POSPD).", "In this section, we first analyze related works on alleviating the multimodality problem.", "Then, we review some representative works that introduce linguistic structure into text generation scenarios.", "Recently, various attempts have been made to alleviate the multimodality problem, which can be roughly divided into two types: (1) reducing the diversity of generation modes in training; (2) helping models select one generation mode in inference.", "The first type usually trains the NAG model under the guidance of an AG model (called the teacher AG), e.g., sequence-level knowledge distillation (Kim and Rush, 2016), learning from the AG model's hidden states (Li et al., 2019), and curriculum learning with an AG model (Liu et al., 2020d; Guo et al., 2020a).", "However, these methods implicitly assume that the teacher AG achieves better performance than the NAG models; otherwise, they may degrade the performance of the NAG models.", "As two typical methods of the second type, iterative and dynamic programming methods have achieved promising performance.", "In short, iterative models generate the target sentence by iteratively refining the latest output (Ghazvininejad et al., 2019; Kasai et al., 2020a; Guo et al., 2020b).", "Figure 2: An overview of the POS-constrained Parallel Decoding.", "Alternatively, dynamic programming methods use a heuristic searching strategy to select a better output from multiple decoded candidates (Sun et al., 2019; Saharia et al., 2020; Ghazvininejad et al., 2020).", "The biggest difference of our method is that it prespecifies the linguistic structure to constrain the generation of NAG in a plug-and-play way.", "Extensive experiments verify the effectiveness and efficiency of our idea.", "Text generation involves multiple tasks, such as style transfer (Liu et al., 2020a) and text filling (Liu et al., 2019).", "Dating back to the period of statistical machine translation (Liu et al., 2006; Galley et al., 2006), linguistic structure prediction has long been investigated for text generation.", "Previous works often model and leverage syntactic structures on the decoder side, such as modeling long-distance word correspondences with syntactic dependency trees (Wu et al., 2017), implicitly incorporating linguistic priors in the decoder (Eriguchi et al., 2017), and jointly decoding with syntactic structure (Feng et al., 2020).", "In NAG, linguistic structures can also be helpful.", "As a global pattern of the target sentence, linguistic structure can complement parallel decoding by helping models capture word dependencies.", "However, directly incorporating the aforementioned methods into NAG is less portable for current NAG models, since they were originally designed for AG.", "In comparison, POSPD can act as a plug-and-play component that uses a separate POS predictor to constrain NAG models during inference.", "Therefore, the NAG model can enjoy the benefits of the 
syntactic structure constraint while retaining its original model structure.", "In this section, we elaborate on our POSPD for the NAG model.", "For ease of presentation, we start from a toy example to illustrate the overview of POSPD in Sec. 3.1 and then give a detailed explanation of the implementation in Sec. 3.2.", "After that, we present the training details of POSPD in Sec. 3.3.", "An overview of our POSPD method is demonstrated in Figure 2, where a toy example of machine translation is used as a showcase.", "To be exact, the German sentence Vielen Dank. is fed simultaneously into both the POS predictor and the NAG model, and then the POS predictor generates a POS sequence JJ NNS PCT, which is further converted into a binarized mask matrix through a conversion dictionary.", "Meanwhile, the NAG model generates the primary probability distributions through a softmax layer.", "Here, as in Figure 1, the words Many and you get the highest probabilities, resulting in the mixed mode Many you. if the primary distribution is followed.", "To avoid such an undesirable result, our POSPD automatically adjusts the probabilities according to the binarized mask matrix.", "For example, the probability of you is adjusted to 0, since the POS tag of you is PRP rather than NNS.", "As a result, Many Thanks. gets the highest probability and hence is generated as the output.", "In this part, we detail POSPD by introducing the conversion dictionary building, the workflow of POSPD, and the core module, the POS predictor.", "Building a Conversion Dictionary: The key idea of POSPD is filtering out words that do not satisfy the prespecified POS sequence in the primary results of NAG.", "To implement our idea, we need a conversion dictionary $D_c$ that contains the mapping from POS tags to words.", "Given a target vocabulary $V_w$ of size $|V_w|$ and a POS tag set $V_s$, each key of $D_c$ is a POS tag in $V_s$, and its value is the set of words that can be assigned this POS tag.", "It is worth noting that a word may have multiple POS tags.", "Therefore, one word may appear in multiple sets in $D_c$.", "The POSPD Workflow: The workflow of POSPD is as follows: given a source sentence $x$, POSPD feeds it into both the NAG model's encoder and the POS predictor.", "After that, the POS predictor outputs a POS sequence $s = (s_1, s_2, \dots, s_L)$ for the target sentence.", "Meanwhile, the decoder of the NAG model generates a preliminary distribution matrix $D = (d_1, d_2, \dots, d_L)$, where $d_i$ represents the distribution over all words at the $i$-th position (the length of each $d_i$ is $|V_w|$).", "Note that the sentence length follows the length $L$ of the predicted POS sequence.", "For ease of implementation, the POS sequence $s$ is converted into a binarized mask matrix $M = (m_1, m_2, \dots, m_L)$.", "In detail, for each POS tag $s_i$, the corresponding binarized vector is $m_i = (m_i^1, m_i^2, \dots, m_i^{|V_w|})$, and its $j$-th entry is defined as: $m_i^j = 1$ if $w_j \in D_c^{s_i}$, and $m_i^j = 0$ if $w_j \notin D_c^{s_i}$, (1) where $w_j$ is the $j$-th word token in $V_w$.", "As a result, the POS sequence $s$ is replaced by $M$.", "Finally, we get the new generation results by $y = \arg\max(M \odot D)$, where $\odot$ denotes element-wise multiplication.", "The POS Predictor: As the core module of POSPD, our POS predictor is dedicated to outputting the POS tag sequence of the target sentence when given the source sentence as input.", "To train the POS predictor, we need to create a POS dataset where each sample is a pair consisting of a source sentence and the POS sequence of the target sentence (we use the NLTK POS tagger to create the POS sequences; see https://www.nltk.org/book/ch05.html).", 
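The constrained decoding step just described, $y = \arg\max(M \odot D)$, can be written in a few lines. The sketch below is illustrative only, assuming a toy conversion dictionary mapping POS tags to admissible words; all function and variable names are our own.

```python
# Illustrative sketch of POS-constrained decoding, y = argmax(M ⊙ D).
import numpy as np

def pos_constrained_decode(D, pos_seq, conversion_dict, vocab):
    """D: (L, |V_w|) word distributions from the NAG decoder.
    pos_seq: predicted POS tags (s_1, ..., s_L).
    conversion_dict: POS tag -> set of admissible words (the dictionary D_c).
    vocab: the target vocabulary V_w as a list of words."""
    word_index = {w: j for j, w in enumerate(vocab)}
    M = np.zeros_like(D)
    for i, tag in enumerate(pos_seq):           # build mask row m_i per Eq. (1):
        for w in conversion_dict.get(tag, ()):  # m_i^j = 1 iff w_j is in D_c^{s_i}
            M[i, word_index[w]] = 1.0
    return [vocab[j] for j in (M * D).argmax(axis=1)]  # y = argmax(M ⊙ D)
```

In the toy example above, a mask built from JJ NNS PCT would zero out you (tagged PRP), so the argmax falls on Many Thanks. rather than the mixed mode.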
"As shown in Figure 3, the architecture of our POS predictor is a variant of the standard Transformer (Vaswani et al., 2017).", "As shown by the gray arrow flow, the main difference between our POS predictor and the vanilla Transformer is the number of encoder and decoder layers.", "To be specific, unlike the vanilla Transformer, which contains six layers for both the encoder and the decoder, we use a multi-layer encoder and a one-layer decoder to reduce the inference time, because the complexity of decoding the POS sequence is much lower than that of decoding the original sentence.", "POS Predictor Optimization: To optimize the POS predictor, we take a multi-task learning (Evgeniou and Pontil, 2004) paradigm to jointly decode the word sequence and the POS sequence on the target side.", "The underlying hypothesis is that the target sentence is highly related to its POS sequence.", "Given a source sentence $x$, a POS sequence $s$, and a target sentence $y = (y_1, y_2, \dots, y_L)$, the learning objective is defined as the sum of the POS tagging loss (the first term) and the sentence prediction loss (the second term): $\mathcal{L} = \mathcal{L}_{pos} + \mathcal{L}_{word}$, (3) where the POS sequence prediction loss can be written as $\mathcal{L}_{pos} = -\sum_{t=1}^{L} \log P(s_t \mid s_{<t}, x)$, (4) and the target sentence prediction loss is $\mathcal{L}_{word} = -\sum_{t=1}^{L} \log P(y_t \mid s_{<t}, x)$. (5)", "In our method, the POS predictor uses an extra linear layer after the decoder to generate the target sentence, as shown in Figure 3.", "After training, we only need the POS-predicting linear layer for inference, thus enjoying better performance on POS sequence prediction.", "Almost all NAG models use the Byte Pair Encoding (BPE) (Sennrich et al., 2016) technique to build the word vocabulary with subword-level tokens.", "However, these tokens cannot be tagged by the mainstream POS taggers (Yarowsky and Ngai, 2001), which makes it difficult to build the POS dataset.", "To address this issue, we propose a simple but effective subword-level POS tagging method for our POS predictor.", "A simple example is demonstrated in Table 1: the NLTK toolkit tags the word gutacht as NN in the original sentence but cannot handle the BPE form gut ##ach ##t.", "Intuitively, we can assign the BPE form the same POS tag as gutacht (i.e., 
NN NN NN).", "However, this method increases the number of repeated tokens in the sentences generated by NAG models and even worsens the performance.", "The possible reason is that the aforementioned method cannot explicitly distinguish whether a POS tag is associated with a BPE token or a complete word.", "In contrast, our method tags the BPE form as NN1 NN2 NN3.", "As a result, the conversion dictionary is sparser, which improves the mapping between POS tags and the corresponding words.", "In addition, the word question is tagged as NN, since it does not have any sub-word tokens after BPE.", "In this section, we use multiple text generation datasets to comprehensively evaluate the effectiveness and efficiency of the proposed POSPD.", "For an extensive comparison, we compare our POSPD with sequence-level knowledge distillation and provide detailed analyses of multimodality alleviation and the time cost of dataset building.", "We conduct experiments on four widely-used benchmark datasets to evaluate POSPD: XSUM for text summarization, the ROCStories corpus for story ending generation, SQuAD 1.1 for question generation, and WMT14 (DE-EN) for machine translation.", "Meanwhile, we use the BERT-based BPE tokenizer (https://pypi.org/project/transformers/) for all datasets.", "The details are as follows: XSUM (Narayan et al., 2018) includes 227k British Broadcasting Corporation (BBC) online articles and the corresponding single-sentence summaries.", "The average sentence lengths are 358.5 words for input and 21.1 words for output.", "The ROCStories corpus (Mostafazadeh et al., 2016; https://cs.rochester.edu/nlp/rocstories/) contains 98k five-sentence stories.", "For each story, we use the last sentence as the target output and the other four sentences as the source input.", "We randomly sample 90k/4k stories for training/validation and use the remaining 4,160 for testing.", "The average sentence lengths are 39.64 words for input and 10.72 words for output.", "SQuAD 1.1 (Rajpurkar et al., 2016; https://rajpurkar.github.io/SQuAD-explorer/) is a machine reading comprehension dataset containing 98K passage-question-answer triples (Liu et al., 2020b).", "After processing, we obtain a question generation dataset.", "Following GLGE (Liu et al., 2020c), the input sentence is formatted as <answer [SEP] passage>.", "The average sentence lengths are 149.4 words for input and 11.5 words for output.", "WMT14 (DE-EN) (https://www.statmt.org/wmt14/translation-task.html) contains 4.5M translation pairs and 3k/3k pairs for validation/testing.", "The average sentence lengths are 25.07 words for input and 26.53 words for output.", "Following GLGE (Liu et al., 2020c), we use ROUGE-1 (R-1), ROUGE-2 (R-2), and ROUGE-L (R-L) (Lin, 2004) as evaluation metrics for text summarization, 
are as follows: CMLM The conditional masked language model randomly masks some target tokens and predicts them with the remaining ones.", "In inference, it masks several tokens with the lower confidence and retains other tokens with higher confidence during iterations, which is called mask-predict inference.", "Following Ghazvininejad et al. (2019), we use same settings for all generation tasks 8 .", "DisCo The disentangled context transformer aims to use different context information when predicting each token, being regarded as an effective improvement of CMLM.", "For better comparison, we also use mask-predict inference as same as CMLM.", "Meanwhile, we use the model settings described in Kasai et al. (2020a) for all generation tasks 9 .", "Knowledge Distillation Following Gu et al. (2018) which uses a standard transformer (Vaswani et al., 2017) as the teacher model to regenerate training set in the greedy method for NAG models (hereinafter described as Transformer-1 (6-6)), we report NAG models' performances on all text generation task when using the distilled training dataset.", "In the following discussion, the Transformer-1 and Transformer-4 denote the beam size of 1 and 4 in the beam search, respectively.", "Meanwhile, we also report the results of different Transformer model structures, where the (6-6) and (12-1) denote the version of six encoder layers, six decoder layers and the version of 12 encoder layers, one decoder layers, respectively.", "We follow the hyperparameters for standard Transformer in (Vaswani et al., 2017) for our POS predictor.", "One minor difference is the layers of encoder and decoder are set to 12 and 1 to make a fair comparison with AG models, respectively.", "All of the models are implemented based on Fairseq (Ott et al., 2019), and we follow the other specific parameter settings for both AG and NAG models in (Kasai et al., 2020b).", "In inference, the length beam, length penalty, and batch size are all set to 1 to calculate the main results (without any postprocessing) and latency.", "The latency is calculated through using the built-in time statistics function in Fairseq, which is tested on a single NVIDIA Tesla P100 GPU to keep in line with previous works (Gu et al., 2018).", "Meanwhile, the beam size of our POS predictor is set to 5.", "For the number of iterations, we report the iterations when the NAG model results are converged.", "In practice, the iterations of two NAG models are 4, 3, 3 and 10 on XSUM, SQuAD1.1, ROCStories and WMT14 (DE-EN).", "We evaluate the performance of two NAG models (CMLM and DisCo) on four text generation datasets, and further provide the results when using sequence-level data distillation (i.e., +Distill) and the POSPD (i.e., +POSPD), respectively.", "We report the main results in Table 2 and the inference time comparison in Table 3, from which we can make the following conclusions:", "1. POSPD consistently improve NAG models on four text generation dataset to a greater extent compared to knowledge distillation.", "POSPD consistently improve NAG models on four text generation tasks while knowledge distillation may even degrade performances of the NAG models such as XSUM (row 5 vs. row 6) and SQuAD 1.1 (row 8 vs. row 9).", "More importantly, although the knowledge distillation improves NAG models by 1.04/1.56 (row 5 vs. row 6, row 8 vs. row 9) on BLEU-4 in WMT14 (DE-EN), POSPD still beats the knowledge distillation version by 0.24/0.19 (row 6 vs. row 7, row 9 vs. row 10) on BLEU-4.", "2. 
Knowledge distillation does not always improve the NAG model, as the AG model may perform worse than NAG.", "In both the text summarization (XSUM) and story ending generation (ROCStories) tasks, the two original NAG models, CMLM and DisCo, outperform the AG model.", "It is obvious that the adoption of sequence-level knowledge distillation limits the performance of the NAG models in these cases.", "More interestingly, in question generation, the AG model outperforms the NAG models by 0.4/0.5 BLEU-4 (row 3 vs. row 5/row 8), yet knowledge distillation degrades the NAG models' BLEU-4 by 0.46/0.13 (row 5 vs. row 6, row 8 vs. row 9).", "3. POSPD does not add significant extra time when constraining the NAG models' generation during decoding.", "POSPD maintains its advantage in high-speed inference across all datasets.", "For example, on the SQuAD 1.1 dataset, the inference latency with POSPD remains much lower (1.00 vs. 0.62/0.66).", "Meanwhile, on WMT14 (DE-EN), which has the longest average target sentence length, POSPD still maintains its advantage in inference speed.", "Therefore, our POSPD can constrain the NAG model with negligible extra time, since POSPD and the NAG model predict their sequences (i.e., the POS sequence and the target sentence) in parallel.", "There remain loose ends in the discussion of our POSPD solution.", "In this section, we conduct discussions to shed light on other interesting properties of POSPD.", "The discussions are guided by the following three research questions: Q1: How does POSPD alleviate the multimodality problem?", "Q2: Is it time-consuming to build the POS dataset for a new task?", "Q3: Does the multi-task learning objective help POS tag prediction?", "To further analyze the roles of POSPD and sequence-level knowledge distillation in alleviating the multimodality problem, we conduct further statistical analyses on the generated results of the four datasets.", "Considering that the multimodality problem usually manifests as repeated or missing tokens in the generated sentences, we use two indicators, i.e., the repetition rate and the total number of tokens, to quantify them separately.", "Concretely, following the single-token repeat metric (Welleck et al., 2020), we define the repetition rate as the percentage of repetitions between two adjacent tokens among the total number of tokens in a sentence, and then average it over the dataset.", "The results are shown in Table 4, from which we can see that both knowledge distillation and POSPD can reduce the repetition rate of NAG models on the four datasets, and they are more effective on XSUM, which has longer sentences.", "As for token counts, knowledge distillation significantly reduces the number of tokens generated by NAG models on XSUM.", "In contrast, using POSPD makes the length of the sentences generated by NAG models remarkably close to the references without increasing the repetition rate.", "Considering that both POSPD and knowledge distillation require processing the training dataset when it comes to a new task/dataset (i.e., building the POS dataset for POSPD / regenerating the training set for knowledge distillation), we further analyze the time consumption of the two processing steps.", "As shown in Table 5, POSPD has a significant advantage over knowledge distillation in the time consumed for dataset building.", "Especially on the larger dataset, WMT14 (DE-EN), it can save even more time in building datasets, which is beneficial for rapid 
"In this part, we analyze the impact of using a multi-task learning strategy in POSPD's training stage.", "Due to space limitations, we conduct the ablation study on two datasets of different sizes, i.e., SQuAD 1.1 and XSUM.", "The results are shown in Table 6.", "Interestingly, predicting the POS sequence directly from the original sentence (i.e., POSPD w/o) can also improve the performance of the NAG models.", "More importantly, the multi-task learning strategy improves the performance of POSPD on both datasets with only a tiny increase in model parameters (a single linear layer).", "Meanwhile, it is only used during POSPD's training stage and does not affect the inference time of POSPD.", "In this paper, we revisit the role of knowledge distillation in alleviating the multimodality problem of NAG.", "In brief, we show experimentally that the basic assumption of these knowledge distillation methods, namely that the AG model is superior to the NAG model, does not hold for all text generation tasks.", "To alleviate the multimodality problem, we present a different solution that incorporates linguistic structure into NAG.", "Extensive experiments demonstrate that our POSPD significantly and consistently improves the NAG models in effectiveness and computational efficiency.", "As we give a first successful implementation that leverages one of the simplest linguistic structures to benefit NAG models in inference, this paradigm deserves closer and more detailed exploration.", "Thus, in the future, we will investigate how to let NAG models benefit from diverse and abundant linguistic structures in a more principled way.", "In addition, our experimental results suggest that future work might need to consider a wider range of generation tasks instead of only machine translation when assessing the performance of NAG models.", "This work was supported in part by the National Key R&D Program of China under Grant 2020YFB1406702, in part by NSFC under Grants 61625204 and 61836006, and by the Science and Technology Major Project of Sichuan Province under Grant 2020YFG0478." ]
[ "abstain", "abstain", "abstain", "result", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "objective", "abstain", "result", "abstain", "abstain", "method", "result", "method", "abstain", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "abstain", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "other", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "abstain", "objective", "result", "other" ]
[ "The evolution of language follows the rule of gradual change.", "Grammar, vocabulary, and lexical semantic shifts take place over time, resulting in a diachronic linguistic gap.", "As such, a considerable amount of texts are written in languages of different eras, which creates obstacles for natural language processing tasks, such as word segmentation and machine translation.", "Although the Chinese language has a long history, previous Chinese natural language processing research has primarily focused on tasks within a specific era.", "Therefore, we propose a cross-era learning framework for Chinese word segmentation (CWS), CROSSWISE, which uses the Switch-memory (SM) module to incorporate era-specific linguistic knowledge.", "Experiments on four corpora from different eras show that the performance of each corpus significantly improves.", "Further analyses also demonstrate that the SM can effectively integrate the knowledge of the eras into the neural network.", "As a human-learnable communication system, language does not remain static but instead evolves over time.", "The rate of change between different aspects of language, such as grammar, vocabulary, and word meaning, vary due to language contact and many other factors, which has led to the diachronic linguistic gap.", "An example of this can be seen in, That slepen al the nyght with open ye (That sleep all the night with open eye), which is a sentence from The Canterbury Tales, written in Middle English by Geoffrey Chaucer at the end of the 14th century.", "It is difficult for people without an understanding of Middle English to make sense of this sentence.", "Furthermore, some discourses Corresponding author Sample from MSR Golds (wait) (who) (come) (slove) (ne)", "contain both modern English and Old English due to citation or rhetorical need.", "For example, Shake-speare's fourteen lines of poetry are often quoted in contemporary novels.", "This kind of era-hybrid text creates barriers to natural language processing tasks, such as word segmentation and machine translation.", "The Chinese language has the honor of being listed as one of the world's oldest languages and, as such, has seen several changes over its long history.", "It has undergone various incarnations, which are recognized as Archaic (Ancient) Chinese, Middle Ancient Chinese, Near Ancient Chinese, and Modern Chinese.", "Notably, most Chinese NLP tasks skew towards Modern Chinese.", "Previous research has primarily focused on addressing the CWS problem in Modern Chinese and has achieved promising results, such as Chinese Word Segmentation (CWS) (Zheng et al., 2013; Chen et al., 2015; Zhang et al., 2016; Xu and Sun, 2016; Shao et al., 2017; Yang et al., 2017; Zhang et al., 2018; Tian et al., 2020b,a).", "Although CWS for ancient Chinese has been recognized in recent years, the processing of language-hybrid texts is still an open question.", "As shown in 7830 Table 1, PKUSeg (Luo et al., 2019a) is a Chinese segmenter that is trained with a modern Chinese corpus; while it can segment modern Chinese sentences correctly, its accuracy drops sharply when applied to ancient Chinese.", "Conversely, the ancient Chinese segmenter JiaYan 1 performs well on ancient Chinese text but fails to perform well on Modern Chinese texts.", "Therefore, it is necessary to develop appropriate models to undertake cross-era NLP tasks.", "To address this need, we propose CROSSWISE (CROsS-ear Segmentation WIth Switch-mEmory), which is a learning framework that deals with cross-era Chinese word 
segmentation (CECWS) tasks.", "The framework integrates era-specific knowledge with the Switch-memory mechanism to improve CWS for era-hybrid texts.", "More specifically, we utilized both the CWS and sentence classification tasks to predict segmentation results and era labels.", "We also incorporated the Switch-memory module, which consists of key-value memory networks (Miller et al., 2016) and a switcher, to include knowledge of different eras.", "Key-value memory networks are used to store era-specific knowledge in several memory cells.", "The sentence discriminator is considered to be a switcher that governs the quantity of information from each memory cell that is integrated into the model.", "For each memory cell, we map candidate words from the dictionary and word boundary information to keys and", "values.", "Cross-era learning is introduced for CWS; we share all parameters in a multi-task architecture.", "The shared encoder is used to capture information that several datasets from different eras have in common.", "This single model can produce different word segmentation granularities according to different eras.", "The Switch-memory mechanism is used to integrate era-specific knowledge into the neural network, which can help improve the performance on out-of-vocabulary (OOV) words.", "This study proposes two switcher modes (hard-switcher and soft-switcher) to control the quantity of information that each cell will feed into the model.", "Experimental results from four CWS datasets of different eras confirm that the performance on each corpus improves significantly.", "Further analyses also demonstrate that this model is flexible for cross-era Chinese word segmentation.", "Chinese word segmentation is generally considered to be a sequence labeling task, namely, to assign a label to each character in a given sentence.", "In recent years, many deep learning methods have been successfully applied to CWS (Zheng et al., 2013; Chen et al., 2015; Zhang et al., 2016; Xu and Sun, 2016; Shao et al., 2017; Yang et al., 2017; Kurita et al., 2017; Liu et al., 2018; Zhang et al., 2018; Ye et al., 2019a; Higashiyama et al., 2019; Huang et al., 2020b; Tian et al., 2020b,a,c; Liu et al., 2021).", "Among these studies, some indicate that context features and external knowledge can improve CWS accuracy (Kurita et al., 2017; Yang et al., 2017; Zhang et al., 2018; Liu et al., 2018; Tian et al., 2020b,a,c).", "Studies by Liu et al. (2018) and Zhang et al. (2018) leveraged the dictionary to improve the task; n-grams are also effective context features for CWS (Kurita et al., 2017; Tian et al., 2020b; Shao et al., 2017).", "The use of syntactic knowledge generated by existing NLP toolkits to improve CWS and part-of-speech (POS) tagging has been established by Tian et al. (2020b).", "Furthermore, Tian et al. (2020c) incorporated wordhood information into neural segmenters and achieved state-of-the-art performance at the time.", "It is common practice to jointly train CWS and other related tasks based on a multi-task framework.", "Chen et al. (2017) took each segmentation criterion as a single task and proposed an adversarial multi-task learning framework for multi-criteria CWS by extracting shared knowledge from multiple segmentation datasets.", "Yang et al. 
(2017) investigated the effectiveness of several external sources for CWS using a globally optimized beam-search model.", "They considered each type of external resource to be an auxiliary classification task, and then leveraged multi-task learning to pre-train the shared parameters used for the context modeling of Chinese characters.", "Liu et al. (2018) jointly trained the CWS and word classification tasks in a unified framework.", "Inspired by these successful studies, this study also incorporated ideas from the multi-task framework and jointly trained the CWS task and the sentence classification task. [Figure 1: CROSSWISE for cross-era Chinese word segmentation.]", "Recently, some studies have noticed the linguistic gap due to the differences between eras.", "Ceroni et al. (2014) proposed a time-aware re-contextualization approach to bridge the temporal context gap.", "Chang et al. (2021) reframed the translation of ancient Chinese texts as a multi-label prediction task, then predicted both the translation and its particular era by dividing ancient Chinese into three periods.", "Key-value memory networks were introduced for the task of directly reading documents and answering questions by Miller et al. (2016), which helped bridge the gap between direct methods and the use of human-annotated or automatically constructed knowledge bases.", "Tian et al. (2020c) applied this mechanism to incorporate n-grams into the neural model for CWS.", "Encouraged by the above works, this study designed a multi-task model for cross-era CWS by jointly training the sentence classification task and CWS through a unified framework model.", "Key-value memory networks are used to integrate era-specific knowledge into the neural network, as was done in research by Tian et al. (2020c).", "3 The Proposed Framework 3.1 BERT-CRF Model for Chinese Word Segmentation Chinese word segmentation is generally viewed as a character-based sequence labeling task.", "Specifically, given the sentence $X = \{x_1, x_2, \ldots, x_T\}$, each character in the sequence is labeled as one of $\mathcal{L} = \{B, M, E, S\}$, indicating that the character is at the beginning, middle, or end of a word, or is a single-character word.", "CWS aims to determine the ground-truth label sequence $Y = \{y_1, y_2, \ldots, y_T\}$: $\hat{Y} = \arg\max_{Y \in \mathcal{L}^T} P(Y|X)$ (1). The universal end-to-end neural CWS architecture usually contains an encoder and a decoder.", "The framework used in this study is shown in Figure 1; the functions of each part are explained below.", "Encoding layer.", "According to Fu et al. (2020), although BERT-based (Devlin et al., 2019) models for CWS are imperfect, BERT is superior, in many aspects, to models that have not been pre-trained.", "For example, BERT is more suitable for dealing with long sentences; therefore, this study utilizes BERT (Devlin et al., 2019), released by Google, as the shared encoder, which is pre-trained on a large amount of unlabeled Chinese data.",
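The character-level label set L = {B, M, E, S} just described can be illustrated in a few lines; the following is our own sketch of this standard conversion, not code from the paper.

```python
def bmes_labels(words):
    """Convert a segmented sentence (list of words) into per-character
    labels: B/M/E for the beginning/middle/end of a multi-character word,
    S for a single-character word."""
    labels = []
    for w in words:
        if len(w) == 1:
            labels.append("S")
        else:
            labels.extend(["B"] + ["M"] * (len(w) - 2) + ["E"])
    return labels

# e.g. bmes_labels(["ab", "c", "def"]) -> ['B', 'E', 'S', 'B', 'M', 'E']
```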
"Formally, $h_1, \ldots, h_T = \mathrm{BERT}(x_1, \ldots, x_T)$ (2), where $h_i$ is the representation for $x_i$ from the encoder.", "Decoding layer.", "This study is able to use a shared decoder for samples from different eras because era-aware representations have been combined for each character by the Switch-memory module.", "There are various algorithms that can be implemented as decoders, such as conditional random fields (CRF) (Lafferty et al., 2001) and softmax.", "According to Tian et al. (2020c), CRF performs better in word segmentation tasks.", "Therefore, considering the framework of this study, CRF is used as the decoder.", "In the CRF layer, $P(Y|X)$ in Eq.", "1 can be represented as: $P(Y|X) = \frac{\psi(Y|X)}{\sum_{Y' \in \mathcal{L}^T} \psi(Y'|X)}$ (3), where $\psi(Y|X)$ is the potential function, and only interactions between two successive labels are considered.", "$\psi(Y|X) = \prod_{i=2}^{T} \psi(X, i, y_{i-1}, y_i)$ (4), with $\psi(X, i, y', y) = \exp(s(X, i)_y + b_{y'y})$ (5), where $b_{y'y} \in \mathbb{R}$ is a trainable parameter for the label pair $(y', y)$.", "The score function $s(X, i) \in \mathbb{R}^{|\mathcal{L}|}$ calculates the score of each label for the $i$-th character: $s(X, i) = W_s a_i + b_s$ (6), where $a_i$ is the final representation for the $i$-th character.", "$W_s \in \mathbb{R}^{d_a \times |\mathcal{L}|}$ and $b_s \in \mathbb{R}^{|\mathcal{L}|}$ are trainable parameters.", "The Switch-memory consists of $d$ memory cells and a switcher.", "For an input sentence, there are $d$ memory cells for each character.", "The switcher governs how much information from each cell will be integrated into the network.", "The state of the switcher depends on the sentence classifier.", "The dictionary has been a useful external source to improve the performance of CWS in many", "studies (Yang et al., 2017; Liu et al., 2018; Zhang et al., 2018).", "However, previous research has incorporated the dictionary in limited ways, either by concatenating candidate-word and character embeddings or by requiring handcrafted templates.", "In this study, key-value memory networks are utilized to incorporate dictionary information; they were initially applied to the question answering (QA) task to better store the prior knowledge required by QA.", "Furthermore, this network structure can also be used to store the existing knowledge that is required by cross-era CWS.", "Ancient Chinese is not a static language but is instead a diachronic language.", "Ancient Chinese has three development stages: Ancient, Middle Ancient, and Near Ancient.", "Each stage has a specific", "lexicon and word segmentation granularity.", "Therefore, this research has constructed four dictionaries $D = \{D_0, D_1, D_2, D_3\}$, associated with the four development stages of Chinese, respectively; each dictionary is era-specific.", "Given an input sentence, four memory cells are generated for each character in the sentence according to the four dictionaries, and each memory cell maps candidate words and word boundary information to keys and values.", "Candidate words as keys.", "Following Tian et al. (2020c), for each $x_i$ in the input sentence, each dictionary contains many words containing $x_i$; we only keep the n-grams that occur in the input sentence and appear in the dictionary, resulting in $w_i^d = \{w_{i,1}^d, w_{i,2}^d, \ldots, w_{i,j}^d, \ldots, w_{i,m_i}^d\}$, where $x_i$ is part of the word $w_{i,j}^d \in D_d$, $d \in [0, 3]$.", "We use an example to illustrate our idea.", "For the input sentence shown in Figure 1, there are many n-grams containing $x_3$ (sea); we only retain those that appear in $D_0$ for the first memory cell; thus, $w_3^0 = \{$(Haikou), (estuary), (sea)$\}$.",
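The candidate-word keys described above (n-grams of the input that cover x_i and occur in the era dictionary) can be collected as follows. This is our illustrative sketch; the maximum n-gram length is an assumption.

```python
def candidate_keys(sentence, i, dictionary, max_n=4):
    """Keys of one memory cell for character x_i: every n-gram of the
    input sentence that covers position i and appears in the era-specific
    dictionary D_d."""
    keys = []
    for start in range(max(0, i - max_n + 1), i + 1):
        for end in range(i + 1, min(len(sentence), start + max_n) + 1):
            ngram = sentence[start:end]
            if ngram in dictionary:
                keys.append(ngram)
    return keys

# e.g. candidate_keys("abcde", 2, {"c", "bc", "cde"}) -> ['bc', 'c', 'cde']
```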
"Similarly, we can generate $w_3^1$, $w_3^2$, and $w_3^3$ for the second, third, and fourth memory cells according to $D_1$, $D_2$, and $D_3$.", "Then, the memory cell computes a probability for each key (the key embeddings are denoted $e_{w_{i,j}^d}$ for each $w_{i,j}^d$): $p_{i,j} = \frac{\exp(h_i \cdot e_{w_{i,j}^d})}{\sum_{j'=1}^{m_i} \exp(h_i \cdot e_{w_{i,j'}^d})}$ (7), where $h_i$ is the embedding for $x_i$ produced by the encoder.", "Word boundary information as values.", "As we know, CWS aims to find the best segmentation positions.", "However, each character $x_i$ may have different positions in each $w_{i,j}^d$.", "For example, $x_i$ may be at the beginning, middle, or end of $w_{i,j}^d$, or $x_i$ may be a single word.", "Different positions convey different information.", "Therefore, we use the boundary information of candidate words as values for the key-value networks.", "As shown in Table 2, a set of word boundary values $\{VB, VE, VS\}$ with embeddings $\{e_{VB}, e_{VE}, e_{VS}\}$ represents $x_i$'s different positions in $w_{i,j}^d$, and we map $x_i$ to different value vectors according to its position.", "As a result, each $w_i^d$ for $x_i$ has a value list $V_i^d = [v_{i,1}^d, v_{i,2}^d, \ldots, v_{i,j}^d, \ldots, v_{i,m_i}^d]$.", "In Figure 1, for $x_3$ (sea) and the first memory cell, we can map the candidate words' boundary information to the value list $V_3^0 = [VS, VB]$.", "The four cells for $x_i$ yield the value lists $V_i = [v_i^0, v_i^1, v_i^2, v_i^3]$.", "Then the $d$-th memory cell embedding for $x_i$ is computed as the weighted sum of all keys and values:", "$o_i^d = \sum_{j=1}^{m_i} p_{i,j}\, e_{v_{i,j}^d}$ (8),", "where $e_{v_{i,j}^d}$ is the embedding for $v_{i,j}^d$.", "Next, the final character embedding is the element-wise sum of $o_i$ and $h_i$, or their concatenation, passed through a fully connected layer: $a_i = W_o(o_i \oplus h_i)$ (9), where the operation $\oplus$ can be sum or concatenation, $W_o$ is a trainable parameter matrix, and the output $a_i$ is the final representation for the $i$-th character.", "$o_i$ is the final memory embedding for the $i$-th character and is calculated as a switcher-weighted combination of the cell embeddings $o_i^0, \ldots, o_i^3$ (Eq. 10).", "The switcher is used to control how much information from each memory cell is combined with the output of the encoder.", "Inspired by the benefits of multi-task learning, a classifier has been added on top of the encoder to predict the era label of the input sentence.", "The discriminator predicts the probability of the correct era label, $z$, conditioned on the hidden state of the encoder, $H$, which is the output of [CLS] from BERT.", "The loss function of the discriminator is $J_{disc} = -\log P(z|H)$; minimizing this negative log-likelihood maximizes $P(z|H)$.", "In this study, $H$ is fed into a fully connected layer followed by a softmax layer to obtain a probability for each era label.", "Switch mode.", "For the switcher, we propose two switcher modes, hard-switcher and soft-switcher.", "Hard-switcher selects a memory cell according to the final predicted result from the discriminator.", "For the input sentence in Figure 1, if the predicted result is the modern era, then the switcher will switch to the memory cell associated with modern Chinese, and $o_i = o_i^d$.", "Soft-switcher mixes the memory cells according to the predicted probabilities, which serve as the weight of each memory cell.", "Soft-switcher means that the information from all four dictionaries may be fused into the current character's representation.", "For example, if the predicted probability list is $[0.1, 0.2, 0.1, 0.6]$, the final memory representation for the $i$-th character is $o_i = 0.1\, o_i^0 + 0.2\, o_i^1 + 0.1\, o_i^2 + 0.6\, o_i^3$.",
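Putting Eqs. (7)-(10) together, the per-character Switch-memory computation can be sketched as below. This is our reconstruction under stated assumptions: the dot-product key scoring and the array shapes are ours, not taken verbatim from the paper.

```python
import numpy as np

def memory_cell(h_i, key_emb, value_emb):
    """One era-specific cell: softmax attention over candidate-word keys
    (Eq. 7), then the weighted sum of boundary-value embeddings (Eq. 8)."""
    scores = key_emb @ h_i                    # (m_i,) key scores for x_i
    p = np.exp(scores - scores.max())
    p /= p.sum()                              # softmax over candidate words
    return p @ value_emb                      # o_i^d, shape (dim,)

def switch_memory(h_i, cells, era_probs, hard=True):
    """Combine the four cells (Eq. 10): hard mode selects the cell of the
    era predicted by the discriminator; soft mode mixes the cells with the
    predicted era probabilities."""
    os = np.stack([memory_cell(h_i, k, v) for k, v in cells])   # (4, dim)
    return os[int(np.argmax(era_probs))] if hard else era_probs @ os
```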
"In this framework, the discriminator is optimized jointly with the CWS task; both tasks share the same encoding layer.", "Different weights are assigned to the losses of the two tasks; the final loss function is $J = \lambda J_{cws} + (1-\lambda) J_{disc}$ (11), where $\lambda$ is the weight that controls the interaction of the two losses.", "$J_{cws}$ is the negative log-likelihood of the true labels on the training set: $J_{cws} = -\sum_{n=1}^{N} \log P(Y_n | X_n)$ (12),", "where $N$ is the number of samples in the training set, and $Y_n$ is the ground-truth tag sequence of the $n$-th sample.", "The model proposed in this study has been evaluated on four CWS datasets from the Academia Sinica Ancient Chinese Corpus (ASACC; http://lingcorpus.iis.sinica.edu.tw/ancient) and SIGHAN 2005 (Emerson, 2005).", "The statistics of all the datasets are listed in Table 3.", "Among these datasets, PKIWI, DKIWI, and AKIWI, from ASACC, correspond to Near Ancient Chinese, Middle Ancient Chinese, and Ancient Chinese, respectively, and MSR from SIGHAN 2005 is a Modern Chinese CWS dataset.", "It should be noted that PKIWI, DKIWI, and AKIWI are in traditional Chinese and were converted into simplified Chinese prior to segmentation.", "For PKIWI, DKIWI, and AKIWI, 5K examples were randomly picked as a test set; then, 10% of the examples were randomly selected from the training set as the development set.", "[Table 3: Detail of the four datasets, giving train/test words, characters, word types, character types, sentences, and test OOV rates: AKIWI 4.35%, DKIWI 4.91%, PKIWI 1.71%, MSR 2.60%.] Similar to previous work (Chen et al., 2017), all datasets are pre-processed by", "replacing Latin characters, digits, and punctuation with a unique token.", "In the cross-era learning scenarios, all of the training data from the four era corpora were used as the training set.", "Then, all of the test data from the four corpora were used as the cross-era test set to evaluate the model.", "Finally, F1 and OOV recall rates ($R_{oov}$) were computed according to the different eras.", "In our experiments, for the encoder BERT, we follow the default settings of BERT (Devlin et al., 2019).", "The key embedding size and value embedding size are the same as the output size of the encoder, and they are randomly initialized.", "For the baseline model Bi-LSTM, the character embedding size is set to 300, and the hidden state size is set to 100.", "For the transformer, the same settings as Qiu et al. (2020) were followed.",
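The joint objective in Eq. (11) above reduces to a one-line interpolation of the two losses; a minimal sketch, together with the discriminator's negative log-likelihood, follows. The paper reports 0.7 as the best-performing weight.

```python
import numpy as np

def discriminator_loss(era_probs, z):
    """J_disc = -log P(z | H): negative log-likelihood of the gold era z."""
    return -np.log(era_probs[z])

def joint_loss(j_cws, j_disc, lam=0.7):
    """Eq. (11): J = lam * J_cws + (1 - lam) * J_disc, with lambda = 0.7
    identified by searching from 0 to 1 in steps of 0.1."""
    return lam * j_cws + (1 - lam) * j_disc
```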
"The loss weight coefficient $\lambda$ is a hyper-parameter that balances the classification loss and the segmentation loss; the model achieves its best performance when $\lambda$ is set to 0.7, which was identified by searching from 0 to 1 with an interval of 0.1.", "The words from the training set are used as the internal dictionary, and each training set generates its own dictionary.", "The simplified Chinese dictionary sourced from jieba is used as the external dictionary for MSR, and words from the ErYa (an ancient dictionary) and ancient Chinese textbooks were extracted as the external dictionary for AKIWI.", "For PKIWI and DKIWI, high-frequency bi-grams and tri-grams were extracted from the corresponding period corpora as external dictionaries.", "To begin, in this section, the experimental results of the proposed model on the test sets from the four cross-era CWS datasets are provided, as can be seen in Table 4.", "Several observations can be made from the data provided in Table 4.", "First, BERT-CRF in single-era scenarios (ID:1 in Table 4) and cross-era learning without the SM module (ID:6) are compared.", "As can be seen in the table, when mixing the four datasets, the average F1 value over all datasets decreases slightly.", "Single-era dataset learning has an average F1 value of 97.61, while cross-era learning without the Switch-memory module has an average F1 value of 97.32.", "This indicates that performance cannot be improved by merely mixing several datasets.", "Second, the models with the SM mechanism (ID:3,5,7) outperformed the baseline models (ID:2,4,6) in terms of F1 value and $R_{oov}$ on all datasets.", "For example, the average F1 score for BERT-CRF with the SM module (ID:7) improved by 0.92% when compared to BERT-CRF (ID:6), and the average $R_{oov}$ went from 76.15 to 82.37.", "This indicates that the Switch-memory can help improve segmentation and $R_{oov}$ performance by integrating era-specific knowledge.", "Third, among different encoders, the improvement of the pre-trained encoder BERT on the F1 value is still significant.", "When using Bi-LSTM as the encoder (ID:2,3), the average F1 value and $R_{oov}$ are 89.15 and 90.66, respectively.", "When using BERT as the encoder (ID:6,7), the F1 value improves by approximately 8%.", "The reason for this may be that the pre-training process supplements some effective external knowledge.", "To further illustrate the validity and effectiveness of this model, the best results from this study are compared to works that have been previously identified as state-of-the-art.",
explained by the fact that the encoder is pre-trained with a large quantity of modern Chinese data, and the memory cells in this study incorporate some ancient era knowledge into the model, which helps to boost the performance of the three ancient Chinese datasets.", "The second ablation study is to evaluate the effect of the switcher.", "For this experiment, the average of four embedded memory cells is used as the final memory representation.", "The comparison between the second and the third line indicates that the switcher is an important component when integrating era-specific information.", "In summary, in terms of average performance, the switcher and the memory cells can both boost the performance of R oov considerably.", "In this section, the effect of the switcher mode and the combination mode (concatenate or sum) of memory embedding and character embedding is investigated.", "To better understand the effect of the different configurations, this study examines the four pair settings to train the model on the four datasets in this study; the results are shown in Figure 2, and different color bars represent different datasets.", "As can be seen, soft-switcher significantly improves the F1 value on MSR compared to hard-switcher , while the other three datasets prefer hard-switcher , which suggests that the forward direction of knowledge dissemination from ancient Chinese to mod-7837 ern Chinese can help modern Chinese word segmentation, and that the reverse knowledge dissemination will have a negative impact on ancient Chinese word segmentation.", "Concatenating memory embedding and character embedding from the encoder outperforms the combination of the two; therefore, this study chose the pair of configurations, hard +concat, to obtain the experimental results in the last row of Table 4 and Table", "5. 4.6 Case study This study further explores the benefits of the SM mechanism by comparing some cases from BERT-CRF and CROSSWISE.", "Table 7 lists three examples from the test sets of Ancient Chinese and modern Chinese datasets.", "According to the results, in the first sentence, (swept) and (grass) are two words in ancient Chinese, BERT-CRF treats these two words as a single word; BERT-CRF gives the second sentence the wrong boundary prediction in (middle) and (through).", "However, this study's CROSSWISE achieves all exact segmentation of these instances.", "The third sample is a sentence written in both ancient and modern Chi-nese, , which is a famous classical sentence in ancient Chinese.", "CROSSWISE also can split the sentence correctly.", "Therefore, it can be concluded that the model is flexible for Chinese word segmentation of era-hybrid texts and can produce different segmentation granularity of words according to the era of the sentence.", "Concurrently, it shows that the SM mechanism is effective in integrating era-specific linguistic knowledge according to different samples.", "In this study, a flexible model, called CROSSWISE, for cross-era Chinese word segmentation is proposed.", "This model is capable of improving the performance of each dataset by fully integrating era-specific knowledge.", "Experiments on four corpora show the effectiveness of this model.", "In the future, the incorporation of other labeling tasks into CROSSWISE, such as POS tagging and named entity recognition, may prove to be insightful.", "This research is supported by the NSFC project the Construction of the Knowledge Graph for the History of Chinese Confucianism (Grant No. 
72010107003).", "We would like to thank Professor Jun Wang and Hao Yang for their insightful discussion.", "The datasets used in this paper are open datasets and do not involve any ethical issues." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "abstain", "other", "other", "method", "other", "other", "method", "other", "other", "method", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "other", "other", "method" ]
[ "We present an annotation approach to capturing emotional and cognitive empathy in student-written peer reviews on business models in German.", "We propose an annotation scheme that allows us to model emotional and cognitive empathy scores based on three types of review components.", "Also, we conducted an annotation study with three annotators based on 92 student essays to evaluate our annotation scheme.", "The obtained inter-rater agreement of =0.79 for the components and the multi =0.41 for the empathy scores indicate that the proposed annotation scheme successfully guides annotators to a substantial to moderate agreement.", "Moreover, we trained predictive models to detect the annotated empathy structures and embedded them in an adaptive writing support system for students to receive individual empathy feedback independent of an instructor, time, and location.", "We evaluated our tool in a peer learning exercise with 58 students and found promising results for perceived empathy skill learning, perceived feedback accuracy, and intention to use.", "Finally, we present our freely available corpus of 500 empathy-annotated, student-written peer reviews on business models and our annotation guidelines to encourage future research on the design and development of empathy support systems.", "Empathy is an elementary skill in society for daily interaction and professional communication and is therefore elementary for educational curricula (e.g., Learning Framework 2030 (OECD, 2018)).", "It is the ability to simply understand the other person's perspective [ . . . ] and to react to the ob-Figure 1: Empathy annotation scheme. First, a text paragraph is classified into a peer review component ( strengths, weakness, improvement suggestions ). Second, the same annotator is then scoring the cognitive and emotional empathy level of the components based on our annotation guideline on a 1-to-5 scale. 
served experiences of another, (Davis, 1983, p.1) 1 .", "Empathy skills not only pave the foundation for successful interactions in digital companies, e.g., in agile work environments (Luca and Tarricone, 2001), but they are also one of the key abilities in the future that will distinguish the human workforce and artificial intelligence agents from one another (Poser and Bittner, 2020).", "However, besides the growing importance of empathy, research has shown that empathy skills of US college students decreased from 1979 to 2009 by more than thirty percent and even more rapidly between 2000 to 2009 (Konrath et al., 2011).", "On these grounds, the Organization for Economic Cooperation and Development (OECD) claims that the training for empathy skills should receive a more prominent role in today's higher education (OECD, 2018).", "1 Being aware that empathy is a multidimensional construct, in this study, we focus on emotional and cognitive empathy (Spreng et al., 2009; Davis, 1983).", "To train students with regard to empathy, educational institutions traditionally rely on experiential learning scenarios, such as shadowing, communication skills training, or role playing (Lok and Foster, 2019; van Berkhout and Malouff, 2016).", "Individual empathy training is only available for a limited number of students since individual feedback through a student's learning journey is often hindered due to large-scale lectures or the growing field of distance learning scenarios such as Massive Open Online Classes (MOOCs) (Seaman et al., 2018; Hattie and Timperley, 2007).", "One possible path for providing individual learning conditions is to leverage recent developments in computational linguistics.", "Language-based models enable the development of writing support systems that provide tailored feedback and recommendations (Santos et al., 2018), e.g., like those already used for argumentation skill learning (Wambsganss et al., 2020a, 2021b).", "Recently, studies have started investigating elaborated models of human emotions (e.g., Wang et al. (2016), Abdul-Mageed and Ungar (2017), Buechel and Hahn (2018), or Sharma et al. 
(2020)), but available corpora for empathy detection are still rare.", "Only a few studies address the detection and prediction of empathy in natural texts (Khanpour et al., 2017; Xiao et al., 2012), and, to the best of our knowledge, only one corpus is publicly available for empathy modelling, based on news story reactions (Buechel et al., 2018).", "Past literature therefore lacks 1) publicly available empathy-annotated data sets, 2) empathy annotation models based on rigorous annotation guidelines combined with annotation studies to assess the quality of the data, 3) an alignment of empathy with psychological constructs and theories from the literature, and 4) an embedding and real-world evaluation of novel modelling approaches in collaborative learning scenarios (Rose et al., 2008).", "We introduce an empathy annotation scheme and a corpus of 500 student-written reviews that are annotated for the three types of review components, strengths, weaknesses, and suggestions for improvement, and their embedded emotional and cognitive empathy levels based on psychological theory (Davis, 1983; Spreng et al., 2009).", "We trained different models and embedded them as feedback algorithms in a novel writing support tool, which provided students with individual empathy feedback and recommendations in peer learning scenarios.", "The measured empathy skill learning (Spreng et al., 2009), the perceived feedback accuracy (Podsakoff and Farh, 1989), and the intention to use (Venkatesh and Bala, 2008) in a controlled evaluation with 58 students provided promising results for using our approach in different peer learning scenarios to offer quality education independent of an instructor, time, and location.", "Our contribution is fourfold: 1) we derive a novel annotation scheme for empathy modeling based on psychological theory and previous work on empathy annotation (Buechel et al., 2018); 2) we present an annotation study based on 92 student peer reviews and three annotators to show that the annotation of empathy in student peer reviews is reliably possible; 3) to the best of our knowledge, we present the second freely available corpus for empathy detection in general and the first corpus for empathy detection in the educational domain, based on 500 student peer reviews collected in our lecture about business innovation in German; 4) we embedded our annotation approach as predictive models in a writing support system and evaluated it with 58 students in a controlled peer learning scenario.", "We hope to encourage research on student-written empathetic texts and on writing support systems that train students' empathy skills based on NLP, towards quality education independent of a student's location or instructor.", "The Construct of Empathy The ability to perceive the feelings of another person and react to their emotions in the right way requires empathy, the ability of one individual to react to the observed experiences of another (Davis, 1983, p. 1).", "Empathy plays an essential role in daily life in many practical situations, such as client communication, leadership, or agile teamwork.", "Despite the interdisciplinary research interest, the term empathy is defined from multiple perspectives in terms of its dimensions or components (Decety and Jackson, 2004).", "Aware of the multiple perspectives on empathy, in this annotation study, we focused on the cognitive and emotional components of empathy as defined by Davis (1983) and Lawrence et al. 
(2004).", "Therefore, we follow the 'Toronto Empathy Scale' (Spreng et al., 2009) as a synthesis of instruments for measuring and validating empathy.", "Hence, empathy consists of both emotional and cognitive components (Spreng et al., 2009).", "While emotional empathy lets us perceive what other people feel, cognitive empathy is the human ability to recognize and understand other individuals (Lawrence et al., 2004).", "Emotion and Empathy Detection In NLP, the detection of empathy in texts is usually regarded as a subset of emotion detection, which in turn is often referred to as part of sentiment analysis.", "The detection of emotions in texts has made major progress, with sentiment analysis being one of the most prominent areas in recent years (Liu, 2015).", "However, most scientific studies have focused on predicting the polarity of words for assessing negative and positive notions (e.g., in online forums (Abbasi et al., 2008) or Twitter postings (Rosenthal et al., 2018)).", "Moreover, researchers have also started investigating more elaborated models of human emotions (e.g., Wang et al. (2016), Abdul-Mageed and Ungar (2017), and Mohammad and Bravo-Marquez (2017)).", "Several corpora exist where researchers have annotated and assessed the emotional level of texts.", "For example, Scherer and Wallbott (1994) published an emotion-labelled corpus based on seven different emotional states.", "Strapparava and Mihalcea (2007) classified news headlines based on the basic emotions scale of Ekman (1992) (i.e., anger, disgust, fear, happiness, sadness, and surprise).", "More recently, Chen et al. (2018) published EmotionLines, an emotion corpus of multi-party conversations, as the first data set with emotion labels for all utterances based solely on their textual content.", "Bostan and Klinger (2018) presented a novel unified domain-independent corpus based on eleven emotions as the common label set.", "However, despite the multiple corpora available for emotion detection in texts, corpora for empathy detection are rather rare.", "As Buechel et al. (2018) also outline, the construction of corpora for empathy detection and empathy modelling might be less investigated due to the various psychological perspectives on the construct of empathy.", "Most of the works on empathy detection therefore focus on spoken dialogue, addressing conversational agents, psychological interventions, or call center applications (e.g., McQuiggan and Lester (2007), Perez-Rosas et al. (2017), Alam et al. (2018), Sharma et al. (2020)) rather than written texts.", "Consequently, there are hardly any corpora available in different domains and languages that enable researchers to train models to detect the empathy level of texts, e.g., for providing students with individual empathy feedback (Buechel et al., 2018).", "Empathy Annotated Corpora and Annotation Schemes Only a few studies address the detection and prediction of empathy in natural language texts (e.g., Khanpour et al. (2017) and Xiao et al. (2012)).", "Presenting the first and only available gold standard data set for empathy detection, Buechel et al. (2018) constructed a corpus in which crowd-workers were asked to write empathic reactions to news stories.", "Before the writing tasks, the crowd-workers were asked to complete a short survey with self-reported items to measure their empathy level and their personal distress, based on Batson et al. 
(1987).", "The scores from the survey were then taken as the annotation score for the overall news reaction message.", "The final corpus consisted of 1,860 annotated messages (Buechel et al., 2018).", "Nevertheless, previous empathy annotations on natural texts merely relied on intuition-based labels instead of rigorous annotation guidelines combined with annotation studies by researchers to assess the quality of the corpora (e.g., as is done for corpora of other writing support tasks, such as the argumentative student essays of Stab and Gurevych (2017)).", "Moreover, previous annotations have mostly been conducted at the overall document level, resulting in one generic score for the whole document, which makes the corpus harder to apply to writing support systems.", "Consequently, there is a lack of linguistic corpora for empathy detection in general and, more specifically, for training models that provide students with adaptive support and feedback about their empathy in common pedagogical scenarios like large-scale lectures or the growing field of MOOCs (Wambsganss et al., 2021c, 2020b).", "In fact, in the literature about computer-supported collaborative learning (Dillenbourg et al., 2009), we found only one study, by Santos et al. (2018), which used a dictionary-based approach to provide students with feedback on the empathy level of their texts.", "We aim to address this literature gap by presenting and evaluating an annotation scheme and an annotated empathy corpus built on student-written texts, with the objective of developing intelligent and accurate empathy writing support systems for students.", "with feedback on previously developed business models.", "Peer reviews are a modern learning scenario in large-scale lectures, enabling students to reflect on their content, receive individual feedback from peers, and thus deepen their understanding of the content (Rietsche and Söllner, 2019).", "Moreover, they are easy to set up in traditional large-scale learning scenarios or the growing field of distance-learning scenarios such as MOOCs.", "This can be leveraged to train skills such as the ability to appropriately react to other students' perspectives (e.g., Santos et al. (2018)).", "Therefore, we aim to create an annotated corpus to provide empathy feedback based on a data set that A) is based on real-world student peer reviews, B) consists of a sufficient corpus size to be able to train models in a real-world scenario, and C) follows a novel annotation guideline for guiding the annotators towards an adequate agreement.", "Hence, we propose a new annotation scheme to model peer review components and their emotional and cognitive empathy levels that reflect the feedback discourse in peer review texts.", "We base our empathy annotation scheme on emotional and cognitive empathy following Davis (1983) and Spreng et al. (2009), guided by the study of Buechel et al. 
(2018).", "To build a reliable corpus, we followed a 4-step methodology: 1) we examined scientific literature and theory on the construct of empathy and on how to model empathy structures in texts from different domains; 2) we randomly sampled 92 student-generated peer reviews and, on the basis of our findings from literature and theory, developed a set of annotation guidelines consisting of rules and limitations on how to annotate empathic review discourse structures; 3) we applied, evaluated, and improved our guidelines with three native speakers of German in a total of eight consecutive workshops to resolve annotation ambiguities; 4) we followed the final annotation scheme based on our 14-page guidelines to annotate a corpus of 500 student-generated peer reviews. [Footnote 2: The annotation guidelines as well as the entire corpus can be accessed at https://github.com/thiemowa/empathy_annotated_peer_reviews.] 3.1 Data Source We gathered a corpus of 500 student-generated peer reviews written in German.", "The data was collected in a business innovation lecture in a master's program at a Western European university.", "In this lecture, around 200 students develop and present a", "new business model for which they receive three peer reviews each.", "Here, a fellow student from the same course elaborates on the strengths and weaknesses of the business model and gives recommendations on what could be improved.", "We collected a random subset of 500 of these reviews from around 7,000 documents collected from the years 2014 to 2018, in line with the ethical guidelines of our university and with approval from the students to utilize the writings for scientific purposes.", "An average peer review consists of 200 to 300 tokens (in our corpus we counted a mean of 19 sentences and 254 tokens per document).", "A peer review example is displayed in Figure 2.", "3.2 Annotation Scheme Our objective is to model the empathy structures of student-generated peer reviews by annotating the review components and their emotional and cognitive empathy levels.", "Most of the peer reviews in our corpus followed a similar structure.", "They described several strengths or weaknesses of the business model under consideration, backing them up with examples or further elaboration.", "Moreover, the students formulated certain suggestions for improvement of the business model.", "These review components (i.e., strengths, weaknesses, and suggestions for improvement) were written with different empathy levels, sometimes directly criticizing the content harshly, sometimes empathetically referring to weaknesses as further potential for improvement with examples and explanation.", "We aim to capture these empathic differences between the peer reviews with two empathy level scores: the cognitive empathy level and the emotional empathy level of a given review component.", "Our basic annotation scheme is illustrated in Figure 1.", "3.2.1 Review Components For the review components, we follow established models of feedback structures suggested by feedback theory (e.g., Hattie and Timperley (2007) or Black and Wiliam (2009)).", "A typical peer review, therefore, consists of three parts: 1) elaboration of strengths, 2) elaboration of weaknesses, and 3) suggestions for improvement (answering 'Where am I going and how am I going?' and 'Where do I go next?', i.e., Hattie and Timperley (2007)).",
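A record in this scheme can be represented by a small data structure. The following sketch is ours, with field names chosen for illustration; it captures one annotated component together with its two 1-to-5 empathy scores.

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class ReviewComponent:
    """One annotated span: a component type plus cognitive and emotional
    empathy scores on the 1-to-5 scale of Moyers and Martin (2010)."""
    text: str
    kind: Literal["strength", "weakness", "suggestion"]
    cognitive_empathy: int   # 1 = non-empathic ... 5 = highly empathic
    emotional_empathy: int

    def __post_init__(self):
        assert 1 <= self.cognitive_empathy <= 5
        assert 1 <= self.emotional_empathy <= 5

# e.g. ReviewComponent("I think your idea is brilliant!", "strength", 3, 5)
```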
"Accordingly, the content of a review consists of multiple components, including several controversial statements (e.g., a claim about a strength or weakness of a business model) that are usually supported by elaborations or examples (i.e., a premise) (Toulmin, 1984).", "Also, in the domain of student-written peer reviews, we found that a standpoint and its elaboration are the central elements of a review component.", "Accordingly, we summarized all the claims and premises which described positive aspects of a business model as strengths.", "All content (claims and premises) describing negative aspects was modelled as weaknesses, while claims and premises with concrete content for improvement were modelled as suggestions for improvement, following the structure of a typical review.", "Besides the content, syntactical elements and key words were used as characteristics for the component classification, e.g., most students introduced a review component with structural indicators such as 'Strengths:' or 'Weaknesses:' in their peer review texts.", "3.2.2 Empathy Level To capture the differences in the empathy levels of the peer reviews (i.e., the way the writer was conveying their feedback (Hattie and Timperley, 2007)), we followed the approach of Davis (1983) and Spreng et al. (2009) for cognitive and emotional empathy.", "Cognitive empathy (perspective taking) is the writer's ability to use cognitive processes, such as role taking, perspective taking, or decentering, while evaluating the peers' submitted tasks.", "The student sets aside their own perspective and steps into the shoes of the other.", "Cognitive empathy can happen purely cognitively, in that there is no reference to any affective state (Baron-Cohen and Wheelwright, 2004), but it mostly includes understanding the other's emotional state as well.", "The following example displays high cognitive empathy: 'You could then say, for example: Since market services are not differentiated according to customer segments and locations, the following business areas result... And that due to the given scope of this task you will focus on the Concierge-Service business segment. After that, you have correctly only dealt with this business segment.'", "Emotional empathy (empathic concern) is the writer's emotional response to the peers' affective state.", "The students can either show the same emotions as read in the review or simply state an appropriate feeling towards the peer.", "Typical examples include sharing excitement with the peer about the business model submitted or showing concern over the peer's opinion.", "The following example depicts high emotional empathy: 'I think your idea is brilliant!'", "Both constructs are measured on a scale from 1 to 5 following the empathy scale range of Moyers and Martin (2010), with every level being precisely defined in our annotation guidelines.", "A summary of the definitions for both empathy level scores is displayed in Table 1 and Table 2.",
"A more detailed description of both scores can be found in the appendix in Table 7 and Table 8. [Footnote 3: More elaborated definitions, examples, and key word lists for both empathy scales can be found in our annotation guidelines.]", "Figure 2 illustrates an example of an entire peer review that is annotated for strengths, weaknesses, and suggestions for improvement and the cognitive and emotional empathy scores. [Figure 2: Fully annotated example of a peer review.]", "Three native German speakers annotated the peer reviews independently of each other for the components strengths, weaknesses, and suggestions for improvement, as well as their cognitive and emotional empathy levels, according to the annotation guidelines we specified.", "The annotators were master's students in business innovation from a European university with bachelor's degrees in business administration and were, therefore, domain experts in the field of business models.", "Inspired by Stab", "and Gurevych (2017), our guidelines consisted of 14 pages, including definitions and rules for how the review components should be composed, which annotation scheme was to be used, and how the cognitive and emotional empathy levels were to be judged.", "Several individual training sessions and eight team workshops were performed to resolve disagreements among the annotators and to reach a common understanding of the annotation guidelines on the cognitive and emotional empathy structures.", "We used the tagtog annotation tool (https://tagtog.net/), which offers an environment for cloud-based annotation in a team.", "First, a text was classified into peer review components (strengths, weaknesses, suggestions for improvement, or none) by the trained annotators.", "Second, the same annotator then scored the cognitive and emotional empathy levels of each component based on our annotation guideline on a one-to-five scale.", "After the first 92 reviews were annotated by all three annotators, we calculated the inter-annotator agreement (IAA) scores (see Section 4.1).", "As we obtained satisfying results, we proceeded with two annotators annotating 130 remaining documents each and the senior annotator annotating 148 peer reviews, resulting in 408 additional annotated documents.", "Together with the 92 annotations of the annotation study of the senior annotator (the annotator with the most reviewing experience), we counted 500 annotated documents in our final corpus.", "To evaluate the reliability of the review component and empathy level annotations, we followed the approach of Stab and Gurevych (2014).", "Review Components Concerning the review components, two strategies were used.", "Since there were no predefined markables, annotators not only had to identify the type of review component but also its boundaries.", "In order to assess the latter, we use Krippendorff's unitized α (α_U) (Krippendorff, 2004), which allows for an assessment of the reliability of an annotated corpus, considering the differences in the markable boundaries.", "To evaluate the annotators' agreement in terms of the selected category of a review component for a given sentence, we calculated the percentage agreement and two chance-corrected measures, multi-π (Fleiss, 1971) and Krippendorff's α (Krippendorff, 1980).", "Since each annotation always covered a full sentence (or a sequence of sentences), we operated at the sentence level for calculating the reliability of the annotations in terms of the IAA.", "Table 3 displays the resulting IAA scores.", "The obtained scores for Krippendorff's α indicated an almost perfect agreement for the strengths components and a substantial agreement for both the weaknesses and the suggestions for improvement components.",
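Agreement figures of this kind can be computed, for instance, with NLTK's agreement module. The following is a small sketch; the (annotator, item, label) triples are invented toy data for illustration.

```python
from nltk.metrics.agreement import AnnotationTask

# (annotator, item, label) triples; toy sentence-level component labels
data = [
    ("a1", "s1", "strength"),   ("a2", "s1", "strength"),   ("a3", "s1", "strength"),
    ("a1", "s2", "weakness"),   ("a2", "s2", "suggestion"), ("a3", "s2", "weakness"),
    ("a1", "s3", "suggestion"), ("a2", "s3", "suggestion"), ("a3", "s3", "none"),
]
task = AnnotationTask(data=data)
print(task.avg_Ao())   # percentage agreement
print(task.pi())       # multi-pi (Fleiss, 1971)
print(task.alpha())    # Krippendorff's alpha
```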
"The obtained scores for Krippendorff's α indicated an almost perfect agreement for the strengths components and a substantial agreement for both the weaknesses and the suggestions for improvement components.", "The unitized αU of the strengths, weaknesses and suggestions for improvement annotations was slightly smaller compared to the sentence-level agreement.", "Thus, the boundaries of review components were less precisely identified in comparison to the classification into review components.", "Yet the scores still suggest that there was a moderate level of agreement between the annotators for the strengths and a fair agreement for the weaknesses and the suggestions for improvement.", "With a score of αU = 90.32%, the boundaries of the non-annotated text units were more reliably detected, indicating an almost perfect agreement between the annotators.", "Percentage agreement, multi-π, and Krippendorff's α were considerably higher for the non-annotated spans as compared to the strengths, weaknesses, and suggestions for improvement, indicating an almost perfect agreement between the annotators.", "Hence, we conclude that the annotation of the review components in student-written peer reviews is reliably possible.", "Empathy Level To assess the reliability of the cognitive and emotional empathy level annotations, we calculated the multi-π for both scales.", "We received a multi-π of 0.41 for both the emotional and the cognitive empathy levels, suggesting a moderate agreement between the annotators in both cases.", "Thus, we conclude that the empathy level can also be reliably annotated in student-generated peer reviews.", "To analyze the disagreement between the three annotators, we created a confusion probability matrix (CPM) (Cinková et al., 2012) for the review components and the empathy level scores.", "The results can be found in Section C of the appendix.", "The corpus we compiled consists of 500 student-written peer reviews in German that were composed of 9,614 sentences with 126,887 tokens in total.", "Hence, on average, each document had 19 sentences and 254 tokens.", "A total of 2,107 strengths, 3,505 weaknesses and 2,140 suggestions for improvement were annotated.", "Tables 4, 5, and 6 present some detailed statistics on the final corpus.", "Moreover, Figure 3 displays the distribution of the empathy scores in the annotated dataset.", "Both the cognitive and the emotional empathy levels approximately follow a normal distribution, with mean scores of 2.94 and 3.22, respectively (see Table 6).", "We measured only a low correlation of 0.38 between the scores of cognitive and emotional empathy.", "Modelling Cognitive and Emotional Empathy The empathy detection task is considered a paragraph-based, multi-class classification task, where each paragraph is either considered to be a strength, weakness, or a suggestion for improvement and has a non-empathic, neutral, or empathic cognitive and emotional empathy level.", "Therefore, we assigned the levels of our cognitive and emotional empathy scores to three different labels: levels 1 and 2 were assigned to a non-empathic label, level 3 to a neutral label, and levels 4 and 5 to an empathic label.", "We split the data into 70% training, 20% validation, and 10% test data.", "To apply the model, the corpus texts were split into word tokens.", "The model performances were measured in terms of accuracy, precision, recall, and f1-score.",
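A minimal sketch of the label mapping and the 70/20/10 split described above, with toy data and our own column names:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

def to_label(score: int) -> str:
    # levels 1-2 -> non-empathic, level 3 -> neutral, levels 4-5 -> empathic
    if score <= 2:
        return "non-empathic"
    return "neutral" if score == 3 else "empathic"

# Toy stand-in for the annotated components; column names are our own.
df = pd.DataFrame({
    "cognitive_score": [1, 2, 3, 4, 5, 3, 4, 2, 5, 1],
    "emotional_score": [2, 3, 4, 5, 1, 3, 5, 4, 2, 3],
})
df["cognitive_label"] = df["cognitive_score"].map(to_label)
df["emotional_label"] = df["emotional_score"].map(to_label)

# 70% training, 20% validation, 10% test
train, rest = train_test_split(df, test_size=0.3, random_state=42)
valid, test = train_test_split(rest, test_size=1/3, random_state=42)
print(len(train), len(valid), len(test))  # 7 2 1
```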
"We trained a predictive model following the architecture of Bidirectional Encoder Representations from Transformers (BERT) proposed by Devlin et al. (2018).", "We used the BERT model from deepset (https://github.com/deepset-ai/FARM), since it is available in German and provides a deep model pretrained in an unsupervised fashion on domain-agnostic German corpora (e.g., the German Wikipedia).", "The best performing parameter combination for our BERT model incorporated a dropout probability of 10%, a learning rate of 3e-5, and three training epochs.",
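As an illustration, here is a minimal fine-tuning sketch using the Hugging Face transformers library; the checkpoint name bert-base-german-cased (deepset's German BERT), the toy examples, and the batch size are assumptions on our part, not the authors' released code.

```python
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-german-cased")  # assumed checkpoint
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-german-cased",
    num_labels=3,             # non-empathic / neutral / empathic
    hidden_dropout_prob=0.1,  # 10% dropout, as reported above
)

class ParagraphDataset(torch.utils.data.Dataset):
    """Wraps tokenized paragraphs and their empathy labels."""
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, padding=True)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

train_ds = ParagraphDataset(["Ich finde deine Idee brillant!"], [2])  # toy data
valid_ds = ParagraphDataset(["Die Struktur ist unklar."], [0])

args = TrainingArguments(
    output_dir="empathy-bert",
    learning_rate=3e-5,              # as reported
    num_train_epochs=3,              # as reported
    per_device_train_batch_size=16,  # not reported in the paper; a guess
)
Trainer(model=model, args=args,
        train_dataset=train_ds, eval_dataset=valid_ds).train()
```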
"After several iterations, we reached a micro f1-score of 74.96% for the detection of the emotional empathy level and 69.98% for the detection of the cognitive empathy level of a text paragraph.", "Moreover, we reached an f1-score of 94.83% for predicting a text paragraph as a strength, 64.28% for predicting a text paragraph as a weakness, and 59.79% for predicting suggestions for improvement.", "To ensure the validity of our BERT model, we benchmarked it against bidirectional Long Short-Term Memory Conditional Random Field classifiers (BiLSTM-CRF).", "In combination with the corresponding embeddings vocabulary (GloVe) (Pennington et al., 2014), our LSTM reached an unsatisfying f1-score of 61% for detecting the emotional empathy level and 51% for detecting the cognitive empathy level.", "Evaluation in a Peer Learning Setting We designed and built an adaptive writing support system that provides students with individual feedback on their cognitive and emotional empathy skills.", "The application is illustrated in Figure 4.", "Figure 4: Screenshot of a trained model on our corpus as an adaptive writing support system.", "We embedded our system into a peer writing exercise where students were asked to write a peer review on a business model.", "During this writing task, they received adaptive feedback on the cognitive and emotional empathy level based on our model.", "The evaluation was conducted as a web experiment facilitated by the behavioral lab of our university, and thus, designed and reviewed according to the ethical guidelines of the lab and the university.", "We received 58 valid results (mean age = 23.89, SD = 3.07; 30 participants were male, 28 female).", "The participants were told to read an essay about a business model of a peer student.", "Afterwards, they were asked to write a business model review for the peer by providing feedback on the strengths, weaknesses, and suggestions for improvement of the particular business model.", "After the treatment, we measured the intention to use (ITU) (Venkatesh and Bala, 2008) by asking three items.", "We also asked the participants to judge their perceived empathy skill learning (PESL) by asking two items that covered cognitive and emotional empathy skills (Spreng et al., 2009; Davis, 1983).", "Finally, we surveyed the perceived feedback accuracy (PFA) (Podsakoff and Farh, 1989) to control for the accuracy of our model.", "All constructs were measured with a 1-to-7-point Likert scale (1: totally disagree to 7: totally agree, with 4 being a neutral statement); the exact items are listed in the appendix.", "Furthermore, we asked three qualitative questions (What did you particularly like about the use of the tool?, What else could be improved?, and Do you have any other ideas?) and captured the demographics.", "In total, we asked 13 questions.", "All participants were compensated with an equivalent of about 12 USD for a 25 to 30 minute experiment.", "Results Participants judged their empathy skill learning with a mean of 5.03 (SD = 1.05).", "Concerning the PFA, the subjects rated the construct with a mean of 4.93 (SD = 0.94).", "The mean value of the intention to use our application as a writing support tool in peer learning scenarios was 5.14 (SD = 1.14).", "The mean values of all three constructs were very promising when comparing the results to the scale midpoints.", "All results were better than the neutral value of 4, indicating a positive evaluation of our application for peer learning tasks.", "We also asked open questions in our survey to receive the participants' opinions about the tool they used.", "The general attitude was very positive.", "Participants positively mentioned the simple and easy interaction, the distinction between cognitive and emotional empathy feedback, and the overall empathy score together with the adaptive feedback message several times.", "However, participants also said that the tool should provide even more detailed feedback based on more categories and should provide concrete text examples on how to improve their empathy score.", "We translated the responses from German and clustered the most representative responses in Table 16 in the appendix.", "We introduce a novel empathy annotation scheme and an annotated corpus of student-written peer reviews extracted from a real-world learning scenario.", "Our corpus consists of 500 student-written peer reviews that were annotated for review components and their emotional and cognitive empathy levels.", "Our contribution is threefold: 1) we derived a novel annotation scheme for empathy modeling based on psychological theory and previous work for empathy modeling (Buechel et al., 2018); 2) we present an annotation study based on 92 student peer reviews and three annotators to show that the annotation of empathy in student peer reviews is reliably possible; and 3) to the best of our knowledge, we present the second freely available corpus for empathy detection and the first corpus for empathy detection in the educational domain, based on 500 student peer reviews in German.", "For future research, this corpus could be leveraged to support students' learning processes, e.g., through a conversational interaction (Zierau et al., 2020).", "However, we would also encourage research on the ethical considerations of empathy detection models in user-based research (i.e., Wambsganss et al. (2021a)).", "We, therefore, hope to encourage future research on student-generated empathetic texts and on writing support systems to train empathy skills of students based on NLP, towards quality education independent of a student's location or instructors." ]
[ "method", "objective", "method", "abstain", "method", "result", "method", "abstain", "other", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "result", "other", "method", "other", "other", "other", "method", "method", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "other", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "result", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "abstain", "method", "abstain", "abstain", "other", "result", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "other", "abstain", "method", "method" ]
[ "Several studies have reported the inability of Transformer models to generalize compositionally, a key type of generalization in many NLP tasks such as semantic parsing.", "In this paper we explore the design space of Transformer models, showing that the inductive biases given to the model by several design decisions significantly impact compositional generalization.", "We identified Transformer configurations that generalize compositionally significantly better than previously reported in the literature in many compositional tasks.", "We achieve state-of-the-art results in a semantic parsing compositional generalization benchmark (COGS), and a string edit operation composition benchmark (PCFG).", "Although modern neural network architectures reach state-of-the-art performance in many challenging natural language tasks, they seem to exhibit a low amount of compositional generalization, i.e., the ability to learn a set of basic primitives and combine them in more complex ways than those seen during training (Hupkes et al., 2020).", "For example, suppose a system has learned the meaning of jump and that jump twice means that the action jump has to be repeated two times.", "Upon learning the meaning of the action jax, it should be able to infer what jax twice means.", "Compositional generalization is a key aspect of natural language and many other tasks we might want machine learning models to learn.", "While both humans and classical AI techniques (such as grammars or search-based systems) can handle compositional tasks with relative ease, it seems that modern deep learning techniques do not possess this ability.", "A key question is thus: can we build deep learning architectures that can also solve compositional tasks?", "In this paper we focus on Transformers (Vaswani et al., 2017), which have been shown in the literature to exhibit poor compositional generalization (see Section 2).", "Through an empirical study, we show that this can be improved.", "With the goal of creating general models that generalize compositionally in a large range of tasks, we show that several design decisions, such as position encodings, decoder type, weight sharing, model hyper-parameters, and formulation of the target task, result in different inductive biases, with significant impact on compositional generalization.", "We use a collection of twelve datasets designed to measure compositional generalization.", "In addition to six standard datasets commonly used in the literature (such as SCAN (Lake and Baroni, 2018), PCFG (Hupkes et al., 2020), CFQ (Keysers et al., 2019) and COGS (Kim and Linzen, 2020)), we also use a set of basic algorithmic tasks (such as addition, duplication, or set intersection) that, although not directly involving natural language, are useful to obtain insights into what can and cannot be learned with different Transformer models.", "We also include tasks where we do not see significant improvements, to understand what types of compositional generalization are improved with our proposed modifications, and which are not.", "The main contributions of this paper are: (1) A study of the Transformer design space, showing which design choices result in compositional learning biases across a variety of tasks.", "(2) State-of-the-art results in COGS, where we report a classification accuracy of 0.784 using an intermediate representation based on sequence tagging (compared to 0.35 for the best previously reported model (Kim and Linzen, 2020)), and in the productivity and systematicity splits of PCFG (Hupkes et al., 2020).",
"The rest of this paper is organized as follows.", "Section 2 provides some background on compositional generalization and Transformers.", "In Section 3, we present the datasets used in our empirical evaluation, which is presented in Section 4.", "Source code: https://github.com/google-research/google-research/tree/master/compositional_transformers.", "The paper closes with a discussion on the implications of our results, and directions for future work.", "Compositional generalization can manifest in different ways.", "Hupkes et al. (2020) identified five different types, such as systematicity and productivity (extrapolation to longer sequences than those seen during training).", "Systematicity is the ability of recombining known parts and rules in different ways than seen during training.", "The example in the introduction, of knowing the meaning of jump, jump twice and jax, and from those inferring the meaning of jax twice, is an example of systematicity.", "Productivity, on the other hand, is the ability to extrapolate to longer sequences than those seen during training.", "For example, consider the task of learning how to evaluate mathematical expressions of the form 3 + (4 × (5 − 2)).", "An example of productivity would be to extrapolate to expressions with a larger number of parentheses, or with deeper parenthesis nesting, than seen during training.", "Hupkes et al. (2020) identify other forms of compositionality, such as substitutivity, localism or overgeneralization, but we will mostly focus on systematicity and productivity in this paper.", "Compositional generalization is related to the general problem of out-of-distribution generalization.", "Hence, we can also see it as the problem of how models can discover symmetries in the domain (such as the existence of primitive operations or other regularities) that would generalize better to out-of-distribution samples than shortcuts (Geirhos et al., 2020), which would only work on the same distribution of examples seen during training.", "Early work focused on showing how different deep learning models do not generalize compositionally (Liška et al., 2018).", "For example, Liška et al. (2018) showed that while models like LSTMs are able to generalize compositionally, it is unlikely that gradient descent converges to a solution that does so (only about 2% out of 50,000 training runs achieved a generalization accuracy higher than 80% in a compositional task, while they had almost perfect performance in training).", "Datasets like SCAN (Lake and Baroni, 2018), PCFG (Hupkes et al., 2020), Arithmetic language (Veldhoen et al., 2016), or CFQ (Keysers et al., 2019) were proposed to show these effects.", "Work toward improving compositional generalization includes ideas like Syntactic attention (Russin et al., 2019), increased pretraining (Furrer et al., 2020), data augmentation (Andreas, 2019), intermediate representations (Herzig et al., 2021) or structure annotations (Kim et al., 2021).", "Specialized architectures that achieve good performance in specific compositional generalization tasks also exist.", "For example, Liu et al. (2020) propose a model made up of a composer and a solver, achieving perfect performance on SCAN.", "The most related concurrent work to ours is that of Csordás et al. (2021), who also showed gains in compositional generalization via relative attention.",
"Additionally, in their work, they show that a key problem in some tasks is the end-of-sequence detection problem (when to stop producing output).", "Finally, they show that generalization accuracy keeps growing even when training accuracy maxes out, questioning early stopping approaches in compositional generalization.", "We note that training for longer might also improve our results, which we will explore in the future.", "Models based on Transformers (Vaswani et al., 2017), such as BERT (Devlin et al., 2018), or variants (Yang et al., 2019; Lan et al., 2019; Raffel et al., 2019) yield state-of-the-art results in many NLP tasks such as language modeling (Child et al., 2019; Sukhbaatar et al., 2019; Rae et al., 2019; Kitaev et al., 2020), question answering (Ainslie et al., 2020; Lan et al., 2019; Zaheer et al., 2020; Beltagy et al., 2020), and summarization (Zhang et al., 2019).", "However, existing studies show that they do not have good compositional generalization.", "In this paper we will consider the original Transformer architecture and expand upon it.", "The standard Transformer model consists of two main components (see the center of Figure 2): an encoder and a decoder, each of which consists of a series of layers.", "Each layer contains an attention sublayer followed by a feed-forward sublayer (the decoder has two attention sublayers for decoder-to-decoder and decoder-to-encoder attention).", "The input of a Transformer is a sequence of token embeddings, and the output is a sequence of tokens generated one at a time by predicting based on the output distribution generated by the decoder.", "Figure 1: Examples of the tasks. Addition: Input: # # # 3 6 7 [SEP] # # 1 4 9 1 [END], Output: # # 1 8 5 8 [END]. AdditionNegatives: Input: # # - 3 6 7 [SEP] # # 1 4 9 1 [END], Output: # # 1 1 2 4 [END]. Reverse: Input: 1 3 3 7 2 [END], Output: 2 7 3 3 1 [END]. Duplication: Input: 1 3 5 7 2 [END], Output: 1 3 5 7 2 1 3 5 7 2 [END]. Cartesian: Input: 1 2 3 [SEP] a b [END], Output: 1 a [SEP] 2 a [SEP] 3 a [SEP] 1 b [SEP] 2 b [SEP] 3 b [END]. Intersection: Input: a4 b1 f6 [SEP] f7 a4 c3 [END], Output: true [END]. SCAN-length / SCAN-add-jump: Input: look around right and walk left twice [END], Output: I_TURN_RIGHT I_LOOK I_TURN_RIGHT I_LOOK I_TURN_RIGHT I_LOOK I_TURN_RIGHT I_LOOK I_TURN_LEFT I_WALK I_TURN_LEFT I_WALK [END]. PCFG-productivity / PCFG-systematicity: Input: swap_first_last copy remove_second E18 E15 Q6 , P15 L18 X10 I15 Y14 [END], Output: Q6 E15 E18 [END]. COGS: Input: A rose was helped by a dog .", "To provide a notion of token order, a set of position encodings are typically added to the embedding of each input token to indicate sequence order.", "We will use l to denote the number of encoder/decoder layers, d for the dimensionality of token embeddings, f for the intermediate dimensionality used by the feed-forward sublayer, and h for the number of attention heads in the attention sublayers.", "The original Transformer model used l = 6, d = 512, f = 2048 and h = 8 as its base configuration.", "In this paper, we use parameters much smaller than that, as we are evaluating the architectural decisions on relatively small datasets.", "We use a collection of 12 datasets that require different types of compositional generalization.", "Six of those datasets consist of algorithmic tasks (addition, reversing lists, etc.), and six of them are standard datasets used to evaluate compositional generalization (most involving natural language).", "We note that our algorithmic tasks mostly require productivity-style compositional generalization, while other datasets also require systematicity or synonymity (Hupkes et al., 2020).", "Specifically, we used the following datasets (see Appendix E for details, and Figure 1 for examples): Addition (Add): A synthetic addition task, where the input contains the digits of two integers, and the output should be the digits of their sum.", "The training set contains numbers with up to 8 digits, and the test set contains numbers with 9 or 10 digits.", "Numbers are padded to reach a length of 12.", "AdditionNegatives (AddNeg): The same as the previous one, but 25% of the numbers are negative (preceded with the - symbol).", "Reversing (Reverse): Where the output is expected to be the input sequence in reverse order.", "Training contains sequences of up to 16 digits, and the test set contains lengths between 17 and 24.", "Duplication (Dup): The input is a sequence of digits and the output should be the same sequence, repeated twice.", "Training contains sequences up to 16 digits, and test from 17 to 24.", "Cartesian (Cart): The input contains two sequences of symbols, and the output should be their Cartesian product.", "Training contains sequences of up to 6 symbols (7 or 8 for testing).", "Intersection (Inters): Given two sequences of symbols, the output should be whether they have a non-empty intersection.", "Training contains sets with size 1 to 16, and testing 17 to 24.", "SCAN-length (SCAN-l): The length split of the SCAN dataset (Lake and Baroni, 2018).", "SCAN-add-jump (SCAN-aj): The add primitive jump split of the SCAN dataset (Lake and Baroni, 2018).", "PCFG-productivity (PCFG-p): The productivity split of the PCFG dataset (Hupkes et al., 2020).", "PCFG-systematicity (PCFG-s): The systematicity split of the PCFG dataset (Hupkes et al., 2020).", "COGS: The generalization split of the COGS semantic parsing dataset (Kim and Linzen, 2020).", "CFQ-mcd1 (CFQ): The MCD1 split of the CFQ dataset (Keysers et al., 2019).", "When the training and test sets come from the same distribution, most Transformer models achieve near 100% accuracy (except on a few hard tasks like the Cartesian product or set intersection).", "Hence, splitting train and test data in a way that requires compositional generalization is key (e.g., having examples with larger sequences in the test set than in the training set).", "We want to make sure models do not just learn shortcuts (Geirhos et al., 2020) that do not generalize to out-of-distribution data.", "In this section we present an evaluation of the compositional generalization abilities of Transformers with different architectural configurations.", "Specifically, we evaluated: (1) the type of position encodings, (2) the use of copy decoders, (3) model size, (4) weight sharing, and (5) the use of intermediate representations for prediction (see Figure 2).", "For this systematic experimentation, we used small Transformer models, without pre-training (all models are trained from scratch).", "Even if previous work has reported benefits of pre-training in some compositional tasks (e.g., in CFQ (Furrer et al., 2020)), we aim at disentangling the effects of each architecture decision in and of itself, in the search for compositional inductive biases.", "Our results show that, while these decisions do not affect certain types of compositional generalization tasks, we see significant gains in others.", "We report the average of at least 3 training runs (for algorithmic tasks, we use at least 5 training runs, and 10 for set intersection, since they have a higher variance; see Appendix B).", "We use sequence-level accuracy as the evaluation metric: an output sequence with even just a single wrong token is considered wrong.",
"While the original Transformer model (Vaswani et al., 2017) and BERT (Devlin et al., 2018) used absolute position encodings, later models such as T5 (Raffel et al., 2019) or ETC (Ainslie et al., 2020) use relative position encodings (Shaw et al., 2018).", "Relative position encodings assign a label to each pair of tokens in the input (typically representing their relative distance in the input, up to a maximum radius).", "So, for example, there is a label used for tokens attending to a token two positions to the right, etc.", "One interesting thing about relative position encodings is that they are position invariant, i.e. two tokens that are k positions apart will attend to each other in the same way, regardless of where they are in the sequence, hence allowing models to capture further symmetries in the domain.", "We compare the following position encodings: abs: sinusoidal absolute position encodings (as used in the original Transformer).", "rel-e: relative position encodings, where the relative position label defines a learnable embedding that is added to the key during the attention process.", "We used a maximum local attention radius of 16, which means that we have the following relative position labels { l-16, l-15, ..., l-1, l0, l1, ..., l15, l16 }.", "Tokens that are further than 16 positions apart get the l-16 or l16 labels.", "rel-b: relative positions define a learnable bias that is added to the attention weight of each attention pair.", "This is the attention mechanism used by T5 (although they use a logarithmic scheme for representing relative positions).", "While relative positions are straightforward for encoder-to-encoder and decoder-to-decoder attention, it is unclear what the relative positions should be for decoder-to-encoder attention.", "Hence, we tested three alternatives (rel2-e, rel2-b and rel2-eb in our result tables): rel-* methods do not use relative position labels in decoder-to-encoder attention, while those named rel2-* do (where token y_i in the decoder attending to token x_j in the encoder will have label l_(j-i)).", "Table 1 shows sequence-level classification accuracy for small Transformers (l = 2, d = 64, f = 256, h = 4).", "The right-most column shows the average accuracy across all datasets, and we can see that position encodings play a very significant role in the performance of the models, going from 0.137 accuracy for the model with absolute position encodings up to 0.354 for a model with relative position encodings using embeddings (but no bias term), as well as relative positions for decoder-to-encoder attention.", "In general almost any type of relative position encoding helps, but using embeddings helps more than using bias terms.", "Moreover, position encodings play a bigger role in algorithmic tasks.", "For example, in the Add and AddNeg tasks, models go from 0.005 and 0.042 accuracy to almost perfect accuracy (0.988 and 0.830 for the rel2-e model).", "However, tasks like SCAN or CFQ do not seem to be affected by position encodings, and using relative position encodings with only a bias term hurts in PCFG.",
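The following is a minimal single-head sketch (our own illustration, not the paper's code) of the two relative-position mechanisms: rel-e adds a learned embedding per relative-distance label to the keys, and rel-b adds a learned scalar bias per label to the attention logits.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelativeAttention(nn.Module):
    """Single-head attention with Shaw-style relative position terms."""
    def __init__(self, d_model: int, radius: int = 16):
        super().__init__()
        num_labels = 2 * radius + 1                       # labels l_-16 ... l_16
        self.radius = radius
        self.key_emb = nn.Embedding(num_labels, d_model)  # "rel-e" term
        self.bias = nn.Embedding(num_labels, 1)           # "rel-b" term

    def forward(self, q, k, v):                           # each: (seq, d_model)
        pos = torch.arange(q.size(0))
        # relative distance j - i, clipped to the radius, shifted to >= 0
        labels = (pos[None, :] - pos[:, None]).clamp(
            -self.radius, self.radius) + self.radius
        # rel-e: the relative-position embedding is added to the keys
        scores = (q @ k.t() +
                  torch.einsum("qd,qkd->qk", q, self.key_emb(labels)))
        scores = scores / q.size(-1) ** 0.5
        scores = scores + self.bias(labels).squeeze(-1)   # rel-b bias
        return F.softmax(scores, dim=-1) @ v

attn = RelativeAttention(d_model=64)
x = torch.randn(10, 64)
print(attn(x, x, x).shape)  # torch.Size([10, 64])
```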
"Many tasks (such as the duplication or PCFG datasets used in our experiments) require models able to learn things like output whatever is in position k of the input, rather than having to learn hard-coded rules for outputting the right token depending on the input, a type of symmetry that can be captured with a copy decoder.", "The copy decoder in our experiments is fairly simple, and works as follows (Figure 2, top-left).", "It assumes that the input and output vocabularies are the same (we use the union of input and output vocabularies in our experiments).", "For a given token x_i in the output (with final embedding y_i), in addition to the output probability distribution p1 over the tokens in the vocabulary, the copy decoder produces a second distribution p2, which is then mixed with p1 via a weight w.", "p2 is obtained by attending to the output of the last encoder layer (the attention query is calculated using a learnable weight matrix from y_i, the embeddings of the last encoder layer are used as the keys, and the values are a one-hot representation of the input tokens).", "The result is passed through a softmax layer, resulting in p2.", "Table 2 shows sequence-level classification accuracy for models with and without a copy decoder.", "As can be seen in the last column (Avg.), having a copy decoder consistently helps performance, with all models using a copy decoder (abs-c, rel-eb-c and rel2-eb-c) outperforming their counterparts without a copy decoder.", "Moreover, we see that the copy decoder helps the most in PCFG and COGS, while it does not seem to help in some other tasks.", "We would also like to point out that there are other ways to set up copy decoders.", "For example, Akyürek et al. (2021) propose defining a lexical translation layer in the copy decoder, which allows models to translate tokens in the input to tokens in the output (which is useful in tasks such as SCAN, which have disjoint vocabularies).", "In their work, they propose to initialize this layer via a lexicon learning task.",
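A minimal sketch (our own illustration) of such a copy decoder; the per-token sigmoid gate is one plausible realization of the mixing weight w, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CopyDecoderHead(nn.Module):
    """Mixes the vocabulary distribution p1 with a copy distribution p2."""
    def __init__(self, d_model: int, vocab_size: int):
        super().__init__()
        self.vocab_size = vocab_size
        self.out_proj = nn.Linear(d_model, vocab_size)  # logits for p1
        self.query_proj = nn.Linear(d_model, d_model)   # query from y_i
        self.gate = nn.Linear(d_model, 1)               # mixing weight w

    def forward(self, y, enc_out, input_ids):
        # y: (tgt, d) decoder embeddings; enc_out: (src, d); input_ids: (src,)
        p1 = F.softmax(self.out_proj(y), dim=-1)                 # (tgt, V)
        scores = self.query_proj(y) @ enc_out.t() / y.size(-1) ** 0.5
        attn = F.softmax(scores, dim=-1)                         # (tgt, src)
        one_hot = F.one_hot(input_ids, self.vocab_size).float()  # (src, V)
        p2 = attn @ one_hot   # probability mass lands on the input tokens
        w = torch.sigmoid(self.gate(y))                          # (tgt, 1)
        return w * p1 + (1.0 - w) * p2

head = CopyDecoderHead(d_model=64, vocab_size=100)
probs = head(torch.randn(5, 64), torch.randn(7, 64),
             torch.randint(0, 100, (7,)))
print(probs.shape)  # torch.Size([5, 100])
```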
"Next, we compare the effect of varying both the number of layers (l), as well as their size (d, f, h).", "Specifically, we tested models with number of layers l equal to 2, 4 and 6, and layers of two sizes: small (d = 64, f = 256, h = 4), and large (d = 128, f = 512, h = 8).", "We denote these models small-2, small-4, small-6, large-2, large-4, and large-6.", "All of the models in this section are variants of rel2-eb-c, our previous best (see Appendix C for parameter counts of our models).", "Table 3 shows the sequence-level classification accuracy, showing a few interesting facts.", "First, in most algorithmic tasks, size does not help.", "Our hypothesis is that the logic required to learn these tasks does not require too many parameters, and large models probably overfit (e.g., like in Duplication).", "Further investigation showed that lowering the learning rate improves performance in the larger models, preventing the phenomenon seen in the Duplication dataset; systematically exploring this is left for future work.", "Some datasets, however, do benefit from size.", "For example, most large models outperform their respective small ones in both variants of PCFG.", "These results are not unexpected, as most compositional generalization datasets contain idealized examples, often generated via some form of grammar, and have very small vocabularies (see Table 7).", "Hence, models might not benefit from size as much as on complex natural language tasks.", "In this section we evaluate the effect of sharing weights across transformer layers.", "When weight sharing is activated, all learnable weights from all layers in the encoder are shared across layers, and the same is true across the layers of the decoder.", "Table 4 shows the resulting performance of the models (to be compared with Table 3).", "Surprisingly, weight sharing significantly boosts compositional generalization accuracy, and almost all models achieve a higher average accuracy across all datasets than their equivalent models in Table 3.", "In particular, datasets such as AdditionNegatives see a significant boost, with several models achieving higher than 0.9 accuracy (0.982 for large-6s).", "PCFG also significantly benefits from weight sharing, with the large-6s model achieving 0.634 and 0.828 in the productivity and systematicity versions, respectively.", "These are higher than previously reported results in the literature (using the original Transformer, which is a much larger model): 0.50 and 0.72 (Hupkes et al., 2020).", "Notice, moreover, that achieving good results in PCFG (or SCAN) is easy with specialized models; the important achievement is doing so with general-purpose models.", "Our hypothesis is that a model with shared weights across layers might have a more suited inductive bias to learn primitive operations that are applied repeatedly to the input of the transformer (copying, reversing, duplicating, etc.).",
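A minimal sketch of cross-layer weight sharing, in the style of ALBERT: a single layer module is applied l times, so every "layer" reuses the same parameters.

```python
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    def __init__(self, l: int, d: int, f: int, h: int):
        super().__init__()
        self.l = l
        # One set of layer weights, reused at every depth.
        self.layer = nn.TransformerEncoderLayer(d_model=d, nhead=h,
                                                dim_feedforward=f)

    def forward(self, x):
        for _ in range(self.l):
            x = self.layer(x)
        return x

enc = SharedEncoder(l=6, d=64, f=256, h=4)
print(enc(torch.randn(10, 1, 64)).shape)  # torch.Size([10, 1, 64])
```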
"The key idea of an intermediate representation is to define a different representation of the target output that is easier to generate by the model, but that can be easily mapped to the desired output.", "Herzig et al. (2021) recently showed very promising results using this technique in several tasks.", "Defining useful intermediate representations is task-specific and not trivial.", "Thus we experimented with it in only two datasets: COGS and CFQ (Figure 3).", "Our intermediate representation for COGS turns the task from seq2seq into a sequence tagging task.", "We ask the model to produce 5 tags for each input token: a parent, the role of the relation between the token and its parent (if applicable), the category, the noun determiner (for nouns) and the verb name (for verbs).", "With these tags, the original output can be constructed deterministically.", "One of the main advantages of this is that the model is naturally pushed to produce outputs with the correct length even for longer inputs (improving productivity).", "For the sequence tagging formulation, we used only the encoder part of the Transformer and added five prediction heads, one to predict each tag.", "For role, category, noun determiner and verb name, we simply had a dense layer with a Sigmoid activation function.", "For the parent tag, we experimented with 3 different head types: Absolute used a dense layer with a Sigmoid activation to predict the absolute index of the parent in the input sequence (-1 for no parent).", "Relative predicted the relative offset of the parent token with respect to the current token, or self for no parent.", "Finally, Attention used the attention weights from a new attention layer with 1 head to predict the parent.", "Table 5 shows the experimental results comparing a few configurations of this new tagging approach to a few configurations of the seq2seq approach (see Appendix D for all other configurations).", "Table 5: Sequence-level accuracy in different generalization subsets in COGS for both seq2seq and sequence tagging models; overall accuracy is 0.278 for the abs seq2seq model (small-2), 0.475 for the rel2-eb-c seq2seq model (small-6s), 0.637 for the abs tagging model (small-2, absolute parent encoding), and 0.784 for the rel-eb tagging model (small-2s, attention parent encoding).", "Examples in the structural generalization tasks are typically longer than in the training set and require productivity.", "All the models tested in the original COGS paper (Kim and Linzen, 2020) (and all of our seq2seq approaches above) achieved 0 accuracy in this category.", "The small-6s seq2seq model improves the overall performance from 0.278 to 0.475, but curiously has near 0 performance on Verb Argument Structure Alternation tasks, worse than the base abs model.", "The intermediate representation based on tagging works much better.", "The base abs tagging model manages to get non-zero performance on one structural generalization task, which suggests that enforcing the right output length helps.", "Finally, when predicting the parent directly from attention weights, the structural generalization tasks score 0.2-0.7, compared to our previous near-0 scores (see Appendix D for common types of errors), with one model reaching 0.784 overall, higher than any previously reported performance in COGS in the literature, to the best of our knowledge.", "This suggests that the encoder has the power to parse the input correctly, but maybe the decoder is not capable of generating the correct output sequence from the encoder in the full transformer.",
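A minimal sketch (our own illustration) of the Attention parent head described above: a new single-head attention layer whose attention weights are read directly as a distribution over candidate parent positions for each token.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionParentHead(nn.Module):
    def __init__(self, d_model: int):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)

    def forward(self, enc_out):              # enc_out: (seq, d_model)
        scores = self.q(enc_out) @ self.k(enc_out).t()
        scores = scores / enc_out.size(-1) ** 0.5
        return F.softmax(scores, dim=-1)     # (seq, seq): parent distribution

head = AttentionParentHead(d_model=64)
parents = head(torch.randn(9, 64)).argmax(dim=-1)  # predicted parent indices
print(parents.shape)  # torch.Size([9])
```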
"One of the difficulties in the CFQ dataset is that models need to learn to perform Cartesian products (e.g., for questions like who directed and acted in M1 and M2?, the model needs to expand to directed M1, directed M2, acted in M1 and acted in M2).", "However, as shown in our experiments above, this is a very hard task to learn.", "Hence, we followed the same idea as in Herzig et al. (2021), and defined an intermediate representation that removes the need to learn Cartesian products by allowing triples of the form (entity list) (relation list) (entity list).", "Table 6 shows the sequence-level classification accuracy for models on CFQ and on the version with intermediate representations (CFQ-im).", "While the different variations on Transformer models have little effect on the performance, the use of an intermediate representation significantly improves performance, going from around 0.3 accuracy for most Transformer models to over 0.5, and up to 0.555 for the rel-eb model.", "This is consistent with the results reported by Herzig et al. (2021).", "An overall trend is that algorithmic tasks seem to be greatly affected by the different architecture design decisions we explored.", "In all datasets, except for Cartesian product, there is at least one combination in our experiments that achieved high performance (close to 0.8 accuracy or higher).", "Cartesian products remain an open challenge for future work, where one of the big obstacles is learning to produce much longer outputs than seen during training (output is quadratic with respect to input size).", "There are some datasets, such as SCAN-aj, where we did not see large improvements in performance.", "The main obstacle is learning to handle a symbol (jump) having seen it very few times (or even just once) during training (this also happens in some types of generalization in COGS).", "None of the variations we experimented with were enough to handle this type of compositionality either.", "In conclusion, we observed:", "1. Relative position encodings (when both embeddings and biases are used) seem to never be detrimental (they either provided gains, or did not affect performance).", "Results indicate this significantly helps in productivity.", "Moreover, for tasks where positional information is important (such as addition or reversing), adding positional encodings to decoder-to-encoder attention provided significant benefits.", "Finally, as Table 1 shows, for relative position encodings to be beneficial, using embeddings was necessary; only using relative position biases was not enough.", "2. Adding a copy decoder was generally beneficial.", "It appeared to hurt in some tasks (e.g., Reverse), but these are high-variance tasks (see Table 10 in the Appendix), where results are more uncertain.", "3. Model size, in terms of embedding dimensions, generally helped.", "Going from 2 to 4 layers provided a slight benefit in general.", "Our experiments show going to 6 layers hurt performance, but as noted earlier, additional (unreported preliminary) experiments indicated larger models might need smaller learning rates, with which they also seem to improve performance (a systematic exploration of this is future work).", "4. Weight sharing seems to benefit tasks where there is a clear set of primitives that have to be learned (PCFG in particular), or algorithmic tasks, but it seems to hurt in COGS.", "Hence, weight sharing does not provide benefits as general as those of the previous modifications.", "5. Intermediate representations, although dataset-specific, significantly help when they can be defined, as expected.",
"This paper presented an empirical study of the design space of Transformer models, evaluated on a collection of benchmarks for compositional generalization in language and algorithmic tasks.", "Our results show that, compared to a baseline Transformer, significant gains in compositional generalization can be achieved.", "Specifically, the baseline Transformer achieved an average sequence-level accuracy of 0.137, while we showed this can increase to up to 0.527 with some design changes.", "Accuracy levels of up to 0.493 can be achieved without increasing the parameter count of our baseline model (see Appendix C for parameter counts).", "Moreover, we achieved state-of-the-art results in COGS (at the time of submission), showing 0.784 accuracy on the generalization set, and in two PCFG splits (0.634 and 0.828, respectively).", "This shows that a key factor in training models that generalize compositionally is to provide the right inductive biases.", "As part of our future work, we want to explore more dimensions, such as pre-training and optimizer parameters, and study the implications of our results for compositional generalization in large models on real-world tasks." ]
[ "abstain", "objective", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "result", "objective", "method", "result", "objective", "objective", "result", "method", "abstain", "other", "method", "result", "other", "other", "other", "other", "other", "method", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "method", "other", "other", "other", "other", "other", "method", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "result", "result", "result", "result", "abstain", "objective" ]
[ "Zero-shot transfer learning for multi-domain dialogue state tracking can allow us to handle new domains without incurring the high cost of data acquisition.", "This paper proposes a new zero-shot transfer learning technique for dialogue state tracking where the in-domain training data are all synthesized from an abstract dialogue model and the ontology of the domain.", "We show that data augmentation through synthesized data can improve the accuracy of zero-shot learning for both the TRADE model and the BERT-based SUMBT model on the MultiWOZ 2.1 dataset.", "We show training with only synthesized in-domain data on the SUMBT model can reach about 2/3 of the accuracy obtained with the full training dataset.", "We improve the zero-shot learning state of the art on average across domains by 21%.", "Automated conversational agents can reduce the costs of customer support, a necessary service in just about every business.", "However, training a goal-directed dialogue agent for a domain often requires acquiring annotated dialogues to cover all possible conversation flows.", "Commonly, this is done using the Wizard-of-Oz technique (Kelley, 1984), where two crowdsource workers converse with each other, while also annotating the state at each turn.", "This technique has been employed to construct several datasets (Hemphill et al., 1990; Wen et al., 2017; Yu et al., 2019).", "Recently, it has been used to build the MultiWOZ dataset (Budzianowski et al., 2018), a large corpus of dialogues across 7 domains.", "Unfortunately, not only is the initial acquisition expensive, annotating dialogues correctly has proven to be challenging due to human errors, delays in annotation, inconsistent conventions, and normalization issues (Eric et al., 2019; Zhou and Small, 2019).",
"S: START U: Can you help with information regarding a food place? I need to book at 15:45.", "S: SEARCHREQUEST restaurant (book time = 15:45) A: How about the restaurant with name La Tasca and Italian food? U: Can you find something which serves seafood?", "S: SEARCHREQUEST restaurant (book time = 15:45, food = seafood) A: What date are you looking for? U: Thursday please.", "S: SEARCHREQUEST restaurant (book time = 15:45, food = seafood, book day = thursday) A: How about the Copper Kettle? It is a food place with seafood food. U: What is the price range and the area?", "S: SLOTQUESTION restaurant (book time = 15:45, food = seafood, book day = thursday, price range = ?, area = ?) A: The Copper Kettle is a moderately priced restaurant in the north of the city. Would you like a reservation? U: No, thanks.", "S: CLOSE restaurant (book time = 15:45, food = seafood, book day = thursday) A: Can I help you with anything else? U: Thank you, that will be it for now.", "S: END restaurant (book time = 15:45, food = seafood, book day = thursday)", "Figure 1: An example of a dialogue that can be synthesized from our templates.", "To facilitate transfer learning, we create an abstract model of dialogues that is independent of the domain of the conversation.", "In this paper we will focus on dialogues for transactions; other kinds of dialogues, such as opinion sharing, will have different models.", "We have developed an algorithm that accepts an ontology of a domain and a few phrases commonly used in that domain.", "The algorithm synthesizes dialogue training data based on an abstract dialogue model.", "The dialogue synthesized consists of turns of conversation, each of which has a start state, an agent utterance, a user utterance, and an end state.", "The start and end states summarize the semantics of the conversation at those points.", "An example of a dialogue that can be synthesized by our model is shown in Figure 1.", "To transfer knowledge to a new domain in a zero-shot setting, we train with the synthesized data for the new domain together with existing data for other domains.", "In addition, we adapt training samples from related domains by substituting their vocabulary with that of the new domain.", "We can improve the accuracy of the abstract dialogue model as well as the state-tracking neural network by iteratively refining the model based on error analysis on the validation data, and by introducing additional annotations in the new domain.", "Note that the abstract dialogue model can also be used directly to implement the agent itself.", "The contributions of this paper are as follows: a new zero-shot transfer learning technique for dialogue state tracking where the in-domain training data are all synthesized from an abstract dialogue model and the ontology of the domain.", "Our approach improves over the previous state-of-the-art result on zero-shot transfer learning for MultiWOZ 2.1 tasks by 21% on average across domains.", "We show that our approach improves the accuracy for TRADE (Wu et al., 2019), an RNN-based model, and SUMBT (Lee et al., 2019), a BERT-based model (Devlin et al., 2019), suggesting that our technique is independent of the specific model used.", "Our experimental results show that synthesized data complements BERT pretraining.", "The BERT-based SUMBT model can, in a purely zero-shot fashion, achieve between 61% and 92% of the accuracy obtained by a model trained on the full dataset.", "We propose combining pretrained models with synthesized data as a general technique to bootstrap new dialogue state trackers.", "Dialogue Datasets and Synthesis.", "Synthesized data (in training and evaluation) was proposed by Weston et al. (2015) to evaluate the ability of neural models to reason compositionally, and was also used in visual question answering (Johnson et al., 2017a; Hudson and Manning, 2019) and semantic parsing (Lake and Baroni, 2018).",
"Wang et al. (2015) proposed synthesizing data, then crowdsourcing paraphrases to train semantic parsers.", "Various semantic parsing datasets have been generated with this technique (Su et al., 2017; Zhong et al., 2017), and the technique has also been adapted to the multiturn setting (Cheng et al., 2018; Shah et al., 2018).", "While it tends to be well-annotated, paraphrase data is expensive to acquire, and these datasets are very small.", "More recently, we proposed training with both a large amount of synthesized data and a small amount of paraphrase data for semantic parsing of single sentences (Campagna et al., 2019; Xu et al., 2020).", "We showed that training with such data can perform well on real-world evaluations.", "This paper extends this work to the multi-turn setting.", "Dialogues are more complex, as they need to capture information, such as the abstract dialogue state, that is not present in the target annotation (domain and slot values).", "We extend the synthesis algorithm to operate based on a dialogue model, tracking enough information to continue the dialogue.", "We also present a novel dialogue model that is suitable for synthesis.", "Dialogue State Tracking.", "Dialogue state tracking is a long-studied field, starting with the first Dialogue State Tracking Challenge (Williams et al., 2014).", "A review of prior work can be found in Williams et al. (2016).", "Previous works on DST use different approaches, ranging from using handcrafted features to elicit utterance information (Henderson et al., 2014; Wang and Lemon, 2013).", "Mrkšić et al. (2017) use Convolutional Neural Networks to learn utterance representations.", "However, their models do not scale, as they do not share parameters across different slots.", "Zhong et al. (2018) and Nouri and Hosseini-Asl (2018) propose a new global module that shares information to facilitate knowledge transfer.", "However, they rely on a predefined ontology.", "Xu and Hu (2018) use a pointer network with a Seq2Seq architecture to handle unseen slot values.", "Lee et al. (2019) use a pre-trained BERT model (Devlin et al., 2019) to encode slots and utterances, and use multi-head attention (Vaswani et al., 2017) to find relevant information in the dialogue context for predicting slot values.", "Wu et al. (2019) introduce an encoder-decoder architecture with a copy mechanism, sharing all model parameters between all domains.", "Zhou and Small (2019) formulate multi-domain DST as a question answering task and use reading comprehension techniques to generate the answers by either span or value prediction.", "Johnson et al. (2017b) propose single encoder-decoder models for zero-shot machine translation by encoding language and input sentence jointly, and Zhao and Eskenazi (2018) propose cross-domain zero-shot language generation using a cross-domain embedding space.", "Modelling of Dialogues.", "Previous work already proposed general models of dialogues as finite state machines (Jurafsky et al., 1997; Bunt et al., 2017; Yu and Yu, 2019).", "Existing models are optimized for analyzing existing human conversations.", "Our dialogue model is the first suitable for synthesis, carrying enough information to continue the dialogue.",
"Gupta et al. (2018) previously proposed a different annotation scheme for dialogues, using a hierarchical representation scheme instead of the more typical intent and slot.", "Their work is complementary to ours: our method of dialogue synthesis is applicable to any annotation scheme.", "In this paper, we focus on the existing annotation scheme used by the MultiWOZ dataset.", "In this section, we first define abstract dialogue models, then describe how we can generate dialogues based on the model.", "We also describe the techniques we use to adapt training dialogues from other domains to the new domain.", "We define a dialogue model with finite sets of abstract states, agent dialogue acts, user dialogue acts, and transitions, defined below.", "The abstract dialogue model for transactions we use in this paper is shown in Table 1.", "The abstract states capture the typical flow of a conversation in that model, regardless of the domain.", "For example, a transaction dialogue model has states such as GREET, SEARCHREQUEST, COMPLETEREQUEST, COMPLETETRANSACTION, and CLOSECONVERSATION.", "Each domain has a set of slots; each slot can be assigned a value of the right type, a special DONTCARE marker indicating that the user has no preference, or a special ? marker indicating the user is requesting information about that slot.", "Thus, we can summarize the content discussed up to any point of a conversation with a concrete state, consisting of an abstract state and all the slot-value pairs mentioned up to that point.", "Where it is not ambiguous, we refer to the concrete state as the state for simplicity.", "All possible agent utterances in a dialogue model are classified into a finite set of agent dialogue acts, and similarly, all the possible user utterances into a finite set of user dialogue acts.", "Examples of the former are GREETUSER, ASKQUESTION, ANSWER, OFFERRESERVATION; examples of the latter are ASKBYNAME, ADDCONSTRAINTS, ACCEPT, REJECT.", "Each transition in the model describes an allowed turn in a dialogue.", "A transition consists of an abstract start state, an agent dialogue act, a user dialogue act, and an abstract end state.", "A dialogue is a sequence of turns, each of which consists of a start state, an agent utterance, a user utterance, and an end state.", "We say that a dialogue belongs to a model if and only if: (1) for every turn, the start state's abstract state, the dialogue act of the agent utterance, the dialogue act of the user utterance, and the end state's abstract state constitute an allowed transition in the model; (2) the slot-value pairs of each end state are derived by applying the semantics of the agent and user utterances to the start state; and (3) the first turn starts with the special START state, and every turn's end state is the start state of the next turn, except for the last turn, where the end state is the special END state.",
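To make these definitions concrete, here is a minimal sketch of the dialogue model as data structures; the names are our own illustration, not the released code.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass(frozen=True)
class Transition:
    from_state: str   # abstract start state, e.g. "SEARCHREQUEST"
    agent_act: str    # e.g. "Propose entity"
    user_act: str     # e.g. "Ask slot question"
    to_state: str     # abstract end state, e.g. "SLOTQUESTION"

@dataclass
class ConcreteState:
    abstract: str = "START"
    # slot name -> value, "DONTCARE", or "?" (slot being requested)
    slots: Dict[str, str] = field(default_factory=dict)

# A few transitions from the transaction dialogue model (Table 1):
MODEL = [
    Transition("START", "Greet", "Ask with constraints", "SEARCHREQUEST"),
    Transition("SEARCHREQUEST", "Propose entity", "Ask slot question",
               "SLOTQUESTION"),
    Transition("CLOSECONVERSATION", "Anything else", "Thanks", "END"),
]
```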
"We use templates to synthesize dialogues in a domain from an abstract dialogue model and a domain ontology.", "In this paper, we introduce dialogue model templates, which specify with grammar rules how to generate a turn of a dialogue from a transition in the abstract model.", "[Table 1: Our abstract dialogue model for transaction dialogues; each row is an allowed transition of the form (from abstract state, agent dialogue act, user dialogue act, to abstract state), e.g., (Search request, Propose entity, Ask slot question, Slot question) and (Complete request, Offer reservation, Accept, Accept).]", "They create possible agent and user utterances matching the agent and user dialogue acts in the transition, and they include a semantic function to ensure the utterances make sense given the input state.", "For example, the user should ask about a slot only if its value is not already known.", "The semantic function returns an output state that matches the semantics of the utterances.", "The slot values of the output state are used as the annotation for the turn when training the dialogue state tracker.", "As an example, the SLOTQUESTION template shown in Fig. 2 corresponds to the 13th transition in the dialogue model in Table 1.", "The following agent and user utterances, separated by a delimiting token <sep>, are examples of the dialogue acts PROPOSEENTITY and ASKSLOTQUESTION.", "They transition the abstract state SEARCHREQUEST to the abstract state SLOTQUESTION.", "In this case, the non-terminals NAME, NP, ADJSLOT are expanded into the domain-dependent phrases Curry Garden, Indian restaurant in the south of town, and expensive, respectively, and the results of their semantic functions, name, np, adj_slot, are (sets of) slot-value pairs: name = Curry Garden; { food = Indian, area = south }; price = expensive.", "The semantic function of SLOTQUESTION checks that the input state does not already include a value for the price slot, and that the price is not mentioned by the agent at this turn.", "It returns, as the new state, the old state with a ? on the price.", "All the non-dialogue-specific templates are introduced by Xu et al. (2020).",
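To make the template-plus-semantic-function mechanism concrete, here is a minimal Python sketch modeled on the SLOTQUESTION example; the `State` class, function names, and slot vocabulary are assumptions for illustration, not the actual template library:

```python
from dataclasses import dataclass, field

@dataclass
class State:
    abstract: str = "SEARCH_REQUEST"
    slots: dict = field(default_factory=dict)   # slot name -> value, DONTCARE, or "?"

def slot_question(state, name, np_slots, adj_slot):
    """Semantic function: valid only if the asked slot is not already known."""
    slot, _ = adj_slot
    if slot in state.slots or slot in np_slots:   # slot already known -> reject expansion
        return None
    new = State("SLOT_QUESTION", dict(state.slots))
    new.slots[slot] = "?"                         # user is requesting this slot
    return new

# Template expansion: agent proposes an entity, user asks a slot question.
state = State(slots={"food": "Indian", "area": "south"})
# Surface form produced by the grammar rules (for illustration only):
utterance = ("How about Curry Garden? It is an Indian restaurant in the "
             "south of town. <sep> Is it expensive?")
new_state = slot_question(state, ("name", "Curry Garden"),
                          {"food", "area"}, ("price", "expensive"))
print(new_state)  # State(abstract='SLOT_QUESTION', slots={'food': 'Indian', 'area': 'south', 'price': '?'})
```

The returned state's slot-value pairs would serve as the turn's annotation, as described above.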
(2020).", "We have extended this template library, originally intended for database queries, to return slot-value pairs as semantic function results.", "Readers are referred to Xu et al. (2020) for details.", "This library has SLOTQUESTION := How about NAME ? It is a NP . < sep > Is it ADJ SLOT ?: ( state , name , np , adj slot ) { if adj slot ( state . slots np ) return state .", "abstract = SLOTQUESTION state .", "slots [ adj slot . name ] = ? return state } NP := ADJ SLOT NP : ( adj slot , np ) np { adj slot } NP := NP PREP SLOT : ( np , prep slot ) np { prep slot } NP := restaurant : () ADJ SLOT := FOOD | PRICE : ( x ) x PREP SLOT := in the AREA of town : ( x ) x NAME := Curry Garden | . . . : ( x ) name = x FOOD := Italian | Indian | . . . : ( x ) food = x AREA := north | south | . . . : ( x ) area = x PRICE := cheap | expensive | . . . : ( x ) price = x Figure 2: The SLOTQUESTION template and other non-dialogue specific templates used to generate the example interaction.", "four kinds of domain templates.", "Domain Subject Templates describe different noun phrases for identifying the domain.", "Slot Name Templates describe ways to refer to a slot name without a value, such as cuisine, number of people or ar-rival time.", "Slot Value Templates describe phrases that refer to a slot and its value; they can be a noun phrase (restaurants with Italian food), passive verb phrase (restaurants called Alimen-tum), active verb phrase (restaurants that serve Italian food), adjective-phrase (Italian restau-rants), preposition clauses (reservations for 3 people).", "Finally, Information Utterance Templates describe full sentences providing information, such as I need free parking, or I want to arrive in London at 17:00.", "These are domain-specific because they use a domain-specific construction (free parking) or verb (arrive).", "Developers using our methodology are expected to provide domain templates, by deriving them manually from observations of a small number of in-domain human conversations, such as those used for the validation set.", "As there is an exponential number of possible dialogues, we generate dialogues with a randomized search algorithm.", "We sample all possible transitions uniformly to maximize variety and coverage.", "Our iterative algorithm maintains a fixed-size working set of incomplete dialogues and their current states, starting with the empty dialogue in the START state.", "At each turn, it computes a random sample of all possible transitions out of the abstract states in the working set.", "A fixed number of transitions are then chosen, their templates expanded and semantic functions invoked to produce the new concrete states.", "Extended dialogues become the working set for the next iteration; unextended ones are added to the set of generated results.", "The algorithm proceeds for a maximum number of turns or until the working set is empty.", "The algorithm produces full well-formed dialogues, together with their annotations.", "The annotated dialogues can be used to train any standard dialogue state tracker.", "We also synthesize new training data by adapting dialogues from domains with similar slots.", "For example, both restaurants and hotels have locations, so we can adapt a sentence like find me a restaurant in the city center to find me a hotel in the city center.", "We substitute a matching domain noun phrase with the one for the new domain, and its slot values to those from the target ontology.", "We also generate new multi-domain dialogues from existing 
ones.", "We use heuristics to identify the point where the domain switches and we concatenate single-domain portions to form a multi-domain dialogue.", "The MultiWOZ dataset (Budzianowski et al., 2018; Eric et al., 2019) is a multi-domain fully-labeled corpus of human-human written conversations.", "Its ontology has 35 slots in total from 7 domains.", "Each dialogue consists of a goal, multiple user and agent utterances, and annotations in terms of slot values at every turn.", "The dataset is created through crowdsourcing and has 3,406 single-domain and 7,032 multi-domain dialogues.", "Of the 7 domains, only 5 have correct annotations and any data in the validation or test sets.", "Following Wu et al. (2019) we only focus on these 5 domains in this paper.", "The characteristics of the domains are shown in Table", "2. 4.2 Machine Learning Models We evaluate our data synthesis technique on two state-of-the-art models for the MultiWOZ dialogue state tracking task, TRADE (Wu et al., 2019) and SUMBT (Lee et al., 2019).", "Here we Attraction Hotel Restaurant Taxi Train # user slots 3 10 7 4 6 # agent slots 5 4 4 2 2 # slot values 167 143 374 766 350 # real dialogues 3,469 4,196 4,836 1,919 3,903 # in-domain turns 10,549 18,330 18,801 5,962 16,081 # in-domain tokens 312,569 572,955 547,605 179,874 451,521 # domain subject templates 3 5 4 2 4 # slot name templates 15 17 21 18 16 # slot value templates 7 30 30 37 42 # information utterance templates 1 14 13 13 27 # synthesized dialogues 6,636 13,300 9,901 6,771 14,092 # synthesized turns 30,274 62,950 46,062 35,745 60,236 # synthesized tokens 548,822 1,311,789 965,219 864,204 1,405,201 transfer domain Restaurant Restaurant Hotel Train Taxi overlapping slots 2 6 6 4 4 Table 2: Characteristics of the MultiWOZ ontology, the MultiWOZ dataset, the template library, and the synthesized datasets for the zero-shot experiment on the 5 MultiWOZ domains.", "TRADE TRAnsferable Dialogue statE generator (TRADE) uses a soft copy mechanism to either copy slot-values from utterance pairs or generate them using an Recurrent Neural Network (RNN) (Sutskever et al., 2014) decoder.", "This model can produce slot-values not encountered during training.", "The model is comprised of three main parts: an RNN utterance encoder which generates a context vector based on the previous turns of the dialogue; a slot-gate predictor indicating which (domain, slot) pairs need to be tracked, and a state generator that produces the final word distribution at each decoder time-step.", "SUMBT Slot-Utterance Matching Belief Tracker (SUMBT) uses an attention mechanism over user-agent utterances at each turn to extract the slot-value information.", "It deploys a distance-based non-parametric classifier to generate the probability distribution of a slot-value and minimizes the log-likelihood of these values for all slot-types and dialogue turns.", "Specifically, their model includes four main parts: the BERT (De-vlin et al., 2019) language model which encodes slot names, slot values, and utterance pairs, a multi-head attention module that computes an attention vector between slot and utterance representations, a RNN state tracking module, and a discriminative classifier which computes the probability of each slot value.", "The use of similarity to find relevant slot values makes the model depend on the ontology.", "Thus the model is unable to track unknown slot values.", "We used the Genie tool (Campagna et al., 2019) to synthesize our datasets.", "We incorporated our dialogue model and 
"The exact version of the tool used for the experiment, as well as the generated datasets, are available on GitHub.", "For each experiment, we tuned the Genie hyperparameters separately on the validation set.", "For the models, we use the code that was released by the respective authors, with their recommended hyperparameters.", "For consistency, we use the same data preprocessing to train both TRADE and SUMBT.", "Our abstract transaction dialogue model has 13 abstract states, 15 agent dialogue acts, 17 user dialogue acts, and 34 transitions (Table 1).", "We have created 91 dialogue templates for this model.", "Dialogue templates were optimized using the validation data in the Restaurant domain.", "We also created domain templates for each domain in MultiWOZ.", "The number of templates and other characteristics of our synthesis are shown in Table 2.", "To simulate a zero-shot environment in which training data is not available, we derived the templates from only the validation data of that domain.", "We did not look at in-domain training data to design the templates, nor did we look at any test data until the results reported here were obtained.", "In the table, we also include the domain we chose to perform domain adaptation (Section 3.5) and the number of slots from the adapted domain that are applicable to the new domain.", "Note that the validation and test sets are the same datasets as the MultiWOZ 2.1 release.", "Our first experiment evaluates how our synthesized data affects the accuracy of TRADE and SUMBT on the full MultiWOZ 2.1 dataset.", "As in previous work (Wu et al., 2019), we evaluate the Joint Accuracy and the Slot Accuracy.", "Joint Accuracy measures the number of turns in which all slots are predicted correctly at once, whereas Slot Accuracy measures the accuracy of predicting each slot individually, then averages across slots.", "Slot Accuracy is significantly higher than Joint Accuracy because, at any turn, most slots do not appear, hence predicting an empty slot yields high accuracy for each slot.", "Previous results were reported on the MultiWOZ 2.0 dataset, so we reran all models on MultiWOZ 2.1.", "Results are shown in Table 3.",
"We observe that our synthesis technique, which is derived from the MultiWOZ dataset, adds no value to this set.", "We obtain almost identical slot accuracy, and our joint accuracy is within the usual margin of error compared to training with the original dataset.", "This is a sanity check to make sure our augmentation method generates compatible data and that training on it does not worsen the results.", "Before we evaluate zero-shot learning on new domains, we first measure the accuracy obtained for each domain when trained on the full dataset.", "For each domain, we consider only the subset of dialogues that include that particular domain, and only consider the slots for that domain when calculating the accuracy.", "In other words, suppose we have a dialogue involving an attraction and a restaurant: a prediction that gets the attraction correct but not the restaurant will count as joint-accurate for the attraction domain.", "This is why the joint accuracy of individual domains is uniformly higher than the joint accuracy of all the domains.", "Table 4 shows that the joint accuracy for TRADE varies from domain to domain, from 50.5% for Hotel to 74.0% for Train.", "The domain accuracy with the SUMBT model is better than that of TRADE by between 1% and 4% for all domains, except for Taxi, where it drops by about 4.5%.", "In our zero-shot learning experiment, we withhold all dialogues that refer to the domain of interest from the training set, and then evaluate the joint and slot accuracies in the same way as before.", "The joint accuracy with the TRADE model is poor throughout, except for 59.2% for Taxi.", "The rest of the domains have a joint accuracy ranging from 16.4% for Restaurant to 22.9% for Train.", "Upon closer examination, we found that simply predicting empty for all slots would yield the same joint accuracy.", "The zero-shot results for SUMBT are almost identical to those of TRADE.", "A different evaluation methodology is used by Wu et al. (2019) in their zero-shot experiment.", "The model for each domain is trained with the full dataset, except that all the slots involving the domain of interest are removed from the dialogue state.", "The slots for the new domain are present in the validation and test data, however.", "The method they use, which we reproduce here (note that Wu et al. (2019) reported results on MultiWOZ 2.0, while we report on MultiWOZ 2.1; the results on the two datasets are all within 3% of each other), has consistently higher slot accuracy, but slightly worse joint accuracy than our baseline, by 1.9% to 5.8%, except for Taxi, which improves by 1% to 60.2%.", "To evaluate our proposed technique, we add our synthesized data for the domain of interest to the training data in the zero-shot experiment.", "Besides synthesizing from templates, we also apply domain adaptation.", "The pairs of domains chosen for adaptation are shown in Table 2, together with the number of slot names that are common to both domains.",
"Taxi uses a subset of the slot names of Train, but with different values.", "Attraction, Restaurant and Hotel share the name and area slots; Restaurant and Hotel also share the price range, book day, book time and book people slots.", "For slots that are not shared, the model must learn both the slot names and slot values exclusively from synthesized data.", "Our dialogue-model based zero-shot result, reported as Zero-shot (DM) in Table 4, shows that our synthesized data improves zero-shot accuracy on all domains.", "For TRADE, the joint accuracy improves between 6% on Taxi and 19% on Restaurant, whereas for SUMBT, joint accuracy improves between 3% on Taxi and 30% on Attraction.", "With synthesis, SUMBT outperforms TRADE by a large margin.", "Except for Taxi, which has an uncharacteristically high joint accuracy of 65%, SUMBT outperforms TRADE by 8% to 18%.", "This suggests SUMBT can make better use of synthesized data.", "To compare synthesized with real training data, we calculate how close the accuracy obtained with the synthetic data gets to full training.", "We divide the accuracy of the former by that of the latter, as shown in the last row for each model in Table 4.", "Overall, training with synthesized data is about half as good as full training for TRADE, but about 2/3 as good for SUMBT (the ratio is 61% to 74%, ignoring Taxi as an outlier).", "This suggests that our synthesis algorithm is generating a reasonable variety in the dialogue flows; the pretrained BERT model, which imbues the model with general knowledge of the English language, is better at compensating for the lack of language variety in synthesized data.", "[Figure 3: Breakdown of accuracy by turn number and by number of slots in the annotation for the TRADE model on the Restaurant domain (Zero-shot, Zero-shot (DM), Full dataset).]", "Thus, the model only needs to learn the ontology and domain vocabulary from the synthesized data.", "Conversely, TRADE has no contextual pretraining and must learn the language from the limited dialogue data.", "This suggests that the combination of unsupervised pretraining and training on synthesized data can be effective to bootstrap new domains.", "To analyze the errors, we break down the result according to the turn number and number of slots in the dialogues in the test set, as shown in Fig. 3.",
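For reference, the Joint Accuracy and Slot Accuracy metrics defined earlier can be computed as in the following sketch (assumed data shapes: predicted and gold states are slot-value dicts):

```python
def joint_accuracy(preds, golds):
    # A turn counts only if every slot of its gold state is predicted exactly.
    hits = sum(p == g for p, g in zip(preds, golds))
    return hits / len(golds)

def slot_accuracy(preds, golds, all_slots):
    # Each slot is scored independently (empty slots count as correct matches),
    # then accuracy is averaged across slots.
    per_slot = []
    for slot in all_slots:
        hits = sum(p.get(slot) == g.get(slot) for p, g in zip(preds, golds))
        per_slot.append(hits / len(golds))
    return sum(per_slot) / len(per_slot)

golds = [{"area": "south"}, {"area": "south", "price": "cheap"}]
preds = [{"area": "south"}, {"area": "south"}]
print(joint_accuracy(preds, golds))                    # 0.5
print(slot_accuracy(preds, golds, ["area", "price"]))  # 0.75
```

The toy example also illustrates why slot accuracy runs much higher than joint accuracy: the mostly-empty slots are easy matches.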
"We perform this analysis using the TRADE model on the Restaurant domain, which is the largest domain in MultiWOZ.", "We observe that the baseline model achieves 100% accuracy for turns with no slots, and 0% accuracy otherwise.", "[Figure 4: Accuracy plots for the few-shot MultiWOZ experiments; one panel per domain (Attraction, Hotel, Restaurant, Taxi, Train), joint accuracy vs. percentage of real data (0%, 1%, 5%, 10%) for TRADE, TRADE with synthesis, SUMBT, and SUMBT with synthesis.]", "The baseline results in the turn-number plot thus indicate the percentage of dialogues with all empty slots at each turn.", "It is possible for 5-turn dialogues to have all empty slots because a multi-domain dialogue may not have filled any slot in one domain.", "By and large, the accuracy degrades for both the full-dataset model and the zero-shot (DM) model, with the latter losing more accuracy than the former when there are 3 or 4 slots.", "The accuracy drops almost linearly with increasing turn numbers for the full model.", "This is expected because a turn is considered correct only if the full dialogue state is correct, and the state accumulates all slots mentioned up to that point.", "The results for the full and the zero-shot (DM) models look similar, but the zero-shot model has a larger drop in later turns.", "Modeling the first few turns in the dialogue is easier, as the user is exclusively providing information, whereas in later turns more interactions are possible, some of which are not captured well by our dialogue model.", "Following Wu et al. (2019), we also evaluate the effect of mixing a small percentage of real training data into our augmented training sets.", "We use a naive few-shot training strategy, where we directly add a portion of the original training data in the domain of interest to the training set.",
"Fig. 4 plots the joint accuracy achieved on the new domain with the addition of different percentages of real training data.", "The results for 0% are the same as the zero-shot experiment.", "The advantage of the synthesized training data decreases as the percentage of real data increases, because real data is more varied, informative, and more representative of the distribution in the test set.", "The impact of synthesized data is more pronounced for SUMBT than TRADE for all domains even with 5% real data, and it is significant for the Attraction domain with 10% real data.", "This suggests that SUMBT needs more data to train, due to having more parameters, but can utilize additional synthesized data better to improve its training.", "We propose a method to synthesize dialogues for a new domain using an abstract dialogue model, combined with a small number of domain templates derived from observing a small dataset.", "For transaction dialogues, our technique can bootstrap new domains with fewer than 100 templates per domain, which can be built in a few person-hours.", "With this little effort, it is already possible to achieve about 2/3 of the accuracy obtained with a large-scale human-annotated dataset.", "Furthermore, this method is general and can be extended to dialogue state tracking beyond transactions, by building new dialogue models.", "We show improvements in joint accuracy in zero-shot and few-shot transfer learning for both the TRADE and BERT-based SUMBT models.", "Our technique using the SUMBT model improves the zero-shot state of the art by 21% on average across the different domains.", "This suggests that pretraining complements the use of synthesized data to learn the domain, and can be a general technique to bootstrap new dialogue systems.", "We have released our algorithm and dialogue model as part of the open-source Genie toolkit, which is available on GitHub.", "This work is supported in part by the National Science Foundation under Grant No. 1900638; Mehrad Moradshahi is supported by a Stanford Graduate Fellowship." ]
[ "objective", "objective", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "abstain", "method", "method", "objective", "abstain", "abstain", "abstain", "result", "objective", "objective", "objective", "abstain", "objective", "result", "result", "result", "objective", "objective", "other", "other", "other", "other", "other", "abstain", "abstain", "objective", "other", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "abstain", "method", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "result", "result", "abstain", "abstain", "other" ]
[ "End-to-end simultaneous speech-to-text translation aims to directly perform translation from streaming source speech to target text with high translation quality and low latency.", "A typical simultaneous translation (ST) system consists of a speech translation model and a policy module, which determines when to wait and when to translate.", "Thus the policy is crucial to balance translation quality and latency.", "Conventional methods usually adopt fixed policies, e.g. segmenting the source speech with a fixed length and generating translation.", "However, this method ignores contextual information and suffers from low translation quality.", "This paper proposes an adaptive segmentation policy for end-to-end ST. Inspired by human interpreters, the policy learns to segment the source streaming speech into meaningful units by considering both acoustic features and translation history, maintaining consistency between the segmentation and translation.", "Experimental results on English-German and Chinese-English show that our method achieves a good accuracy-latency trade-off over recently proposed state-of-the-art methods.", "Recent years have witnessed extensive studies and rapid progress of Simultaneous translation (ST).", "It aims to perform translation from source speech into the target language with high quality and low latency and is widely used in many scenarios, such as international conferences, press releases, etc.", "Generally, the research of ST falls into two categories: the cascaded method, and the end-to-end Corresponding author.", "1 In German, each singular noun is assigned a gender, either masculine , feminine , or neuter , which determines whether the definite article (like The in English) preceding the noun is Der , Die or Das .", "Therefore, translating The hastily without receiving the following noun may cause mistranslation.", "method.", "The cascaded method consists of an automatic speech recognition (ASR) model which transcribes the source speech into source streaming text (Moritz et al., 2020; Wang et al., 2020b; Li et al., 2020a), and a followed-by machine translation (MT) model that generates translation based on the ASR output.", "Since there are no sentence or segment boundaries in the streaming source text output by ASR, a segmentation policy is required to link the ASR and the MT to determine when to read more source tokens and when to start translation (Oda et al., 2014; Dalvi et al., 2018; Ma et al., 2019; Arivazhagan et al., 2019; Zhang et al., 2020; Wilken et al., 2020).", "However, cascaded methods 7862 face two main challenges.", "One is the error propagation that the ASR errors may hurt the translation quality.", "The other is the increase of latency because the translation model has to wait for the output of the ASR model.", "To overcome these limitations, the end-to-end method attempts to directly translate from source speech to target text, without explicitly transcribing the source speech (Bansal et al., 2018; Di Gangi et al., 2019b; Jia et al., 2019).", "To balance the translation quality and latency, the key challenge lies in the segmentation policy that determines the translation boundaries of the speech frames.", "Most of the previous work used fixed policies.", "Some of them take fixed-length policy (Nguyen et al., 2021; Ma et al., 2020b, 2021) that splits speech at a fixed frequency, for example, to generate one target word every T s ms (Figure 1", "(a)).", "Other work adopts word-based policy that splits the speech into words and generates 
one target word whenever a new source word is detected, which calls for an auxiliary source word detector (Ren et al., 2020; Elbayad et al., 2020; Ma et al., 2020b; Zeng et al., 2021; Chen et al., 2021); see Figure 1 (b).", "However, both of the above methods are hard policies, which do not consider the contextual information and result in low translation quality (Arivazhagan et al., 2019; Zhang et al., 2020).", "In this paper, we propose an adaptive segmentation policy for end-to-end simultaneous translation based on the Meaningful Unit (MU).", "The idea is borrowed from human interpreters, who interpret based on a unit with clear meaning rather than a fixed frame length or word.", "We model the speech segmentation policy as a binary classification that determines whether a speech segment is an MU.", "Once an MU is detected, it is fed into an end-to-end speech translation model, as illustrated in Figure 1 (c).", "We propose a supervised training method, using both acoustic features and translation features to train the policy.", "Besides, we propose an incremental decoding method to construct training data from speech and translation pairs.", "Concretely, we first train a full speech translation model M_ST, and then gradually expand speech frames to simulate simultaneous translation.", "When the translation of the current speech segment is a prefix of the full-speech translation, the segment is extracted as an MU.", "At inference time, we employ the same M_ST to maintain the consistency between segmentation and translation.", "Our method is more flexible than fixed policies, as it dynamically detects meaningful units according to contextual information.", "Experiments on two language pairs show that the proposed approach outperforms the strong baselines in balancing translation quality and latency.", "Cascade Simultaneous Translation.", "To eliminate the impact of ASR errors, most previous work on cascade ST uses the gold transcript, rather than the ASR result, to explore different read/write policies in ST.",
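As a toy illustration of the read/write policies this line of work studies, the sketch below generates the schedule of the wait-k policy discussed in the next paragraph (an assumption-level illustration, not tied to any released codebase):

```python
def wait_k_actions(k, src_len, tgt_len):
    # Read k source tokens first, then alternate write/read until done.
    actions = ["READ"] * min(k, src_len)
    read, written = len(actions), 0
    while written < tgt_len:
        actions.append("WRITE"); written += 1
        if read < src_len and written < tgt_len:
            actions.append("READ"); read += 1
    return actions

print(wait_k_actions(3, src_len=6, tgt_len=6))
# ['READ', 'READ', 'READ', 'WRITE', 'READ', 'WRITE', 'READ', 'WRITE', 'READ', 'WRITE', 'WRITE', 'WRITE']
```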
"Existing policies can be classified into two categories: 1) the fixed policy segments the source text based on fixed lengths (Ma et al., 2019; Dalvi et al., 2018).", "For example, wait-k (Ma et al., 2019) is a typical fixed policy that first reads k source words, then generates one target word immediately after receiving each subsequent source word.", "2) The adaptive policy learns to segment the source text according to its context (Oda et al., 2014; Cho and Esipova, 2016; Gu et al., 2017; Arivazhagan et al., 2019; Ma et al., 2020a; Zhang et al., 2020).", "It has been proven that the adaptive policy is more effective than the fixed policy in balancing translation quality and latency (Zhang et al., 2020).", "End-to-End Simultaneous Translation.", "The method has shown great potential over the cascaded method (Bérard et al., 2016; Weiss et al., 2017; Bansal et al., 2018; Jia et al., 2019; Wang et al., 2020a; Li et al., 2020b; Ansari et al., 2020).", "End-to-end ST contains a speech translation model, along with a policy to decide when to translate.", "However, most previous studies are based on a fixed-length policy that translates every T_s ms (Nguyen et al., 2021; Ma et al., 2020b), or decides to translate whenever a fixed number of words is detected (Ren et al., 2020; Elbayad et al., 2020; Zeng et al., 2021; Ma et al., 2021; Chen et al., 2021), following the fixed policy of cascade ST systems.", "This paper presents an adaptive policy for end-to-end simultaneous translation.", "We are motivated by an adaptive policy proposed for cascade ST (Zhang et al., 2020), which performs translation when a source text segment is detected to be a unit with clear meaning.", "However, there are three main differences.", "First, our method is proposed for end-to-end ST, while Zhang et al. (2020) is for cascade ST.", "Second, our method directly detects MUs on speech rather than on the streaming text output of ASR.", "Third, we propose a multi-modal MU detection model using both acoustic features and translation history.", "The overall framework of our adaptive speech segmentation policy is illustrated in Figure 2.", "Given a streaming speech s, we incrementally detect whether a speech clip s_t (t = 1, 2, ...) is an MU, where s_t denotes the first t·F frames of s, and F is the detection interval.", "Once an MU is detected, the speech translation model produces its translation y′ with the translation history y_p force-decoded as a translation prefix.", "Meanwhile, y′ is displayed to users and added to the translation history y_p to improve MU detection.", "In the following, we first introduce our MU detection model (Section 3.1), then propose a method to construct MU training data (Section 3.2).", "Finally, we describe the training details in Section 3.3.", "We model MU detection as a classification problem.", "Given a source speech s, the detector incrementally reads speech clips at each time t, to make a decision whether s_t is an MU.", "We propose a multi-modal detector that uses both acoustic features and translation history.", "See the bottom green block of Figure 2 for illustration.", "For the acoustic feature extractor E^f_a, we use stacked temporal convolutional layers applied to raw speech features (80-channel log-mel filterbanks).", "Each of the convolutional layers is followed by layer normalization and a GELU activation function (Hendrycks and Gimpel, 2016), following Baevski et al. (2020).",
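The framework described above can be summarized by the following minimal inference-loop sketch; `mu_detector` and `translate_with_prefix` stand in for the trained MU classifier and the speech translation model M_ST, and their interfaces are assumptions for illustration:

```python
def simultaneous_translate(stream, mu_detector, translate_with_prefix,
                           threshold=0.5):
    history = []           # translation history y_p, shown to the user so far
    frames = []            # source frames received so far
    for clip in stream:    # each clip delivers F new frames of audio
        frames.extend(clip)
        # Binary decision: is the speech received so far a meaningful unit?
        if mu_detector(frames, history) > threshold:
            # Force-decode the history as a prefix, emit only the new words.
            full = translate_with_prefix(frames, prefix=history)
            new_words = full[len(history):]
            yield new_words
            history = full
```

Note how the emitted translation feeds back into `history`, which is exactly the role the translation history plays in the multi-modal detector.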
(2020).", "For the context feature extractor E ft , we use a trainable word embedding layer.", "The outputs of the feature extractors are fed to the acoustic encoder E e a and context encoder E e t to generate latent representation, respectively.", "We add a position embedding to the textual embedding, as in BERT (Devlin et al., 2019), and add a convolutional layer as the relative positional embedding to the acoustic embedding, similar to Mohamed et al. (2019); Baevski et al. (2019).", "Both E et and E ea follow the Transformer architecture (Sperber et al., 2018; Devlin et al., 2019).", "Next, we add different type embeddings e Type ( e [ TXT ] and e [ AUD ] ) to both text and audio embeddings to indicate their source type.", "These two sequences are then concatenated and fed to a 6-layer Transformer for cross-modal fusion.", "Special tokens [ CLS ] and [ SEP ] are added in this process following BERT (Devlin et al., 2019).", "The final hidden state corresponding to [ CLS ] is used as the aggregated sequence representation to predict the classification result l (cid:48) using a fully-connected layer, followed by a softmax.", "where E mm denotes the multi-modal fusion Transformer and fc performs a fully-connection layer.", "Since there are no standard MU segmentation training corpora, we propose a simple method to automatically extract meaningful speech units to construct MU training samples.", "We expect that MUs can be translated properly without waiting for future speech.", "Therefore, we define MU as the minimum speech segment whose translation will not be changed by subsequent speech .", "This requires MUs to contain enough information to generate stable translation.", "Accordingly, we propose to extract meaningful speech units by comparing the translation of every speech prefix segment and the full-speech translation with a pre-trained speech translation model MST .", "For a speech segment s t , if its translation y (cid:48) = MST ( s t ) is a prefix of the full-speech translation (cid:101) y = MST ( s ) , we identify that s t is sufficient to provide a stable translation and annotate it to an MU.", "We propose an incremental-translation paradigm.", "We incrementally translate s t , t = 1 , 2 , ... 
"If so, we extract s_t as an MU, and force-decode its translation y′ as a prefix when detecting subsequent speech segments.", "This is to keep consistent with the force-decoding strategy at the inference stage.", "Moreover, while comparing y′ with ỹ, as illustrated in Figure 3, we propose a tail-truncation strategy that discards the last k words from the partial decoding result y′.", "This is to avoid translation errors caused by ambiguous speech fragments.", "(Note that the length of the acoustic encoding T is not equal to the number of source frames t·F, due to the temporal sampling of the convolutional feature extractor layers.)", "For example, s_3, which pronounces 'The dog has', is translated to 'Der Hund hat', which is not a prefix of the full-speech translation ỹ because the speech received so far is ambiguous (the English word 'has' can be translated to either 'ist' or 'hat' in most cases, depending on what follows).", "However, after truncating the tail word, y′ turns into 'Der Hund' and becomes a prefix of ỹ.", "Therefore, discarding tail words from y′ enables the model to discover translations that partially match ỹ in advance, thus reducing the granularity of extracted MUs and reducing latency.", "At the inference stage, we also remove the last k words from the translation of detected MUs.", "Our pre-trained speech translation model M_ST includes an acoustic feature extractor E^f_a, an acoustic encoder E^e_a, and a textual translation decoder, as shown in Figure 2(c).", "E^f_a and E^e_a are shared with the multi-modal MU detection model so that the acoustic forward computation can be shared at inference time.", "The translation decoder is based on the Transformer, which links to the acoustic encoder through cross-attention (Vaswani et al., 2017).", "Note that to keep MU detection consistent in both training and decoding, we initialize the acoustic feature extractor and context encoder with the weights from M_ST, and keep them frozen when training MU detection, instead of training them jointly.", "Formally, the M_ST model first optimizes the two acoustic encoders and the translation decoder with the auto-regressive loss L_ST: L_ST = −Σ_{(s,y)∈D_ST} Σ_{i=1}^{N} log p(y_i | s, y_{<i}; θ_ae, θ_td). (3)", "Then the MU detection model is optimized without gradients back-propagated to the acoustic encoders: L_MU = −Σ_{(s,y_p,l)∈D_MU} log p(l | s, y_p; θ_te, θ_mm), (4) where θ_ae denotes the weights of the two acoustic encoders, θ_td is the translation decoder of M_ST, θ_te is the textual encoder for y_p, and θ_mm is the weights of the multi-modal fusion.", "D_ST is the speech translation dataset, and D_MU contains training triplets generated by the pre-trained M_ST model.", "We carry out experiments on English-German (En-De) and Chinese-English (Zh-En) simultaneous translation.", "We use sacreBLEU (Post, 2018) to evaluate the translation performance and the acoustic average lagging (AL) (Ma et al., 2019, 2020b) as the latency metric.", "The AL measures how much the system lags behind an ideal policy, which produces translation at the same speed as the audio is received.", "We evaluate our method on the MuST-C (Di Gangi et al., 2019a) En-De dataset and the BSTC (Zhang et al., 2021a) Zh-En dataset.", "To compare with the previous methods, we carried out experiments under two settings: the Limited-training-corpora setting that constrains the training data to a limited set of
corpora, and the Open-training-corpora setting that uses more data.", "For En-De, we set the training data of the Limited-training-corpora setting as the training set of MuST-C, a dataset consisting of 408 hours of speech with transcription and translation, while the experiments with the Open-training-corpora setting use unlimited datasets of up to 1,302 hours of speech.", "See Appendix A.1 for details.", "We compare our method with previous strong ST approaches.", "Methods listed with * are carried out under the Open-training-corpora setting, while others use the Limited-training-corpora setting.", "Wait-k (Chen et al., 2021) integrates the wait-k (Ma et al., 2019) policy into end-to-end speech translation with an additional ASR module to detect the number of source words within the streaming speech.", "SimulST (Ma et al., 2020b) takes the fixed-length policy that translates one token every T_s ms; we set T_s to 280 ms following their best experimental settings.", "StreamMemory (Ma et al., 2021) proposes an end-to-end speech translation model with augmented memory, which stores previous states of the streaming speech to reduce the computation cost.", "They use the same fixed-length policy as in SimulST.", "RealTranS (Zeng et al., 2021) proposes a fixed policy (Wait-k-Stride-N) for end-to-end ST that triggers translation based on the number of words within the streaming speech, detected with a CTC module built on top of the speech translation encoder.", "Wait-K-Stride-S-Write-N* (Nguyen et al., 2021) proposes a fixed-length policy for end-to-end ST that first waits for K frames, then alternates between decoding N target words and reading S frames.", "ON-TRAC* (Elbayad et al., 2020): a cascade system that achieved first place in the IWSLT2020 En-De simultaneous translation shared task (Federico et al., 2020).", "It takes a fixed policy (wait-k) to link the ASR output and the MT module.", "MU-ST: our proposed method that triggers the speech translator with an MU-based adaptive policy.", "The M_ST model is trained from scratch.", "MU-ST (+pretrain)*: to compare with methods under the Open-training-corpora setting, we adopt pre-training techniques in training the speech translator M_ST to enhance the acoustic representations, as performed in SimulST (Ma et al., 2020b), StreamMemory (Ma et al., 2021) and RealTranS (Zeng et al., 2021).", "For MU-ST (+pretrain)*, we follow the recently proposed speech translation pre-training method (Li et al., 2020b) to initialize the encoder with wav2vec 2.0 (Baevski et al., 2020) and the decoder with mBART50 (Tang et al., 2020), then fine-tune on speech translation corpora.", "As listed in Table 1, MU-ST takes the Base model and MU-ST (+pretrain)* takes the Big version.", "We set the number of truncated words to k = 2 in tail-truncation and the length of speech clips to T_s = 250 ms by default.", "Figure 4 shows the results on the MuST-C dev set and tst-COMMON set.", "We calculate the computation-aware latency on the MuST-C dev set and the computation-unaware latency on the MuST-C tst-COMMON set, to be consistent with previous work.", "The difference between them is whether the model inference time is taken into account.", "MU-ST achieves higher translation quality under the same latency on both datasets.", "In the figures, δ denotes the probability threshold of the MU detector, i.e., each δ ∈ {0.3, 0.4, ..., 0.9} corresponds to the result of taking p(l = 1 | s, y_p) > δ as the criterion for determining s to be an MU.",
"A small δ produces fine-grained speech segments and small delay, but if some ambiguous speech segments are incorrectly recognized as MUs, it results in poor translation quality.", "On the dev set, we compare MU-ST with Wait-k (Chen et al., 2021), SimulST (Ma et al., 2020b) and StreamMemory (Ma et al., 2021).", "SimulST and StreamMemory take a fixed-length speech policy (Figure 1 (a)), while Wait-k performs wait-k based on the number of words detected by an ASR module (Figure 1 (b)).", "We also plot the result of a cascaded system based on textual Wait-K (Chen et al., 2021).", "We observed that our adaptive policy outperforms the Wait-k methods, and the Wait-k approaches are superior to the fixed-length methods.", "We report the result of translating the whole speech without segmentation as the full-speech translation.", "MU-ST approaches the BLEU of full-speech translation as early as δ = 0.6, indicating that our method can achieve BLEU comparable to full-speech translation with a very small latency (about 2100 ms), while the other methods still have a large gap to the full-speech translation at the corresponding delay.", "On the tst-COMMON set, we compare MU-ST with three fixed-policy methods.", "ON-TRAC* and RealTranS follow the wait-k policy, while Wait-K-Stride-S-Write-N* takes the fixed-length policy.", "We observed that our proposed MU-ST trained with MuST-C achieves higher BLEU at all latency regimes than the other approaches.", "In particular, it is even superior to the cascade method ON-TRAC*, which utilized large-scale ASR and MT corpora for training.", "[Figure 5: Impact of the number of truncated words in tail-truncation; BLEU vs. AL (ms) for k = 0, 2, 4, 6 and full-speech translation.]", "MU-ST (+pretrain)* outperforms MU-ST in BLEU by taking advantage of pre-trained models and more training data for M_ST.", "The full-speech translation of MU-ST (+pretrain)* shows a 3.32-point BLEU improvement (22.58 → 25.90) over MU-ST.", "Meanwhile, the latency of MU-ST (+pretrain)* is longer than that of MU-ST under the same δ.", "This may be because the large pre-trained speech translation model of MU-ST (+pretrain)* gives its speech translation model M_ST high translation diversity, which is not conducive to constructing fine-grained meaningful units.", "Strong translation diversity reduces the probability of prefix matching between the partial translation y′_i and the full-speech translation ỹ in MU construction, thus yielding longer MUs and higher latency.", "We conduct experiments concerning various aspects of our MU-based policy in this section.", "All ablation models are trained under the Limited-training-corpora setting and evaluated on the MuST-C tst-COMMON set.", "When constructing the data for MU detection, we proposed a tail-truncation strategy, which removes the last k words from the translation of each speech segment to avoid translation errors caused by ambiguous speech segments.", "Now we verify its significance.", "We compare models with different numbers of truncated words k in tail-truncation, with results shown in Figure 5.", "It is observed that without tail-truncation (k = 0), the translation quality is worse and the latency is longer, compared with k = 2.", "This corroborates our motivation specified in Section 3.2 that tail-truncation enables the model to discover fine-grained meaningful units.",
"Moreover, it also facilitates producing context-aware translation by taking a longer context into account.", "Therefore, the tail-truncation strategy plays an important role in extracting meaningful units.", "Increasing k from 2 to 4 and 6 generally brings higher latency, along with a tiny improvement in translation quality.", "According to the I-MOS ranking mechanism (Zhang et al., 2021b) for ST systems, k = 2 and k = 6 rank tied, both better than k = 4; k = 2 is superior in the low-latency regime and k = 6 performs better in the high-latency regime.", "This is because, according to our MU extraction algorithm, we can always guarantee the consistency between the MU translation and the full-speech translation, regardless of the value of k.", "A larger k makes it easier to match the partial translation and the full-speech translation, thus producing more fine-grained MUs.", "At the same time, truncating more translated words avoids displaying problematic translations at the tail.", "So the translation quality for large k does not degrade.", "On the contrary, using a larger k improves the translation accuracy because the model receives more source speech when performing translation.", "To further study the effect of the translation history on MU detection, we remove the previously generated translation y_p from the multi-modal MU detection model.", "Without y_p, the segmentation model detects MUs based only on the input speech clip s_t, and only optimizes the top 6-layer Transformer.", "We build a golden segmentation on tst-COMMON based on the meaningful speech unit construction algorithm (Section 3.2), then evaluate the different models on MU segmentation, translation quality, and latency.", "The results are shown in Table 2.", "It is observed that for both the limited and open training corpora settings, the multi-modal method, which combines speech features and translation history, outperforms the single-modal method on MU segmentation in terms of F1 score (absolute improvements of 1.6-2 percentage points).", "However, there are only slight improvements in translation quality and a slight delay in latency.", "This is because incorrect segmentation does not necessarily lead to a decline in BLEU, which also depends on the robustness of the speech translation model.", "[Table 2: The performance of single-modal and multi-modal MU detection models evaluated on MuST-C tst-COMMON at a fixed threshold δ. MU-ST: Single-Modal F1 72.6%, BLEU 20.92, AL 1642.2 ms; Multi-Modal F1 74.6%, BLEU 21.07, AL 1684.8 ms. MU-ST (+pretrain)*: Single-Modal F1 72.5%, BLEU 22.73, AL 1925.7 ms; Multi-Modal F1 74.1%, BLEU 22.78, AL 1952.5 ms.]", "For some small errors brought by wrong segmentation, a robust speech translation model may ignore them and generate a correct translation when translating subsequent MUs.", "In such cases, the overall BLEU will not be largely affected.", "We also evaluate our method on Zh-En ST using the BSTC (Zhang et al., 2021a) dataset.", "BSTC is the largest public Zh-En speech translation corpus, but contains only 66 hours of speech, corresponding to 37k sentences.", "To alleviate the data scarcity, we first construct pseudo speech translation data by translating the transcripts of ASR corpora (AISHELL-1 (Bu et al., 2017), AISHELL-3 (Shi et al., 2020), and aidatatang_200zh, a free Chinese Mandarin speech corpus by Beijing DataTang Technology Co., Ltd (www.datatang.com)) with a Zh-En machine translation model trained on a translation corpus, CCMT2019 (Yang et al., 2019).", "Then the pseudo speech translation data, together with the BSTC, are used as the training corpus for the Zh-En end-to-end speech translation model.",
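The pseudo-data construction just described amounts to a simple pipeline; the following sketch assumes generic corpus and MT-model interfaces rather than the authors' actual tooling:

```python
def build_pseudo_st_corpus(asr_corpora, mt_model):
    # Transcripts of ASR corpora are machine-translated to build
    # pseudo (speech, translation) pairs for speech translation training.
    pseudo = []
    for corpus in asr_corpora:                # e.g., AISHELL-1, AISHELL-3, ...
        for speech, zh_transcript in corpus:  # (audio, Chinese transcript) pairs
            en_translation = mt_model.translate(zh_transcript)
            pseudo.append((speech, en_translation))
    return pseudo
```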
model.", "The combined training set contains a total of 529 hours of speech, corresponding to 478k sentence pairs of transcript and translation.", "We implement three methods for comparison: Cascade : we use an adaptive policy (Zhang et al., 2020) to connect an ASR model and an MT model.", "The ASR model is trained on 529 hours of speech, and the MT model based on Transformer big is pre-trained on CCMT2019 and fine-tuned on BSTC.", "The adaptive policy based on textual MU (Zhang et al., 2020) is trained on BSTC.", "Cascade * : Similar to Cascade , the only difference is that it adopts a public real-time ASR API 9 that uses more than 9400 hours of ASR training data (Amodei et al., 2016).", "8 a free Chinese Mandarin speech corpus by Beijing DataTang Technology Co., Ltd (www.datatang.com) 9 https://ai.baidu.com/tech/speech/realtime_asr 20 22 24 26 28 30 32 34 1000 1200 1400 1600 1800 2000 2200 2400 2600 BLEU Computation-unaware AL(ms) Cascade Cascade* MU-ST Full-SpeechTranslation ... 4373 4526 = 0.5 = 0.9 = 0.", "MU-ST : Our proposed adaptive speech segmentation policy for end-to-end ST. The acoustic encoders and target decoders of our speech translation model are initialized by the ASR and MT model of Cascade method, respectively.", "Then we fine-tune with speech translation data following Li et al. (2020b).", "The results in Figure 6 show that: 1) Cascade * has a significant advantage over the other two methods.", "This is because the word error rate of the ASR model is 10.32% and 21.58% for Cascade * and Cascade , respectively, leading to 5.9 BLEU points of gap between their full-speech translation results (27.2 vs. 33.1) 10 .", "2) Cascade and our end-to-end method MU-ST are optimized with identical training data, but MU-ST outperforms Cascade .", "We attribute this improvement to two reasons.", "First, the end-to-end method avoids ASR error propagation, with the full-speech translation of MU-ST surpassing Cascade by 0.7 BLEU points (27.9 vs. 27.2).", "Second and more important, MU-ST detects MUs directly from speech, thus avoiding loss of information.", "The average gap between Cascade and MU-ST at five ST results is 2.9 BLEU points, much larger than that of full-speech translation (0.7).", "This represents that segmentation from the source speech is superior to segmentation from noisy ASR results.", "Accordingly, we expect our MU-ST to have greater potential based on large-scale training data.", "We present an adaptive speech segmentation policy for end-to-end simultaneous translation, which triggers translation with a meaningful speech unit de-10", "de-10 Note that, our proposed MU-ST surpassed the cascade method ON-TRAC * in En-De experiments, but it failed to surpass Cascade * in Zh-En because the ASR training data of ON-TRAC * in En-De is only three times that of MU-ST (in hours), while the training data of Cascade * is thousands of times that of MU-ST in Zh-En experiments.", "tector.", "Experiments across two language pairs show that our method outperforms state-of-the-art methods with constrained training corpus, suggesting the effectiveness of our adaptive policy.", "Ablation studies reveal key factors that lead to its success, including tail-truncation, multi-modal segmentation, and speech-text pre-training." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "abstain", "method", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "other", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "result", "abstain" ]
[ "bert2BERT: Towards Reusable Pretrained Language Models", "Abstract In recent years, researchers tend to pre-train ever-larger language models to explore the upper limit of deep models.", "However, large language model pre-training costs intensive computational resources, and most of the models are trained from scratch without reusing the existing pre-trained models, which is wasteful.", "In this paper, we propose bert2BERT 1 , which can effectively transfer the knowledge of an existing smaller pre-trained model to a large model through parameter initialization and significantly improve the pre-training efficiency of the large model.", "Specifically, we extend the previous function-preserving (Chen et al., 2016) method proposed in computer vision on the Transformer-based language model, and further improve it by proposing a novel method, advanced knowledge for the large model's initialization.", "In addition, a two-stage learning method is proposed to further accelerate the pre-training.", "We conduct extensive experiments on representative PLMs (e.g., BERT and GPT) and demonstrate that (1) our method can save a significant amount of training cost compared with baselines including learning from scratch, StackBERT (Gong et al., 2019) and MSLT (Yang et al., 2020); (2) our method is generic and applicable to different types of pretrained models.", "In particular, bert2BERT saves about 45% and 47% computational cost of pretraining BERTBASE and GPTBASE by reusing the models of almost their half sizes.", "Pre-trained language models (PLMs), such as BERT (Devlin et al., 2019), GPT (Radford et al., 2018, 2019; Brown et al., 2020), ELECTRA (Clark et al., 2020), XLNet (Yang et al., 2019) and RoBERTa (Liu et al., 2019), have achieved great", "success in natural language processing (NLP).", "However, the pre-training process of large PLMs can be extremely computationally expensive and produces huge carbon footprints.", "For example, GPT-3 uses 3.1E+6 GPU hours for training, at an estimated cost of $4.6 million 2 , consuming a lot of computing resources.", "Therefore, how to reduce the training cost of PLM is of great importance to Green AI (Schwartz et al., 2020).", "Recently, there is a trend of training extremely large models to explore the upper limits of PLMs.", "For example, large pre-trained models, including GPT-3 (Brown et al., 2020) (175B), PanGu(Zeng et al., 2021) (200B) and Switch Transformers (Fedus et al., 2021) (1571B), have been proved promising in language understanding and generation.", "However, these models are all pre-trained from scratch independently without utilizing the knowledge of smaller ones that have already been trained.", "On the other hand, our empirical studies show that the pre-trained models of different scales could share similar knowledge, for example in Figure 2, the attention patterns of the two PLMs with different sizes are similar.", "2 https://lambdalabs.com/blog/ demystifying-gpt-3/", "propose the bert2BERT method, which can efficiently transfer the learned knowledge of the smaller model to the large model.", "bert2BERT consists of two components: (1) For parameter initialization, we first extend the function preserving training (Chen et al., 2016) to PLMs by duplicating and stacking the parameters of the existing smaller PLM, which we call function-preserving initialization (FPI).", "FPI ensures that the initialized large model has almost the same behavior as the small model, so that the large model has a good starting point for later optimization.", "We 
also find that duplicating the weights of the upper layer to the current layer can further accelerate the convergence of the large model, which we call advanced knowledge initialization (AKI).", "Although the AKI somewhat violates the principle of function preserving, we find that empirically it also has a good starting point as shown in Table 1, which leads to a faster convergence rate and achieves higher training efficiency.", "(2) Secondly, a two-stage training strategy is further applied to the large model to accelerate the training process.", "To demonstrate the superiority of our method, we conduct extensive experiments on two representative PLMs: BERT and GPT, with different source model sizes.", "The results show that: (1) our method can save a significant amount of computation in pre-training compared to the traditional way of learning from scratch and progressive stacking methods such as StackBERT (Gong et al., 2019) and MSLT (Yang et al., 2020); (2) our method is model-agnostic, which can be applied on a wide range of Transformer-based PLMs.", "One typical example is that, when using a small pre-trained model with half the size of BERTBASE for initialization, bert2BERT saves 45% computation cost of the original BERTBASE pre-training.", "In general, our contributions are summarized as follows: (1) We explore a new direction for the efficient pre-training by reusing the trained parameters of small models to initialize the large model; (2) We successfully extend function preserving method (Chen et al., 2016) on BERT and further propose advanced knowledge initialization, which can effectively transfer the knowledge of the trained small model to the big model and improve the pre-training efficiency; (3) The proposed method outperforms other training methods and achieves 45% computation reduction on BERTBASE ; (4) Our method is generic, effective for both the BERT and GPT models, and have great potential to become an energy-efficient solution for pre-training super large-scale language models.", "Efficient Pre-training in NLP.", "The efficiency of pre-training has been explored by previous work.", "Some works (Gong et al., 2019; Yang et al., 2020; Gu et al., 2021) propose progressive learning to accelerate the pre-training, which are motivated by the fact that different layers have some similar knowledge (e.g., attention patterns).", "They start pre-training a small model with fewer Transformer layers, and then iteratively expand the model by stacking the already trained layers on the top.", "Another line of work proposes to back distill the knowledge of the small models into large models, which is termed as knowledge inheritance (Qin et al., 2021).", "Some works focus on the data effi-2135 ciency (Wu et al., 2021) and take notes for rare words during the pre-training process to help the model understand them when they occur next.", "ELECTRA (Clark et al., 2020) proposes a task of replaced token detection to predict whether each token in the input was replaced or not, which improves the pre-training efficiency.", "Our method is orthogonal to this kind of work and the combination of ELECTRA and bert2BERT could achieve better efficiency.", "In addition, there are several other orthogonal techniques for efficient pre-training: mixed-precision training (Shoeybi et al., 2019), large batch optimization (You et al., 2020), model architecture innovation (Lan et al., 2020), layer dropping technique (Zhang and He, 2020), etc.", "Reusable Neural Network.", "Reusable neural network, a topic 
related to transfer learning (Pan and Yang, 2010), is introduced to accelerate the model training in computer vision.", "One classical work is Net2Net (Chen et al., 2016), which first proposes the concept of the function-preserving transformation to make neural networks reusable.", "However, Net2Net randomly selects the neurons to be split.", "To handle this problem, some works (Wu et al., 2019, 2020b; Wang et al., 2019b; Wu et al., 2020a) leverage a functional steepest descent idea to decide the optimal subset of neurons to be split.", "The pruning technique (Han et al., 2015) is also introduced for reusable neural networks (Feng and Panda, 2020).", "In this paper, we study the reusable pre-trained language model and propose a new method, bert2BERT, to accelerate the pre-training of BERT and GPT.", "BERT consists of one embedding layer and multiple Transformer (Vaswani et al., 2017) layers.", "The embedding layer first maps the tokens in a sentence into vectors with an embedding matrix $W^E$.", "Then one normalization layer is employed to produce the initial hidden states $H_0$.", "The hidden states are iteratively processed by multiple Transformer layers as follows:", "$H_l = \mathrm{Transformer}_l(H_{l-1}),\; l \in [1, L]$ (1), where $L$ denotes the number of Transformer layers, each including a multi-head attention (MHA) and a feed-forward network (FFN).", "MHA.", "It is composed of multiple parallel self-attention heads.", "The hidden states of the previous layer are fed into each head and then the outputs of all heads are summed to obtain the final output as follows: $Q_i, K_i, V_i = H_{l-1}W^Q_{l,i},\, H_{l-1}W^K_{l,i},\, H_{l-1}W^V_{l,i}$; $H^{\mathrm{HEAD}}_{l,i} = \mathrm{softmax}\big(\frac{Q_i K_i^\top}{\sqrt{d_k}}\big)V_i W^O_{l,i}$; $\mathrm{MHA}(H_{l-1}) = \sum_{i=1}^{a} H^{\mathrm{HEAD}}_{l,i}$; $H^{\mathrm{MHA}}_l = \mathrm{LayerNorm}(H_{l-1} + \mathrm{MHA}(H_{l-1}))$. (2)", "$H_{l-1}$ is linearly projected to queries ($Q_i$), keys ($K_i$) and values ($V_i$) using $W^Q_{l,i}$, $W^K_{l,i}$, $W^V_{l,i}$ respectively.", "$H^{\mathrm{HEAD}}_{l,i}$ indicates the context-aware vector which is obtained by the scaled dot-product of queries and keys in the $i$-th attention head.", "$a$ represents the number of self-attention heads.", "$d_k$ is the head dimension acting as the scaling factor.", "FFN.", "It consists of two linear layers and one GeLU activation function (Hendrycks and Gimpel, 2016), that is: $H^{\mathrm{FFN}}_l = \mathrm{GeLU}(H^{\mathrm{MHA}}_l W^1_l + b^1_l)W^2_l + b^2_l$, $H_l = \mathrm{LayerNorm}(H^{\mathrm{MHA}}_l + H^{\mathrm{FFN}}_l)$. (3)", "Layer Normalization.", "Both the modules of MHA and FFN have one layer normalization (Ba et al., 2016) that stabilizes the dynamics of the hidden states in the Transformer.", "Formally, it is written as: $\mathrm{LayerNorm}(H) = \frac{H - \mu_H}{\sigma_H} \odot W^{\mathrm{LN}} + b^{\mathrm{LN}}$ (4), where $\odot$ means the element-wise multiplication."
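To make the layer structure in Eqs. (1)-(4) concrete, here is a minimal PyTorch sketch of one post-LN Transformer layer; the dimension defaults are illustrative, not taken from the paper.

```python
import torch.nn as nn

class PostLNTransformerLayer(nn.Module):
    # A minimal sketch of Eqs. (1)-(4): post-LN MHA and FFN sub-layers.
    def __init__(self, d_model=768, n_heads=12, d_ff=3072):
        super().__init__()
        self.mha = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ln1 = nn.LayerNorm(d_model)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                                 nn.Linear(d_ff, d_model))
        self.ln2 = nn.LayerNorm(d_model)

    def forward(self, h):                  # h: (batch, seq, d_model)
        attn, _ = self.mha(h, h, h)        # MHA(H_{l-1}), Eq. (2)
        h = self.ln1(h + attn)             # H^MHA_l
        return self.ln2(h + self.ffn(h))   # H_l, Eq. (3)
```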
model.", "Then we further pre-train the initialized target model with a two-stage pre-training method.", "The overall workflow is illustrated in Section 4.5.", "Essentially, the width-wise expansion can be decomposed into expansions of parameter matrices (or vectors 3 ).", "As illustrated in Figure 3, the matrix expansion enlarges W R d w in d w out of S to U R d u in d u out of T by two kinds of operations: in-dimension and out-dimension expansion.", "In the following sections, we first introduce two strategies of width-wise expansion: function-preserving and advanced knowledge initialization.", "Then, we introduce the depth-wise expansion and detail the two-stage pre-training process.", "For the paper clarity, we introduce two index mapping functions: g in and g out , where g in ( i ) means the i -th in-dimension of U reuses the g in ( i ) -th in-dimension parameters of W , g out ( j ) means the j -th out-dimension of U reuses the g out ( j ) -th out-dimension parameters of W .", "Both our two methods are defined with these two mapping functions.", "W ( i,j ) means the parameter element, i and j refer to the i -th in-dimension index and j -th out-dimension index respectively.", "As shown in Figure 3, the i -th in-dimension parameters of W are the parameters of the i -th input neuron of W or the i -th column of W .", "Function preserving initialization (FPI) (Chen et al., 2016) aims to make the initialized target model have the same function as the source model, which means that given the same input, the initialized target model has the same output as the source model.", "In this paper, we extend FPI on a different architecture, Transformer-based pre-trained language model.", "We give an example in Figure 3 to illustrate 3 We omit the expansion of bias (vector) for simplicity.", "as follows: g in ( i ) = (cid:40) i i [1 , d w in ] f ( { 1 , 2 , ..., d w in } ) i ( d w in , d u in ] , (5) g out ( j ) = (cid:40) j j [1 , d w out ] f ( { 1 , 2 , ..., d w out } ) j ( d w out , d u out ] , (6)", "where f ( ) is uniform sampling.", "We denote the weight expansion as U = EXPN( W ; g in , g out ) , which includes in-dimension expansion (Eq. 7) and out-dimension expansion (Eq. 
"We denote the weight expansion as $U = \mathrm{EXPN}(W; g_{in}, g_{out})$, which includes the in-dimension expansion (Eq. 7) and the out-dimension expansion (Eq. 8): $C_{g_{in}(i)} = \sum_{i'=1}^{d^u_{in}} \mathbb{I}(g_{in}(i') = g_{in}(i))$, $\widetilde{U}_{(i,*)} = \frac{1}{C_{g_{in}(i)}} W_{(g_{in}(i),*)}$ (7), $U_{(*,j)} = \widetilde{U}_{(*, g_{out}(j))}$ (8), where $\mathbb{I}(\cdot)$ is an indicator function and $C_{g_{in}(i)}$ is the count of $g_{in}(i)$ in the values of $g_{in}(\cdot)$, which is used to re-scale the original parameters to keep the function-preserving property.", "Expansion for All Modules.", "We apply FPI to all modules of BERT via the matrix expansion $\mathrm{EXPN}(\cdot)$.", "Specifically, for the embedding matrix $W^E$, we only conduct the out-dimension expansion: $U^E_{(*,j)} = W^E_{(*, g^e_{out}(j))}$.", "(9) The MHA module can be decomposed into multiple parallel self-attention heads, and we conduct the head-wise expansion for this module, which means increasing the number of attention heads.", "The head-wise expansion is formulated as: $U^{Q|K|V|O} = \mathrm{EXPN}(W^{Q|K|V|O}; g^{q|k|v|o}_{in}, g^{q|k|v|o}_{out})$.", "(10)", "The out-dimension expansion for $W^Q_{l,i} | W^K_{l,i} | W^V_{l,i}$ is: $g^{q|k|v}_{out}(j) = j$ if $j \in [1, a^s]$, and $f(\{1, 2, \ldots, a^s\})$ if $j \in (a^s, a^t]$ (11), where $j$ is the head index and $a^{s|t}$ denote the head numbers of the source and target models respectively.", "Specifically, the head-wise expansion means that we reuse the head-group parameters to construct the new matrices.", "The $i$-th head group in the $l$-th layer contains $W^Q_{l,i} | W^K_{l,i} | W^V_{l,i} | W^O_{l,i}$ in Eq. 2.", "The module has three constraints: {$g^e_{out} = g^{q|k|v}_{in}$; $g^{q|k|v}_{out} = g^o_{in}$; $g^{q|k|v}_{in} = g^o_{out}$}, with the first two constraints for hidden-dimension consistency (Wen et al., 2018; Chen et al., 2021) and the third one for the residual connection (Eq. 2).", "Similar to the MHA module, the mapping functions of FFN also have three constraints: {$g^o_{out} = g^1_{in}$; $g^1_{out} = g^2_{in}$; $g^1_{in} = g^2_{out}$}.", "For the layer normalization, we take the layer normalization of FFN as an example; its expansion is formulated as: $U^{\mathrm{LN}}_j = W^{\mathrm{LN}}_{g^2_{out}(j)}$.", "(13)",
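A NumPy sketch of EXPN(W; g_in, g_out) from Eqs. (7)-(8), reusing make_mapping from the sketch above; the final assertion checks the function-preserving property on random data.

```python
import numpy as np

def expn(W, g_in, g_out):
    # C_{g_in(i)}: how many target rows reuse each source row (Eq. 7).
    counts = np.bincount(g_in, minlength=W.shape[0])
    U_tilde = W[g_in, :] / counts[g_in][:, None]  # in-dimension, Eq. (7)
    return U_tilde[:, g_out]                      # out-dimension, Eq. (8)

# If the widened input duplicates entries of the source input via g_in,
# the widened output duplicates the source output via g_out.
x_src, W = np.random.randn(4), np.random.randn(4, 3)
g_in, g_out = make_mapping(4, 6), make_mapping(3, 5)
assert np.allclose(x_src[g_in] @ expn(W, g_in, g_out), (x_src @ W)[g_out])
```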
"Note that in layer normalization (Eq. 4), the mean and variance are calculated based on the hidden representations $H$.", "Thus, the expansion of this parameter inevitably induces a gap and prevents the target model from strictly following the function-preserving principle.", "However, we empirically find that the gap is so small that it can hardly affect the initialization and convergence of the target model.", "Thus we ignore this discrepancy.", "We have validated the effectiveness of the adapted FPI in different settings in Table 1.", "The results show that the initialized model $T$ achieves almost the same loss as $S$, demonstrating that FPI successfully retains the knowledge of the small model when performing parameter expansion.", "4.3.2 Advanced Knowledge Initialization To further improve the convergence rate of the pre-training target model, we propose the advanced knowledge initialization (AKI), which expands new [Table 1: The comparison of MLM losses between FPI and baselines. Method, under S(12, 384) / S(12, 512): Original 1.89 / 1.67; Rand 10.40 / 10.42; DirectCopy 9.05 / 6.45; FPI 1.89 / 1.70; AKI 2.08 / 1.96.]", "matrices based on not only the parameters of the same layer but also the parameters of the upper layer in the source model.", "The intuition is based on previous findings (Jawahar et al., 2019; Clark et al., 2019) that adjacent Transformer layers have similar functionality, which ensures that it will not damage the knowledge contained in the parameters of the current layer.", "Moreover, the knowledge that comes from adjacent layers can break the symmetry (Chen et al., 2016) that appears in FPI, which has been demonstrated to be beneficial.", "We give an illustrative example in Figure 4 and formulate AKI as: $U_l = \mathrm{EXPN}(W_l, W_{l+1}; g^{l|l+1}_{in}, g^l_{out})$.", "(14)", "Specifically, we first do the in-dimension expansion for $W_{l|l+1}$.", "Here we take $W_l$ as an example: $C_{g^l_{in}(i)} = \sum_{i'=1}^{d^u_{in}} \mathbb{I}(g^l_{in}(i') = g^l_{in}(i))$, $\widetilde{U}_{l(i,*)} = \frac{1}{C_{g^l_{in}(i)}} W_{l(g^l_{in}(i),*)}$.", "(15)", "It is similar to Eq. 7.", "Then we stack the expanded matrices $\widetilde{U}_l$ and $\widetilde{U}_{l+1}$ to construct the final matrix: $U_{l(*,j)} = \widetilde{U}_{l(*,j)}$ if $j \in [1, d^w_{out}]$, and $U_{l(*,j)} = \widetilde{U}_{l+1(*, g^l_{out}(j))}$ if $j \in (d^w_{out}, d^u_{out}]$.", "(16)", "We directly copy the expanded $\widetilde{U}_l$ as the top part of the new matrix and place the sampled parameters from $\widetilde{U}_{l+1}$ on the bottom of the new matrix.", "We aggregate upper-layer information into a new matrix for two intuitions: (1) it breaks the FPI symmetry that hinders model convergence (Chen et al., 2016). [Figure 4: Overview of AKI. It first performs the in-dimension expansion on both the matrices of the current and upper layers; then it uses the widened matrix of the current layer as the top part of the new matrix and samples rows of the widened matrix of the upper layer as the bottom part of the new matrix.]", "For example, FPI makes the attention patterns in the same layer repeated, which is redundant and is called symmetry; (2) upper-layer information can be used as similar but higher-level knowledge to guide the model to converge faster.",
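A NumPy sketch of AKI (Eqs. 14-16) under the same assumed helpers: both the current-layer and upper-layer matrices are widened in-dimension as in Eq. (15), then stacked along the out-dimension as in Eq. (16).

```python
import numpy as np

def in_expand(W, g_in):
    # In-dimension widening with re-scaling, as in Eq. (15) / Eq. (7).
    counts = np.bincount(g_in, minlength=W.shape[0])
    return W[g_in, :] / counts[g_in][:, None]

def aki(W_l, W_next, g_in, g_out):
    U_l, U_next = in_expand(W_l, g_in), in_expand(W_next, g_in)
    # The widened W_l fills the first d^w_out columns; the remaining
    # columns are sampled (via the tail of g_out) from the widened W_{l+1}.
    extra = U_next[:, g_out[W_l.shape[1]:]]
    return np.concatenate([U_l, extra], axis=1)   # Eq. (16)
```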
2016).", "For example, FPI makes the attention patterns in the same layer repeated, which is redundant and called symmetry; (2) upper-layer information can be used as similar but high-level knowledge to guide the model to converge faster.", "We display the attention patterns of the target model initialized by AKI in Appendix E and find that the target model can maintain the attention patterns of both current and upper layers very well.", "Expansion for All Modules.", "For embedding matrix, we only do the out-dimension expansion as Eq.", "9 in the FPI.", "Both the modules of MHA and FFN do the matrix expansion by following the defined operation in Eq.", "15 and Eq.", "16.", "The constraints of mapping functions follow the setting of FPI.Empirically, we find that the AKI method outperforms FPI, while the performance is worse if we build a new matrix based on the matrix of the lower layer (or low-level knowledge).", "How to construct the optimal initialization for the target model with the parameters of different layers remains an open question and we leave it as future work.", "For more details, we give a clear illustration of the FPI and AKI process in Appendix F. 4.4 Depth-wise Expansion After the width-wise expansion, we obtain a widened model with the same width as the target model.", "To bridge the depth gap, we perform depthwise expansion to increase model depth to the depth of the target model.", "We illustrate this process in Algorithm 1 and the main idea is to iteratively stack the widened model until its depth is equal to the target model (Gong et al., 2019).", "Algorithm 2 Two-stage Pre-training", "Input: the initialized model T , large-scale unsupervised dataset D , the epoch number of sub-model training E b and the epoch number of whole training process E , the layer number l b .", "1: Construct sub-models and these models have the layer numbers of { l b , 2 l b , . . . 
, L t }.", "2: for e = 1 E b do 3: for batch in D do 4: T sample one sub-model.", "5: Perform forward and backward of T .", "6: Update only top l b layers of T .", "7: end for 8: end for 9: for e = E b E do 10: for batch in D do 11: Perform forward and backward of T .", "12: Update whole model T .", "13: end for 14: end for Output: the pre-trained model T layers in a random manner to make the complete model converge at a low cost.", "These sub-models are built with bottom Transformer layers of the initialized target model and share one classification layer.", "At each optimization step, we randomly sample one sub-model and only update its top Transformer layers and the shared classification layer.", "(2) After the sub-structure training, we further perform the traditional full-model training.", "The details of our method are displayed in Algorithm 2.", "Pre-training Details.", "We use the English Wikipedia and Toronto Book Corpus (Zhu et al., 2015) as the pre-training data.", "The settings of pretraining are: peak learning rate of 1e-4, warmup 2139 Model FLOPs Ratio Loss SQuADv1.1 SST-2 MNLI MRPC CoLA QNLI QQP STS-B Avg.", "steps of 10k, training epochs of E =40, batch size of 512, sub-model training epochs of E b =5, layer number of l b =3.", "Unless otherwise noted, all methods including bert2BERT and baselines use the same pre-training settings for fair comparisons.", "In the settings of bert2BERT, the target model has a BERTBASE architecture of T (12 , 768) and the source model has an architecture of S (12 , 512) .", "Fine-tuning Details.", "For the evaluation, we use tasks from GLUE benchmark (Wang et al., 2019a) and SQuADv1.1 (Rajpurkar et al., 2016).", "We report F1 for SQuADv1.1, Matthews correlation coefficient (Mcc) for CoLA (Warstadt et al., 2019) and accuracy (Acc) for other tasks.", "For the GLUE tasks fine-tuning, we set the batch size to 32, choose the learning rate from {5e-6, 1e-5, 2e-5, 3e-5} and epochs from {4, 5, 10}.", "For the SQuADv1.1 fine-tuning, we set the batch size to 16, the learning rate to 3e-5, and the number of training epochs to 4.", "All results are the average of 3 runs on the dev set.", "Baselines.", "We first introduce a naive bert2BERT baseline named DirectCopy, which directly copies the small model to the target model and randomly initializes the unfilled parameters.", "StackBERT (Gong et al., 2019) and MSLT (Yang et al., 2020) are also included as the baselines.", "Both of them are trained in a progressive manner.", "Following the original setting, for the StackBERT, we first train the 3-layer BERT for 5 epochs, stack it twice into a 6-layer BERT and then train it for 7 epochs.", "In the final step, we stack the 6-layer model into BERTBASE and further train it with 28 epochs.", "For MSLT, we first perform 4-stage training.", "In each stage, we add the top 3 layers of the model already trained to the top of the model and then pre-train the new model by partially updating the top 3 layers.", "Each stage of the partial training process has 8 epochs.", "Finally, we further perform 20 full-model training epochs 4 to achieve the same loss as BERTBASE trained from scratch.", "The baselines are trained using the same optimizer, training steps, and warmup steps as the bert2BERT.", "We demonstrate the effectiveness of the proposed method on the SQuAD and GLUE benchmark.", "The results are shown in Table 2.", "We also represent the loss curves in Figure 1 and Appendix A. 
"Pre-training Details.", "We use the English Wikipedia and Toronto Book Corpus (Zhu et al., 2015) as the pre-training data.", "The settings of pre-training are: peak learning rate of 1e-4, warmup", "steps of 10k, training epochs of E = 40, batch size of 512, sub-model training epochs of E_b = 5, and layer number of l_b = 3. [Table 2 header: Model, FLOPs, Ratio, Loss, SQuADv1.1, SST-2, MNLI, MRPC, CoLA, QNLI, QQP, STS-B, Avg.]", "Unless otherwise noted, all methods including bert2BERT and the baselines use the same pre-training settings for fair comparisons.", "In the settings of bert2BERT, the target model has a BERTBASE architecture of T(12, 768) and the source model has an architecture of S(12, 512).", "Fine-tuning Details.", "For the evaluation, we use tasks from the GLUE benchmark (Wang et al., 2019a) and SQuADv1.1 (Rajpurkar et al., 2016).", "We report F1 for SQuADv1.1, Matthews correlation coefficient (Mcc) for CoLA (Warstadt et al., 2019) and accuracy (Acc) for the other tasks.", "For the GLUE tasks fine-tuning, we set the batch size to 32, choose the learning rate from {5e-6, 1e-5, 2e-5, 3e-5} and the epochs from {4, 5, 10}.", "For the SQuADv1.1 fine-tuning, we set the batch size to 16, the learning rate to 3e-5, and the number of training epochs to 4.", "All results are the average of 3 runs on the dev set.", "Baselines.", "We first introduce a naive bert2BERT baseline named DirectCopy, which directly copies the small model to the target model and randomly initializes the unfilled parameters.", "StackBERT (Gong et al., 2019) and MSLT (Yang et al., 2020) are also included as baselines.", "Both of them are trained in a progressive manner.", "Following the original setting, for StackBERT, we first train the 3-layer BERT for 5 epochs, stack it twice into a 6-layer BERT and then train it for 7 epochs.", "In the final step, we stack the 6-layer model into BERTBASE and further train it with 28 epochs.", "For MSLT, we first perform 4-stage training.", "In each stage, we add the top 3 layers of the model already trained to the top of the model and then pre-train the new model by partially updating the top 3 layers.", "Each stage of the partial training process has 8 epochs.", "Finally, we further perform 20 full-model training epochs (Footnote 4) to achieve the same loss as BERTBASE trained from scratch.", "The baselines are trained using the same optimizer, training steps, and warmup steps as bert2BERT.", "We demonstrate the effectiveness of the proposed method on the SQuAD and GLUE benchmarks.", "The results are shown in Table 2.", "We also present the loss curves in Figure 1 and Appendix A. The results show that: (1) DirectCopy only saves 12.2% of the computational cost, which indicates that this naive method of directly copying the trained parameters of the source model to the target model is not effective; (2) our proposed methods, FPI and AKI, achieve better performance than the baselines.", "Although AKI does not follow function preserving and has a higher loss than FPI at the start of training, it achieves a faster convergence rate by using the advanced knowledge and breaking the symmetry; (3) by performing the two-stage pre-training on the target model initialized by AKI, we can save", "45.2% of the computational cost.", "Note that the total parameters of the source model are half of those of the target model (54M vs. 110M).", "The loss of bert2BERT in Figure 1 is high at the stage of sub-model training because it represents the average loss of all sub-models.", "We also compare the attention patterns of the target models initialized by DirectCopy, FPI, and AKI.", "The attention patterns and their discussions are displayed in Appendix E. (Footnote 4: We have tried the same setting as the original paper with 8-epoch full-model training, but it does not achieve the same loss as BERTBASE, 1.511 vs. 1.437.)", "bert2BERT with Smaller Source Model.", "We also evaluate bert2BERT in different settings, where the source models S(6, 512), S(8, 512), S(10, 512) are significantly smaller than the target model (35M | 42M | 48M vs. 110M).", "The results are shown in Table 3 and the loss curves are displayed in Appendix B. We observe that DirectCopy for S(6, 512) achieves no efficiency improvement over the original pre-training, which indicates that the significant size gap between the source and target model greatly reduces the benefit of DirectCopy methods.", "Compared with DirectCopy, our proposed method reduces the computation cost by 23.3%, which again demonstrates the effectiveness of bert2BERT.", "The results show that the smaller the size gap between the source model and the target model, the greater the cost savings of bert2BERT.", "We also note that it is more challenging to speed up the target model with a small source model such as S(6, 512).", "We encourage future work to explore transferring the knowledge from smaller source models to improve the pre-training efficiency of the target model.", "Effect of Sub-model Training Epochs.", "Our training procedure includes two stages: sub-model training and full-model training.", "Here, we study the effect of the number of sub-model training epochs by performing bert2BERT under the different settings of E_b = {0, 5, 10, 20}.", "The results are presented in Table 4 and the loss curves are displayed in Appendix C. We observe that our method achieves the best efficiency when the epoch number is set to 5, while a larger or smaller epoch number will bring a negative impact.",
"Datasets.", "To demonstrate that our method is generic, following the BERT setting, we also use the English Wikipedia and Book Corpus in the GPT training.", "For the evaluation, we use the datasets of WikiText-2, PTB, and WikiText103, and evaluate these models under the zero-shot setting without fine-tuning on the training set.", "Implementation Details.", "We use the architecture of {L = 12, D = 768} for the GPT target model, and pre-train it with a learning rate of 1e-4 and 20 training epochs.", "For bert2BERT, we use the source model with an architecture of {L = 12, D = 512}, initialize the target model with AKI, and pre-train it by the full-model training (E_b = 0).", "Results and Analysis.", "We compare the original pre-training method and bert2BERT; the results are shown in Table 5 and Appendix D. We observe that the proposed method saves 47% of the computation cost of GPT pre-training, exhibiting a similar trend to BERT pre-training.", "Although GPT and BERT have different architectures (e.g., post-LN and pre-LN (Xiong et al., 2020)) and are pre-trained with different tasks, bert2BERT saves a significant amount of training cost on both of these two models, which shows that the proposed method is generic and is effective for different kinds of PLMs.", "[Table 5: Experiments on GPT (row excerpt: bert2BERT 2.6 (47%) 132.1 47.9 53.0).]", "We report the perplexity for these tasks.", "w/o FT means that the pre-trained model is directly evaluated on the test set without fine-tuning on the train set.", "Datasets.", "To demonstrate that our method can be used to train larger models, we use Baidu Wikipedia, Sougou Wikipedia, and Zhihu to train the T5 model (Raffel et al., 2020).", "For the evaluation, we use the dataset of the original Chinese natural language inference task (OCNLI) (Hu et al., 2020).", "Implementation Details.", "Since the bert2BERT method is suitable for BERT and GPT, it can also be used for the T5 model, which consists of an encoder and a decoder.", "The target T5 model's architecture is {L_e = 12, L_d = 12, D = 1024, A = 16}, where L_e and L_d mean the numbers of encoder and decoder Transformer layers respectively, D means the hidden size, and A means the number of attention heads.", "We pre-train it with a learning rate of 1e-4 and a batch size of 1024.", "For bert2BERT, we use the source model with an architecture of {L_e = 12, L_d = 12, D = 256, A = 4}, initialize the target model with FPI, and pre-train it by the full-model training (E_b = 0).", "Note that the scale gap between the source model and the target model is over 10 times (31M vs. 360M), which is a challenging setting.", "Results and Analysis.", "We compare the original pre-training method and the bert2BERT method on the T5 model; the results are shown in Table 6.", "We observe that the proposed method saves at least 25% of the computation cost of T5 pre-training.",
"It demonstrates the effectiveness of the method on larger models.", "This paper proposes an efficient pre-training method, bert2BERT, which reuses the parameters of a small trained model as the initialization parameters of a large model.", "We employ the proposed method on BERT and GPT under different settings of model sizes.", "The extensive results show that bert2BERT is generic to Transformer-based models and saves a significant amount of computation cost.", "Moreover, the detailed analysis shows that our techniques, function-preserving initialization, advanced knowledge initialization, and two-stage pre-training, are all effective.", "In the future, we will apply bert2BERT to training super large-scale language models (e.g., using a 10B source model to train a 100B target model) and extend its scope to other PLMs such as ELECTRA and BART (Lewis et al., 2020).", "This work is supported in part by NSFC (Grant No. 61872215), and Shenzhen Science and Technology Program (Grant No. RCYX20200714114523079).", "We would like to thank Yifeng Liu, Binbin Deng, Ziliang Yang, and Jiaxin Shi for their support of this work." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "result", "result", "abstain", "objective", "result", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "objective", "objective", "abstain", "result", "abstain", "other", "other" ]
[ "Recently, contrastive learning has been shown to be effective in improving pre-trained language models (PLM) to derive high-quality sentence representations.", "It aims to pull close positive examples to enhance the alignment while push apart irrelevant negatives for the uniformity of the whole representation space.", "However, previous works mostly adopt in-batch negatives or sample from training data at random.", "Such a way may cause the sampling bias that improper negatives ( e.g., false negatives and anisotropy representations) are used to learn sentence representations, which will hurt the uniformity of the representation space.", "To address it, we present a new framework DCLR (Debiased Contrastive Learning of unsupervised sentence Representations) to alleviate the influence of these improper negatives.", "In DCLR, we design an instance weighting method to punish false negatives and generate noise-based negatives to guarantee the uniformity of the representation space.", "Experiments on seven semantic textual similarity tasks show that our approach is more effective than competitive baselines.", "Our code and data are publicly available at the link: https: //github.com/RUCAIBox/DCLR .", "As a fundamental task in the natural language processing (NLP) field, unsupervised sentence representation learning (Kiros et al., 2015; Hill et al., 2016) aims to derive high-quality sentence representations that can benefit various downstream tasks, especially for low-resourced domains or computationally expensive tasks, e.g., zero-shot text semantic matching (Qiao et al., 2016), large-scale semantic similarity comparison (Agirre et al., 2015), and document retrieval (Le and Mikolov, 2014).", "mantic representation approach, achieving remarkable performance on various NLP tasks.", "However, several studies have found that the native sentence representations derived by PLMs are not uniformly distributed with respect to directions, but instead occupy a narrow cone in the vector space (Ethayarajh, 2019), which largely limits their expressiveness.", "To address this issue, contrastive learning (Chen et al., 2020) has been adopted to refine PLM-derived sentence representations.", "It pulls semantically-close neighbors together to improve the alignment, while pushes apart non-neighbors for the uniformity of the whole representation space.", "In the learning process, both positive and negative examples are involved in contrast with the original sentence.", "For positive examples, previous works apply data augmentation strategies (Yan et al., 2021) on the original sentence to generate highly similar variations.", "While, negative examples are commonly sampled from the batch or training data ( e.g., in-batch negatives (Gao et al., 2021)) at random, due to the lack of ground-truth annotations for negatives.", "Although such a negative sampling way is simple and convenient, it may cause sampling bias and affect the sentence representation learning.", "First, the sampled negatives are likely to be false negatives that are indeed semantically close to the 6120 original sentence.", "As shown in Figure 1, given an input sentence, about half of in-batch negatives have a cosine similarity above 0.7 with the original sentence based on the SimCSE model (Gao et al., 2021).", "It is likely to hurt the semantics of the sentence representations by simply pushing apart these sampled negatives.", "Second, due to the anisotropy problem (Ethayarajh, 2019), the representations of sampled negatives are from the narrow 
representation cone spanned by PLMs, which cannot fully reflect the overall semantics of the representation space.", "Hence, it is sub-optimal to only rely on these representations for learning the uniformity objective of sentence representations.", "To address the above issues, we aim to develop a better contrastive learning approach with debiased negative sampling strategies.The core idea is to improve the random negative sampling strategy for alleviating the sampling bias problem.", "First, in our framework, we design an instance weighting method to punish the sampled false negatives during training.", "We incorporate a complementary model to evaluate the similarity between each negative and the original sentence, then assign lower weights for negatives with higher similarity scores.", "In this way, we can detect semantically-close false negatives and further reduce their influence.", "Second, we randomly initialize new negatives based on random Gaussian noises to simulate sampling within the whole semantic space, and devise a gradient-based algorithm to optimize the noise-based negatives towards the most nonuniform points.", "By learning to contrast with the nonuniform noise-based negatives, we can extend the occupied space of sentence representations and improve the uniformity of the representation space.", "To this end, we propose DCLR , a general framework towards Debiased Contrastive Learning of unsupervised sentence Representations.", "In our approach, we first initialize the noise-based negatives from a Gaussian distribution, and leverage a gradient-based algorithm to update the new negatives by considering the uniformity of the representation space.", "Then, we adopt the complementary model to produce the weights for these noise-based negatives and randomly sampled negatives, where the false negatives will be punished.", "Finally, we augment the positive examples via dropout (Gao et al., 2021) and combine them with the above weighted negatives for contrastive learning.", "We demonstrate that our DCLR outperforms a number of competitive baselines on seven semantic textual similarity (STS) tasks using BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019).", "(1) To our knowledge, our approach is the first attempt to reduce the sampling bias in contrastive learning of unsupervised sentence representations.", "(2) We propose DCLR, a debiased contrastive learning framework that incorporates an instance weighting method to punish false negatives and generates noise-based negatives to guarantee the uniformity of the representation space.", "(3) Experimental results on seven semantic textual similarity tasks show the effectiveness of our framework.", "Sentence Representation Learning.", "Learning universal sentence representations (Kiros et al., 2015; Hill et al., 2016) is the key to the success of various downstream tasks.", "Previous works can be roughly categorized into supervised (Conneau et al., 2017; Cer et al., 2018) and unsupervised approaches (Hill et al., 2016; Li et al., 2020).", "Supervised approaches rely on annotated datasets ( e.g., NLI (Bowman et al., 2015; Williams et al., 2018)) to train the sentence encoder (Cer et al., 2018; Reimers and Gurevych, 2019).", "Unsupervised approaches consider deriving sentence representations without labeled datasets, e.g., pooling word2vec embeddings (Mikolov et al., 2013).", "Recently, to leverage the strong potential of PLMs (Devlin et al., 2019), several works propose to alleviate the anisotropy problem (Ethayarajh, 2019; Li et 
al., 2020) of PLMs via special strategies, e.g., flow-based approach (Li et al., 2020) and whitening method (Huang et al., 2021).", "Besides, contrastive learning (Wu et al., 2020; Gao et al., 2021) has been used to refine the representations of PLMs.", "Contrastive Learning.", "Contrastive learning has been originated in the computer vision (Hadsell et al., 2006; He et al., 2020) and information retrieval (Bian et al., 2021; Zhou et al., 2022) field with significant performance improvement.", "Usually, it relies on data augmentation strategies such as random cropping and image rotation (Chen et al., 2020; Yan et al., 2021) to produce a set of semantically related positive examples for learning, 6121 and randomly samples negatives from the batch or whole dataset.", "For sentence representation learning, contrastive learning can achieve a better bal-ance between alignment and uniformity in semantic representation space.", "Several works further adopt back translation (Fang and Xie, 2020), token shuffling (Yan et al., 2021) and dropout (Gao et al., 2021) to augment positive examples for sentence representation learning.", "However, the quality of the randomly sampled negatives is seldom studied.", "Virtual Adversarial Training.", "Virtual adversarial training (VAT) (Miyato et al., 2019; Kurakin et al., 2017) perturbs a given input with learnable noises to maximize the divergence of the model's prediction with the original label, then utilizes the perturbed examples to improve the generalization (Miyato et al., 2017; Madry et al., 2018).", "A class of VAT methods can be formulated into solving a min-max problem, which can be achieved by multiple projected gradient ascent steps (Qin et al., 2019).", "In the NLP field, several studies incorporate adversarial perturbations in the embedding layer, and show its effectiveness on text classification (Miyato et al., 2017), machine translation (Sun et al., 2020), and natural language understanding (Jiang et al., 2020) tasks.", "This work aims to make use of unlabeled corpus for learning effective sentence representations that can be directly utilized for downstream tasks, e.g., semantic textual similarity task (Agirre et al., 2015).", "Given a set of input sentences X = { x 1 , x 2 , . . . 
, x_n\}$, our goal is to learn a representation $h_i \in \mathbb{R}^d$ for each sentence $x_i$ in an unsupervised manner.", "For simplicity, we denote this process with a parameterized function $h_i = f(x_i)$.", "In this work, we mainly focus on using BERT-based PLMs (Devlin et al., 2019; Liu et al., 2019) to generate sentence representations.", "Following existing works (Li et al., 2020; Yan et al., 2021), we fine-tune PLMs on the unlabeled corpus via our proposed unsupervised learning approach.", "After that, for each sentence $x_i$, we encode it with the fine-tuned PLMs and take the representation of the [CLS] token from the last layer as its sentence representation $h_i$.", "Our proposed framework DCLR focuses on reducing the influence of sampling bias in the", "contrastive learning of sentence representations.", "In this framework, we devise a noise-based negatives generation strategy to reduce the bias caused by the anisotropy of PLM-derived representations, and an instance weighting method to reduce the bias caused by false negatives.", "Concretely, we initialize the noise-based negatives based on a Gaussian distribution and iteratively update these negatives towards non-uniformity maximization.", "Then, we utilize a complementary model to produce weights for all negatives (i.e., randomly sampled and noise-based ones), where the false negatives will be punished.", "Finally, we combine the weighted negatives and augmented positive examples for contrastive learning.", "The overview of our DCLR is presented in Figure 2.", "We aim to generate new negatives beyond the sentence representation space of PLMs during the training process, to alleviate the sampling bias derived from the anisotropy problem of PLMs (Ethayarajh, 2019).", "For each input sentence $x_i$, we first initialize $k$ noise vectors from a Gaussian distribution as the negative representations: $\{\tilde{h}_1, \tilde{h}_2, \ldots, \tilde{h}_k\} \sim \mathcal{N}(0, \sigma^2)$ (1), where $\sigma$ is the standard deviation.", "Since these vectors are randomly initialized from such a Gaussian distribution, they are uniformly distributed within the whole semantic space.", "By learning to contrast with these new negatives, it is beneficial for the uniformity of sentence representations.", "To further improve the quality of the new negatives, we consider iteratively updating the negatives to capture the non-uniformity points within the whole semantic space.", "Inspired by VAT (Miyato et al., 2017; Zhu et al., 2020), we design a non-uniformity loss maximization objective to produce gradients for improving these negatives.", "The non-uniformity loss is denoted as the contrastive loss between the noise-based negatives $\{\tilde{h}_j\}$ and the positive representations of the original sentence $(h_i, h_i^+)$: $L_U(h_i, h_i^+, \{\tilde{h}\}) = -\log \frac{e^{\mathrm{sim}(h_i, h_i^+)/\tau_u}}{\sum_{\tilde{h}_j \in \{\tilde{h}_j\}} e^{\mathrm{sim}(h_i, \tilde{h}_j)/\tau_u}}$ (2), where $\tau_u$ is a temperature hyper-parameter and $\mathrm{sim}(h_i, h_i^+)$ is the cosine similarity $\frac{h_i \cdot h_i^+}{\lVert h_i \rVert \lVert h_i^+ \rVert}$.", "Each noise-based negative is then updated along this gradient, $\tilde{h}_j \leftarrow \tilde{h}_j + \alpha\, g(\tilde{h}_j)$ (3), where $\alpha$ is the learning rate and $g(\tilde{h}_j)$ denotes the gradient of $\tilde{h}_j$ obtained by maximizing the non-uniformity loss between the positive representations and the noise-based negatives.", "In this way, the noise-based negatives will be optimized towards the non-uniform points of the sentence representation space.", "By learning to contrast with these negatives, the uniformity of the representation space can be further improved, which is essential for effective sentence representations.", "In addition to the above noise-based negatives, we also follow existing works (Yan et al., 2021; Gao et al., 2021) that adopt other in-batch representations as negatives $\{h^-\}$.",
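A PyTorch sketch of the noise-based negative generation (Eqs. 1-3) for a single sentence representation; the step count and learning rate mirror the reported settings (four updates, 1e-3), and the sign convention assumes L_U is the InfoNCE-style loss above.

```python
import torch
import torch.nn.functional as F

def noise_negatives(h, h_pos, k, sigma=1.0, tau_u=0.05, alpha=1e-3, n_steps=4):
    # Draw k negatives from N(0, sigma^2) (Eq. 1), then push them toward
    # the non-uniform points by gradient ascent on the loss in Eq. (2).
    h, h_pos = h.detach(), h_pos.detach()
    neg = torch.randn(k, h.size(-1)) * sigma
    for _ in range(n_steps):
        neg.requires_grad_(True)
        pos = F.cosine_similarity(h, h_pos, dim=-1) / tau_u
        sims = F.cosine_similarity(h.unsqueeze(0), neg, dim=-1) / tau_u
        loss_u = -torch.log(torch.exp(pos) / torch.exp(sims).sum())
        loss_u.backward()
        with torch.no_grad():
            neg = neg + alpha * neg.grad   # ascent step, Eq. (3)
    return neg.detach()
```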
"However, as discussed before, the sampled negatives may contain examples that have similar semantics with the positive example (i.e., false negatives).", "To alleviate this problem, we propose an instance weighting method to punish the false negatives.", "Since we cannot obtain the true labels or semantic similarities, we utilize a complementary model to produce the weights for each negative.", "In this paper, we adopt the state-of-the-art SimCSE (Gao et al., 2021) as the complementary model.", "Given a negative representation $h'$ from $\{h^-\}$ or $\{\tilde{h}\}$ and the representation of the original sentence $h_i$, we utilize the complementary model to produce the weight: $w_{h'} = 0$ if $\mathrm{sim}_C(h_i, h') \ge \phi$, and $w_{h'} = 1$ otherwise (5). (Footnote 1: For convenience, we utilize SimCSE on the BERT-base or RoBERTa-base model as the complementary model.)", "where $\phi$ is a hyper-parameter of the instance weighting threshold, and $\mathrm{sim}_C(h_i, h')$ is the cosine similarity score evaluated by the complementary model.", "In this way, a negative that has a higher semantic similarity with the representation of the original sentence will be regarded as a false negative and will be punished by assigning it the weight 0.", "Based on the weights, we optimize the sentence representations with a debiased cross-entropy contrastive learning loss function: $L = -\log \frac{e^{\mathrm{sim}(h_i, h_i^+)/\tau}}{\sum_{h' \in \{h^-\} \cup \{\tilde{h}\}} w_{h'}\, e^{\mathrm{sim}(h_i, h')/\tau}}$ (6), where $\tau$ is a temperature hyper-parameter.", "In our framework, we follow SimCSE (Gao et al., 2021) in utilizing dropout to augment positive examples $h_i^+$.", "Actually, we can utilize various positive augmentation strategies, and will investigate this in Section 6.1.", "Our framework DCLR contains three major steps.", "In the first step, we generate noise-based negatives to extend the in-batch negatives.", "Concretely, we first initialize a set of new negatives via random Gaussian noises using Eq. 1.", "Then, we incorporate a gradient-based algorithm to adjust the noise-based negatives by maximizing the non-uniformity objective", "using Eq. 3.", "After several iterations, we can obtain the noise-based negatives that correspond to the non-uniform points within the whole semantic space, and we mix them with the in-batch negatives to compose the negative set.", "In the second step, we adopt a complementary model (i.e., SimCSE) to compute the semantic similarity between the original sentence and each example from the negative set, and produce the weights", "using Eq. 5. Finally, we augment the positive examples via dropout and utilize the negatives with corresponding weights for contrastive learning using Eq. 6.", "
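A PyTorch sketch of the instance weighting and debiased loss (Eqs. 5-6); comp_sims holds the similarities sim_C from the frozen complementary model, and the sketch assumes at least one negative survives the threshold.

```python
import torch
import torch.nn.functional as F

def dclr_loss(h, h_pos, negs, comp_sims, phi=0.9, tau=0.05):
    # Eq. (5): weight 0 for likely false negatives, 1 otherwise.
    w = (comp_sims < phi).float()
    pos = torch.exp(F.cosine_similarity(h, h_pos, dim=-1) / tau)
    neg = torch.exp(F.cosine_similarity(h.unsqueeze(0), negs, dim=-1) / tau)
    # Eq. (6): weighted denominator drops the punished negatives.
    return -torch.log(pos / (w * neg).sum())
```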
4.3.2 Discussion As mentioned above, our approach aims to reduce the influence of the sampling bias about the negatives, and is agnostic to various positive data augmentation methods ( e.g., token cutoff and dropout).", "Compared with traditional contrastive learning methods (Yan et al., 2021; Gao et al., 2021), our proposed DCLR expands the negative set by introducing noise-based negatives { h } , and adds a weight term h to punish false negatives.", "Since the noise-based negatives are initialized from a Gaussian distribution and do not correspond to real sentences, they are highly confident negatives to broaden the representation space.", "By learning to contrast with them, the learning of the contrastive objective will not be limited by the anisotropy representations derived from PLMs.", "As a result, the sentence representations can span a broader semantic space, and the uniformity of the representation semantic space can be improved.", "Besides, our instance weighting method also alleviates the false negative problem caused by the randomly sampling strategy.", "With the help of a complementary model, the false negatives with similar semantics as the original sentence will be detected and punished.", "Following previous works (Kim et al., 2021; Gao et al., 2021), we conduct experiments on seven standard STS tasks.", "For all these tasks, we use the SentEval toolkit (Conneau and Kiela, 2018) for evaluation.", "Semantic Textual Similarity Task.", "We evaluate our approach on 7 STS tasks: STS 20122016 (Agirre et al., 2012, 2013, 2014, 2015, 2016), STS Benchmark (Cer et al., 2017) and SICK-Relatedness (Marelli et al., 2014).", "These datasets contain pairs of two sentences, whose similarity scores are labeled from 0 to", "5. The relevance between gold annotations and the scores predicted by sentence representations is measured by the Spearman correlation.", "Following the suggestions from previous works (Gao et al., 2021; Reimers and Gurevych, 2019), we directly compute the cosine similarity between sentence embeddings for all STS tasks.", "Baseline Methods.", "We compare DCLR with competitive unsupervised sentence representation learning methods, consisting of non-BERT and BERT-based methods: (1) GloVe (Pennington et al., 2014) averages GloVe embeddings of words as the sentence representation.", "(2) USE (Cer et al., 2018) utilizes a Transformer model that learns the objective of reconstructing the surrounding sentences within a passage.", "(3) CLS , Mean and First-Last AVG (Devlin et al., 2019) adopt the [CLS] embedding, mean pooling of token representations, average representations of the first and last layers as sentence representations, respectively.", "(4) Flow (Li et al., 2020) applies mean pooling on the layer representations and maps the outputs to the Gaussian space as sentence representations.", "(5) Whitening (Su et al., 2021) uses the whitening operation to refine representations and reduce dimensionality.", "(6) Contrastive (BT) (Fang and Xie, 2020) uses contrastive learning with back-translation for data augmentation to enhance sentence representations.", "(7) ConSERT (Yan et al., 2021) explores various text augmentation strategies for contrastive learning of sentence representations.", "(8) SG-OPT (Kim et al., 2021) proposes a contrastive learning method with a self-guidance mechanism for improving the sentence embeddings of PLMs.", "(9) SimCSE (Gao et al., 2021) proposes a simple contrastive learning framework that utilizes dropout for data augmentation.", "Implementation 
Details.", "We implement our model based on Huggingface's transformers (Wolf et al., 2020).", "For BERT-base and RoBERTa-base, we start from the pre-trained checkpoints of their original papers.", "For BERT-large and RoBERTa-large, we utilize the checkpoints of SimCSE for stabilizing the convergence process.", "Following Sim-6124 Models STS12 STS13 STS14 STS15 STS16 STS-B SICK-R Avg.", "CSE (Gao et al., 2021), we use 1,000,000 sentences randomly sampled from Wikipedia as the training corpus.", "During training, we train our models for 3 epoch with temperature = 0 .", "05 using an Adam optimizer (Kingma and Ba, 2015).", "For BERT-base and RoBERTa-base, the batch size is 128, the learning rate is 3e-5.", "For BERT-large and RoBERTa-large, the batch size is 256, the learning rate is 3e-5 and 1e-5, respectively.", "For the four backbone models, we set the instance weighting threshold as 0.9, 0.85, 0.9 and 0.85, respectively.", "For each batch, we generate k batch _ size noise-based negatives as the shared negatives of all instance within it, and k is 1, 2.5, 4 and 5 for BERT-base, RoBERTa-base, BERT-large and RoBERTa-large, respectively.", "The standard variance of the noise-based negatives is 1, and we update the noise-based negatives four times with the learning rate of 1e-3.", "We evaluate the model every 150 steps on the development set of STS-B and SICK-R and keep the best checkpoint for evaluation on test sets.", "To verify the effectiveness of our framework on PLMs, we select BERT-base and RoBERTa-base as the base model.", "Table 1 shows the results of different methods on seven STS tasks.", "Based on the results, we can find that the non-BERT methods ( i.e., GloVe and USE) mostly outperform native PLM representation based baselines 6125 ( i.e., CLS, Mean and First-Last AVG).", "The reason is that directly utilizing the PLM native representations is prone to be influenced by the anisotropy issue.", "Among non-BERT methods, USE outperforms Glove.", "A potential reason is that USE encodes the sentence using the Transformer model, which is more effective than simply averaging GloVe embeddings.", "For other PLM-based approaches, first, we can see that flow and whitening achieve similar results and outperform the native representations based methods by a margin.", "These two methods adopt specific improvement strategies to refine the representations of PLMs.", "Second, approaches based on contrastive learning outperform the other baselines in most cases.", "Contrastive learning can enhance both the alignment between semantically related positive pairs and the uniformity of the representation space using negative samples, resulting in better sentence representations.", "Furthermore, SimCSE performs the best among all the baselines.", "It indicates that dropout is a more effective positive augmentation method than others since it rarely hurts the semantics of the sentence.", "Finally, DCLR performs better than all the baselines in most settings, including the approaches based on contrastive learning.", "Since these methods mostly utilize randomly sampled negatives ( e.g., in-batch negatives) to learn the uniformity of all sentence representations, it may lead to sampling bias, such as false negatives and anisotropy representations.", "Different from these methods, our framework adopts an instance weighting method to punish false negatives and a gradient-based algorithm to generate noise-based negatives towards the nonuniform points.", "In this way, the sampling bias problem can be alleviated, 
and our model can better learn the uniformity to improve the quality of the sentence representations.", "In this section, we continue to study the effectiveness of our proposed DCLR.", "Since our proposed DCLR is a general framework that mainly focuses on negative sampling for contrastive learning of unsupervised sentence representations, it can be applied to other methods that rely on different positive data augmentation strategies.", "Thus, in this part, we conduct experiments to examine whether our framework can bring improvements with the following positive data augmentation strategies: (1) Token Shuffling, which randomly shuffles the order of the tokens in the input sequences; (2) Feature/Token/Span Cutoff (Yan et al., 2021), which randomly erases features/tokens/token spans in the input; and (3) Dropout, which is similar to SimCSE (Gao et al., 2021).", "Note that we only revise the negative sampling strategies to implement these variants of our DCLR.", "As shown in Figure 3, our DCLR can boost the performance of all these augmentation strategies, which demonstrates the effectiveness of our framework with various augmentation strategies.", "Furthermore, the Dropout strategy leads to the best performance among all the variants.", "It indicates that dropout is a more effective approach to augment high-quality positives, and is also more appropriate for our approach.", "Our proposed DCLR incorporates an instance weighting method to punish false negatives and also utilizes noise-based negatives to improve the uniformity of the whole sentence representation space.", "To verify their effectiveness, we conduct an ablation study for each of the two components on seven STS tasks and report the average value", "of the Spearman's correlation metric. [Figure 4: The uniformity loss of DCLR and SimCSE using BERT-base on the validation set of STS-B during training.]", "As shown in Table 2, removing each component would lead to performance degradation.", "It indicates that the instance weighting method and the noise-based negatives are both important in our framework.", "Besides, removing the instance weighting method results in a larger performance drop.", "The reason may be that the false negatives have a larger effect on sentence representation learning.", "Besides, we prepare three variants for further comparison: (1) Random Noise directly generates noise-based negatives without the gradient-based optimization; (2) Knowledge Distillation (Hinton et al., 2015) utilizes SimCSE as the teacher model to distill knowledge into the student model during training; (3) Self Instance Weighting adopts the model itself as the complementary model to generate the weights.", "From Table 2, we can see that these variants don't perform as well as the original DCLR.", "These results indicate that the proposed designs in Section 4 are more suitable for our DCLR framework.", "Uniformity is a desirable characteristic for sentence representations, describing how well the representations are uniformly distributed.", "To validate the improvement of the uniformity of our framework, we compare the uniformity loss curves of DCLR and SimCSE using BERT-base during training.", "The uniformity loss is defined as $\ell_{uniform} = \log \mathbb{E}_{x, y \sim p_{data}}\, e^{-2\lVert h_x - h_y \rVert^2}$, where $p_{data}$ is the distribution of all sentence representations, and a smaller value of this loss indicates a better uniformity.", "[Figure 5: Performance tuning of our DCLR w.r.t. different amounts of training data.]",
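A PyTorch sketch of the uniformity measure plotted in Figure 4, assuming the standard log-mean Gaussian-potential form over L2-normalized representations.

```python
import torch
import torch.nn.functional as F

def uniformity_loss(reps, t=2.0):
    # log E[exp(-t * ||x - y||^2)] over all pairs of L2-normalized
    # representations; lower values indicate a more uniform space.
    x = F.normalize(reps, dim=-1)
    sq_dists = torch.pdist(x, p=2).pow(2)
    return sq_dists.mul(-t).exp().mean().log()
```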
"As shown in Figure 4, the uniformity loss of DCLR is much lower than that of SimCSE in almost the whole training process.", "Furthermore, we can see that the uniformity loss of DCLR decreases faster as training goes on, while that of SimCSE shows no significant decreasing trend.", "It might be because our DCLR samples noise-based negatives beyond the representation space, which can better improve the uniformity of sentence representations.", "To validate the reliability and the robustness of DCLR under data scarcity scenarios, we conduct few-shot experiments using BERT-base as the backbone model.", "We train our model with different amounts of available training data, from 100% down to an extremely small size (i.e., 0.3%).", "We report the results evaluated on the STS-B and SICK-R tasks.", "As shown in Figure 5, our approach achieves stable results under different proportions of the training data.", "Under the most extreme setting with a 0.3% data proportion, the performance of our model drops by only 9 and 4 percent on STS-B and SICK-R, respectively.", "The results reveal the robustness and effectiveness of our approach under data scarcity scenarios.", "Such characteristics are important in real-world applications.", "For the hyper-parameter analysis, we study the impact of the instance weighting threshold $\phi$ and the proportion of noise-based negatives $k$.", "The $\phi$ is the threshold to punish false negatives, and $k$ is the ratio of the noise-based negatives to the batch size.", "Both hyper-parameters are important in our framework.", "Concretely, we evaluate our model with varying values of $\phi$ and $k$ on the STS-B and SICK-R tasks using the BERT-base model.", "Weighting threshold.", "Figure", "6(a) shows the influence of the instance weighting threshold $\phi$.", "For the STS-B task, $\phi$ has a significant effect on the model performance.", "A too large or too small $\phi$ may lead to a performance drop.", "The reason is that a larger threshold cannot achieve effective punishment and a smaller one may cause the misjudgment of true negatives.", "In contrast, SICK-R is insensitive to the changes of $\phi$.", "The reason may be that the problem of false negatives is not serious in this task.", "Negative proportion.", "As shown in Figure", "6(b), our DCLR performs better when the number of noise-based negatives is close to the batch size.", "Under these circumstances, the noise-based negatives are more capable of enhancing the uniformity of the whole semantic space without hurting the alignment, which is key to why DCLR works well.", "In this paper, we proposed DCLR, a debiased contrastive learning framework for unsupervised sentence representation learning.", "Our core idea is to alleviate the sampling bias caused by the random negative sampling strategy.", "To achieve it, in our framework, we incorporated an instance weighting method to punish false negatives during training and generated noise-based negatives to alleviate the influence of anisotropic PLM-derived representations.", "Experimental results on seven STS tasks have shown that our approach outperforms several competitive baselines.", "In the future, we will explore other approaches to reducing the bias in contrastive learning of sentence representations (e.g., debiased pre-training).", "Besides, we will also consider applying our method to multilingual or multimodal representation learning.", "In this section, we discuss the ethical considerations of this work from the following two 
aspects.", "First, for intellectual property protection, the code, data and pre-trained models adopted from previous works are granted for research-purpose usage.", "Second, since PLMs have been shown to capture certain biases from the data they have been pre-trained on (Bender et al., 2021), there is a potential problem about biases that are from the use of PLMs in our approach.", "There are increasing efforts to address this problem in the community (Ross et al., 2020).", "This work was partially supported by Beijing Natural Science Foundation under Grant No. 4222027, and National Natural Science Foundation of China under Grant No. 61872369, Beijing Outstanding Young Scientist Program under Grant No.", "BJJWZYJH012019100020098, the Outstanding Innovative Talents Cultivation Funded Programs 2021 and Public Computing Cloud, Renmin University of China.", "This work is also supported by Beijing Academy of Artificial Intelligence (BAAI).", "Xin Zhao is the corresponding author." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "method", "result", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "method", "objective", "objective", "objective", "objective", "method", "abstain", "abstain", "objective", "objective", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "result", "result", "objective", "method", "method", "abstain", "abstain", "abstain", "other", "other", "other", "other" ]
[ "Understanding tables is an important aspect of natural language understanding.", "Existing models for table understanding require linearization of the table structure, where row or column order is encoded as an unwanted bias.", "Such spurious biases make the model vulnerable to row and column order perturbations.", "Additionally, prior work has not thoroughly modeled the table structures or table-text alignments, hindering the table-text understanding ability.", "In this work, we propose a robust and structurally aware table-text encoding architecture TABLEFORMER , where tabular structural biases are incorporated completely through learnable attention biases.", "TABLEFORMER is (1) strictly invariant to row and column orders, and, (2) could understand tables better due to its tabular inductive biases.", "Our evaluations showed that TABLEFORMER outperforms strong baselines in all settings on SQA, WTQ and TABFACT table reasoning datasets, and achieves state-of-the-art performance on SQA, especially when facing answer-invariant row and column order perturbations (6% improvement over the best baseline), because previous SOTA models' performance drops by 4% 6% when facing such perturbations while TABLEFORMER is not affected.", "1 1 Introduction Recently, semi-structured data (e.g. variable length tables without a fixed data schema) has attracted more attention because of its ubiquitous presence on the web.", "On a wide range of various table reasoning tasks, Transformer based architecture along with pretraining has shown to perform well (Eisen-schlos et al., 2021; Liu et al., 2021).", "(a) TAPAS predicts incorrect answer based on the original table, while it gives the correct answer if the first row is moved to the end of the table.", "Liu et al., 2021), where original position ids are used as positional information.", "Due to the usage of row/column ids and global position ids, prior strategies to linearize table structures introduced spurious row and column order biases (Herzig et al., 2020; Eisenschlos et al., 2020, 2021; Zhang et al., 2020; Yin et al., 2020).", "Therefore, those models are vulnerable to row or column order perturbations.", "But, ideally, the model should make consistent predictions regardless of the row or column ordering for all practical purposes.", "For instance, in Figure 1, the predicted answer of TAPAS model (Herzig et al., 2020) for Question", "(a) Of all song lengths, which one is the longest? 
based on the original table is 5:00, which is incorrect.", "However, if the first row is adjusted to the end of the table during inference, the model gives the correct length 5:02 as answer.", "This probing example shows that a model aware of row order information is inclined to select length values toward the end of the table due to spurious training data bias.", "In our experiments on the SQA dataset, TAPAS models exhibit a 4%-6% (Section 5.2) absolute performance drop when facing such answer-invariant perturbations.", "Besides, most prior work (Chen et al., 2020; Yin et al., 2020) did not incorporate enough structural biases into models to address the limitation of the sequential Transformer architecture, while others introduce inductive biases that are either too strict (Zhang et al., 2020; Eisenschlos et al., 2021) or computationally expensive (Yin et al., 2020).", "To this end, we propose TABLEFORMER, a Transformer architecture that is robust to row and column order perturbations, by incorporating structural biases more naturally.", "TABLEFORMER relies on 13 types of task-independent table-text attention biases that respect the table structure and table-text relations.", "For Question (a) in Figure 1, TABLEFORMER could predict the correct answer regardless of perturbation, because the model could identify the same row information with our same row bias, avoiding spurious biases introduced by row and global positional embeddings.", "For Question (b), TAPAS predicted an only partially correct answer, while TABLEFORMER could correctly predict 'Spain, Ukraine' as the answer.", "That's because our cell to sentence bias could help table cells ground to the paired sentence.", "Detailed attention bias types are discussed in Section 5.2.", "Experiments on 3 table reasoning datasets show that TABLEFORMER consistently outperforms the original TAPAS in all pretraining and intermediate pretraining settings with fewer parameters.", "Also, TABLEFORMER's invariance to row and column perturbations leads to even larger improvements over those strong baselines when tested on perturbations.", "Our contributions are as follows: We identified the limitation of current table-text encoding models when facing row or column perturbation.", "We propose TABLEFORMER, which is guaranteed to be invariant to row and column order perturbations, unlike current models.", "TABLEFORMER encodes table-text structures better, leading to SoTA performance on the SQA dataset, and ablation studies show the effectiveness of the introduced inductive biases.", "In this section, we discuss TAPAS, which serves as the backbone of the recent state-of-the-art table-text encoding architectures.", "TAPAS (Herzig et al., 2020) uses the Transformer architecture in a BERT-like fashion to pretrain and finetune on tabular data for table-text understanding tasks.", "This is achieved by using linearized tables and texts for masked language model pre-training.", "In the fine-tuning stage, texts in the linearized table and text pairs are queries or statements in table QA or table-text entailment tasks, respectively.", "Specifically, TAPAS uses the tokenized and flattened text and table as input, separated by a [SEP] token and prefixed by [CLS].", "Besides the token, segment, and global positional embeddings introduced in BERT (Devlin et al., 2019), it also uses rank embeddings for better numerical understanding.", "Moreover, it uses column and row embeddings to encode table structures.",
"Concretely, for any table-text linearized sequence $S = \{v_1, v_2, \dots, v_n\}$, where $n$ is the length of the table-text sequence, the input to TAPAS is the summation of the embeddings of the following: token ids $W = \{w_{v_1}, w_{v_2}, \dots, w_{v_n}\}$; positional ids $B = \{b_1, b_2, \dots, b_n\}$; segment ids $G = \{g_{seg_1}, g_{seg_2}, \dots, g_{seg_n}\}$; column ids $C = \{c_{col_1}, c_{col_2}, \dots, c_{col_n}\}$; row ids $R = \{r_{row_1}, r_{row_2}, \dots, r_{row_n}\}$; and rank ids $Z = \{z_{rank_1}, z_{rank_2}, \dots, z_{rank_n}\}$, where $seg_i$, $col_i$, $row_i$, $rank_i$ correspond to the segment, column, row, and rank id for the $i$-th token, respectively.", "As for the model, TAPAS uses BERT's self-attention architecture (Vaswani et al., 2017) off-the-shelf.", "Each Transformer layer includes a multi-head self-attention sub-layer, where each token attends to all the tokens.", "Let the layer input be $H = [h_1, h_2, \dots, h_n]^\top \in \mathbb{R}^{n \times d}$ corresponding to $S$, where $d$ is the hidden dimension and $h_i \in \mathbb{R}^{d \times 1}$ is the hidden representation at position $i$.", "For a single-head self-attention sub-layer, the input $H$ is projected by three matrices $W_Q \in \mathbb{R}^{d \times d_K}$, $W_K \in \mathbb{R}^{d \times d_K}$, and $W_V \in \mathbb{R}^{d \times d_V}$ to the corresponding representations $Q$, $K$, and $V$: $Q = HW_Q$, $V = HW_V$, $K = HW_K$ (1).", "(Figure 2: Transformer self-attention extended with learnable structure-enforced attention bias scalars.)", "Then, the output of this single-head self-attention sub-layer is calculated as: $\mathrm{Attn}(H) = \mathrm{softmax}\big(\frac{QK^\top}{\sqrt{d_K}}\big)V$ (2).", "3 TABLEFORMER: Robust Structural Table Encoding. As shown in Figure 2, TABLEFORMER encodes the general table structure along with the associated text by introducing task-independent relative attention biases for table-text encoding to facilitate the following:", "(a) structural inductive bias for better table understanding and table-text alignment,", "(b) robustness to table row/column perturbation.", "Input of TABLEFORMER.", "TABLEFORMER uses the same token embeddings $W$, segment embeddings $G$, and rank embeddings $Z$ as TAPAS.", "However, we make 2 major modifications: 1) No row or column ids.", "We do not use row embeddings $R$ or column embeddings $C$ to avoid any potential spurious row and column order biases.", "2) Per cell positional ids.", "To further remove any inter-cell order information, global positional embeddings $B$ are replaced by per-cell positional embeddings $P = \{p_{pos_1}, p_{pos_2}, \dots, p_{pos_n}\}$, where we follow Eisenschlos et al. (2021) to reset the index of positional embeddings at the beginning of each cell, and $pos_i$ corresponds to the per-cell positional id for the $i$-th token.", "Positional Encoding in TABLEFORMER.", "Note that the Transformer model either needs to specify different positions in the input (i.e. absolute positional encoding of Vaswani et al. (2017)) or encode the positional dependency in the layers (i.e. relative positional encoding of Shaw et al. (2018)).",
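The TAPAS input summation above is a plain sum of per-feature embedding lookups. A minimal sketch, assuming illustrative vocabulary and bucket sizes (not the released configuration):

```python
import torch
import torch.nn as nn

class TapasStyleInput(nn.Module):
    """Input layer sketch for a TAPAS-style encoder: the input vector is
    the sum of token, positional, segment, column, row, and rank
    embeddings (Eq. surroundings above). Sizes are hypothetical."""

    def __init__(self, vocab=30522, max_pos=512, segments=2,
                 max_cols=512, max_rows=512, ranks=256, dim=256):
        super().__init__()
        self.tok = nn.Embedding(vocab, dim)
        self.pos = nn.Embedding(max_pos, dim)
        self.seg = nn.Embedding(segments, dim)
        self.col = nn.Embedding(max_cols, dim)   # dropped by TABLEFORMER
        self.row = nn.Embedding(max_rows, dim)   # dropped by TABLEFORMER
        self.rank = nn.Embedding(ranks, dim)

    def forward(self, tok_ids, pos_ids, seg_ids, col_ids, row_ids, rank_ids):
        return (self.tok(tok_ids) + self.pos(pos_ids) + self.seg(seg_ids)
                + self.col(col_ids) + self.row(row_ids) + self.rank(rank_ids))
```

TABLEFORMER's modifications amount to deleting the `col` and `row` tables and feeding per-cell rather than global `pos_ids`.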
(2018)).", "TABLEFORMER does not consume any sort of column and row order information in the input.", "The main intuition is that, for cells in the table, the only useful positional information is whether two cells are in the same row or column and the column header of each cell, instead of the absolute order of the row and column containing them.", "Thus, inspired by relative positional encoding (Shaw et al., 2018) and graph encoding (Ying et al., 2021), we capture this with a same column/row relation as one kind of relative position between two linearized tokens.", "Similarly, we uses 12 such table-text structure relevant relations (including same cell, cell to header and so on) and one extra type representing all other relations not explicitly defined.", "All of them are introduced in the form of learnable 530 attention bias scalars.", "Formally, we consider a function ( v i , v j ) : V V N , which measures the relation between v i and v j in the sequence ( v i , v j S ).", "The function can be defined by any relations between the tokens in the table-text pair.", "Attention Biases in TABLEFORMER .", "In our work, ( v i , v j ) is chosen from 13 bias types, corresponding to 13 table-text structural biases.", "The attention biases are applicable to any table-text pair and can be used for any downstream task: same row identifies the same row information without ordered row id embedding or global positional embedding, which help the model to be invariant to row perturbations, same column , header to column cell , and cell to column header incorporates the same column information without ordered column id embedding, cell to column header makes each cell aware of its column header without repeated column header as features, header to sentence and cell to sentence help column grounding and cell grounding of the paired text, sentence to header , sentence to cell , and sentence to sentence helps to understand the sentence with the table as context, header to same header and header to other header for better understanding of table schema, and same cell bias for cell content understanding.", "We assign each bias type a learnable scalar, which will serve as a bias term in the self-attention module.", "Specifically, each self-attention head in each layer have a set of learnable scalars { b 1 , b 2 , , b 13 } corresponding to all types of introduced biases.", "For one head in one self-attention sub-layer of TABLEFORMER , Equation 2 in the Transformer is replaced by: A = QK (cid:62) d K , A = A + A (3) Attn ( H ) = softmax ( A ) V (4) where A is a matrix capturing the similarity between queries and keys, A is the Attention Bias Matrix, and A i,j = b ( v i ,v j ) .", "Relation between TABLEFORMER and ETC.", "ETC (Ainslie et al., 2020) uses vectors to represent relative position labels, although not directly applied to table-text pairs due to its large computational overhead (Eisenschlos et al., 2021).", "TABLEFORMER differs from ETC in the following aspects (1) ETC uses relative positional embeddings while TABLEFORMER uses attention bias scalars.", "In practice, we observed that using relative positional embeddings increases training time by more than 7x, (2) ETC uses global memory and local attention, while TABLEFORMER uses pairwise attention without any global memory overhead, (3) ETC uses local sparse attention with masking, limiting its ability to attend to all tokens, (4) ETC did not explore table-text attention bias types exhaustively.", "Another table encoding model MATE (Eisensch-los et al., 
"Table Question Answering.", "For the table QA task, we conducted experiments on the WikiTableQuestions (WTQ) (Pasupat and Liang, 2015) and Sequential QA (SQA) (Iyyer et al., 2017) datasets.", "WTQ was crowd-sourced based on complex questions on Wikipedia tables.", "SQA is composed of 6,066 question sequences (2.9 questions per sequence on average), constructed by decomposing a subset of highly compositional WTQ questions.", "Table-Text Entailment.", "For the table-text entailment task, we used the TABFACT dataset (Chen et al., 2020), where the tables were extracted from Wikipedia and the sentences were written by crowd workers.", "Among a total of 118,000 sentences, each one is a positive (entailed) or negative sentence.", "Perturbation Evaluation Set.", "For SQA and TABFACT, we also created new test sets to measure models' robustness to answer-invariant row and column perturbations during inference.", "Specifically, row and column orders are randomly perturbed for all tables in the standard test sets.", "Pre-training: All the models are first tuned on the Wikipedia text-table pretraining dataset (Herzig et al., 2020), optionally tuned on a synthetic dataset at an intermediate stage (inter) (Eisenschlos et al., 2020), and finally fine-tuned on the target dataset.", "To get better performance on WTQ, we follow Herzig et al. (2020) to further pretrain on the SQA dataset after the intermediate pretraining stage in the inter-sqa setting.", "Evaluation: For SQA, we report the cell selection accuracy for all questions (ALL) using the official evaluation script, the cell selection accuracy for all sequences (SEQ), and the denotation accuracy for all questions (ALL_d).", "To evaluate the models' instance-level robustness after perturbations, we also report a lower bound of the example prediction variation percentage: $VP = \frac{t2f + f2t}{t2t + t2f + f2t + f2f}$ (5), where t2t, t2f, f2t, and f2f represent how many example predictions stay correct, turn from correct to incorrect, turn from incorrect to correct, and stay incorrect, respectively, after perturbation.", "We report denotation accuracy on WTQ and binary classification accuracy on TABFACT, respectively.", "We use TAPASBASE and TAPASLARGE as baselines, whose Transformer architectures are exactly the same as BERTBASE and BERTLARGE (Devlin et al., 2019), and whose parameters are initialized from BERTBASE and BERTLARGE, respectively.", "Correspondingly, we have our TABLEFORMERBASE and TABLEFORMERLARGE, where attention bias scalars are initialized to zero, and all other parameters are initialized from BERTBASE and BERTLARGE.", "Could we alleviate the spurious ordering biases by data augmentation alone, without making any modeling changes?", "To answer this, we train another set of models by augmenting the training data with perturbed tables (we fixed the perturbation random seeds to make our results reproducible).", "For each table in the training set, we randomly shuffle all rows and columns (including corresponding column headers), creating a new table with the same content but different orders of rows and columns.", "Multiple perturbed versions of the same table were created by repeating this process {1, 2, 4, 8, 16} times with different random seeds.", "For table QA tasks, selected cell positions are also adjusted as final answers according to the perturbed table.", "The perturbed table-text pairs are then used to augment the data used to train the model.",
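The augmentation procedure just described is easy to sketch. A hedged reconstruction, assuming a simple `{'header': [...], 'rows': [[...], ...]}` table format of our own choosing; gold answer coordinates would be remapped with the returned index maps:

```python
import random

def perturb_table(table, seed):
    """Answer-invariant augmentation sketch: shuffle row order and
    column order (headers move with their columns)."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    row_perm = list(range(len(table["rows"])))
    col_perm = list(range(len(table["header"])))
    rng.shuffle(row_perm)
    rng.shuffle(col_perm)
    new_table = {
        "header": [table["header"][c] for c in col_perm],
        "rows": [[table["rows"][r][c] for c in col_perm] for r in row_perm],
    }
    # Old -> new index maps, used to adjust selected answer cells.
    row_map = {old: new for new, old in enumerate(row_perm)}
    col_map = {old: new for new, old in enumerate(col_perm)}
    return new_table, row_map, col_map
```

Repeating this with different seeds yields the {1, 2, 4, 8, 16} perturbed versions used in the augmentation experiments.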
"During training, the model takes data created by one specific random seed in one epoch in a cyclic manner.", "How robust are existing (near) state-of-the-art table-text encoding models to semantic-preserving perturbations in the input?", "How does TABLEFORMER compare with existing table-text encoding models when tested on similar perturbations, both in terms of performance and robustness?", "(By perturbation, we mean shuffling rows and columns instead of changing/swapping content blindly.)", "Can we use perturbation-based data augmentation to achieve robustness at test time?", "Which attention biases in TABLEFORMER contribute the most to performance?", "Tables 1, 2, and 3 show TABLEFORMER's performance on SQA, TABFACT, and WTQ, respectively.", "As can be seen, TABLEFORMER outperforms the corresponding TAPAS baseline models in all settings on the SQA and WTQ datasets, which shows the general effectiveness of TABLEFORMER's structural biases on table QA datasets.", "Specifically, TABLEFORMERLARGE combined with intermediate pretraining achieves a new state-of-the-art performance on the SQA dataset.", "Similarly, Table 2 shows that TABLEFORMER also outperforms the TAPAS baseline models in all settings, which shows the effectiveness of TABLEFORMER on the table entailment task.", "Note that Liu et al. (2021) is not comparable to our results, because they used different pretraining data, different pretraining objectives, and a BART NLG model instead of a BERT NLU model.", "But the TABLEFORMER attention bias is compatible with the BART model.", "One of our major contributions is to systematically evaluate models' performance when facing row and column order perturbation at testing time.", "Ideally, model predictions should be consistent on table QA and entailment tasks when facing such perturbation, because the table semantics remain the same after perturbation.", "However, in Tables 1 and 2, we can see that on our perturbed test set, the performance of all TAPAS models drops significantly on both tasks.", "TAPAS models drop by at least 3.7% and up to 6.5% in all settings on the SQA dataset in terms of ALL accuracy, while our TABLEFORMER, being strictly invariant to such row and column order perturbation, suffers no drop in performance.", "Thus, in the perturbation setting, TABLEFORMER outperforms all TAPAS baselines even more significantly, with at least 6.2% and 2.4% improvements on the SQA and TABFACT datasets, respectively.", "At the instance level, we can see that with TAPAS many example predictions change (high VP), while nearly no example predictions change with TABLEFORMER (around zero VP).", "(On the SQA dataset, there is at most an absolute 0.1% performance drop because of some bad data points.", "Specifically, some columns in certain tables are exactly the same, but the ground-truth selected cells are in only one of such columns.", "TABLEFORMER would select from one such column randomly.)", "We compare the model sizes of TABLEFORMER and TAPAS in Table 4.", "We added only a few attention bias scalar parameters (13 parameters per head per layer) in TABLEFORMER, which is negligible compared with the BERT model size.", "Meanwhile, we removed two large embedding matrices (512 row ids and 512 column ids).", "Thus, TABLEFORMER outperforms TAPAS with fewer parameters.", "In this section, we experiment with several variants of TABLEFORMER to understand the effectiveness of its submodules.", "The performance of all variants of TAPAS and TABLEFORMER that we tried on the SQA development set is shown in Table 5.",
"Learnable Attention Biases v/s Masking.", "Instead of adding learnable bias scalars, we mask out some attention scores to restrict attention to tokens in the same columns and rows, as well as the paired sentence, similar to Zhang et al. (2020) (SAT).", "We can see that TAPASBASE-SAT performs worse than TAPASBASE, which means that restricting attention to only the same columns and rows by masking reduces the modeling capacity.", "This led to choosing soft bias addition over hard masking.", "Attention Bias Scaling.", "Unlike TABLEFORMER, we also tried to add attention biases before the scaling operation in the self-attention module (SO).", "Specifically, we compute the pair-wise attention score by: $\tilde{A}_{ij} = \frac{(h_i^\top W_Q)(h_j^\top W_K)^\top + A'_{ij}}{\sqrt{d_K}}$ (6) instead of using: $\tilde{A}_{ij} = \frac{(h_i^\top W_Q)(h_j^\top W_K)^\top}{\sqrt{d_K}} + A'_{ij}$ (7), which is the element-wise version of Equations (1) and (3).", "(Table 5: ALL questions' cell selection accuracy of TABLEFORMER variants on the SQA development set, under the rc-gp / c-gp / gp / pcp settings: TAPASBASE 57.6 / 47.4 / 46.4 / 29.1; TAPASBASE-SAT 45.2 / - / - / -; TABLEFORMERBASE-SO 60.0 / 60.2 / 59.8 / 60.7; TABLEFORMERBASE 62.2 / 61.5 / 61.7 / 61.9.)", "However, Table 5 shows that TABLEFORMERBASE-SO performs worse than TABLEFORMERBASE, showing the necessity of adding attention biases after the scaling operation.", "We think the reason is that the attention bias term does not require scaling, because the attention bias scalar magnitude is independent of $d_K$, while the dot products grow large in magnitude for large values of $d_K$.", "Thus, such a bias term can play a more important role without scaling, which helps each attention head know clearly what to pay more attention to according to stronger inductive biases.", "Row, Column, & Global Positional IDs.", "With TAPASBASE, TABLEFORMERBASE-SO, and TABLEFORMERBASE, we first tried the full version where row ids, column ids, and global positional ids exist as input (rc-gp).", "Then, we deleted row ids (c-gp) and column ids (gp) sequentially.", "Finally, we changed the global positional ids in gp to per-cell positional ids (pcp).", "Table 5 shows that TAPASBASE performs significantly worse going from rc-gp to c-gp to gp to pcp, because table structure information is deleted sequentially during this process.", "However, with TABLEFORMERBASE, there is no obvious performance drop during the same process.", "That shows the structural inductive biases in TABLEFORMER can provide complete table structure information.", "Thus, row ids, column ids and global positional ids are not necessary in TABLEFORMER.", "We pick the TABLEFORMER pcp setting as our final version to conduct all other experiments in this paper.", "In this way, TABLEFORMER is strictly invariant to row and column order perturbation by avoiding spurious biases in those original ids.", "As stated in Section 4.3, perturbing row and column orders as augmented data during training can serve as another possible solution to alleviate the spurious row/column id bias.", "Table 6 shows the performance of the TAPASBASE model trained with additional {1, 2, 4, 8, 16} perturbed versions of each table as augmented data.", "We can see that the performance of TAPASBASE on the SQA dataset improves with such augmentation.", "Also, as the number of perturbed versions of each table increases, model performance first increases and then decreases, reaching the best results with 8 perturbed versions.", "We suspect that too many versions of the same table confuse the model about different row and column ids for the same table, leading to decreased performance from 8p to 16p.",
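The difference between Equations (6) and (7) is only where the bias enters relative to the $1/\sqrt{d_K}$ scaling. A small sketch of the two variants, with names of our own choosing:

```python
import torch

def attn_logits(q, k, bias, after_scaling=True):
    """q, k: (n, d_k) query/key matrices; bias: (n, n) matrix of
    per-pair relation bias scalars. after_scaling=True reproduces
    Eq. (7), the TABLEFORMER choice; False reproduces Eq. (6)."""
    d_k = q.shape[-1]
    scores = q @ k.transpose(-1, -2)
    if after_scaling:
        return scores / d_k ** 0.5 + bias   # Eq. (7): bias kept at full strength
    return (scores + bias) / d_k ** 0.5     # Eq. (6): bias shrunk with d_K
```

This makes the ablation's conclusion visible in code: in the Eq. (6) variant the learned scalars are divided by $\sqrt{d_K}$, weakening the inductive bias.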
"Despite its usefulness, such data perturbation is still worse than TABLEFORMER, because it cannot incorporate other relevant table-text structural inductive biases like TABLEFORMER does.", "Although such data augmentation makes the model more robust to row and column order perturbation, with a smaller VP compared to the standard TAPASBASE, there is still a significant prediction drift after perturbation.", "As shown in Table 6, VP decreases from 1p to 16p; however, the best VP (7.0%) is still much higher than the (nearly) zero variation (0.1%) of TABLEFORMER.", "To sum up, TABLEFORMER is superior to row and column order perturbation augmentation, because of its additional structural biases and strictly consistent predictions after perturbation.", "We conduct an ablation study to demonstrate the utility of all 12 types of defined attention biases.", "For each ablation, we set the corresponding attention bias type id to the others bias id.", "Table 7 shows TABLEFORMERBASE's performance on the SQA development set.", "Overall, all types of attention biases help the TABLEFORMER performance to some extent, given the performance drop after deleting each bias type.", "Amongst all the attention biases, deleting the same row bias leads to the most significant performance drop, showing its crucial role in encoding table row structures.", "There is little performance drop after deleting the same column bias; that is because TABLEFORMER could still infer the same column information through the cell to its column header and header to its column cell biases.", "After deleting all same-column information (same column, cell to column header, and header to column cell biases), TABLEFORMER performs significantly worse, without encoding column structures.", "Similarly, there is little performance drop after deleting the same cell bias, because TABLEFORMER can still infer same-cell information through the same row and same column biases.", "TABLEFORMER increases the training time by around 20%, which might not be ideal for very long tables and would require a scoped approach.", "Secondly, with the strict row and column order invariant property, TABLEFORMER cannot deal with questions based on absolute orders of rows in tables.", "This, however, is not a practical requirement based on the current dataset.", "Doing a manual study of 1800 questions in the SQA dataset, we found that there are 4 questions (0.2%) whose answers depend on the orders of rows.", "Three of them asked which one is 'at the top of the table'; another asks which one is 'listed first'.", "However, these questions could be potentially answered by adding back row and column order information on top of TABLEFORMER.", "Transformers for Tabular Data.", "Yin et al. (2020) prepended corresponding column headers to cell contents, and Chen et al. (2020) used corresponding column headers as features for cells.", "However, such methods encode each table header multiple times, leading to duplicated computing overhead.", "Also, tabular structures (e.g. same row information) are not fully incorporated into such models.", "Meanwhile, Yin et al. (2020) leveraged a row encoder and a column encoder sequentially, which introduced much computational overhead, thus requiring retrieving some rows as a preprocessing step.", "Finally, SAT (Zhang et al., 2020), Deng et al. (2021) and Wang et al. 
(2021) restricted attention to the same rows or columns with an attention mask, where such an inductive bias is so strict that cells cannot directly attend to cells in different rows and columns, hindering the modeling ability according to Table 5.", "Liu et al. (2021) used the seq2seq BART generation model with a standard Transformer encoder-decoder architecture.", "In all models mentioned above, spurious inter-cell order biases still exist due to the global positional ids of the Transformer, leading to vulnerability to row or column order perturbations, while our TABLEFORMER avoids this problem.", "Mueller et al. (2019) and Wang et al. (2020) also used relative positional encoding to encode table structures, but they modeled the relations as learnable relation vectors, whose large overhead prevented pretraining and led to poor performance without pretraining, similar to ETC (Ainslie et al., 2020) explained in Section 3.", "Structural and Relative Attention.", "Modified attention scores have been used to model relative positions (Shaw et al., 2018), long documents (Dai et al., 2019; Beltagy et al., 2020; Ainslie et al., 2020), and graphs (Ying et al., 2021).", "(We find these 4 questions by manually looking at all 125 questions where the model predictions turn from correct to incorrect after replacing TAPASLARGE with TABLEFORMERLARGE.)", "In this paper, we identified the vulnerability of prior table encoding models along two axes:", "(a) capturing the structural bias, and", "(b) robustness to row and column perturbations.", "To tackle this, we propose TABLEFORMER, where learnable, task-independent structural attention biases are introduced, while making it invariant to row/column order at the same time.", "Experimental results showed that TABLEFORMER outperforms strong baselines on 3 table reasoning tasks, achieving state-of-the-art performance on the SQA dataset, especially when facing row and column order perturbations, because of its invariance to row and column orders.", "We thank Julian Eisenschlos, Ankur Parikh, and the anonymous reviewers for their feedback in improving this paper.", "The authors foresee no ethical concerns with the research presented in this paper." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "abstain", "method", "abstain", "abstain", "objective", "abstain", "other", "method" ]
[ "Recently many efforts have been devoted to interpreting the black-box NMT models, but lit-tle progress has been made on metrics to evaluate explanation methods.", "Word Alignment Error Rate can be used as such a metric that matches human understanding, however, it can not measure explanation methods on those target words that are not aligned to any source word.", "This paper thereby makes an initial attempt to evaluate explanation methods from an alternative viewpoint.", "To this end, it proposes a principled metric based on fidelity in regard to the predictive behavior of the NMT model.", "As the exact computation for this metric is intractable, we employ an efficient approach as its approximation.", "On six standard translation tasks, we quantitatively evaluate several explanation methods in terms of the proposed metric and we reveal some valuable findings for these explanation methods in our experiments.", "Neural machine translation (NMT) has witnessed great success during recent years (Sutskever et al., 2014; Bahdanau et al., 2014; Gehring et al., 2017; Vaswani et al., 2017).", "One of the main reasons is that neural networks possess the powerful ability to model sufficient context by entangling all source words and target words from translation history.", "The downside yet is its poor interpretability: it is unclear which specific words from the entangled context are crucial for NMT to make a translation decision.", "As interpretability is important for understanding and debugging the translation process and particularly to further improve NMT models, many efforts have been devoted to explanation methods for NMT (Ding et al., 2017; Alvarez-Melis and Jaakkola, 2017; Li et al., 2019; This work was done during J.Li & G.Li's internship at Tencent AI Lab. L.Liu is the corresponding author. 
Ding et al., 2019; He et al., 2019). (This work was done during J. Li and G. Li's internship at Tencent AI Lab; L. Liu is the corresponding author.)", "However, little progress has been made on evaluation metrics to study how good these explanation methods are and which method is better than others for NMT.", "Generally speaking, we recognize two orthogonal dimensions for evaluating the explanation methods:", "i) how much the pattern (such as source words) extracted by an explanation method matches human understanding on predicting a target word; or", "ii) how the pattern matches the predictive behavior of the NMT model on predicting a target word.", "In terms of i), Word Alignment Error Rate (AER) can be used as a metric to evaluate an explanation method by measuring agreement between human-annotated word alignment and that derived from the explanation method.", "However, AER cannot measure explanation methods on those target words that are not aligned to any source words according to human annotation.", "In this paper, we thereby make an initial attempt to measure explanation methods for NMT according to the second dimension of interpretability, which covers all target words.", "The key to our approach can be highlighted as fidelity: when extracting the most relevant words with an explanation method, if those relevant words have the potential to construct an optimal proxy model that agrees well with the NMT model on making a translation decision, then this explanation method is good (Section 3).", "To this end, we formalize a principled evaluation metric as an optimization problem over the expected disagreement between the optimal proxy model and the NMT model (Section 3.1).", "Since it is intractable to exactly calculate the principled metric for a given explanation method, we propose an approximate metric to address the optimization problem.", "Specifically, inspired by statistical learning theory (Vapnik, 1999), we cast the optimization problem into a standard machine learning problem which is addressed in a two-step strategy: firstly we follow empirical risk minimization to optimize the empirical risk; then we validate the optimized parameters on a held-out test dataset.", "Moreover, we construct different proxy model architectures by utilizing the most relevant words to make a translation decision, leading to variants of the approximate metric in implementation (Section 3.2).", "We apply the approximate metric to evaluate four explanation methods including attention (Bahdanau et al., 2014; Vaswani et al., 2017), gradient norm (Li et al., 2016), weighted gradient (Ding et al., 2019) and prediction difference (Li et al., 2019).", "We conduct extensive experiments on three standard translation tasks for two popular translation models in terms of the proposed evaluation metric.", "Our experiments reveal valuable findings for these explanation methods:", "1) the evaluated methods (gradient norm and prediction difference) are good at interpreting the behavior of NMT;", "2) prediction difference performs better than the other methods.", "This paper makes the following contributions: It presents an attempt at evaluating the explanation methods for neural machine translation from a new viewpoint of fidelity.", "It proposes a principled metric for evaluation, and to put it into practice it derives a simple yet efficient approach to approximately calculate the metric.", "It quantitatively compares several different explanation methods and evaluates their effects in terms of the proposed metric.", "Suppose $\mathbf{x} = \{x_1, \dots, x_{|\mathbf{x}|}\}$ denotes a source sentence with length $|\mathbf{x}|$ and $\mathbf{y} = \{y_1, \dots, y_{|\mathbf{y}|}\}$ is a target sentence.", "Most NMT 
literature models the following conditional probability $P(\mathbf{y}|\mathbf{x})$ in an encoder-decoder fashion: $P(\mathbf{y}|\mathbf{x}) = \prod_t P(y_t|\mathbf{y}_{<t}, \mathbf{x}) = \prod_t P(y_t|s_t)$ (1), where $\mathbf{y}_{<t} = \{y_1, \dots, y_{t-1}\}$ denotes a prefix of $\mathbf{y}$ with length $t-1$, and $s_t$ is the decoding state vector at timestep $t$.", "In the encoding stage, the encoder of an NMT model transforms the source sentence $\mathbf{x}$ into a sequence of hidden vectors $\mathbf{h} = \{h_1, \dots, h_{|\mathbf{x}|}\}$.", "In the decoding stage, the decoder module summarizes the hidden vectors $\mathbf{h}$ and the history decoding states $s_{<t} = \{s_1, \dots, s_{t-1}\}$ into the decoding state vector $s_t$.", "In this paper, we consider two popular NMT architectures, RNN-SEARCH (Bahdanau et al., 2014) and TRANSFORMER (Vaswani et al., 2017).", "RNN-SEARCH utilizes a bidirectional RNN to define $\mathbf{h}$, and it computes $s_t$ by the attention function over $\mathbf{h}$, i.e., $s_t = \mathrm{Attn}(s_{t-1}, \mathbf{h})$ (2), where Attn is the attention function, defined as follows: $\mathrm{Attn}(q, v) = \sum_i \alpha(q, v_i)\, v_i$, with $\alpha(q, v_i) = \frac{\exp(e(q, v_i))}{\sum_j \exp(e(q, v_j))}$ (3), where $q$ and $v_i$ are vectors, $e$ is a similarity function over a pair of vectors, and $\alpha$ is its normalized function.", "Different from RNN-SEARCH, which relies on an RNN, TRANSFORMER employs an attention network to define $\mathbf{h}$, and two additional attention networks to define $s_t$ as follows: $s_t = \mathrm{Attn}(s_{t+\frac{1}{2}}, \mathbf{h})$, $s_{t+\frac{1}{2}} = \mathrm{Attn}(s_{t-1}, s_{<t})$ (4).", "In this section, we describe several popular explanation methods that will be evaluated with our proposed metric.", "Suppose $c_t = \langle \mathbf{y}_{<t}, \mathbf{x} \rangle$ denotes the context at timestep $t$, and $w$ (or $w'$) denotes either a source or a target word in the context $c_t$.", "According to Poerner et al. (2018), each explanation method for NMT can be regarded as a word relevance score function $\phi(w; y, c_t)$, where $\phi(w; y, c_t) > \phi(w'; y, c_t)$ indicates that $w$ is more useful for the translation decision $P(y_t|c_t)$ than the word $w'$.",
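The attention function of Eq. (3) and its reuse as a relevance score can be sketched in a few lines. This is an illustration only, with a dot-product similarity standing in for the actual similarity function $e$ (which in RNN-SEARCH is a small feedforward network):

```python
import torch

def attn(q, v):
    """Sketch of Eq. (3) with e(q, v_i) = q . v_i.
    q: (d,) query vector; v: (n, d) value matrix.
    Returns the context vector and the weights alpha, which the
    attention-based explanation method reuses as relevance scores."""
    scores = v @ q                       # e(q, v_i) for every row v_i of v
    alpha = torch.softmax(scores, dim=0) # normalized weights alpha(q, v_i)
    return alpha @ v, alpha
```

Under this reading, the relevance of source word $x_i$ at timestep $t$ is simply the weight `alpha[i]` computed with $q = s_{t-1}$ and $v = \mathbf{h}$.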
"Attention: Since Bahdanau et al. (2014) proposed the attention mechanism for NMT, it has been the most popular explanation method for NMT (Tu et al., 2016; Mi et al., 2016; Liu et al., 2016; Zenkel et al., 2019).", "(Due to space limitation, we present the notations for single-layer NMT models, and for TRANSFORMER we only keep the attention block (with a single head) while skipping other blocks such as resNet and layer normalization.", "More details can be found in the references (Vaswani et al., 2017).)", "To interpret RNN-SEARCH and TRANSFORMER, we define different $\phi$ for them based on attention.", "For RNN-SEARCH, since attention is only defined on the source side, $\phi(w; y, c_t)$ can be defined only for the source words: $\phi(x_i; y, c_t) = \alpha(s_{t-1}, h_i)$, where $\alpha$ is the attention weight defined in Eq. (3) and $s_{t-1}$ is the decoding state of RNN-SEARCH defined in Eq. (2).", "In contrast, TRANSFORMER defines the attention on both sides and thus $\phi(w; y, c_t)$ is not constrained to source words: $\phi(w; y, c_t) = \alpha(s_{t+\frac{1}{2}}, h_i)$ if $w = x_i$, and $\phi(w; y, c_t) = \alpha(s_{t-1}, s_j)$ if $w = y_j$ and $j < t$, where $s_{t-1}$ and $s_{t+\frac{1}{2}}$ are defined in Eq. (4).", "Gradient: Different from attention, which is restricted to a specific family of networks, the explanation methods based on gradients are more general.", "Suppose $g(w, y)$ denotes the gradient of $P(y|c_t)$ w.r.t. the variable $w$ in $c_t$: $g(w, y) = \frac{\partial P(y|c_t)}{\partial w}$ (5), where $\frac{\partial}{\partial w}$ denotes the gradient w.r.t. the embedding of the word $w$, since a word itself is discrete and cannot be differentiated.", "Therefore, $g(w, y)$ returns a vector with the same shape as the embedding of $w$.", "In this paper, we implement two different gradient-based explanation methods and derive different definitions of $\phi(w; y, c_t)$ as follows.", "Gradient Norm (Li et al., 2016): the first definition of $\phi$ is the $\ell_1$ norm of $g$: $\phi(w; y, c_t) = \|g(w, y)\|_{\ell_1}$.", "Weighted Gradient (Ding et al., 2019): the second one is defined as the weighted sum of the embedding of $w$, with the return of $g$ as the weight: $\phi(w; y, c_t) = g(w, y)^\top w$.", "It is worth noting that for each sentence pair $\langle \mathbf{x}, \mathbf{y} \rangle$, one has to independently calculate $\frac{\partial P(y|c_t)}{\partial w}$ for each timestep $t$.", "Therefore, one has to calculate $|\mathbf{y}|$ gradients for each sentence.", "In contrast, when training NMT, one only requires calculating a sentence-level gradient, and it only calculates one gradient thanks to gradient accumulation in the back-propagation algorithm.",
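Both gradient-based scores can be obtained from a single backward pass per timestep. A sketch assuming a caller-supplied `prob_fn` that recomputes the scalar $P(y|c_t)$ from the context embeddings (a hypothetical helper, not part of any toolkit):

```python
import torch

def gradient_relevance(prob_fn, emb):
    """Sketch of the two gradient-based scores (Eq. 5) for one timestep.
    emb: (n, d) leaf tensor of context word embeddings with
    requires_grad=True; prob_fn(emb) returns the scalar P(y|c_t)."""
    prob = prob_fn(emb)
    grad, = torch.autograd.grad(prob, emb)     # g(w, y) for every context word
    norm_score = grad.abs().sum(dim=-1)        # gradient norm: l1 norm of g
    weighted_score = (grad * emb).sum(dim=-1)  # weighted gradient: g(w,y)^T w
    return norm_score.detach(), weighted_score.detach()
```

Calling this once per target position is what makes gradient-based explanation roughly $|\mathbf{y}|$ times more expensive than a single training backward pass.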
"Prediction Difference: Li et al. (2019) propose a prediction difference (PD) method, which defines the contribution of the word $w$ by evaluating the change in the probability after removing $w$ from $c_t$.", "Formally, $\phi(w; y, c_t)$ based on prediction difference is defined as follows: $\phi(w; y, c_t) = P(y|c_t) - P(y|c_t \setminus w)$, where $P(y|c_t)$ is the NMT probability of $y$ defined in Eq. (1), and $P(y|c_t \setminus w)$ denotes the NMT probability of $y$ after excluding $w$ from its context $c_t$.", "To achieve the effect of excluding $w$ from $c_t$, it simply replaces the word embedding of $w$ with the zero vector before feeding it into the NMT model.", "The key to our metric is described as follows: for an explanation method $\phi$ to be good in terms of our metric, the relevant words selected by $\phi$ from the context $c_t$ should have the potential to construct an optimal model that exhibits similar behavior to the target model $P(y|c_t)$.", "To formalize this metric, we first specify some necessary notations.", "Assume that $f(c_t)$ is the target word predicted by $P(y|c_t)$, i.e., $f(c_t) = \arg\max_y P(y|c_t)$.", "In addition, let $\mathcal{W}_k(c_t)$ be the top-$k$ relevant words on the source side and target side of the context $c_t$: $\mathcal{W}_k(c_t) = \mathrm{top}k_{w \in \mathbf{x}}\, \phi(w; f(c_t), c_t) \cup \mathrm{top}k_{w \in \mathbf{y}_{<t}}\, \phi(w; f(c_t), c_t)$, where $\cup$ denotes the union of two sets and $\mathrm{top}k_{w \in \mathbf{x}}\, \phi(w; f(c_t), c_t)$ returns the words corresponding to the $k$ largest values.", "(In fact, $\mathcal{W}_k(c_t) \rightarrow f(c_t)$ can be considered as generalized translation rules obtained by $\phi$.", "In other words, the rules are extracted under teacher-forcing decoding.", "In particular, if $k = 1$, this is similar to statistical machine translation (SMT) with word-level rules (Koehn, 2009), except that a generalized translation rule also involves a word from $\mathbf{y}_{<t}$, which simulates the role of language modeling in SMT.)", "In addition, suppose $Q(y \,|\, \mathcal{W}_k(c_t); \theta)$ ($Q(\theta)$ or $Q$ for brevity) is a proxy model that makes a translation decision on top of $\mathcal{W}_k(c_t)$ rather than the entire context $c_t$ like a standard NMT model.", "Formally, we define a principled metric as follows: Definition 1: the metric of $\phi$ is defined by $\min_Q \min_\theta \mathbb{E}_{c_t}\big[-\log Q(f(c_t) \,|\, \mathcal{W}_k(c_t); \theta)\big]$ (6), where $\mathbb{E}_{c_t}[\cdot]$ denotes the expectation with respect to the data distribution of $c_t$, and $Q$ is minimized over all possible proxy models.", "The underlying idea of the above metric is to measure the expectation of the disagreement between an optimal proxy model $Q$ constructed from $\phi$ and the NMT model $P$.", "Here the disagreement is measured by the minus log-likelihood of $Q$ over the data $\langle \mathcal{W}_k(c_t), f(c_t) \rangle$ whose label $f(c_t)$ is generated from $P$.", "Definition of Fidelity: the metric of $\phi$ actually defines fidelity by measuring how much the optimal proxy model defined on $\mathcal{W}_k(c_t)$ disagrees with $P(y|c_t)$.", "The notion of fidelity is widely used in model compression (Bucilua et al., 2006; Polino et al., 2018), model distillation (Hinton et al., 2015; Liu et al., 2018), and particularly in evaluating explanation models for black-box neural networks (Lakkaraju et al., 2016; Bastani et al., 2017).", "These works focus on learning a specific model $Q$ on which fidelity can be directly defined.", "However, we are interested in evaluating explanation methods, where $Q$ is a latent variable that we have to minimize.", "By doing this, fidelity in our metric is defined on $\phi$ as shown in Eq. (6).",
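The zero-embedding trick behind PD is a short masking loop. A sketch, again assuming a hypothetical `prob_fn` that maps the context embeddings to the model's output distribution over the vocabulary:

```python
import torch

def prediction_difference(prob_fn, emb, y):
    """PD sketch: the relevance of word i is the drop in P(y|c_t) after
    zeroing its embedding, mirroring how the method 'removes' a word.
    emb: (n, d) context embeddings; y: target word id."""
    with torch.no_grad():
        base = prob_fn(emb)[y]                # P(y | c_t)
        scores = torch.empty(emb.shape[0])
        for i in range(emb.shape[0]):
            masked = emb.clone()
            masked[i] = 0.0                   # replace embedding with zeros
            scores[i] = base - prob_fn(masked)[y]  # P(y|c_t) - P(y|c_t \ w)
    return scores
```

Taking the `k` largest scores on the source side and on the target prefix then yields exactly the rule set $\mathcal{W}_k(c_t)$ used by the metric.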
"Generally, it is intractable to exactly calculate the principled metric due to two main challenges.", "On one hand, the real data distribution of $c_t$ is unknowable, making it impossible to exactly define the expectation with respect to an unknown distribution.", "On the other hand, the domain of a proxy model $Q$ is not bounded, and it is difficult to minimize a model $Q$ within an unbounded domain.", "Empirical Risk Minimization: Inspired by statistical learning theory (Vapnik, 1999), we calculate the expected disagreement over $c_t$ by a two-step strategy: we minimize the empirical risk to obtain an optimized $\theta$ for a given $Q$, and then we estimate the risk defined on a held-out test set by using the optimized $\theta$.", "In this way, we cast the principled metric into a standard machine learning task.", "For a given model architecture $Q$, to optimize $\theta$, we first collect the training set as $\{\langle \mathcal{W}_k(c_t), f(c_t)\rangle\}$ for each sentence pair $\langle \mathbf{x}, \mathbf{y}\rangle$ at every timestep $t$, where $\langle \mathbf{x}, \mathbf{y}\rangle$ is a sentence pair from a given bilingual corpus $D_{\mathrm{train}} = \{\langle \mathbf{x}^n, \mathbf{y}^n\rangle \mid n = 1, \dots, N\}$.", "(It is natural to extend our definition by using other similar disagreement measures such as the KL distance; since the KL distance requires additional GPU memory to store the distribution $P$ in the implementation, we employ the minus log-likelihood for efficiency in our experiments.)", "Then we optimize $\theta$ by empirical risk minimization: $\min_\theta \sum_{\langle \mathbf{x}, \mathbf{y}\rangle \in D_{\mathrm{train}}} \sum_{c_t} -\log Q(f(c_t) \,|\, \mathcal{W}_k(c_t); \theta)$ (7).", "Proxy Model Selection: In response to the second challenge of the unbounded domain, we define a surrogate distribution family $\mathcal{Q}$, and then approximately calculate Eq. (6) within $\mathcal{Q}$ instead: $\min_{Q \in \mathcal{Q}} \min_\theta \mathbb{E}_{c_t}\big[-\log Q(f(c_t) \,|\, \mathcal{W}_k(c_t); \theta)\big]$ (8).", "We consider three different proxy models: a multi-layer feedforward network (FN), a recurrent network (RN), and a self-attention network (SA).", "In detail, for each network $\epsilon \in \{\mathrm{FN}, \mathrm{RN}, \mathrm{SA}\}$, the proxy model $Q_\epsilon$ is defined as follows: $Q_\epsilon(y \,|\, \mathcal{W}_k(c_t)) = P(y \,|\, s_t^\epsilon)$, where $s_t^\epsilon$ is the decoding state regarding architecture $\epsilon$.", "Specifically, for the feedforward network, the decoding state is defined by $s_t^{\mathrm{FN}} = \mathrm{FNN}(x_1, \dots, x_k, y_1, \dots, y_k)$.", "For $\epsilon \in \{\mathrm{RN}, \mathrm{SA}\}$, the decoding state $s_t^\epsilon$ is defined by $s_t^\epsilon = \mathrm{Attn}\big(s_0, \{h_{x_1}, \dots, h_{x_k}, h_{y_1}, \dots, h_{y_k}\}\big)$, where $x$ and $y$ are source- and target-side words from $\mathcal{W}_k(c_t)$, $s_0$ is the query of the initial state, and $h$ is the position-aware representation of words, generated by the encoder of RN or SA as defined in Eq. (3) and Eq. (4).", "For RN, $s_t^{\mathrm{RN}}$ is the weighted sum of vectors of a bidirectional LSTM over all selected top-$k$ source and target words, while for SA, $s_t^{\mathrm{SA}}$ is the weighted sum of vectors over the SA network.", "Given a bilingual training set $D_{\mathrm{train}}$ and a bilingual test set $D_{\mathrm{test}}$, we evaluate an explanation method $\phi$ w.r.t. the NMT model $P(y|c_t)$ by setting the proxy model family $\mathcal{Q}(\Theta)$ to include the three neural networks defined before.", "Algorithm 1 (Calculating the evaluation metric). Require: $\phi$, $\mathcal{Q}(\Theta)$, $D_{\mathrm{train}}$, $D_{\mathrm{test}}$; Ensure: the metric score $m$ of $\phi$ over $D_{\mathrm{test}}$. 1: $\hat{\mathcal{Q}} = \{\}$. 2: Collect $\langle f(c_t), \mathcal{W}_k(c_t)\rangle$ from $D_{\mathrm{train}}$ and $D_{\mathrm{test}}$ to obtain two sets $FW_{\mathrm{train}}$ and $FW_{\mathrm{test}}$. 3: for $Q(\theta) \in \mathcal{Q}(\Theta)$ do 4: optimize $\theta$ over $FW_{\mathrm{train}}$ w.r.t. Eq. (7).",
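The FN proxy, for instance, reduces to a bag-of-words feedforward classifier over the $2k$ selected words; step 4 of Algorithm 1 would train it with cross-entropy against $f(c_t)$, per Eq. (7). A sketch with illustrative sizes:

```python
import torch
import torch.nn as nn

class FNProxy(nn.Module):
    """Feedforward proxy Q_FN sketch: predicts f(c_t) from the bag of
    top-k source and target words in W_k(c_t). Dimensions are ours."""

    def __init__(self, vocab, dim=256, k=1):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.ffn = nn.Sequential(
            nn.Linear(2 * k * dim, dim), nn.ReLU(),
            nn.Linear(dim, dim), nn.ReLU(),
            nn.Linear(dim, vocab),        # logits over the target vocabulary
        )

    def forward(self, word_ids):
        # word_ids: (batch, 2k) ids of the k source and k target words.
        e = self.emb(word_ids).flatten(1)  # concatenated bag-of-words features
        return self.ffn(e)
```

Training it with `nn.CrossEntropyLoss` on the extracted $\langle \mathcal{W}_k(c_t), f(c_t)\rangle$ pairs is exactly the empirical risk minimization of Eq. (7).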
"Following the standard process of addressing a machine learning problem, Algorithm 1 summarizes the procedure to approximately calculate the metric of $\phi$ on the test dataset $D_{\mathrm{test}}$, which returns the perplexity (PPL) on $FW_{\mathrm{test}}$.", "(Note that the negative log-likelihood in Eq. (6) is proportional to PPL, and thus we use PPL as the metric value in this paper.)", "In this paper, we try four different choices to specify the surrogate family, i.e., $\mathcal{Q} = \{Q_{\mathrm{FN}}\}$, $\mathcal{Q} = \{Q_{\mathrm{RN}}\}$, $\mathcal{Q} = \{Q_{\mathrm{SA}}\}$, and $\mathcal{Q} = \{Q_{\mathrm{FN}}, Q_{\mathrm{RN}}, Q_{\mathrm{SA}}\}$, leading to four instances of our metric respectively denoted as FN, RN, SA and Comb.", "In addition, as the baseline metric, we employ the well-trained NMT model $P$ as the proxy model $Q$ by masking out the input words that do not appear in the rule set $\mathcal{W}_k(c_t)$.", "For the baseline metric, it does not require training $Q$'s parameters and it tests on $D_{\mathrm{test}}$ only.", "Since $P$ is trained with the entire context $c_t$ whereas it is tested on $\mathcal{W}_k(c_t)$, this mismatch may lead to poor performance and the baseline is thus less trusted.", "This baseline metric extends the idea of Arras et al. (2016) and Denil et al. (2014) from classification tasks to structured prediction tasks like machine translation, which are highly dependent on context rather than just keywords.", "In this section, we conduct experiments to prove the effectiveness of our metric from two viewpoints: how good an explanation method is and which method is better than others.", "Datasets: We carry out our experiments on three standard IWSLT translation tasks including IWSLT14 De-En (167k sentence pairs), IWSLT17 Zh-En (237k sentence pairs) and IWSLT17 Fr-En (229k sentence pairs).", "All these datasets are tokenized and BPE (byte-pair encoding) is applied following Ott et al. (2019).", "The target-side vocabulary sizes of the three datasets are 8876, 11632, and 9844, respectively.", "In addition, we carry out extended experiments on three large-scale WMT translation tasks including WMT14 De-En (4.5m sentence pairs), WMT17 Zh-En (22m sentence pairs) and WMT14 Fr-En (40.8m sentence pairs), with vocabulary sizes 22568, 29832, and 27168, respectively.", "NMT Systems: To examine the generality of our evaluation method, we conduct experiments on two NMT systems, i.e., RNN-SEARCH (denoted by RNN) and TRANSFORMER (denoted by Trans.), both of which are implemented with fairseq (Ott et al., 2019).", "For RNN, we adopt a 1-layer RNN with LSTM cells whose encoder (bi-directional) and decoder hidden units are 256 and 512, respectively.", "For TRANSFORMER on the IWSLT datasets, the numbers of layers and attention heads are 2 and 4, respectively.", "For both models, we set the embedding dimensions to 256.", "On the WMT datasets, we simply use TRANSFORMERBASE with 4 attention heads.", "The performance of our NMT models is comparable to that reported in recent literature (Tan et al., 2019).", "Explanation Methods: On both NMT systems, we implement four explanation methods, i.e., attention (ATTN), gradient norm (NGRAD), weighted gradient (WGRAD), and prediction difference (PD), as mentioned in Section 2.",
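Since the metric value is reported as PPL, and PPL is just the exponentiated mean negative log-likelihood of Eq. (6), the conversion is a one-liner (our helper, shown only to fix the convention):

```python
import math

def perplexity(nll_values):
    """PPL over the extracted (W_k(c_t), f(c_t)) pairs: the exponentiated
    mean of the per-pair values -log Q(f(c_t) | W_k(c_t)). Minimizing
    Eq. (6) and minimizing PPL are therefore equivalent."""
    return math.exp(sum(nll_values) / len(nll_values))
```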
"Our Metric: We implemented five instantiations of the proposed metric, including FN, RN, SA, Comb, and Baseline (Base for brevity), as presented in Section 3.3.", "To configure them, we adopt the same settings as the NMT systems to train SA and RN.", "FN is implemented by feeding bag-of-words features through a 3-layer fully connected network.", "As given in Algorithm 1, the approximate fidelity is estimated through the $Q$ with the lowest PPL; therefore, the best metric is the one that achieves the lowest PPL, since it results in a closer approximation to the real fidelity.", "(Table 1: The PPL comparison for the five metric instantiations on the IWSLT De-En dataset, in the order ATTN / PD / NGRAD / WGRAD. Trans: Base 196.9 / 54.3 / 193.4 / 13400; FN 13.9 / 5.8 / 11.3 / 131.2; RN 13.8 / 5.7 / 10.7 / 126.7; SA 13.9 / 5.5 / 10.8 / 119.5; Comb 13.8 / 5.5 / 10.7 / 119.5. RNN: Base - / 54.2 / 90.3 / 28587; FN - / 6.7 / 8.3 / 170.8; RN - / 6.5 / 7.8 / 163.2; SA - / 6.5 / 8.1 / 154.9; Comb - / 6.5 / 7.8 / 154.9.)", "In this subsection, we first conduct experiments and analysis on the IWSLT De-En task to configure the fidelity-based metric and then extend the experiments to other IWSLT tasks.", "Comparison of metric instantiations: We calculate PPL on the IWSLT De-En dataset for the four metric instantiations (FN, RN, SA, Comb) and the Baseline (Base) with $k = 1$ to extract the most relevant words.", "Table 1 summarizes the results for the two translation systems (TRANSFORMER, annotated as Trans, and RNN-SEARCH, annotated as RNN), respectively.", "Note that since there is no target-side attention in RNN-SEARCH, we cannot extract the most relevant target word, so Table 1 does not include the results of the ATTN method for RNN-SEARCH.", "The Baseline (Base) achieves undesirable PPL, which indicates that the relevant words identified by PD fail to make the same decision as the NMT system.", "The main reason is that the mismatch between training and testing leads to this issue, as presented in Section 3.3.", "On the contrary, the other four metric instantiations attain much lower PPL than the Baseline.", "In addition, the PPLs on PD, NGRAD, and ATTN are much better than those on WGRAD.", "This finding shows that PD, NGRAD, and ATTN are all good explanation methods in terms of fidelity, whereas WGRAD is not.", "To explore possible reasons for why one explanation method is better under our metric, we make a naive conjecture: when it tries to reveal the patterns that the well-trained NMT has captured, it extracts more concentrated patterns.", "In other words, a generalized rule $\mathcal{W}_k(c_t) \rightarrow f(c_t)$ from one sentence pair can often be observed among other examples.", "To measure the density of the extracted rules, we first divide all extracted rules into five bins according to their frequencies.", "Then we collect the number of rules in each bin as well as the total number of rules.", "Table 2 shows the statistics measuring the density of rules obtained from the different explanation methods.", "From this table, we can see that the density for PD is the highest among all explanation methods, because it contains fewer infrequent rules in $B_1$, whereas there are more frequent rules in the other bins.", "This might be one possible reason why PD is better under our fidelity-based evaluation metric.", "Stability of ranking order: In Table 1, the ranking order is PD > NGRAD > ATTN > WGRAD regarding all five metric instantiations.", "Generally, a good metric should preserve the ranking order of explanation methods independent of the test dataset.",
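The density statistics behind Table 2 can be reproduced by counting rule frequencies and bucketing them. A sketch with illustrative bin edges (the paper's $B_1$..$B_5$ boundaries are not specified here):

```python
from collections import Counter

def rule_density(rules, bin_edges=(1, 2, 4, 8, 16)):
    """Count how often each generalized rule W_k(c_t) -> f(c_t) occurs,
    then bucket the distinct rules by frequency. `rules` is an iterable
    of hashable rule representations, e.g. (frozenset_of_words, target)."""
    freq = Counter(rules)                     # rule -> occurrence count
    bins = [0] * len(bin_edges)
    for count in freq.values():
        for b, edge in enumerate(bin_edges):
            if count <= edge or b == len(bin_edges) - 1:
                bins[b] += 1                  # last bin catches the overflow
                break
    return bins, len(freq)
```

A denser method, like PD above, places fewer rules in the lowest-frequency bin and more in the higher ones.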
, "In this subsection, we first conduct experiments and analysis on the IWSLT De⇒En task to configure the fidelity-based metric, and then extend the experiments to the other IWSLT tasks.", "Comparison of metric instantiations We calculate PPL on the IWSLT De⇒En dataset for the four metric instantiations (FN, RN, SA, Comb) and the Baseline (Base) with k = 1 to extract the most relevant words.", "Table 1 summarizes the results for the two translation systems (TRANSFORMER, annotated as Trans, and RNN-SEARCH, annotated as RNN).", "Table 1: The PPL comparison for the five metric instantiations on the IWSLT De⇒En dataset.
NMT    Metric  ATTN   PD    NGRAD  WGRAD
Trans  Base    196.9  54.3  193.4  13400
Trans  FN      13.9   5.8   11.3   131.2
Trans  RN      13.8   5.7   10.7   126.7
Trans  SA      13.9   5.5   10.8   119.5
Trans  Comb    13.8   5.5   10.7   119.5
RNN    Base    -      54.2  90.3   28587
RNN    FN      -      6.7   8.3    170.8
RNN    RN      -      6.5   7.8    163.2
RNN    SA      -      6.5   8.1    154.9
RNN    Comb    -      6.5   7.8    154.9", "Note that since there is no target-side attention in RNN-SEARCH, we cannot extract the best relevant target word, so Table 1 does not include results of the ATTN method for RNN-SEARCH.", "The Baseline (Base) achieves an undesirably high PPL, which indicates that the relevant words identified by PD fail to make the same decisions as the NMT system.", "The main reason is the mismatch between training and testing, as presented in Section 3.3.", "On the contrary, the other four metric instantiations attain much lower PPL than the Baseline.", "In addition, the PPLs of PD, NGRAD, and ATTN are much better than that of WGRAD.", "This finding shows that PD, NGRAD, and ATTN are all good explanation methods in terms of fidelity, whereas WGRAD is not.", "To explore possible reasons why one explanation method is better under our metric, we make a naive conjecture: a better method, when it tries to reveal the patterns that the well-trained NMT model has captured, extracts more concentrated patterns.", "In other words, a generalized rule W^k(c_t) → f(c_t) extracted from one sentence pair can often be observed among other examples.", "To measure the density of the extracted rules, we first divide all extracted rules into five bins according to their frequencies.", "Then we collect the number of rules in each bin as well as the total number of rules.", "Table 2 shows the statistics that measure the density of the rules obtained from the different explanation methods.", "From this table, we can see that the density for PD is the highest among all explanation methods, because it contains fewer infrequent rules in bin B1, whereas there are more frequent rules in the other bins.", "This might be one possible reason that PD is better under our fidelity-based evaluation metric.", "Stability of ranking order In Table 1, the ranking order is PD > NGRAD > ATTN > WGRAD for all five metric instantiations.", "Generally, a good metric should preserve the ranking order of explanation methods independent of the test dataset.", "Regarding this criterion of the order-preserving property, we analyze the stability of the different fidelity-based metric instantiations.", "To this end, we randomly sample one thousand test sets with replacement, whose sizes vary from 1% to 100% of the original test set, and then calculate the rate at which the ranking order is preserved on these test sets; a sketch of this check follows below.", "The results in Table 3 indicate that FN, RN, SA, and Comb are more robust than Base to changes in the distribution of the test sets.", "According to Table 1 and Table 3, SA performs similarly to the best metric, Comb, and is faster than Comb or RN in training and testing; therefore, in the rest of the experiments we mainly employ SA to measure the explanation methods."
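, "The order-preservation check can be sketched as follows; the per-example scores, the resampling sizes, and the use of a simple mean in place of the full PPL computation are all simplifying assumptions made for illustration."

```python
# Sketch of the stability check: bootstrap-resample the test set and measure
# how often the ranking of explanation methods is preserved.  Per-example
# scores are random stand-ins, and ranking by their mean is a simplification
# of the actual PPL computation.
import random

METHODS = ["PD", "NGRAD", "ATTN", "WGRAD"]
REFERENCE_ORDER = METHODS  # ranking observed on the full test set (Table 1)

def ranking(scores_by_method):
    """Rank methods by mean score, lower (better PPL) first."""
    mean = {m: sum(s) / len(s) for m, s in scores_by_method.items()}
    return sorted(mean, key=mean.get)

def preservation_rate(scores, n_resamples=1000, frac=0.5):
    """Fraction of bootstrap samples that keep the reference ranking."""
    n = len(next(iter(scores.values())))
    size = max(1, int(frac * n))
    kept = 0
    for _ in range(n_resamples):
        idx = [random.randrange(n) for _ in range(size)]  # with replacement
        sample = {m: [s[i] for i in idx] for m, s in scores.items()}
        kept += ranking(sample) == REFERENCE_ORDER
    return kept / n_resamples

# Toy per-example scores whose means follow PD < NGRAD < ATTN < WGRAD.
random.seed(0)
toy = {m: [random.gauss(base, 8.0) for _ in range(500)]
       for m, base in zip(METHODS, [5.0, 10.0, 14.0, 120.0])}
for frac in (0.01, 0.1, 0.5, 1.0):
    print(f"{frac:>4.0%} of test set: {preservation_rate(toy, frac=frac):.3f}")
```

, "Running the sketch prints one preservation rate per resampled test-set size, which is the quantity compared across metric instantiations in Table 3."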
, "Effects of different k In this experiment, we examine the effects of the explanation methods with larger k with respect to SA.", "Figure 1 depicts the effects of k for TRANSFORMER on the De⇒En task.", "One can clearly observe three findings:", "1) the ranking order of the explanation methods is invariant for different k.", "2) as k becomes larger, the PPL improves for each explanation method.", "3) the PPL improvement for PD, ATTN, and NGRAD is marginal once k > 2, which further validates that they are powerful in explaining NMT using only a few words.", "Testing on other scenarios In the previous experiments, our metric instantiations are trained and evaluated under the same scenario, where the context c_t used to extract relevant words is obtained from the gold data and its label f(c_t) is the prediction from the NMT model f, namely the Teacher Forcing Decode scenario.", "To examine the robustness of our metric, we apply the trained metric to two different scenarios: the real decoding scenario (Real-Decode), where both c_t and its label f(c_t) are from the NMT output, and the golden data scenario (Golden-Data), where both c_t and its label are from the golden test data.", "The results for both scenarios are shown in Table 5.", "The ranking order of the explanation methods in both scenarios is the same as before.", "To our surprise, the results in Real-Decode are even better than those in the matched Teacher Forcing Decode scenario.", "One possible reason is that the labels generated by an NMT system in Real-Decode tend to be high-frequency words, which leads to better PPL.", "In contrast, our metric instantiation in Golden-Data results in much higher PPL, due to the mismatch between training and testing.", "Training and testing the metric under the same scenario, as in Golden-Data, can be explored in future work; however, it is not the focus of this paper.", "Since metric instantiations such as SA require extracting generalized rules for each explanation method from the entire training dataset, it is computationally expensive for some explanation methods, such as the gradient-based ones, to run directly on WMT tasks with large-scale training data.", "Effects of sample size We randomly sample subsets of the WMT Zh⇒En training data, which includes 22 million sentence pairs, to form several new training sets.", "The sample sizes of the new training sets are up to 2 million sentence pairs, and the results are illustrated in Figure 2.", "[Figure 2 (plot): PPL for each explanation method (ATTN, PD, NGRAD, WGRAD) on TRANSFORMER over the WMT Zh⇒En task with different sample sizes, from 10k to 2M sentence pairs.]", "The following facts are revealed.", "Firstly, the ranking order of the four explanation methods remains unchanged with respect to the different sample sizes.", "Secondly, as the sample size increases, the metric score decreases more and more slowly, and there is no significant drop between sampling 1 million and sampling 2 million sentence pairs.", "Results on WMT Based on the analysis of the effects of sample size, we choose a sample size of 1 million for the following scaling experiments.", "The PPL results for WMT De⇒En, Zh⇒En, and Fr⇒En are listed in Table 6.", "We can see that the order PD > NGRAD > ATTN > WGRAD evaluated by SA remains unchanged on these three datasets, as before.", "One can also observe that the ranking order under the Baseline does not agree with that under SA on WMT De⇒En and Zh⇒En.", "Since the Baseline yields high PPL due to the mismatch mentioned in Section 3.3, in this case we tend to trust the results under SA.", "Table 7: Relation with word alignment (PPL and rank under SA versus AER and rank under word alignment).
Dataset       Method  SA PPL  SA Rank  AER   AER Rank
IWSLT Zh⇒En   ATTN    30.8    3        55.0  3
IWSLT Zh⇒En   PD      10.8    1        50.6  1
IWSLT Zh⇒En   NGRAD   19      2        52.9  2
IWSLT Zh⇒En   WGRAD   180.9   4        79.2  4
WMT Zh⇒En     ATTN    27.3    3        42.1  2
WMT Zh⇒En     PD      7.7     1        32.7  1
WMT Zh⇒En     NGRAD   16.5    2        49.3  3
WMT Zh⇒En     WGRAD   263.5   4        79.2  4
WMT De⇒En     ATTN    17.0    3        48.7  3
WMT De⇒En     PD      5.4     1        34.1  1
WMT De⇒En     NGRAD   15.1    2        48.1  2
WMT De⇒En     WGRAD   194.7   4        73.5  4", "[Figure 3: a word-alignment example whose target sentence is 'The airfields were crowded with airplanes as a result of many flight delays.']", "Since the calculation of the Alignment Error Rate (AER) requires manually annotated test datasets with ground-truth word alignments, we select three test datasets containing such alignments, namely IWSLT Zh⇒En, NIST05 Zh⇒En (https://www.ldc.upenn.edu/collaborations/evaluations/nist), and Zenkel De⇒En (Zenkel et al., 2019).", "Note that unaligned target words account for 7.8%, 4.7%, and 9.2% of these three test sets, respectively, and they are skipped by AER when evaluating explanation methods.", "For example, in Figure 3, the target words 'as a result' cannot be covered by AER because human annotation of their alignment is impossible, but a fidelity-based metric can analyze them as well.", "Table 7 demonstrates that our fidelity-based metric does not agree very well with AER on the WMT Zh⇒En task: NGRAD is better than ATTN in terms of SA, but the result is the opposite in terms of AER.", "Since the evaluation criteria of SA and AER are different, it is reasonable that their evaluation results differ.", "This finding is in line with the standpoint of Jacovi and Goldberg (2020): SA is an objective metric that reflects the fidelity of models, while AER is a subjective metric based on human evaluation.", "However, it is observed that the ranking by SA is consistent across all three tasks, whereas the ranking by AER is highly dependent on the task; a compact reference sketch of the AER computation follows below."
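, "For reference, the AER computation behind Table 7 can be sketched as follows; the formula is the standard one of Och and Ney (2003), and the toy alignment links are illustrative."

```python
# Sketch of the Alignment Error Rate (AER) used in Table 7.  Gold alignments
# come as sure links S and possible links P (with S a subset of P); A is the
# set of links an explanation method predicts.  Och and Ney (2003):
#   AER(A; S, P) = 1 - (|A & S| + |A & P|) / (|A| + |S|)
def aer(predicted, sure, possible):
    a, s = set(predicted), set(sure)
    p = set(possible) | s  # enforce S as a subset of P
    return 1.0 - (len(a & s) + len(a & p)) / (len(a) + len(s))

# Toy example: links are (source_position, target_position) pairs.
sure = {(0, 0), (1, 2), (2, 1)}
possible = {(1, 1)}                  # extra plausible links
predicted = {(0, 0), (1, 1), (3, 3)}
print(f"AER = {aer(predicted, sure, possible):.3f}")  # -> AER = 0.500
# Links for unaligned target words never appear in S or P, so AER simply
# cannot score them: the coverage gap noted above.
```

, "Aggregating the link counts over a whole test set in the same way gives corpus-level scores of the kind reported in Table 7."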
, "5 Related Work In recent years, explaining deep neural models has been a growing interest in the deep learning community, aiming at more comprehensible and trustworthy neural models.", "In this section, we mainly discuss two dominant ways towards it.", "One way is to develop explanation methods to interpret a target black-box neural network (Bach et al., 2015; Zintgraf et al., 2017).", "For example, on classification tasks, Bach et al. (2015) propose layer-wise relevance propagation to visualize the relationship between a pair of neurons within networks, and Li et al. (2016) introduce a gradient-based approach to understanding compositionality in neural networks for NLP.", "In particular, on structured prediction tasks, many works design similar methods to understand NMT models (Ding et al., 2017; Alvarez-Melis and Jaakkola, 2017; Ding et al., 2019; He et al., 2019).", "The other way is to construct an interpretable model for the target network and then indirectly interpret its behavior to understand the target network on classification tasks (Lei et al., 2016; Murdoch and Szlam, 2017; Arras et al., 2017; Wang et al., 2019).", "The interpretable model is defined on top of extracted rational evidence and is learned by model distillation from the target network.", "To extract rational evidence from the entire input, one either leverages a particular explanation method (Lei et al., 2016; Wang et al., 2019) or an auxiliary evidence extraction model (Murdoch and Szlam, 2017; Arras et al., 2017).", "Although our work focuses on evaluating explanation methods and does not aim to construct an interpretable model, we draw inspiration from their ideas to design Q in Eq. (6) for our evaluation metric.", "Despite the increasing efforts on designing new explanation methods, there are only a few works proposed to evaluate them.", "Mohseni and Ragan (2018) propose a paradigm to evaluate explanation methods for document classification that involves human judgment.", "Poerner et al. (2018) conduct the first human-independent comprehensive evaluation of explanation methods for NLP tasks.", "However, their metrics are task-specific because they make assumptions tied to a specific task.", "Our work proposes a principled metric to evaluate explanation methods for NMT, and our evaluation paradigm is independent of both task-specific assumptions and human judgment.", "It is worth noting that Arras et al. (2016) and Denil et al. (2014) directly measure the performance of the target model P on the extracted words, without constructing Q, to evaluate explanation methods for classification tasks.", "However, since translation is more complex than classification, P trained on the entire context c_t typically makes a terrible prediction when tested on the compressed context W^k(c_t).", "As a result, the poor prediction performance makes it difficult to discriminate one explanation method from another, as observed in our internal experiments.", "Concurrently, Jacovi and Goldberg (2020) propose to evaluate the faithfulness of an explanation method separately from its readability and plausibility (i.e., human-interpretability), which is similar to our definition of fidelity, but they do not formalize a metric or propose algorithms to measure it.", "6 Conclusions This paper has made an initial attempt to evaluate explanation methods from a new viewpoint.", "It has presented a principled metric based on fidelity with regard to the predictive behavior of the NMT model.", "Since it is intractable to exactly compute the principled metric for a given explanation method, the paper proposes an approximate approach to address the minimization problem.", "The proposed approach does not rely on human annotation and can be used to evaluate explanation methods on all target words.", "On six standard translation tasks, the metric quantitatively evaluates and compares four different explanation methods for two popular translation models."
, "Experiments reveal that PD, NGRAD, and ATTN are all good explanation methods that are able to reconstruct the NMT model's predictions with relatively low perplexity, and that PD shows the best fidelity among them.", "We would like to thank all anonymous reviewers for their valuable suggestions.", "This research was supported by Tencent AI Lab." ]
[ "abstain", "abstain", "method", "abstain", "method", "objective", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "objective", "objective", "method", "method", "method", "objective", "result", "abstain", "abstain", "objective", "objective", "objective", "abstain", "other", "abstain", "abstain", "method", "other", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "other", "other" ]